AIRCRAFT STRUCTURAL SAFETY: EFFECTS OF EXPLICIT AND IMPLICIT SAFETY MEASURES AND UNCERTAINTY REDUCTION MECHANISMS

By

ERDEM ACAR

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2006

Copyright 2006 by Erdem Acar

This dissertation is dedicated to my family: my father Zuhuri Acar, my mother Serife Acar, and my sister Asiye Acar.

ACKNOWLEDGMENTS

I would like to express special thanks and appreciation to Dr. Raphael T. Haftka, chairman of my advisory committee. I am grateful to him for providing me with this excellent opportunity and the financial support to complete my doctoral studies under his exceptional guidance. He encouraged me to attend several conferences and helped me find an internship during my studies. Through our weekly meetings and his open-door policy, which I definitely overexploited, he greatly contributed to this dissertation. His limitless knowledge and patience are an inspiration to me. During the past three years he was more than my Ph.D. supervisor; he was a friend, and sometimes like a father. I sincerely hope we will remain in contact in the future.

I would also like to thank the members of my advisory committee: Dr. Bhavani V. Sankar, co-chair of the committee, Dr. Nagaraj Arakere, Dr. Nam-Ho Kim and Dr. Stanislav Uryasev. I am grateful for their willingness to review my Ph.D. research and to provide the constructive comments that helped me complete this dissertation. In particular, I would like to extend special thanks to Dr. Bhavani V. Sankar for his guidance on the papers we co-authored, and to Dr. Nam-Ho Kim for his comments and suggestions during the meetings of the Structural and Multidisciplinary Group. I also wish to express my gratitude to my M.Sc. advisor, Dr. Mehmet A. Akgun, who provided a large share of my motivation for pursuing a doctoral degree.
The experience he gave me during my master's studies also contributed to this dissertation.

I also wish to thank my colleagues at the Structural and Multidisciplinary Group of the Mechanical and Aerospace Engineering Department of the University of Florida for their support, friendship and many technical discussions. In particular, I would like to thank Dr. Melih Papila, Dr. Jaco Schutte, my soul sister Lisa Schutte, Tushar Goel and Ben Smarslok (listed in the order I met them) for their friendship.

Financial support provided by NASA Cooperative Agreement NCC3994, the NASA University Research, Engineering and Technology Institute, and NASA Langley Research Center Grant Number NAG103070 is gratefully acknowledged.

Finally, my deepest appreciation goes to my family: my father Zuhuri Acar, my mother Serife Acar and my sister Asiye Acar. The initiation, continuation and final completion of this thesis would not have happened without their continuous support, encouragement and love. I am incredibly lucky to have them in my life.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
NOMENCLATURE
ABSTRACT

CHAPTER

1 INTRODUCTION
    Motivation
    Objectives
    Methodology
    Outline

2 LITERATURE REVIEW
    Probabilistic vs. Deterministic Design
    Structural Safety Analysis
    Probability of Failure Estimation
        Analytical calculation of probability of failure
        Moment-based techniques
        Simulation techniques
        Separable Monte Carlo simulations
        Response surface approximations
    Reliability-Based Design Optimization
        Double loop (nested) RBDO
        Single loop RBDO
    Error and Variability
    Uncertainty Classification
    Reliability Improvement by Error and Variability Reduction
    Testing and Probabilistic Design

3 WHY ARE AIRPLANES SO SAFE STRUCTURALLY? EFFECT OF VARIOUS SAFETY MEASURES
    Introduction
    Structural Uncertainties
    Safety Measures
    Design of a Generic Component
        Design and Certification Testing
        Effect of Certification Tests on Distribution of Error Factor e
        Probability of Failure Calculation by Analytical Approximation
    Effect of Three Safety Measures on Probability of Failure
    Summary

4 COMPARING EFFECTIVENESS OF MEASURES THAT IMPROVE AIRCRAFT STRUCTURAL SAFETY
    Introduction
        Load Safety Factor
        Conservative Material Properties
        Tests
        Redundancy
        Error Reduction
        Variability Reduction
    Errors, Variability and Total Safety Factor
        Errors in Design
        Errors in Construction
        Total Error Factor
        Total Safety Factor
        Variability
    Certification Tests
    Probability of Failure Calculation
        Probability of Failure Calculation by Separable MCS
        Including Redundancy
    Results
        Effect of Errors
        Weight Saving Due to Certification Testing and Error Reduction
        Effect of Redundancy
        Additional Safety Factor Due to Redundancy
        Effect of Variability Reduction
    Summary

5 INCREASING ALLOWABLE FLIGHT LOADS BY IMPROVED STRUCTURAL MODELING
    Introduction
    Structural Analysis of a Sandwich Structure
    Analysis of Error and Variability
    Deterministic Design and B-basis Value Calculations
    Assessment of Probability of Failure
    Analyzing the Effects of Improved Model on Allowable Flight Loads via Probabilistic Design
    Summary

6 TRADEOFF OF UNCERTAINTY REDUCTION MECHANISMS FOR REDUCING STRUCTURAL WEIGHT
    Introduction
    Design of Composite Laminates for Cryogenic Temperatures
    Calculation of Probability of Failure
    Probabilistic Design Optimization
        Probabilistic Sufficiency Factor (PSF)
        Design Response Surface (DRS)
    Weight Savings by Reducing Error and Employing Manufacturing Quality Control
    Choosing Optimal Uncertainty Reduction Combination
    Summary

7 OPTIMAL CHOICE OF KNOCKDOWN FACTORS THROUGH PROBABILISTIC DESIGN
    Introduction
    Testing of Aircraft Structures
    Quantification of Errors and Variability
        Errors in Estimating Material Strength Properties from Coupon Tests
        Errors in Structural Element Tests
        Allowable stress updating and the use of explicit knockdown factors
            Current industrial practice on updating allowable stresses using worst-case conditions (implicit knockdown factors)
            Proposal for a better way to update allowable stresses: using the average failure stress measured in the tests and using optimal explicit knockdown factors
        Error updating via element tests
        Errors in Design
        Errors in Construction
        Total Error Factor
        Total Safety Factor
        Variability
    Simulation of Certification Test and Probability of Failure Calculation
        Simulation of Certification Test
        Calculation of Probability of Failure
    Results
        Optimal Choice of Explicit Knockdown Factors for Minimum Weight and Minimum Certification Failure Rate
        Optimal Choice of Explicit Knockdown Factors for Minimum Weight and Minimum Probability of Failure
        Effect of Coupon Tests and Structural Element Tests on Error in Failure Prediction
            Effect of number of coupon tests alone (for a fixed number of element tests, ne = 3)
            Effect of number of element tests alone (for a fixed number of coupon tests, nc = 40)
        Advantage of Variable Explicit Knockdown Factors
        Effect of Other Uncertainty Reduction Mechanisms
            Effect of variability reduction
            Effect of error reduction
        Effect of Number of Coupon Tests
        Effect of Number of Structural Element Tests
    Summary

8 RELIABILITY BASED AIRCRAFT STRUCTURAL DESIGN PAYS EVEN WITH LIMITED STATISTICAL DATA
    Introduction
    Demonstration of Gains from Reliability-Based Structural Design Optimization of a Representative Wing and Tail System
        Problem Formulation and Simplifying Assumptions
        Probabilistic Optimization with Correct Statistical Data
    Effect of Errors in Information about Deterministic Design
        Errors in Coefficient of Variation of Stresses
        Erroneous Mean Stresses
        Errors in Probability of Failure Estimates of Deterministic Design
        Effect of Using Wrong Probability Distribution Type for the Stress
    Approximate Probabilistic Design Based on Failure Stress Distributions
    Application of Characteristic Stress Method to Wing and Tail Problem
    Summary

9 CONCLUDING REMARKS

APPENDIX

A A-BASIS AND B-BASIS VALUE CALCULATION

B PROBABILITY CALCULATIONS FOR CHAPTER 3
    Calculation of Pr(CT|e), the Probability of Passing Certification Test
    Calculations of Mean and Standard Deviation of Probability of Failure

C CONFLICTING EFFECTS OF ERROR AND VARIABILITY ON PROBABILITY OF FAILURE IN CHAPTER 3

D COMPARISON OF RESULTS OF SINGLE ERROR FACTOR AND MULTIPLE ERROR FACTOR CASES

E DETAILS OF SEPARABLE MONTE CARLO SIMULATIONS FOR PROBABILITY OF FAILURE CALCULATIONS IN CHAPTER 4

F CALCULATION OF THE SYSTEM FAILURE PROBABILITY USING BIVARIATE NORMAL DISTRIBUTION

G TEMPERATURE DEPENDENT MATERIAL PROPERTIES FOR THE CRYOGENIC LAMINATES IN CHAPTER 6

H DETAILS OF CONSERVATIVE CUMULATIVE DISTRIBUTION FUNCTION (CDF) FITTING

I DETAILS OF DESIGN RESPONSE SURFACE FITTING FOR THE PROBABILITY SUFFICIENCY FACTOR FOR THE CRYOGENIC LAMINATES IN CHAPTER 6

J ASSESSMENT OF THE ERROR DUE TO LIMITED NUMBER OF COUPON TESTS

K PROBABILITY OF FAILURE CALCULATIONS FOR CHAPTER 7 USING SEPARABLE MCS

L CHANGE IN COST DUE TO INCREASE OF THE STRUCTURAL WEIGHT

M RESPONSE SURFACE APPROXIMATIONS FOR RELIABILITY INDEX OF CERTIFICATION FAILURE RATE, RELIABILITY INDEX OF PROBABILITY OF FAILURE AND BUILT SAFETY FACTOR IN CHAPTER 7

N CALCULATION OF THE MEAN AND THE C.O.V. OF THE STRESS DISTRIBUTION USING PROBABILITY OF FAILURE INFORMATION

O RELATION OF COMPONENT WEIGHTS AND OPTIMUM COMPONENT FAILURE PROBABILITIES IN CHAPTER 8

P HISTORICAL RECORD FOR AIRCRAFT PROBABILITY OF FAILURE

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Uncertainty classification
3-2 Distribution of random variables used for component design and certification
3-3 Comparison of probabilities of failure for components designed using safety factor of 1.5, mean value for allowable stress and error bound of 50%
3-4 Probability of failure for different bounds on error e for components designed using safety factor of 1.5 and A-basis property for allowable stress
3-5 Probability of failure for different bounds on error e for components designed using safety factor of 1.5 and mean value for allowable stress
3-6 Probability of failure for different bounds on error e for safety factor of 1.0 and A-basis allowable stress
3-7 Probability of failure for different error bounds for safety factor of 1.0 and mean value for allowable stress
3-8 Probability of failure for different uncertainty in failure stress for components designed with safety factor of 1.5, 50% error bounds e and A-basis allowable stress
3-9 Probability of failure for different uncertainty in failure stress for components designed with safety factor of 1.5, 30% error bound e and A-basis allowable stress
3-10 Probability of failure for uncertainty in failure stress for components designed using safety factor of 1.5, 10% error bounds e and A-basis properties
4-1 Distribution of error factors and their bounds
4-2 Distribution of random variables having variability
4-3 Mean and standard deviations of the built and certified distributions of the error factor etotal and the total safety factor SF
4-4 Average and coefficient of variation of the probability of failure for the structural parts designed with B-basis properties and SFL = 1.5
4-5 Reduction of the weight of structural parts by certification testing for a given probability of failure
4-6 Effect of redundancy on the probabilities of failure
4-7 Effect of redundancy on the effectiveness of certification testing
4-8 Effect of correlation coefficient ρ on system failure probabilities and effectiveness of certification testing
4-9 Additional safety factor due to redundancy
4-10 Comparison of system failure probabilities corresponding to different variability in failure stress
5-1 Deviations between measured and fitted values of "average Gc" and "Gc with mode mixity" for different designs
5-2 The mean and B-basis values of the fracture toughness of the designs analyzed
5-3 Allowable flight loads of the sandwich panels designed using the deterministic approach
5-4 Corresponding probabilities of failure of the sandwich panels designed using the deterministic approach
5-5 Allowable flight loads of the sandwich panels calculated via the probabilistic approach
6-1 Allowable strains for IM600/133
6-2 Deterministic optimum design
6-3 Coefficients of variation of the random variables
6-4 Evaluation of the accuracy of the analysis response surface
6-5 Comparison of probability of failure estimations for the deterministic optimum
6-6 Probabilistic optimum designs for different error bounds when only error reduction is applied
6-7 Probabilistic optimum designs for different error bounds when both error and variability reduction are applied
7-1 Distribution of error factors and their bounds
7-2 Distribution of random variables having variability
7-3 Mean and standard deviations of the built and certified distribution of the total safety factor SF
7-4 Comparing explicit knockdown factors for minimum built safety factor for a specified certification failure rate
7-5 Comparing explicit knockdown factors for minimum built safety factor for a specified probability of failure
7-6 Comparison of constant and variable explicit knockdown factor cases and corresponding area ratios, A/A0
7-7 Comparison of constant (i.e., test independent) implicit and explicit knockdown factors and corresponding area ratios, A/A0
7-8 Comparison of mean and coefficient of variation of total knockdown reduction at the element test level for the cases of implicit constant knockdown factor and explicit variable knockdown factors
7-9 Optimal explicit knockdown factors for minimum CFR when variability in failure stress is reduced by half
7-10 Optimal explicit knockdown factors for minimum CFR when all errors are reduced by half
7-11 Optimal explicit knockdown factors for minimum CFR for different numbers of coupon tests, nc
7-12 Optimal explicit knockdown factors for different numbers of structural element tests, ne
8-1 Probabilistic structural design optimization for safety of a representative wing and tail system
8-2 Probabilistic structural optimization of wing, horizontal tail and vertical tail system
8-3 Errors in the ratios of failure probabilities of the wing and tail system when the c.o.v. of the stresses is underestimated by 50%
8-4 Errors in the ratios of failure probabilities of the wing and tail system when the mean stresses are underestimated by 20%
8-5 Errors in the ratios of failure probabilities of the wing and tail system when the probability of failure of the deterministic design is underpredicted
8-6 Errors in the ratios of failure probabilities of the wing and tail system when the probability of failure of the deterministic design is overpredicted
8-7 Errors in the ratios of failure probabilities of the wing and tail system if the optimization is performed using the wrong probability distribution type for the stress
8-8 Probabilistic design optimization for safety of the representative wing and tail system using the characteristic-stress method
8-9 Effect of 20% underestimate of k on the ratios of probability of failure estimates
D-1 Equivalent error bounds for the SEF model corresponding to the same standard deviation in the MEF model
D-2 Comparison of system failure probabilities for the SEF and MEF models
D-3 Comparison of the total safety factor SF used in the design of structural parts for the SEF and MEF models
E-1 Comparison of the probability of failure estimations
I-1 The ranges of variables for the three DRS constructed for PSF calculation
I-2 Accuracies of DRS fitted to PSF and Pf in terms of four design variables (t1, t2, θ1 and θ2) for error bounds, be, of 0, 10% and 20%
I-3 Ranges of design variables for the three DRS constructed for probability of failure estimation for the error and variability reduction case
M-1 Accuracy of response surfaces
P-1 Aircraft accidents and probability of failure of aircraft structures

LIST OF FIGURES

2-1 Building block approach
3-1 Flowchart for Monte Carlo simulation of component design and failure
3-2 Initial and updated probability distribution functions of error factor e
3-3 Design thickness variation with low and high error bounds
3-4 Influence of effective safety factor, error, and variability on the probability ratio (3-D view)
3-5 Influence of effective safety factor, error and variability on the probability ratio (2-D plot)
3-6 Influence of effective safety factor, error and variability on the probability difference (3-D view)
3-7 Influence of effective safety factor, error and variability on the probability difference (2-D plot)
4-1 Comparing distributions of built and certified total error etotal of SEF and MEF models
4-2 Initial and updated distribution of the total safety factor SF
4-3 The variation of the probability of failure with the built total safety factor
4-4 Flowchart for MCS of component design and failure
4-5 Total safety factors for MEF model for the structural part and system after certification
4-6 Effect of variability on failure probability
5-1 The model of facesheet/core debonding in a one-dimensional sandwich panel with pressure load
5-2 Critical energy release rate as a function of mode mixity
5-3 Comparison of actual and fitted cumulative distribution functions of variability of Gc
5-4 Comparison of actual and fitted cumulative distribution functions of total uncertainty (error and variability) of Gc
5-5 Fitted least square lines for fracture toughness, and derived B-basis allowables
6-1 Geometry and loading of the laminate with two ply angles
6-2 Comparison of CDF obtained via 1,000 MCS, the approximate normal distribution and conservative approximate normal distributions for ε2 corresponding to the deterministic optimum
6-3 Reducing laminate thickness (hence weight) by error reduction (no variability reduction)
6-4 Reducing laminate thickness by error reduction (ER) and quality control (QC)
6-5 Tradeoff plot for the probability of failure, design thickness and uncertainty reduction measures
6-6 Tradeoff of probability of failure and uncertainty reduction
7-1 Building-block approach for aircraft structural testing
7-2 Simplified three levels of tests
7-3 Current use of knockdown factors based on worst-case scenarios
7-4 Shrinkage of the failure surface
7-5 The variation of the explicit knockdown factors with the ratio of the failure stress measured in the test to the calculated failure stress, with and without transition interval
7-6 Proposed use of explicit knockdown factors dependent on test results
7-7 Initial and updated distribution of the total safety factor SF with and without structural element test
7-8 The variation of probability of failure of a structural part built by a single aircraft company
7-9 Optimal choice of explicit knockdown factors kcl and kch for minimum built safety factor for specified certification failure rate
7-10 Comparing CFR and PF of the structures designed for minimum CFR and minimum PF
7-11 Effect of number of coupon tests on the error in failure prediction for a fixed number of element tests (3 element tests)
7-12 Effect of number of element tests on the error in failure prediction for a fixed number of coupon tests (40 coupon tests)
7-13 Evolution of the mean failure stress distribution with and without Bayesian updating
7-14 Comparison of variable and constant explicit knockdown factors
7-15 Comparison of Pareto fronts of certification failure rate and built safety factor for two different approaches while updating the allowable stress based on failure stresses measured in element tests
7-16 Reducing probability of failure and certification failure rate using variability reduction
7-17 Reducing certification failure rate using error reduction, variability reduction and combination of error and variability reduction
7-18 Optimal explicit knockdown factors for different numbers of coupon tests for minimum CFR and PF
7-19 Effect of number of structural element tests, ne
8-1 Stress distribution s(σ) before and after redesign in relation to failure-stress distribution f(σ)
8-2 The change of the ratios of probabilities of failure of the probabilistic design of Table 8-1 versus the error in c.o.v.(σ)
176 83 Two different stress distributions at the wing leading to the same probability of failure of lx 107. ............ .............................. ....... ............177 84 The change of the ratios of probabilities of failure with respect to the error in m ean stress ................................................................ 179 85 Calculation of characteristic stress o* from probability of failure ....................185 86 Comparison of approximate and exact A and A* and the resulting probabilities of failure for lognormal failure stress.............................................. .............186 87 The variation of the ratios of probabilities of failure with respect to error in k .....189 D1 System failure probabilities for the SEF and MEF models after certification .......206 D2 Total safety factors for the SEF and MEF model after certification................208 E1 Comparison of numerical CDF with the assumed lognormal CDF for the distribution of the required safety factor ................ ......... ..... ........ .......... 210 G1 Material properties El, E2, G12 and v12 as a function of temperature.....................214 G2 Material properties a, and a2 as a function of temperature...............................215 K1 The variation of probability of failure with built total safety factor ......................227 K2 Flowchart for MCS of component design and failure................. ....................228 xviii NOMENCLATURE ARS = Analysis response surface Aeq = Minimum required cross sectional area for the component to carry the service loading without failure Ao = Load carrying area if there is no variability and no safety measures c a2 = Coefficient of thermal expansion along and transverse to fiber direction be = Bound of error P = Reliability index C = Capacity of structure, for example, failure stress CFD = Cumulative distribution function CFR = Certification failure rate CLT = Classical lamination theory c.o.v. 
= Coefficient of variation DRS = Design response surface A = Relative change in the characteristic stress corresponding to a relative change of A in stress a e = Error factor efc = Error in failure prediction at the coupon level ec = Error in capacity calculation efe = Error in failure prediction at the element level ef, = Error in failure prediction at the structural level ejp = Total error in failure prediction em = Error in material property prediction ep = Error in load calculation eR = Error in response calculation e, = Error in stress calculation et = Error in thickness calculation total = Total error factor ew = Error in width calculation eA = Error in facture toughness assessment if traditional (averaging) method is used eM = Error in facture toughness assessment if traditional (averaging) method is used ER = Error reduction El, E2 = Young's modulus along and transverse to fiber direction E1, 2 = Strains in the fiber direction and transverse to the fiber direction f() = Probability density function of the failure stress F() = Cumulative distribution function of the failure stress FAA = Federal Aviation Administration G = Strain energy release rate Gc = Fracture toughness G12 = Shear modulus 712 = Shear strain k = Error multiplier kA, kB = Tolerance coefficients for Abasis and Bbasis value calculation kdc Ki, K11 M MCS MEF model N Nx, Ny fle iHe allow P PSF Pd Pf Pf* Pfd PF Pc Pnc QC ret R Knockdown factor used to calculate allowable stress Model I and II stress intensity factors, respectively Number of simulations in the first stage of MCS Monte Carlo simulation Multiple error factor model Number of simulations in the second stage of MCS Mechanical loading in x and y directions, respectively Number of coupon tests Number of structural element tests Allowable flight load Load Probability density function Probability sufficiency factor Design load according to the FAA specifications Probability of failure of a component Approximate probability of failure of 
probabilistic design Probability of failure of deterministic design Probability of failure of a system Average probability of failure after certification test Average probability of failure before certification test Quality control for manufacturing Ratio of failure stresses measured in test and its predicted value Response of a structure, for example, stress RMSE RSA R2adj P s( ) SEF model S, sc' Sch Sfe SFL SF aU a"* ca t Vt Vw VR w W Root mean square error Response surface approximation Adjusted coefficient of multiple determination Coefficient of correlation = Probability density function of the stress Single error factor model Additional company safety factor Additional company safety factor if the failure stress measured in element tests are lower than the predicted failure stress Additional company safety factor if the failure stress measured in element tests are higher than the predicted failure stress Total safety factor added during structural element tests Load safety factor of 1.5 (FAA specification) Total safety factor Stress Characteristic stress Allowable stress Failure stress Thickness Variability in built thickness Variability in built width Coefficient of variation Width Weight Weight of the deterministic design Cumulative distribution function of the standard normal distribution Modemixity angle Subscripts act built calc cert d design spec target true worst W T Subscripts ave ini upd xxiii The value of the relevant quantity in actual flight conditions Built value of the relevant quantity, which is different than the design value due to errors in construction Calculated value of the relevant quantity, which is different from the true value due to errors The value of the relevant quantity after certification test Deterministic design The design value of the relevant quantity Specified value of the relevant qunatity Target value of the relevant quantity The true value of the relevant quantity The worst value of the relevant quantity Wing Tail 
Average value of the relevant quantity Initial value of the relevant quantity Updated value of the relevant quantity U = Upper limit of the relevant quantity L = Lower limit of the relevant quantity xxiv Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy AIRCRAFT STRUCTURAL SAFETY: EFFECTS OF EXPLICIT AND IMPLICIT SAFETY MEASURES AND UNCERTAINTY REDUCTION MECHANISMS By Erdem Acar August 2006 Chair: Raphael T. Haftka Cochair: Bhavani V. Sankar Major Department: Mechanical and Aerospace Engineering Aircraft structural safety is achieved by using different safety measures such as safety and knockdown factors, tests and redundancy. Safety factors or knockdown factors can be either explicit (e.g., load safety factor of 1.5) or implicit (e.g., conservative design decisions). Safety measures protect against uncertainties in loading, material and geometry properties along with uncertainties in structural modeling and analysis. The two main objectives of this dissertation are: (i) Analyzing and comparing the effectiveness of structural safety measures and their interaction. (ii) Allocating the resources for reducing uncertainties, instead of living with the uncertainties and allocating the resources for heavier structures for the given uncertainties. Certification tests are found to be most effective when error is large and variability is small. Certification testing is more effective for improving safety than increased safety factors, but it cannot compete with even a small reduction in errors. Variability reduction is even more effective than error reduction for our examples. The effects of structural element tests on reducing uncertainty and the optimal choice of additional knockdown factors are explored. 
We find that instead of using implicit knockdown factors based on worst-case scenarios (current practice), using test-dependent explicit knockdown factors may lead to weight savings. Surprisingly, we find that a more conservative knockdown factor should be used if the failure stresses measured in tests exceed the predicted failure stresses, in order to reduce the variability in knockdown factors generated by variability in material properties. Finally, we perform probabilistic optimization of a wing and tail system under limited statistical data for the stress distribution and show that the ratio of the probabilities of failure of the probabilistic design and the deterministic design is not sensitive to errors in the statistical data. We find that the deviation of the probabilistic design from the deterministic design is a small perturbation, which can be achieved by a small redistribution of knockdown factors.

CHAPTER 1
INTRODUCTION

Motivation

Traditionally, the design of aerospace structures relies on a deterministic (code-based) design philosophy, in which safety factors (both explicit and implicit), conservative material properties, redundancy and certification testing are used to design against uncertainties. An example of an explicit safety factor is the load safety factor of 1.5 (FAR 25.303), while the conservative decisions employed when updating the failure stress allowables based on structural element tests are examples of implicit safety factors. In the past few years, however, there has been growing interest in applying probabilistic methods to the design of aerospace structures (e.g., Lincoln 1980, Wirsching 1992, Aerospace Information Report of SAE 1997, Long and Narciso 1999), which design against uncertainties by modeling them explicitly. Even though probabilistic design is a more efficient way of improving structural safety than deterministic design, many engineers are skeptical of probability of failure calculations of structural designs for the following reasons.
First, data on statistical variability in material properties, geometry and loading distributions are not always available in full (e.g., joint distributions), and it has been shown that insufficient information may lead to large errors in probability calculations (e.g., Ben-Haim and Elishakoff 1990, Neal et al. 1992). Second, the magnitude of errors in calculating loads and predicting structural response is not known precisely, and there is no consensus on how to model these errors in a probabilistic setting. As a result of these concerns, the transition to probability-based design is likely to be gradual. An important step in this transition is to understand how safety is currently built into aircraft structures via deterministic design practices. One step already taken in the transition to probabilistic design is the definition of conservative material properties (A-basis or B-basis material property values, depending on the failure path in the structure) by the Federal Aviation Administration (FAA) regulation FAR 25.613. An A-basis material property is one in which 99 percent of the material property distribution is better than the design value with a 95 percent level of confidence, and a B-basis material property is one in which 90 percent of the material property distribution is better than the design value with a 95 percent level of confidence. The use of conservative material properties is intended to protect against variability in material properties. In deterministic design, the safety of a structure is achieved through safety factors. Even though some safety factors are explicitly specified, others are implicit. Examples of explicit safety factors are the load safety factor and material property knockdown values. The FAA regulations require a load safety factor of 1.5 for aircraft structures (FAR 25.303).
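For normally distributed coupon data, the basis values defined above reduce to the sample mean minus a one-sided tolerance factor times the sample standard deviation. The sketch below computes this factor exactly via the noncentral t distribution; the `basis_value` helper and the coupon numbers are illustrative assumptions, not taken from this dissertation.

```python
import numpy as np
from scipy import stats

def basis_value(samples, p=0.90, conf=0.95):
    """One-sided lower tolerance bound assuming normally distributed data.

    p=0.90, conf=0.95 gives a B-basis value; p=0.99 gives an A-basis value.
    """
    x = np.asarray(samples, dtype=float)
    n = x.size
    # Exact one-sided tolerance factor from the noncentral t distribution
    delta = stats.norm.ppf(p) * np.sqrt(n)
    k = stats.nct.ppf(conf, df=n - 1, nc=delta) / np.sqrt(n)
    return x.mean() - k * x.std(ddof=1)

rng = np.random.default_rng(0)
coupons = rng.normal(100.0, 5.0, size=40)   # hypothetical failure stresses
b_basis = basis_value(coupons)              # 90th-percentile lower bound
a_basis = basis_value(coupons, p=0.99)      # more conservative A-basis
```

Note how the tolerance factor k grows as the number of coupons shrinks, which is why the resulting knockdown depends on the number of tests.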
The load safety factor compensates for uncertainties such as uncertainty in loading and errors in load calculations, structural stress analysis, accumulated damage, variations in material properties due to manufacturing defects and imperfections, and variations in fabrication and inspection standards. Safety factors are generally developed from empirically based design guidelines established from years of structural testing of aluminum structures. Muller and Schmid (1978) review the historical evolution of the load safety factor of 1.5 in the United States. Similarly, the use of A-basis or B-basis material properties leads to a knockdown factor from the average values of the material properties measured in tests. Note that these knockdown factors depend on the number of tests, because they compensate both for variability in material properties and for uncertainty due to a finite number of tests. As noted earlier, an important step in the transition to probabilistic design is to analyze the probabilistic impact of the safety measures used in deterministic design. This probabilistic analysis requires quantification of the uncertainties encountered in the design, manufacturing and actual service conditions of aircraft structures. A good analysis of the different sources of uncertainty in engineering modeling and simulation is provided by Oberkampf et al. (2000, 2002). These papers also supply good literature reviews on uncertainty quantification and divide uncertainty into three types: variability, uncertainty, and error. In this distinction, variability refers to aleatory uncertainty (inherent randomness), uncertainty refers to epistemic uncertainty (due to lack of knowledge), and error is defined as a recognizable deficiency in any phase or activity of modeling and simulation that is not due to lack of knowledge.
To simplify the treatment of uncertainty control, in this dissertation we combine the unrecognized (epistemic) and recognized error in the classification of Oberkampf et al. and call it error. That is, we use a simple classification that divides the uncertainty in the failure of a structural member into two types: errors and variability. Errors reflect inaccurate modeling of physical phenomena, errors in structural analysis, errors in load calculations, or deliberate use of materials and tooling in construction that are different from those specified by the designer. Errors affect all the copies of the structural components made and are therefore fleet-level uncertainties. Variability, on the other hand, reflects the departure of the material properties, geometry parameters or loading of an individual component from the fleet-average values, and hence variabilities are individual uncertainties. Modeling and quantification of variability are much easier than those of error. Improvements in tooling and construction or the application of tight quality control techniques can reduce variability, and variability control can easily be quantified by statistical analysis of the records taken throughout the quality control process. Quantification of errors is not as easy, because errors are largely unknown before a structure is built; they can only be quantified after the structure has been built. Errors can be controlled by improving the accuracy of load and stress calculations, by using more sophisticated analysis and failure prediction techniques, or by testing structural components. Testing of aircraft structural components is performed in a building-block type of approach, starting with material characterization tests, followed by testing of structural elements, and ending with a final certification test. Testing of structures is discussed in detail in the next chapter. The comparison of deterministic design and probabilistic design can be made from several viewpoints.
First of all, the input and output variables of deterministic design are all deterministic values, while the input and output variables of probabilistic design are random (along with some deterministic variables, of course). Here, on the other hand, we compare probabilistic and deterministic design in terms of the use of safety factors. In deterministic design, uniform safety factors are used; that is, the same safety factor is used for all components of a system. Probabilistic design, however, allows variable safety factors by permitting risk and reliability allocation between different components. That is, instead of using the same safety factor for all components, probabilistic design allows higher safety factors for components or failure modes that can be controlled with little weight expenditure (Yang 1989), namely failure modes with small scatter and lightweight components. In addition, probabilistic design allows a designer to trade uncertainty control for lower safety factors. That is, by reducing uncertainty, the designer can avoid using high safety factors in the design and thereby reduce the weight of the structural system. This design paradigm allows the designer to allocate risk and reliability between different components in a rational way to achieve a safer design, for a fixed weight, than the deterministic design.

Objectives

There are two main objectives of this dissertation. The first is to analyze and compare the effectiveness of safety measures that improve structural safety, such as safety factors (explicit or implicit), structural tests, redundancy and uncertainty reduction mechanisms (e.g., improved structural analysis and failure prediction, manufacturing quality control). The second objective is to explore the advantage of uncertainty reduction mechanisms (e.g., improved structural analysis and failure prediction, tighter manufacturing quality control) over safety factors.
That is, we consider the possibility of allocating resources to reducing uncertainties, instead of living with the uncertainties and allocating resources to designing the aircraft structures for the given uncertainties. We aim to analyze the effectiveness of the safety measures taken in the deterministic design methodology and to investigate the interaction of these safety measures with one another and with the uncertainties. In particular, the effectiveness of uncertainty reduction mechanisms is analyzed and compared. The uncertainty reduction mechanisms considered in this dissertation are the reduction of errors by improving the accuracy of structural analysis and failure prediction (analytically or through tests), and the reduction of variability in failure stress as a result of tighter quality control. We explore the optimal choice of the additional company safety factors used on top of the FAA-mandated safety factors by using probabilistic design, which provides a rational basis for the analysis. The additional company safety factors we consider are the conservative decisions of aircraft companies in updating the allowable stresses based on the results of structural element tests. We perform probabilistic design optimization for the case of limited statistical data on the stress distribution and show that when the probabilistic design is achieved by taking the deterministic design as a starting point, the ratio of the probabilities of failure of the probabilistic design and the deterministic design is not sensitive to errors due to limited statistical data, errors which would be substantial if the probabilistic design were started from scratch. In addition, we propose a probabilistic design methodology in which the probability of failure calculation is confined to stress limits, thereby eliminating the need to assess the stress distribution, which usually requires computationally expensive finite element analyses.
Methodology

Probability of failure calculations for structures can be performed using either analytical techniques or simulation techniques. Analytical methods are more accurate, but for complex systems they may not be practical. Simulation techniques include direct Monte Carlo simulation (MCS) as well as many variance-reduction methods, including stratified sampling, importance sampling, and adaptive importance sampling (Ayyub and McCuen 1995). In the probabilistic design of structures, the use of inverse reliability measures gives a designer an easy estimate of the change in structural weight from the value of the probabilistic performance measure and its target, as well as computational advantages (Ramu et al. 2004). Among these measures we use the probability sufficiency factor (PSF) developed by Qu and Haftka (2003). Here we consider a simplified design problem for illustration purposes, so that the reliability analysis can be performed by analytical means. The effect of testing can then be analyzed using a Bayesian approach. The Bayesian approach has special importance in engineering design, where the available information is limited and it is often necessary to make subjective decisions. Bayesian updating is used to obtain the updated (or posterior) distribution of a random variable by combining the initial (or prior) distribution with new information about the random variable. The detailed theory and procedures for applying Bayesian methods in reliability and risk analysis can be found in the texts by Morgan (1968) and Martz and Waller (1982).

Outline

A literature survey on the historical evolution of probabilistic design, comparison of deterministic and probabilistic design practices, uncertainty control measures and testing of aircraft structures is given in Chapter 2. Chapter 3 investigates the effects of error, variability, safety measures and tests on the structural safety of aircraft.
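The Bayesian updating step described under Methodology can be illustrated with a small discrete-grid sketch in the spirit of Chapter 3's updated error-factor distribution: a uniform prior on the error factor e is multiplied by the likelihood of passing a certification test and renormalized. The likelihood model and all numbers here are assumptions for illustration, not the dissertation's actual model.

```python
import numpy as np
from scipy import stats

# Illustrative assumption: with error e, the true safety factor of a built
# copy is (1 + e) * v, where v is copy-to-copy variability (normal, mean 1,
# c.o.v. 8%), and the certification test is passed when (1 + e) * v >= 1.
b_e = 0.5                                  # assumed error bound
e = np.linspace(-b_e, b_e, 1001)           # grid over the error factor
prior = np.full(e.size, 1.0 / (2 * b_e))   # uniform prior density

p_pass = 1.0 - stats.norm.cdf((1.0 / (1.0 + e) - 1.0) / 0.08)

# Bayes' rule: posterior density proportional to prior times likelihood
de = e[1] - e[0]
post = prior * p_pass
post /= post.sum() * de                    # normalize to unit area

mean_prior = (e * prior).sum() * de        # ~ 0 for the uniform prior
mean_post = (e * post).sum() * de          # shifted toward conservative e > 0
```

Passing the test suppresses the probability of large unconservative (negative) errors, so the posterior mean of e moves in the conservative direction, mirroring the updated error-factor distributions of Chapter 3.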
A simple example of point stress design and a simple error model are used to illustrate the effects of several safety measures taken in aircraft design: safety factors, conservative material properties, and certification tests. This chapter serves as the opening chapter; therefore the analysis and the number of safety measures are kept to a minimum. For instance, only certification tests are included in the analysis; the effects of coupon tests and structural element tests are deferred to Chapter 7. The simplifying assumptions in Chapter 3 allow us to perform analytical calculations for the probability of failure and for Bayesian updating. The interactions of the safety measures with one another and with the errors and variabilities are investigated. For instance, we find that certification tests are most effective when errors are large and variabilities are small. We also find that as safety measures combine to reduce the probability of failure, our confidence in the probability of failure estimates is reduced. Chapter 4 extends the analysis presented in Chapter 3 with the following refinements. The effectiveness of the safety measures is compared in terms of safety improvement and weight savings. Structural redundancy, a safety measure omitted in Chapter 3, is included in the analysis. The simple error model used in Chapter 3 is replaced with a more detailed error model in which we consider individual error components in load calculation, stress calculation, material properties and geometry parameters, including the effect of damage. The analysis in Chapter 4 shows that while certification testing is more effective than increased safety factors for improving safety, it cannot compete with even a small reduction in errors. We also find that variability reduction is even more effective than error reduction.
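The kind of two-stage Monte Carlo simulation underlying these analyses can be sketched as follows: an outer loop samples the fleet-level error e (one value per aircraft model) and an inner loop samples the individual-level variability of each copy. The point-stress failure condition and all numerical values below are illustrative assumptions, not the dissertation's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 2000, 2000          # error samples x variability samples (assumed)
b_e, SF = 0.3, 1.5         # assumed error bound and the load safety factor

e = rng.uniform(-b_e, b_e, size=M)   # fleet-level error factor, one per model

# Illustrative point-stress failure condition: a copy fails when its
# effective safety factor (1 + e) * SF * v drops below 1, where v lumps
# the variability in thickness, width and failure stress into a single
# lognormal factor (c.o.v. ~8% assumed).
v = rng.lognormal(mean=0.0, sigma=0.08, size=(M, N))
fails = (1.0 + e[:, None]) * SF * v < 1.0

pf_given_e = fails.mean(axis=1)      # probability of failure for each error
pf = pf_given_e.mean()               # fleet-average probability of failure
```

The two-stage structure makes the fleet-level/individual-level distinction explicit: conditioning on e shows that nearly all the failure probability comes from the most unconservative errors, which is why certification tests (which weed out such designs) are most effective when errors are large.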
Having seen in Chapter 4 how powerful the uncertainty reduction mechanisms are, we analyze the trade-offs of uncertainty reduction mechanisms, structural weight and structural safety in Chapters 5 and 6. The effect of error reduction (due to an improved failure prediction model) on increasing the allowable flight loads of existing aircraft structures is investigated in Chapter 5. The analysis is performed for a sandwich panel because the improved model was developed by Prof. Bhavani Sankar (cochair of the advisory committee for this dissertation), giving us good access to the details of the experiments and computations. We find that the improved modeling can increase the allowable load of a sandwich panel on average by about 13 percent without changing the safety level of the panel when deterministic design principles are followed. The use of probabilistic design is found to double the load increase. Like improvements in the accuracy of failure prediction, improvements in the accuracy of structural analysis also lead to error reduction. Improved structural analysis through accounting for the chemical shrinkage of composite laminates is the error reduction mechanism considered in Chapter 6. The work by Qu et al. (2003), which explored the effect of variability reduction through quality control, is extended in Chapter 6 to investigate the trade-offs of error and variability reduction mechanisms for reducing the weight of composite laminates at cryogenic temperatures. Trade-off plots of uncertainty reduction mechanisms, probability of failure and weight are generated that enable a designer to choose the optimal combination of uncertainty control mechanisms to reach a target probability of failure with minimum cost. Chapter 7 completes the analysis of the effects of explicit and implicit knockdown factors and uncertainty control mechanisms. In particular, Chapter 7 analyzes the optimal choice of the knockdown factors.
These knockdown factors refer to the conservative decisions of aircraft companies in the choice of material properties and in updating the allowable stresses based on the results of structural element tests. We find that instead of using implicit knockdown factors based on worst-case scenarios (current practice), using test-dependent explicit knockdown factors may lead to weight savings. Surprisingly, we find that a more conservative knockdown factor should be used if the failure stresses measured in tests exceed the predicted failure stresses, in order to reduce the variability in knockdown factors generated by variability in material properties. In addition, the effects of coupon tests, structural element tests and uncertainty control mechanisms (such as error reduction by improved structural modeling or improved failure prediction, and variability reduction by tighter quality control) on the choice of company safety factors are investigated. Using a simple cost function in terms of structural weight, we show that decisions can be made on whether to invest resources in coupon tests, structural element tests, uncertainty reduction mechanisms or extra structural weight. The analyses presented in Chapters 3-7 show how probabilistic design can be exploited to improve aircraft structural safety by allowing a rational analysis of the interactions of safety and knockdown factors and uncertainty reduction mechanisms. There are, however, two main reasons for the reluctance of engineers to pursue probabilistic design: its sensitivity to limited statistical data and its computational expense. Moreover, Chapters 3-7 analyze a single aircraft structural component, so Chapter 8 presents the probabilistic design of an aircraft structural system.
We show in Chapter 8, through the probabilistic design of a representative wing and tail system, that errors due to limited statistical data affect the probability of failure of both probabilistic and deterministic designs, but the ratio of these probabilities is quite insensitive to even very large errors. In addition, to alleviate the problem of computational expense, a probabilistic design optimization method is proposed in which the probability of failure calculation is limited to failure stresses, dispensing with most of the expensive structural response calculations (typically done via finite element analysis). The proposed optimization methodology is illustrated with the design of the wing and tail system. Chapter 8 reveals that the difference between the probabilistic design and the deterministic design is a small perturbation, which can be achieved by choosing the additional knockdown factors through probabilistic design instead of choosing them based on experience. In addition, the proposed approximate method is found to lead to a similar redistribution of material between structural components and a similar system probability of failure. Finally, the dissertation concludes with Chapter 9, where the concluding remarks are listed.

CHAPTER 2
LITERATURE REVIEW

The literature review in this chapter first compares the deterministic and probabilistic design methodologies. Then, we review structural safety analysis, followed by probability of failure estimation techniques. Next, reliability-based design optimization is reviewed. Then, the uncertainty classifications available in the literature are discussed, followed by our simplified classification, which simplifies the analysis of uncertainty reduction measures. Finally, the utilization of structural tests in probabilistic design is reviewed.

Probabilistic vs. Deterministic Design

Aircraft structural design still relies on the Federal Aviation Administration (FAA) deterministic design code.
In deterministic design, conservative material properties are used and safety factors are introduced to protect against uncertainties. The FAA regulations (FAR 25.613) state that conservative material properties are characterized as A-basis or B-basis values. Detailed information on these values is provided in Chapter 8 of Volume 1 of the Composite Materials Handbook (2002). The safety factor compensates for uncertainties such as uncertainty in loading and errors in load calculations, errors in structural stress analysis and accumulated damage, variations in material properties due to manufacturing defects and imperfections, and variations in fabrication and inspection standards. Safety factors are generally developed from empirically based design guidelines established over years of structural testing and flight experience. For transport aircraft design, the FAA regulations specify a safety factor of 1.5 (FAR 25.303). Muller and Schmid (1978) reviewed the historical evolution of the 1.5 factor of safety in the United States. Probabilistic design methodology, on the other hand, deals with uncertainties through their statistical characterization and attempts to provide a desired reliability in the design. The uncertainties of individual design parameters and loads are modeled by appropriate probability density functions. The credibility of this approach depends on several factors, such as the accuracy of the analytical model used to predict the structural response, the accuracy of the data and the probabilistic techniques employed. Examples of the use of probabilistic design in aerospace applications include the following. Pai et al. (1990, 1991 and 1992) performed probabilistic structural analysis of space truss structures for a typical space station.
Murthy and Chamis (1995) performed probabilistic analysis of a composite aircraft structure based on first ply failure using FORM. The probabilistic methodology has shown some success in the design of composite structures, where parameter uncertainties are relatively well known. For example, the IPACS (Integrated Probabilistic Assessment of Composite Structures) computer code was developed at NASA Glenn Research Center (Chamis and Murthy 1991). Fadale and Sues (1999) performed reliability-based design optimization of an integral airframe structure lap joint. A probabilistic stability analysis for predicting the buckling loads of compression loaded composite cylinders was developed at Delft University of Technology (Arbocz et al. 2000). The FORM method is discussed later in this chapter. Although probabilistic design methodology offers the potential of safer and lighter designs than deterministic design, the transition from deterministic design to probabilistic design is difficult to achieve. Zang et al. (2002) discussed the reasons for this difficulty; some of these reasons are given below.
* Industry feels comfortable with traditional design methods.
* Few demonstrations of the benefits of probabilistic design methods are available.
* Current probabilistic design methods are more complex and computationally expensive than deterministic methods.
* Characterization of structural imperfections and uncertainties necessary to facilitate accurate analysis and design of the structure is time-consuming and is highly dependent on structural configuration, material system, and manufacturing processes.
* Effective approaches for characterizing model form error are lacking.
* Researchers and analysts lack training in statistical methods and probabilistic assessment.

Structural Safety Analysis

In probabilistic design, the safety of a structure is evaluated in terms of its probability of failure Pf.
The structures are designed such that the probability of failure of the structure is kept below a prespecified level. The term reliability is defined in terms of the probability of failure as

Reliability = 1 − Pf (2.1)

A brief history of the development of methods for calculating the probability of failure of structures was presented in a report by Wirsching (1992). As Wirsching noted, the development of these theories goes back some 50 to 60 years. The modern era of probabilistic design started with the paper by Freudenthal (1947). Most of the ingredients of structural reliability, such as probability theory, statistics, structural analysis and design, and quality control, existed prior to that time; however, Freudenthal was the first to put them together in a definitive and comprehensive manner. The development of reliability theory progressed through the 1950s and 1960s. Three cornerstone papers appeared in the 1960s. The first is the paper by Cornell (1967), who suggested the use of a second moment method and demonstrated that Cornell's safety index could be used to derive a set of factors on loads and resistance. However, Cornell's safety index suffered from a lack of invariance, in that it was not constant when the problem was reformulated in a mechanically equivalent way. Hasofer and Lind (1974) defined a generalized safety index that is invariant to the mechanical formulation. The third paper is the one by Turkstra (1970), who presented structural design as a problem of decision making under uncertainty and risk. More recent papers are sophisticated extensions of these papers, and some of them are referenced in the following sections.

Probability of Failure Estimation

This section reviews the literature on probability of failure estimation. First, analytical calculation of the probability of failure is discussed, followed by moment-based methods and simulation techniques.
Analytical calculation of probability of failure

In its most general form, the probability of failure can be expressed as

Pf = ∫_{G(x) ≤ 0} fX(x) dx (2.2)

where G(x) is the limit-state function, whose negative values correspond to failure, and fX(x) is the joint probability density function of the vector X of random variables. The analytical calculation of this expression is challenging for the following reasons (Melchers 1999). First, the joint probability density function fX(x) is not always readily obtainable. Second, even when fX(x) is obtainable, the integration over the failure domain is not easy. The calculation of the probability of failure can be made more tractable by simplifying (1) the limit-state definition, (2) the integration process, or (3) the integrand fX(x).

Moment-based techniques

When the calculation of the limit-state is expensive, moment-based techniques such as the First Order Reliability Method (FORM) or the Second Order Reliability Method (SORM) are used (Melchers 1999). The basic idea behind these techniques is to transform the original random variables into a set of uncorrelated standard normal random variables, and then approximate the limit-state function linearly (FORM) or quadratically (SORM) about the most probable failure point (MPP). The probability of failure of the component is then estimated in terms of the reliability index β as

Pf = Φ(−β) (2.3)

where Φ is the cumulative distribution function of a standard normal variable. The first use of FORM in probability of failure calculation appears to be that of Hasofer and Lind (1974). An enormous number of papers exist on the use of FORM; pioneering papers include Rackwitz and Fiessler (1978), Hohenbichler and Rackwitz (1983) and Gollwitzer and Rackwitz (1983). FORM is usually accurate for limit-state functions that are not highly nonlinear. SORM has been proposed to improve the reliability estimate by using a quadratic approximation of the limit-state surface.
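Equation (2.3) and the FORM idea can be made concrete with a small sketch. The following is an illustration only, not the dissertation's implementation: the classical Hasofer-Lind/Rackwitz-Fiessler iteration searches for the MPP in standard normal space, here applied to a hypothetical linear limit state G = R − S with normal capacity R and response S, a case for which FORM is exact.

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, grad_g, n_vars, tol=1e-10, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler search for the most probable
    failure point (MPP) in standard normal space; beta = ||u*||."""
    u = np.zeros(n_vars)
    for _ in range(max_iter):
        gu, dg = g(u), grad_g(u)
        u_new = (dg @ u - gu) / (dg @ dg) * dg  # HL-RF update step
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)
    return beta, norm.cdf(-beta)  # Eq. (2.3)

# Hypothetical linear limit state G = R - S with R ~ N(500, 50) and
# S ~ N(300, 30), expressed in standard normal variables u.
g = lambda u: (500.0 + 50.0 * u[0]) - (300.0 + 30.0 * u[1])
grad_g = lambda u: np.array([50.0, -30.0])

beta, pf = form_hlrf(g, grad_g, n_vars=2)
# Analytic check for this linear case: beta = 200 / sqrt(50^2 + 30^2)
```

For a linear limit state the iteration converges in one step; for nonlinear limit states it iterates, and the linearization at the MPP is what makes FORM approximate.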
Some papers on the use of SORM include Fiessler et al. (1979), Breitung (1984), Der Kiureghian et al. (1987), Hohenbichler et al. (1987), Der Kiureghian and De Stefano (1991), Koyluoglu and Nielsen (1994) and Zhao and Ono (1999).

Simulation techniques

For most problems the number of variables in the problem definition is high, so the analytical calculation of the integral in Eq. (2.2) requires challenging multidimensional integration. Moment-based approximations also give inaccurate results when the number of random variables is high (Melchers 1999). Under such conditions, simulation techniques such as Monte Carlo simulation (MCS) are used to compute the probability of failure. In the MCS technique, samples of the random variables are generated according to their probabilistic distributions and the failure condition is checked for each sample. The probability of failure Pf can then be estimated as

Pf = Nf / N (2.4)

where Nf is the number of simulations leading to failure and N is the total number of simulations. The statistical accuracy of the probability of failure estimate is commonly measured by its coefficient of variation, c.o.v.(Pf), as

c.o.v.(Pf) = sqrt[ Pf (1 − Pf) / N ] / Pf = sqrt[ (1 − Pf) / (N Pf) ] (2.5)

From Eqs. (2.4) and (2.5) it is seen that a small probability of failure requires a very large number of simulations for acceptable accuracy, which usually results in high computational cost. When limit-state function calculations are obtained directly from analysis, the computational cost of MCS is not sensitive to the number of variables. When surrogate models are used, on the other hand, the computational cost of MCS does depend on the number of variables. To overcome this deficiency of MCS, several more efficient alternative sampling methods have been introduced. Ayyub and McCuen (1995) supplied basic information and good references for these sampling techniques.
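Equations (2.4) and (2.5) can be illustrated with a short crude Monte Carlo run. The numbers below are made up for illustration (capacity R and response S normal, with an exact answer of about 3×10⁻⁴); the point is how large N must be before the estimate settles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Failure when capacity R falls below response S; for these
# hypothetical normal distributions the exact Pf is Phi(-3.43) ~ 3e-4.
N = 2_000_000
R = rng.normal(500.0, 50.0, N)   # capacity samples
S = rng.normal(300.0, 30.0, N)   # response samples

N_f = np.count_nonzero(R - S < 0)        # number of failed samples
pf = N_f / N                             # Eq. (2.4)
cov = np.sqrt((1.0 - pf) / (N * pf))     # Eq. (2.5)
```

Even with two million samples the estimate carries a coefficient of variation of roughly 4%; by Eq. (2.5), estimating Pf = 10⁻⁶ to the same accuracy would require on the order of 600 million samples.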
Some useful references taken from Ayyub and McCuen (1995) are the following: importance sampling (Madsen et al. 1986, Melchers 1989), stratified sampling (Law and Kelton 1982, Schuller et al. 1989), Latin hypercube sampling (Iman and Conover 1980, Ayyub and Lai 1989), adaptive importance sampling (Bucher 1988, Karamchandani et al. 1989, Schuller et al. 1989), conditional expectation (Law and Kelton 1982, Ayyub and Haldar 1984) and antithetic variates (Law and Kelton 1982, Ayyub and Haldar 1984). In this study, we mainly deal with problems with simple limit-state functions. For these simple cases the integrand fX(x) can easily be obtained when the random variables are statistically independent. The beneficial properties of normal and lognormal distributions are utilized for the variables with small coefficients of variation. Approximate analytical calculations of the probability of failure are checked against Monte Carlo simulations to validate the acceptability of the assumptions. When the limit-state functions are complex, Monte Carlo simulation is used to calculate the probability of failure.

Separable Monte Carlo simulations

As noted earlier, when estimating very low probabilities, the number of samples required for MCS can be high, making MCS a costly process. In most structural problems, the failure condition may be written as the response exceeding the capacity. When the response and capacity are independent, it may be possible to analyze them separately with a moderate sample size and still be able to estimate very low probabilities of failure. This is because most failures do not involve extreme values of response or capacity, but instead moderately high response along with moderately low capacity. Therefore, to bypass the requirement of sampling the extreme tail of the limit-state function, the variables can be considered independently, by separating the response and the capacity, as discussed by Melchers (1999, Chapter 3).
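The separability idea can be sketched as follows (illustrative numbers, not the dissertation's example). With independent response S and capacity R, Pf = E[F_R(S)], so every response sample can be compared against the entire capacity sample rather than against a single paired draw.

```python
import numpy as np

rng = np.random.default_rng(1)

m = 40_000                                  # moderate sample size
R = np.sort(rng.normal(500.0, 50.0, m))     # capacity samples (sorted)
S = rng.normal(300.0, 30.0, m)              # response samples

# Empirical capacity CDF evaluated at every response sample; the
# average implicitly uses all m*m response/capacity comparisons.
F_R_at_S = np.searchsorted(R, S) / m
pf_sep = float(F_R_at_S.mean())             # exact answer here ~ 3e-4
```

A crude Monte Carlo estimate with the same 40,000 paired samples would, on average, see only about a dozen failures; the separable estimate extracts far more information from the same data because moderately low capacities are compared against moderately high responses.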
A good analysis of the efficiency and accuracy of separable Monte Carlo simulations can be found in Smarslok et al. (2006). The common formulation of the structural failure condition is in the form of a stress exceeding the material limit. This form, however, does not satisfy the separability requirement: the stress depends on variability in material properties as well as on the design area, which reflects errors in the analysis process. In that case, the limit-state function can still be reformulated in a separable form. In this dissertation we rewrite the limit-state in terms of the required area (which depends only on variabilities) and the built area (which depends only on errors) to bring the limit-state to separable form (see Chapter 4).

Response surface approximations

Response surface approximations (RSA) can be used to obtain a closed-form approximation to the limit-state function to facilitate reliability analysis. Response surface approximations usually fit low-order polynomials to the structural response in terms of the random variables. The probability of failure can then be calculated inexpensively by Monte Carlo simulation, or by FORM or SORM, using the fitted polynomials. Response surface approximations can be applied in different ways. One approach is to construct local response surfaces in the MPP region, which contributes most to the probability of failure of the structure. Bucher and Bourgund (1990), Rajashekhar and Ellingwood (1993), Koch and Kodiyalam (1999), Das and Zheng (2000a, 2000b) and Gayton, Bourinet and Lemaire (2003) used local response surfaces. Another approach is to construct a global response surface over the entire range of the random variables. Examples include Fox (1994, 1996), Romero and Bankston (1998), Qu et al. (2003), Youn and Choi (2004) and Kale et al. (2005).

Reliability-Based Design Optimization

Design optimization under a probability of failure constraint is usually referred to as reliability-based design optimization (RBDO).
The basic structure of an RBDO problem is stated as

minimize f
subject to P ≤ Ptarget (2.6)

where f is the objective function (for most problems it is the weight), and P and Ptarget are the probabilistic performance function and its target value. The probabilistic performance function can be the probability of failure Pf, the reliability index β, or an inverse reliability measure such as the probabilistic sufficiency factor, PSF.

Double loop (Nested) RBDO

The conventional RBDO approach is formulated as a double-loop optimization problem, where an outer loop performs the design optimization, while an inner loop optimization estimates the probability of failure (or another probabilistic performance function). The reliability index approach (RIA) is the most straightforward approach. In RIA, the probability of failure is usually calculated via FORM, which is an iterative process and is therefore computationally expensive and sometimes troubled by convergence problems (Tu et al. 1999). To reduce the computational cost of the double-loop approach, various techniques have been proposed, which can be divided into two categories: (i) techniques that improve the efficiency of the uncertainty analysis methods, such as the methods of fast probability integration (Wu 1994) and two-point adaptive nonlinear approximations (Grandhi and Wang 1998); and (ii) techniques that modify the formulation of the probabilistic constraints, for instance by using inverse reliability measures, such as the performance measure approach (Tu et al. 1999) and the probabilistic sufficiency factor (Qu and Haftka 2003). Inverse reliability measures are based on the margin of safety or safety factors, which are the safety measures of deterministic design. The safety factor is usually defined as the ratio of the structural resistance (e.g., failure stress) to the structural response (e.g., stress). Safety factors permit the designer to estimate the change in structural weight needed to satisfy a target safety factor requirement.
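The double-loop structure of Eq. (2.6) can be sketched on a one-variable problem with hypothetical data: the outer loop sizes a cross-sectional area A (weight proportional to A), and the inner loop evaluates the probabilistic constraint. For the linear-in-randoms limit state chosen here the inner reliability analysis happens to be exact, which keeps the sketch short; in a realistic problem the inner loop would itself be a FORM iteration or a simulation.

```python
import math
from scipy.stats import norm
from scipy.optimize import brentq

# Hypothetical data: load P ~ N(100, 15), failure stress F ~ N(150, 10),
# stress = P / A, limit state G = F - P / A.
mu_P, sig_P = 100.0, 15.0
mu_F, sig_F = 150.0, 10.0
pf_target = 1.0e-5

def pf_of_area(A):
    """Inner loop: reliability analysis for a candidate design A.
    G is normal here, so Pf = Phi(-beta) exactly."""
    beta = (mu_F - mu_P / A) / math.sqrt(sig_F**2 + (sig_P / A) ** 2)
    return norm.cdf(-beta)

# Outer loop: weight grows monotonically with A, so at the optimum the
# probability constraint is active, Pf(A*) = Pf_target; a root find
# stands in for the design optimizer in this one-variable case.
A_opt = brentq(lambda A: pf_of_area(A) - pf_target, 0.8, 10.0)
```

The nesting is visible in the call pattern: every trial A the outer solver proposes triggers a full reliability evaluation, which is exactly the cost the single-loop and decoupled methods discussed below try to avoid.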
In probabilistic design, however, the difference between the probabilistic performance measure and its target value does not provide the designer with an estimate of the required change in structural weight. Inverse safety measures thus help the designer to easily estimate the change in structural weight from the values of the probabilistic performance measure and its target, and they also improve computational efficiency (Qu and Haftka 2004). A good analysis and survey of safety factors and inverse reliability measures was presented by Ramu et al. (2004).

Single loop RBDO

Single loop formulations avoid the nested loops of optimization and reliability assessment. Some single loop formulations are based on casting the probabilistic constraints as deterministic constraints, by either approximating the Karush-Kuhn-Tucker conditions at the MPP or defining a relationship between probabilistic design and the safety factors of deterministic design (e.g., Chen et al. 1997, Kuschel and Rackwitz 2000, Wu et al. 2001, Qu et al. 2004, Liang et al. 2004). The single loop formulation increases efficiency by allowing the solution to be infeasible before convergence, satisfying the probability constraints only at the optimum. There also exist single loop formulations that perform optimization and reliability assessment sequentially (e.g., Royset et al. 2001, Du and Chen 2004). The sequential optimization and reliability assessment (SORA) method of Du and Chen (2004), for instance, decouples the optimization and reliability assessment by separating each random design variable into a deterministic component, which is used in a deterministic optimization, and a stochastic component, which is used in the reliability assessment.

Error and Variability

Uncertainty Classification

Over the years, researchers have proposed many different classifications of uncertainty.
For instance, Melchers (1999) divided uncertainty into seven types: phenomenological uncertainty, decision uncertainty, modeling uncertainty, prediction uncertainty, physical uncertainty, statistical uncertainty and human error. Haimes et al. (1994) and Hoffman and Hammonds (1994) divided uncertainty into two types: uncertainty (the epistemic part) and variability (the aleatory part). Epistemic uncertainties arise from lack of knowledge about the behavior of a phenomenon. They may be reduced by review of the literature, expert consultation, close examination of data and further research. Tools such as scoring systems, expert systems and fishbone diagrams can also help in reducing epistemic uncertainties. Aleatory uncertainties arise from possible variation and random errors in the values of the parameters and their estimates. They can be reduced by using reliable manufacturing tools and quality control measures. Oberkampf et al. (2000, 2002) provided a good analysis of the different sources of uncertainty in engineering modeling and simulation, supplied a good literature review on uncertainty quantification, and divided uncertainty into three types: variability, uncertainty and error. The classification provided by Oberkampf et al. is discussed in the Motivation section of Chapter 1.

Reliability Improvement by Error and Variability Reduction

Before designing a new structure, material properties and loading conditions are assessed. Data are collected to constitute the probability distributions of material properties and loads. The data on material properties are obtained by performing tests on batches of materials and also from the material manufacturer. To reduce the variability in material properties, quality controls may be applied. Qu et al. (2003) analyzed the effect of applying quality controls to material allowables in the design of composite laminates for cryogenic environments.
They found that employing quality control reduces the probability of failure significantly, allowing substantial weight reduction for the same level of safety. Similarly, before a newly designed structure is put into service, its performance under predicted operational conditions is evaluated by collecting data. The data are used to validate the initial assumptions made through the design and manufacturing processes, and thereby to reduce the error in those assumptions. This can be accomplished by the use of Bayesian statistical methods to modify the assumed probability distributions of error. The present author investigates this issue in the following chapters. After the structure is put into service, inspections are performed to detect damage developed in the structure. Hence, inspections are another form of uncertainty reduction. The effect of inspections on the safety of structures was analyzed (among others) by Harkness et al. (1994), Provan et al. (1994), Fujimoto et al. (1998), Kale et al. (2003) and Acar et al. (2004b).

Testing and Probabilistic Design

In probabilistic design, models for predicting uncertainties and the performance of structures are employed. These models involve idealizations and approximations; hence, validation and verification of these models is necessary. Validation is done by testing of structures, and verification is done by using more detailed models. The historical development of structural testing was given in the papers by Pugsley (1944) and Whittemore (1954). A literature survey of load testing by Hall and Lind (1979) presented many uses for load testing in the design and safety validation of structures. Conventional "design by calculation" relies upon tensile coupon tests to estimate material strength (Hall and Tsai, 1989). Coupon testing is a destructive test that measures loads and displacements at failure.
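The Bayesian modification of an assumed error distribution mentioned above can be sketched with a one-dimensional grid computation. The setup and all numbers here are hypothetical, not the procedure of the later chapters: an analysis error e is given a uniform prior, and a single test measuring the true-to-predicted ratio updates its distribution.

```python
import numpy as np

e = np.linspace(-0.5, 0.5, 2001)                 # candidate error values
prior = np.where(np.abs(e) <= 0.2, 1.0, 0.0)     # uniform prior on [-0.2, 0.2]
prior /= prior.sum()

measured_ratio = 1.08   # hypothetical test: prediction was 8% low
sigma_test = 0.05       # assumed variability of the test measurement

# Likelihood of the observed ratio if the true error were e, assuming
# measured_ratio = (1 + e) plus normal measurement noise.
likelihood = np.exp(-0.5 * ((1.0 + e) - measured_ratio) ** 2 / sigma_test**2)

posterior = prior * likelihood
posterior /= posterior.sum()

mean_post = float((e * posterior).sum())         # pulled toward +0.08
std_post = float(np.sqrt(((e - mean_post) ** 2 * posterior).sum()))
std_prior = 0.4 / np.sqrt(12.0)                  # ~0.115 for the uniform prior
```

One test both shifts the error distribution toward the measured discrepancy and narrows it, which is the sense in which testing reduces error rather than variability.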
In contrast with coupon testing, proof load testing is nondestructive: the structure is tested at a fixed load to demonstrate a minimum resistance level. Proof load testing in a variety of applications was studied by several authors, such as Barnett and Herman (1965), Shinozuka (1969), Yang (1976), Fujino and Lind (1977), Rackwitz and Schurpp (1985) and Herbert and Trilling (2006). Jiao and Moan (1990) illustrated a methodology for updating the probability density function of structural resistance after additional events, such as proof loading and nondestructive inspection, by utilizing FORM or SORM methods. Ke (1999) proposed an approach that specifically addressed the means to design component tests satisfying reliability requirements and objectives, by assuming that the component life follows a Weibull distribution. Zhang and Mahadevan (2001) developed a methodology that utilizes Bayesian updating to integrate testing and analysis for test plan determination for structural components. They considered two kinds of tests: failure probability estimation tests and life estimation tests. Soundappan et al. (2004) presented a method for designing targeted analytical and physical tests to validate the reliability of structures obtained from reliability-based design. They found that the optimum number of tests for a component is nearly proportional to the square root of its probability of failure. Guidelines for testing of composite materials were presented in Volume 1, Chapter 2 of the Composite Materials Handbook (2002). The following is quoted from this source (pages 2-1 and 2-2).

Analysis alone is generally not considered adequate for substantiation of composite structural designs. Instead, the "building-block approach" to design development testing is used in concert with analysis.
This approach is often considered essential to the qualification/certification of composite structures, due to the sensitivity of composites to out-of-plane loads, the multiplicity of composite failure modes and the lack of standard analytical methods. The building-block approach is also used to establish environmental compensation values applied to full-scale tests at room temperature ambient environment, as it is often impractical to conduct these tests under the actual moisture and temperature environment. Lower-level tests justify these environmental compensation factors. Similarly, other building-block tests determine truncation approaches for fatigue spectra and compensation for fatigue scatter at the full-scale level. The building-block approach is shown schematically in Figure 2-1.

(The figure shows the building-block test pyramid, with levels including elements, details, subcomponents and components.) Figure 2-1. Building block approach (Reprinted, with permission, from MIL-17 The Composite Materials Handbook, Vol. 1, Chapter 2, copyright ASTM International, 100 Barr Harbor Drive, West Conshohocken, PA 19428)

The approach can be summarized in the following steps:
1. Generate material basis values and preliminary design allowables.
2. Based on the design/analysis of the structure, select critical areas for subsequent test verification.
3. Determine the most strength-critical failure mode for each design feature.
4. Select the test environment that will produce the strength-critical failure mode. Special attention should be given to matrix-sensitive failure modes (such as compression, out-of-plane shear, and bondlines) and potential "hot spots" caused by out-of-plane loads or stiffness-tailored designs.
5. Design and test a series of test specimens, each of which simulates a single selected failure mode and loading condition; compare to analytical predictions, and adjust the analysis models or design allowables as necessary.
6.
Design and conduct increasingly more complicated tests that evaluate more complicated loading situations, with the possibility of failure from several potential failure modes. Compare to analytical predictions and adjust the analysis models as necessary.
7. Design (including compensation factors) and conduct, as required, full-scale component static and fatigue testing for final validation of internal loads and structural integrity. Compare to analysis.

As noted earlier, validation is done by testing of structures, and verification is done by using more detailed models. Detailed models may reduce the errors in analysis models; however, errors in the uncertainty models cannot be reduced by this approach. In addition, very detailed models can be computationally prohibitive. Similarly, while testing of structures reduces both the errors in response models and those in uncertainty models, it is expensive. Therefore, the testing of structures needs to be performed simultaneously with the structural design, to reduce cost while still maintaining a specified reliability level.

CHAPTER 3
WHY ARE AIRPLANES SO SAFE STRUCTURALLY? EFFECT OF VARIOUS SAFETY MEASURES

This chapter investigates the effects of error, variability, safety measures and tests on the structural safety of aircraft. A simple point stress design problem and a simple uncertainty classification are used. Since this chapter serves as the opening chapter, the level of analysis and the number of safety measures are kept to a minimum. The safety measures considered in this chapter are the load safety factor of 1.5, the use of conservative material properties and certification tests. Other safety measures, such as structural redundancy and coupon and structural element tests, will be included in the following chapters. The interaction of the considered safety measures with one another and their effectiveness with respect to uncertainties are also explored. The work given in this chapter was also published in Acar et al. (2006a).
My colleague Dr. Amit Kale's contribution to this work is acknowledged.

Introduction

In the past few years, there has been growing interest in applying probability methods to aircraft structural design (e.g., Lincoln 1980, Wirsching 1992, Aerospace Information Report of the Society of Automotive Engineers 1997, Long and Narciso 1999). However, many engineers are skeptical of our ability to calculate the probability of failure of structural designs, for the following reasons. First, data on statistical variability in material properties, geometry and loading distributions are not always available in full (e.g., joint distributions), and it has been shown that insufficient information may lead to large errors in probability calculations (e.g., Ben-Haim and Elishakoff 1990, Neal et al. 1992). Second, the magnitude of the errors in calculating loads and predicting structural response is not known precisely, and there is no consensus on how to model these errors in a probabilistic setting. As a result of these concerns, it is possible that the transition to probability-based design will be gradual. In such circumstances it is important to understand the impact of existing design practices on safety. This chapter is a first attempt to explore the effects of various safety measures taken during aircraft structural design using the deterministic design approach based on FAA regulations. The safety measures included in this chapter are (i) the use of safety factors, (ii) the use of conservative material properties (A-basis), and (iii) the use of final certification tests. These safety measures are representative rather than all-inclusive; for example, the use of A-basis properties is a representative measure for the use of conservative material properties. Other safety measures (e.g., structural redundancy) are discussed in the following chapters. We use the A-basis value rather than the B-basis value because we did not include redundancy in this chapter.
The FAA (FAR 25.613) requires that A-basis properties be employed when there is a single failure path, and B-basis properties when there are multiple failure paths. In the next chapter, for instance, we include structural redundancy in our analysis, and accordingly use B-basis values in Chapter 4. The effects of the three individual safety measures and their combined effect on the probability of structural failure of the aircraft are demonstrated. We use Monte Carlo simulations to calculate the effect of these safety measures on the probability of failure of a structural component. We start with a structural design employing all of the considered safety measures. The effects of variability in geometry, loads, and material properties are readily incorporated through the appropriate random variables. However, there is also uncertainty due to various errors, such as modeling errors in the analysis. These errors are fixed but unknown for a given airplane. To simulate these epistemic uncertainties, we transform the error into a random variable by considering the design of multiple aircraft models. As a consequence, for each model the structure is different. It is as if we pretend that there are hundreds of companies (Airbus, Boeing, Bombardier, Embraer, etc.), each designing essentially the same airplane but each having different errors in their structural analysis. This assumption is only a device for modeling lack of knowledge, or errors, in a probabilistic setting. However, pretending that the distribution represents a large number of aircraft companies helps to motivate the probabilistic setting. For each model we simulate certification testing. If the airplane passes the test, then an entire fleet of airplanes with the same design is assumed to be built, with different members of the fleet having different geometry, loads, and material properties, based on assumed models for the variability in these properties.
That is, the uncertainty due to variability is simulated by considering multiple realizations of the same design, and the uncertainty due to errors is simulated by designing different structures to carry the same loads.

Structural Uncertainties

A good analysis of the different sources of uncertainty is provided by Oberkampf et al. (2000, 2002). Here we simplify the classification, with a view to the question of how to control uncertainty. We propose in Table 3-1 a classification that distinguishes between errors (uncertainties that apply equally to the entire fleet of an aircraft model) and variabilities (uncertainties that vary between individual aircraft). The distinction is important because safety measures usually target one or the other. While variabilities are random uncertainties that can be readily modeled probabilistically, errors are fixed for a given aircraft model (e.g., Boeing 737-400) but are largely unknown. Errors reflect inaccurate modeling of physical phenomena, errors in structural analysis, errors in load calculations, or the use of materials and tooling in construction that are different from those specified by the designer. Systemic errors affect all the copies of the structural components made and are therefore fleet-level uncertainties. They can reflect differences in the analysis, manufacturing and operation of the aircraft from an ideal. The ideal aircraft is one designed assuming that it is possible to perfectly predict structural loads and structural failure for a given structure, that there are no biases in the average material properties and dimensions of the fleet with respect to design specifications, and that there exists an operating environment that on average agrees with the design specifications. The other type of uncertainty reflects variability in material properties, geometry, or loading between different copies of the same structure and is called here individual uncertainty. Table 3-1.
Uncertainty classification

Type of uncertainty: Systemic error (modeling errors)
  Spread: Entire fleet of components designed using the model
  Cause: Errors in predicting structural failure and differences between properties used in design and average fleet properties
  Remedies: Testing and simulation to improve math model and the solution

Type of uncertainty: Variability
  Spread: Individual component level
  Cause: Variability in tooling, manufacturing process, and flying environments
  Remedies: Improve tooling and construction. Quality control.

Safety Measures

Aircraft structural design is still done, by and large, using code-based design rather than probabilistic approaches. Safety is improved through conservative design practices that include the use of safety factors and conservative material properties. It is also improved by tests of components and certification tests that can reveal inadequacies in analysis or construction. In the following we detail some of these safety measures.

Load Safety Factor: Traditionally, all aircraft structures are designed with a load safety factor to withstand 1.5 times the limit load without failure.

A-Basis Properties: In order to account for uncertainty in material properties, the Federal Aviation Administration (FAA) stipulates the use of conservative material properties. These are determined by testing a specified number of coupons selected at random from a batch of material. The A-basis property is the value of a material property exceeded by 99% of the population with 95% confidence.

Component and Certification Tests: Component tests and certification tests of major structural components reduce stress and material uncertainties, for given extreme loads, that are due to inadequate structural models. These tests are conducted in a building-block procedure. First, individual coupons are tested, then a subassembly is tested, followed by a full-scale test of the entire structure.
Since these tests cannot apply every load condition to the structure, they leave uncertainties with respect to some loading conditions. It is possible to reduce the probability of failure by performing more tests to reduce uncertainty or by adding extra structural weight to reduce stresses. If certification tests were designed together with the structure, it is possible that additional tests would become cost-effective because they would allow reduced structural weight. We simulate the effect of these three safety measures by assuming statistical distributions for the uncertainties and incorporating them in approximate probability calculations and Monte Carlo simulation. For variability the simulation is straightforward. However, while systemic errors are uncertain at the time of the design, they will not vary for a single structural component on a particular aircraft. Therefore, to simulate the uncertainty, we assume that we have a large number of nominally identical aircraft being designed (e.g., by Airbus, Boeing, Bombardier, Embraer, etc.), with the errors being fixed for each aircraft. This creates a two-level Monte Carlo simulation, with different aircraft models being considered at the upper level, and different instances of the same aircraft at the lower level. To illustrate the procedure we consider point stress design of a small part of an aircraft structure. Aircraft structures have more complex failure modes, such as fatigue and fracture, which require substantially different treatment and the consideration of the effects of inspections (see Kale et al., 2003). However, this simple example serves to further our understanding of the interaction between various safety measures. The procedure is summarized in Fig. 3-1, which is described in detail in the next section.

Design of a Generic Component

Design and Certification Testing

We assume that we have N different aircraft models; that is, we have N different companies producing essentially the same model, each with different errors.
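The two-level Monte Carlo scheme just described can be sketched in a few lines. This is a minimal, runnable illustration only: the distributions follow the variability model summarized below in Table 3-2, but the sample sizes and the fixed seed are choices made here, and the mean allowable stress (150) is used instead of an A-basis value so that failure counts are large enough to observe with small samples.

```python
import math
import random

rng = random.Random(2006)  # fixed seed, chosen here only for reproducibility

def lognormal(mean, cov):
    """Sample a lognormal variate with a given mean and coefficient of variation."""
    s = math.sqrt(math.log(1.0 + cov * cov))
    return rng.lognormvariate(math.log(mean) - 0.5 * s * s, s)

def simulate(n_models=1500, m_copies=150, b_e=0.5, sf=1.5,
             sigma_allow=150.0, p_d=100.0):
    """Upper level: n_models aircraft models, each with a fixed error e.
    Lower level: m_copies fleet members of each model, with individual variability."""
    cert_failures = 0
    fails_nt = n_nt = 0   # failures/copies counted with no certification screen
    fails_t = n_t = 0     # failures/copies counted for certified models only
    for _ in range(n_models):
        e = rng.uniform(-b_e, b_e)                  # systemic error, fixed per model
        t_d = (1.0 + e) * sf * p_d / sigma_allow    # design thickness, Eq. (3.4)
        w = rng.uniform(0.99, 1.01)                 # as-built certification article
        t = rng.uniform(0.97 * t_d, 1.03 * t_d)
        passed = sf * p_d / (w * t) <= lognormal(150.0, 0.08)  # certification test
        cert_failures += not passed
        for _ in range(m_copies):                   # fleet of nominally identical copies
            w = rng.uniform(0.99, 1.01)
            t = rng.uniform(0.97 * t_d, 1.03 * t_d)
            failed = lognormal(100.0, 0.10) > t * w * lognormal(150.0, 0.08)
            n_nt += 1
            fails_nt += failed
            if passed:
                n_t += 1
                fails_t += failed
    return fails_nt / n_nt, fails_t / max(n_t, 1), cert_failures / n_models
```

With the mean allowable stress, roughly half the models fail certification, and the failure probability of the certified fleet is orders of magnitude lower than that of the unscreened population, in line with the trends reported later in this chapter.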
We consider a generic component to represent the entire aircraft structure. The true stress (\sigma_{true}) is found from the equation

  \sigma_{true} = P / (w t)    (3.1)

where P is the applied load on the component of width w and thickness t. In a more general situation, Eq. (3.1) may apply to a small element in a more complex component. When errors are included in the analysis, the true stress in the component is different from the calculated stress. We include the errors by introducing an error factor e while computing the stress as

  \sigma_{calc} = (1 + e) \sigma_{true}    (3.2)

Positive values of e yield conservative estimates of the true stress and negative values yield unconservative stress estimates. The other random variables account for variability. Combining Eqs. (3.1) and (3.2), the stress in the component is calculated as

  \sigma_{calc} = (1 + e) P / (w t)    (3.3)

The design thickness is determined so that the calculated stress in the component is equal to the material allowable stress for a design load P_d multiplied by a safety factor S_F; hence the design thickness of the component is calculated from Eq. (3.3) as

  t_{design} = (1 + e) S_F P_d / (w_{design} \sigma_a)    (3.4)

where the design component width, w_{design}, is taken here to be 1.0, and \sigma_a is the material stress allowable obtained from testing a batch of coupons according to procedures that depend on design practices. Here, we assume that A-basis properties are used (see Appendix A). During the design process, the only random quantities are \sigma_a and e. The thickness obtained from Eq. (3.4), step A in Fig. 3-1, is the nominal thickness for a given aircraft model. The actual thickness will vary due to individual-level manufacturing uncertainties.
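Equation (3.4) is easy to check numerically. The values below (design load P_d = 100, mean allowable stress 150) are the means used later in Table 3-2; the A-basis allowable would be lower and would therefore give a thicker design.

```python
def design_thickness(e, sigma_a, p_d=100.0, sf=1.5, w_design=1.0):
    """Eq. (3.4): size t so the calculated stress (1+e)*SF*Pd/(w*t)
    equals the allowable stress sigma_a."""
    return (1.0 + e) * sf * p_d / (w_design * sigma_a)

# with no error and the mean allowable stress of 150, t_design = 1.5*100/150 = 1.0
t0 = design_thickness(e=0.0, sigma_a=150.0)
# a +50% (conservative) error inflates the thickness by the same factor
t_hi = design_thickness(e=0.5, sigma_a=150.0)
```

This shows directly how a conservative error (e > 0) thickens, and an unconservative error thins, the nominal design.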
[Figure 3-1 is a flowchart: (A) select a random error e and create a new design; (B) perform the certification test, rejecting failed designs; (C) build a copy of the aircraft; (D) apply the service loads; (E) check whether the aircraft fails under the service loads and count the number of aircraft failed; repeat until M copies are built, then until N different designs are created; finally, calculate the probability of failure.]

Figure 3-1. Flowchart for Monte Carlo simulation of component design and failure

After the component has been designed (that is, the thickness is determined from Eq. (3.4)), we simulate certification testing for the aircraft. Here we assume that the component will not be built with complete fidelity to the design due to variability in geometry (width and thickness). The component is then loaded with the design axial force of S_F times P_d, and the stress in the component is recorded. If this stress exceeds the failure stress (itself a random variable, see Table 3-2) then the design is rejected; otherwise it is certified for use. That is, the airplane is certified (step B in Fig. 3-1) if the following inequality is satisfied

  \sigma - \sigma_f = S_F P_d / (w t) - \sigma_f \le 0    (3.5)

and we can build multiple copies of the airplane. We subject the component in each airplane to actual random maximum (over a lifetime) service loads (step D in Fig. 3-1) and decide whether it fails using Eq. (3.6):

  P > C = t w \sigma_f    (3.6)

Here, P is the applied load, and C is the load-carrying capacity of the structure in terms of the width w, thickness t, and failure stress \sigma_f. A summary of the distributions for the random variables used in design and certification is listed in Table 3-2. Table 3-2.
Distribution of random variables used for component design and certification

Variable            | Distribution | Mean     | Scatter
Plate width (w)     | Uniform      | 1.0      | (1%) bounds
Plate thickness (t) | Uniform      | t_design | (3%) bounds
Failure stress (σf) | Lognormal    | 150.0    | 8% coefficient of variation
Service load (P)    | Lognormal    | 100.0    | 10% coefficient of variation
Error factor (e)    | Uniform      | 0.0      | 10% to 50% bounds

This procedure of design and testing is repeated (steps A-B) for N different aircraft models. For each new model, a different random error factor e is picked for the design, and different allowable properties are generated from coupon testing (Appendix A). Then, in the testing, different thicknesses and widths, and different failure stresses are generated at random from their distributions.

Effect of Certification Tests on Distribution of Error Factor e

One can argue that the way certification tests reduce the probability of failure is by changing the distribution of the error factor e. Without certification testing, we assume a symmetric distribution of this error factor. However, designs based on unconservative models are more likely to fail certification, and so the distribution of e becomes conservative for structures that pass certification. In order to quantify this effect, we calculated the updated distribution of the error factor e. The updated distribution is calculated analytically by Bayesian updating with some approximations, and Monte Carlo simulations are conducted to check the validity of those approximations. Bayesian updating is a commonly used technique to obtain the updated (or posterior) distribution of a random variable upon obtaining new information about the random variable. The new information here is that the component has passed the certification test.
Using Bayes' theorem, the updated (posterior) distribution f^U(\theta) of a random variable \theta is obtained from the initial (prior) distribution f^I(\theta) based on new information as

  f^U(\theta) = \Pr(E \mid \theta) f^I(\theta) / \int \Pr(E \mid \theta) f^I(\theta) \, d\theta    (3.7)

where \Pr(E \mid \theta) is the conditional probability of observing the experimental data E given that the value of the random variable is \theta. For our case, the posterior distribution f^U(e) of the error factor e is given as

  f^U(e) = \Pr(CT \mid e) f^I(e) / \int_{-b}^{b} \Pr(CT \mid e) f^I(e) \, de    (3.8)

where CT is the event of passing certification, and \Pr(CT \mid e) is the probability of passing certification for a given e. Initially, e is assumed to be uniformly distributed. The procedure for calculating \Pr(CT \mid e) is described in Appendix B, where we approximate the distributions of the geometrical variables t and w as lognormal, taking advantage of the fact that their coefficients of variation are small compared to that of the failure stress (see Table 3-2). We illustrate the effect of certification tests for the components designed with A-basis material properties. Initial and updated distributions of the error factor e with a 50% bound are shown in Fig. 3-2, together with a Monte Carlo simulation of 50,000 aircraft models. Figure 3-2 shows that the certification tests greatly reduce the probability of negative error, hence eliminating most unconservative designs. As seen from the figure, the approximate distribution calculated by the analytical approach matches well the distribution obtained from Monte Carlo simulations.

Figure 3-2. Initial and updated probability distribution functions of error factor e. The error bound is 50% and the Monte Carlo simulation was done with a sample size of 50,000.
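Equation (3.8) can be evaluated numerically on a grid. The sketch below is an illustration, not the Appendix B calculation: it goes one step further than the text's approximation and neglects the small geometric variability entirely, so the certification stress for a design with error e is simply sigma_allow/(1+e), with sigma_allow taken as the mean allowable stress of 150.

```python
import math

def lognorm_sf(x, mean=150.0, cov=0.08):
    """P(X >= x) for a lognormal variable with the given mean and c.o.v."""
    s = math.sqrt(math.log(1.0 + cov * cov))
    mu = math.log(mean) - 0.5 * s * s
    return 0.5 * math.erfc((math.log(x) - mu) / (s * math.sqrt(2.0)))

def updated_error_pdf(b=0.5, sigma_allow=150.0, n=2001):
    """Discretized Eq. (3.8): f_U(e) proportional to Pr(CT|e) * f_I(e) on [-b, b],
    with a uniform prior f_I and Pr(CT|e) = P(failure stress >= sigma_allow/(1+e))."""
    de = 2.0 * b / (n - 1)
    es = [-b + i * de for i in range(n)]
    like = [lognorm_sf(sigma_allow / (1.0 + e)) for e in es]
    z = sum(like) * de            # normalizing integral (denominator of Eq. 3.8)
    return es, [v / z for v in like]

es, f_u = updated_error_pdf()
de = es[1] - es[0]
mean_e = sum(e * fe * de for e, fe in zip(es, f_u))
```

Even with this crude likelihood, the posterior mean comes out near 0.25: certification screening shifts the error distribution strongly toward conservative (positive) values, which is the effect Fig. 3-2 displays.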
Probability of Failure Calculation by Analytical Approximation

The stress analysis represented by Eq. (3.1) is trivial, so the computational cost of Monte Carlo simulation of the probability of failure is not high. However, it is desirable to obtain also analytical probabilities that may be used for more complex stress analyses and to check the Monte Carlo simulations. In order to take advantage of simplifying approximations of the distributions of the geometry parameters, it is convenient to perform the probability calculation in two stages, corresponding to the inner and outer loops of Fig. 3-1. That is, we first obtain expressions for the probability of failure of a single aircraft model (that is, given e and the allowable stress). We then calculate the probability of failure over all aircraft models. The mean value of the probability of failure over all aircraft models is calculated as

  \bar{P}_f = \int P_f(t_{design}) f(t_{design}) \, dt_{design}    (3.9)

where t_{design} is the nondeterministic distribution parameter, and f(t_{design}) is the probability density function of t_{design}. It is important to have a measure of the variability in this probability from one aircraft model to another. The standard deviation of the failure probability gives a measure of this variability. In addition, it provides information on how accurate the probability of failure obtained from Monte Carlo simulations is. The standard deviation can be calculated from

  \sigma_{P_f} = [ \int ( P_f(t_{design}) - \bar{P}_f )^2 f(t_{design}) \, dt_{design} ]^{1/2}    (3.10)

Probability of Failure Calculation by Monte Carlo Simulations

The inner loop in Fig. 3-1 (steps C-E) represents the simulation of a population of M airplanes (hence components) that all have the same design. However, each component is different due to variability in geometry, failure stress, and loading (step D). We subject the component in each airplane to actual random maximum (over a lifetime) service loads (step E) and calculate whether it fails using Eq. (3.6).
For airplane models that pass certification, we count the number of components that fail. The failure probability is calculated by dividing the number of failures by the number of airplane models that passed certification, times the number of copies of each model. The analytical approximation for the probability of failure suffers from the approximations used, while the Monte Carlo simulation is subject to sampling errors, especially for low probabilities of failure. Using large samples, though, can reduce the latter. Therefore, we compared the two methods for a relatively large sample of 10,000 aircraft models with 100,000 instances of each model. In addition, the comparison is performed for the case where mean material properties (rather than A-basis properties) are used for the design, so that the probability of failure is high enough for the Monte Carlo simulation to capture it accurately. Table 3-3 shows the results for this case.

Table 3-3. Comparison of probabilities of failure for components designed using a safety factor of 1.5, the mean value for allowable stress, and an error bound of 50%

Value | Analytical approximation | Monte Carlo simulation | % error
Average value of Pf without certification (Pnt) | 1.715×10^-1 | 1.726×10^-1 | 0.6
Standard deviation of Pnt | 3.058×10^-1 | 3.068×10^-1 | 0.3
Average value of Pf with certification (Pt) | 3.166×10^-4 | 3.071×10^-4 | 3.1
Standard deviation of Pt | 2.285×10^-3 | 2.322×10^-3 | 1.6
Average value of initial error factor (e^i) | 0.0000 | 0.00024 | -
Standard deviation of e^i | 0.2887 | 0.2905 | 0.6
Average value of updated error factor (e^up) | 0.2468 | 0.2491 | 0.9
Standard deviation of e^up | 0.1536 | 0.1542 | 0.4
N = 10,000 and M = 100,000 are used in the Monte Carlo simulations.

The last column of Table 3-3 shows the percent error of the analytical approximation compared to Monte Carlo simulations. It is seen that the analytical approximation is in good agreement with the values obtained through Monte Carlo simulations.
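The analytical route of Eqs. (3.9) and (3.10) can be reproduced with elementary numerical integration. In the sketch below (an illustration, not the dissertation's derivation) the geometry variables are approximated as lognormal because their coefficients of variation are small, the mean allowable stress is used so that t_design is uniformly distributed over [1 - b_e, 1 + b_e], and the load and failure-stress parameters are those of Table 3-2; the returned mean and standard deviation land close to the first two rows of Table 3-3.

```python
import math

S_P = math.sqrt(math.log(1.0 + 0.10 ** 2))  # ln-scale std of the service load
S_F = math.sqrt(math.log(1.0 + 0.08 ** 2))  # ln-scale std of the failure stress
S_T = 0.03 / math.sqrt(3.0)                 # c.o.v. of thickness (uniform +/-3% bounds)
S_W = 0.01 / math.sqrt(3.0)                 # c.o.v. of width (uniform +/-1% bounds)
S_ALL = math.sqrt(S_P**2 + S_F**2 + S_T**2 + S_W**2)

def pf_single_model(t_design):
    """P(P > t*w*sigma_f) for one model, treating ln(P/(t*w*sigma_f)) as normal."""
    m = (math.log(100.0) - 0.5 * S_P**2) \
        - math.log(t_design) \
        - (math.log(150.0) - 0.5 * S_F**2)
    return 0.5 * math.erfc(-m / (S_ALL * math.sqrt(2.0)))

def pf_mean_and_std(b_e=0.5, n=4001):
    """Eqs. (3.9)-(3.10) evaluated on a grid for t_design uniform on [1-b_e, 1+b_e]."""
    ts = [(1.0 - b_e) + 2.0 * b_e * i / (n - 1) for i in range(n)]
    pfs = [pf_single_model(t) for t in ts]
    mean = sum(pfs) / n
    return mean, math.sqrt(sum((p - mean) ** 2 for p in pfs) / n)

mean_pf, std_pf = pf_mean_and_std()  # roughly 0.17 and 0.31 (cf. Table 3-3)
```

The large standard deviation relative to the mean is exactly the fleet-to-fleet variability that the text goes on to discuss.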
It is remarkable that the standard deviation of the probability of failure is almost twice the average value of the probability (the ratio, the coefficient of variation, is about 178%) before certification, and about seven times larger after certification. This indicates huge variability in the probability of failure for different aircraft models, which is due to the large error bound, b_e = 50%. With 10,000 different aircraft models (N), the standard deviation in the Monte Carlo estimates is about 1%, and the differences between the Monte Carlo simulation and the analytical approximation are of that order.

Effect of Three Safety Measures on Probability of Failure

We next investigate the effect of the other safety measures on the failure probability of the components using Monte Carlo simulations. We performed the simulation for a range of variability in the error factor e for 5000 airplane models (N samples in the outer loop) and 20,000 copies of each airplane model (M samples in the inner loop). Here, we compare the probability of failure of a structure designed with three safety measures (safety factor, conservative material property, and certification testing) to that of a structure designed without safety measures. Table 3-4 presents the results when all safety measures are used, for different bounds on the error. The second column shows the mean and standard deviation of the design thicknesses generated for components that passed certification. These components correspond to the outer loop of Fig. 3-1. The variability in design thickness is due to the randomness in the error e and in the stress allowable. The average thickness before certification was 1.269, so the column shows the conservative effect of certification testing. When the error bound is 10%, 98.8% of the components pass certification (third column in Table 3-4), and the average thickness is increased by only 0.24% due to the certification process.
On the other hand, when the error bound is 50%, 29% of the components do not pass certification, and this raises the average thickness to 1.453. Thus, the increase in error bound has two opposite effects. Without certification testing, increasing the error bound greatly increases the probability of failure. For example, when the error bound changes from 30% to 50%, the probability of failure without certification changes from 0.00091 to 0.0449, or by a factor of 49. On the other hand, with the increased average thickness, after certification the probability increases only from 1.343×10^-4 to 1.664×10^-4.

Table 3-4. Probability of failure for different bounds on error e for components designed using a safety factor of 1.5 and the A-basis property for allowable stress. Numbers in parentheses denote the coefficient of variation of the quantity. Average design thickness without certification is 1.271.

Error bound b_e | Average design thickness after certification* | Certification failure rate % | Probability of failure after certification (Pt) ×10^-4 | Probability of failure without certification (Pnt) ×10^-4 | Probability ratio (Pt/Pnt) | Probability difference (Pnt - Pt)
50% | 1.453 (0.19) | 29.3 | 1.664 (7.86) | 449.0 (2.74) | 3.706×10^-3 | 4.473×10^-2
40% | 1.389 (0.17) | 24.3 | 1.586 (6.92) | 89.77 (3.22) | 1.767×10^-2 | 8.818×10^-3
30% | 1.329 (0.15) | 16.3 | 1.343 (5.28) | 9.086 (3.46) | 1.479×10^-1 | 7.742×10^-4
20% | 1.283 (0.12) | 6.2 | 0.304 (4.81) | 0.477 (3.51) | 6.377×10^-1 | 1.727×10^-5
10% | 1.272 (0.07) | 1.2 | 0.027 (4.71) | 0.029 (4.59) | 9.147×10^-1 | 2.490×10^-7
*Average over N = 5000 models

The effectiveness of the certification tests can be expressed by two measures of probability improvement. The first measure is the ratio of the probability of failure with the test, Pt, to the probability of failure without tests, Pnt. The second measure is the difference of these probabilities. The ratio is a more useful indicator for low probabilities of failure, while the difference is more meaningful for high probabilities of failure.
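Both the factor-of-49 claim and the two effectiveness measures can be checked directly from the numbers quoted above (the Pnt values from the text and the 50% row of Table 3-4):

```python
# Pnt values quoted in the text for error bounds of 30% and 50% (no certification)
pnt_30, pnt_50 = 0.00091, 0.0449
growth_factor = pnt_50 / pnt_30   # about 49, as stated in the text

# the two test-effectiveness measures, illustrated on the 50% row of Table 3-4
pt, pnt = 1.664e-4, 449.0e-4
prob_ratio = pt / pnt             # about 3.7e-3
prob_difference = pnt - pt        # about 4.5e-2
```

The ratio says the certified fleet is roughly 270 times safer; the difference says certification prevents about 4.5 failures per hundred components over a lifetime, which is the interpretation the text returns to later.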
However, when Pt is high, the ratio can mislead. That is, an improvement from a probability of failure of 0.5 to 0.1 is more substantial than an improvement from 0.1 to 0.01, because it "saves" more airplanes. Conversely, the ratio is more useful when the probabilities are small, and the difference is then not very informative. Table 3-4 shows that certification testing is more important for large error bounds b_e. For these higher values, the number of components that did not pass certification is higher, thereby reducing the failure probability for those that passed certification. While the effect of component tests (building-block tests) is not simulated, their main effect is to reduce the error magnitude e. This is primarily due to the usefulness of component tests in improving analytical models and revealing unmodeled failure modes. With that in mind, we note that the failure probability for the 50% error range is 1.7×10^-4, and it reduces to 2.7×10^-6 for the 10% error range, that is, by a factor of 63. The actual failure probability of aircraft components is expected to be of the order of 10^-8 per flight, much lower than the best number in the fourth column of Table 3-4. However, the number in Table 3-4 is for the lifetime of a single structural component. Assuming about 10,000 flights in the life of a component and 100 independent structural components, a 10^-5 lifetime failure probability for a component translates to a per-flight probability of failure of 10^-7 per airplane. This factor-of-10 discrepancy is exacerbated by other failure modes, like fatigue, that have not been considered. However, other safety measures, such as conservative load specifications, may account for this discrepancy. Table 3-5 shows results when average rather than conservative material properties are used.
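The per-flight translation in the paragraph above is simple arithmetic:

```python
lifetime_pf_component = 1.0e-5     # order-of-magnitude lifetime probability from the text
flights_per_lifetime = 10_000
components_per_airplane = 100

per_flight_per_component = lifetime_pf_component / flights_per_lifetime       # 1e-9
per_flight_per_airplane = per_flight_per_component * components_per_airplane  # 1e-7
```

Comparing the resulting 10^-7 per airplane-flight against the expected 10^-8 gives the factor-of-10 discrepancy discussed above.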
It can be seen from Table 3-5 that the average thickness determined using the mean value of the allowable stress is lower than that determined using the A-basis value (Table 3-4). That is, the use of A-basis properties is equivalent to adding an additional safety factor over the already existing safety factor of 1.5. For the distribution (lognormal with 8% coefficient of variation) and number of batch tests (40 tests) considered here, a typical value of the safety factor due to the A-basis property is around 1.27.

Table 3-5. Probability of failure for different bounds on error e for components designed using a safety factor of 1.5 and the mean value for allowable stress. Numbers in parentheses denote the coefficient of variation of the quantity. Average design thickness without certification is 1.000.

Error bound b_e | Average design thickness after certification* | Certification failure rate %† | Probability of failure after certification (Pt) ×10^-4 | Probability of failure without certification (Pnt) ×10^-4 | Probability ratio (Pt/Pnt) | Probability difference (Pnt - Pt)
50% | 1.243 (0.13) | 50.1 | 3.420 (5.82) | 1681 (1.81) | 2.035×10^-3 | 1.677×10^-1
40% | 1.191 (0.11) | 50.1 | 4.086 (6.78) | 969.0 (1.99) | 4.217×10^-3 | 9.649×10^-2
30% | 1.139 (0.09) | 50.8 | 5.616 (5.45) | 376.6 (2.00) | 1.495×10^-2 | 3.700×10^-2
20% | 1.086 (0.07) | 50.7 | 6.253 (3.19) | 92.67 (1.83) | 6.748×10^-2 | 8.642×10^-3
10% | 1.029 (0.05) | 51.0 | 9.209 (1.70) | 19.63 (1.25) | 4.690×10^-1 | 1.043×10^-3
*Average over N = 5000 models
†With only 5000 models, the standard deviation in the certification failure rate is about 0.71%. Thus, all the numbers in this column are about 50%, as may be expected when mean material properties are used.

Without the A-basis properties, the stress in the certification test is approximately equal to the average ultimate service stress, so that about 50% of the components fail certification.
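The quoted factor of about 1.27 can be checked with a standard approximation to the one-sided normal tolerance factor, applied in log space for a lognormal population. This is a back-of-the-envelope check, not the Appendix A procedure; the closed-form k-factor approximation used here is a common substitute for the exact noncentral-t value.

```python
import math

def k_factor(n, zp=2.3263, zc=1.6449):
    """Approximate one-sided tolerance factor for 99% coverage (zp) with
    95% confidence (zc) from n samples of a normal population."""
    a = 1.0 - zc**2 / (2.0 * (n - 1))
    b = zp**2 - zc**2 / n
    return (zp + math.sqrt(zp**2 - a * b)) / a

def abasis_safety_factor(n=40, cov=0.08):
    """Ratio of the mean failure stress to the A-basis value for a lognormal
    population, applying the normal tolerance factor to ln(stress)."""
    s = math.sqrt(math.log(1.0 + cov * cov))   # ln-scale standard deviation
    # mean = exp(mu + s^2/2), A-basis ~ exp(mu - k*s), so the ratio is:
    return math.exp(k_factor(n) * s + 0.5 * s * s)
```

For n = 40 coupons and an 8% coefficient of variation this comes out near 1.27, consistent with the value quoted above.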
When the errors are large, this raises substantially the average thickness of the components that pass certification, so that for an error bound of 50% the certification test is equivalent to a safety factor of 1.243. Large errors produce some super-strong and some super-weak components (see Fig. 3-3b). The super-weak components are mostly caught by the certification tests, leaving the super-strong components to reduce the probability of failure. Another way of looking at this effect is to note that when there are no errors, there is no point to the tests. Indeed, it can be seen that the probability of failure without certification tests improves with reduced error bound e, but that the reduced effect of the certification tests reverses the trend. Thus, for this case we obtain the counterintuitive result that larger errors produce safer designs. Comparing the first row of Table 3-5 to that of Table 3-3, we see the effect of the smaller sample for the Monte Carlo simulations. Table 3-3 was obtained with 10,000 models and 100,000 copies per model, while Table 3-5 was obtained with 5000 models and 20,000 copies per model. The difference in the probability of failure after certification between the two tables is about 11 percent. However, the two values straddle the analytical approximation. The effects of building-block types of tests that are conducted before certification are not included in this study. These tests reduce the errors in the analytical models. For instance, if there is 50% error in the analytical model, the building-block tests may reduce this error to lower values. Hence, the difference between the rows of Table 3-4 may be viewed as indicating the benefits of reducing the error by building-block tests.

[Figure 3-3 sketches the distribution of design thickness for (a) designs with a low error bound and (b) designs with a high error bound, with the minimum thickness that can pass the certification test marked on the thickness axis.]

Figure 3-3. Design thickness variation with low and high error bounds.
Note that after certification testing only the designs above the minimum thickness are built and flown. Those on the right have a much higher average design thickness than those on the left. Table 3-6 shows the effect of not using a safety factor. Although certification tests improve the reliability, again with the general trend of greater improvement at higher error, the lack of the 1.5 safety factor limits the improvement. Comparing Tables 3-4 and 3-6, it can be seen that the safety factor reduces the probability of failure by two to four orders of magnitude. It is interesting to note that the effect of the error bound on the probability of failure after certification is not monotonic; this phenomenon is discussed in Appendix C.

Table 3-6. Probability of failure for different bounds on error e for a safety factor of 1.0 and A-basis allowable stress. Numbers in parentheses denote the c.o.v. of the quantity. Average design thickness without certification is 0.847.

Error bound b_e | Average design thickness after certification* | Certification failure rate % | Failure probability after certification (Pt) ×10^-2 | Failure probability with no certification (Pnt) ×10^-2 | Probability ratio (Pt/Pnt) | Probability difference (Pnt - Pt)
50% | 0.969 (0.19) | 29.4 | 6.978 (2.12) | 29.49 (1.31) | 2.366×10^-1 | 2.251×10^-1
40% | 0.929 (0.17) | 25.0 | 7.543 (1.98) | 24.56 (1.38) | 3.071×10^-1 | 1.702×10^-1
30% | 0.886 (0.15) | 16.6 | 8.923 (1.73) | 17.11 (1.43) | 5.216×10^-1 | 8.184×10^-2
20% | 0.855 (0.11) | 5.7 | 8.171 (1.40) | 9.665 (1.34) | 8.454×10^-1 | 1.494×10^-2
10% | 0.847 (0.06) | 1.3 | 4.879 (0.97) | 4.996 (0.97) | 9.767×10^-1 | 1.163×10^-3
*Average over N = 5000 models

Table 3-7 shows results when the only safety measure is certification testing. Certification tests can then reduce the probability of failure of components by as much as 38% in absolute terms; the highest improvement corresponds to the highest error. As can be expected, without certification tests and other safety measures, the probability of failure is near 50%.
Tables 3-4 through 3-7 illustrate the probability of failure for a fixed 8% coefficient of variation in the failure stress. The general conclusion that can be drawn from these results is that the error bound e is one of the main parameters affecting the efficacy of certification tests in improving the reliability of components.

Table 3-7. Probability of failure for different error bounds for a safety factor of 1.0 and the mean value for allowable stress. Average design thickness without certification is 0.667.

Error bound b_e | Average design thickness after certification* | Certification failure rate % | Probability of failure after certification (Pt) | Probability of failure without certification (Pnt) | Probability ratio (Pt/Pnt) | Probability difference (Pnt - Pt)
50% | 0.830 (0.12) | 50.1 | 0.125 (1.39) | 0.505 (0.83) | 2.463×10^-1 | 3.808×10^-1
40% | 0.796 (0.11) | 50.2 | 0.158 (1.20) | 0.504 (0.79) | 3.140×10^-1 | 3.459×10^-1
30% | 0.761 (0.09) | 50.4 | 0.205 (0.92) | 0.503 (0.72) | 4.075×10^-1 | 2.981×10^-1
20% | 0.727 (0.08) | 50.9 | 0.285 (0.64) | 0.503 (0.58) | 5.653×10^-1 | 2.189×10^-1
10% | 0.686 (0.05) | 50.7 | 0.412 (0.34) | 0.500 (0.34) | 8.228×10^-1 | 8.869×10^-2
*Average over N = 5000 models

Next, we explore how another parameter, variability, influences the efficacy of tests. This is accomplished by changing the coefficient of variation of the failure stress σf between 0 and 16% while keeping the error bound constant.

Table 3-8. Probability of failure for different uncertainty in failure stress for components designed with a safety factor of 1.5, 50% error bound e, and A-basis allowable stress.
Coefficient of variation of σf | Average design thickness without certification* | Average design thickness after certification* | Certification failure rate % | Probability of failure after certification (Pt) ×10^-4 | Probability of failure without certification (Pnt) ×10^-4 | Probability ratio (Pt/Pnt) | Probability difference (Pnt - Pt)
0% | 0.998 (0.29) | 1.250 (0.11) | 50.2 | 0.017 (6.85) | 1699 (1.87) | 1.004×10^-5 | 1.698×10^-1
4% | 1.127 (0.29) | 1.347 (0.15) | 38.0 | 0.087 (7.20) | 970.4 (2.35) | 8.973×10^-5 | 9.703×10^-2
8% | 1.269 (0.29) | 1.453 (0.19) | 29.3 | 1.664 (7.86) | 449.0 (2.74) | 3.706×10^-3 | 4.473×10^-2
12% | 1.431 (0.29) | 1.574 (0.22) | 20.9 | 13.33 (7.71) | 206.1 (3.08) | 6.469×10^-2 | 1.927×10^-2
16% | 1.616 (0.30) | 1.723 (0.25) | 14.1 | 22.52 (5.54) | 107.3 (3.24) | 2.100×10^-1 | 8.476×10^-3
*Average over N = 5000 models

Table 3-9. Probability of failure for different uncertainty in failure stress for components designed with a safety factor of 1.5, 30% error bound e, and A-basis allowable stress.

Coefficient of variation of σf | Average design thickness without certification* | Average design thickness after certification* | Certification failure rate % | Probability of failure after certification (Pt) ×10^-4 | Probability of failure without certification (Pnt) ×10^-4 | Probability ratio (Pt/Pnt) | Probability difference (Pnt - Pt)
0% | 1.001 (0.17) | 1.148 (0.08) | 50.1 | 0.026 (4.79) | 223.8 (2.50) | 1.163×10^-4 | 2.238×10^-2
4% | 1.126 (0.17) | 1.232 (0.11) | 31.6 | 0.146 (6.03) | 35.25 (2.97) | 4.149×10^-3 | 3.511×10^-3
8% | 1.269 (0.17) | 1.329 (0.15) | 16.3 | 1.343 (5.28) | 9.086 (3.46) | 1.479×10^-1 | 7.742×10^-4
12% | 1.431 (0.18) | 1.459 (0.17) | 7.2 | 2.404 (3.87) | 4.314 (3.45) | 5.572×10^-1 | 1.911×10^-4
16% | 1.617 (0.18) | 1.630 (0.18) | 3.3 | 2.513 (3.73) | 3.102 (3.54) | 8.099×10^-1 | 5.896×10^-5
*Average over N = 5000 models

Table 3-10.
Probability of failure for different uncertainty in failure stress for components designed using a safety factor of 1.5, 10% error bound e, and A-basis properties.

Coefficient of variation of σf | Average design thickness without certification* | Average design thickness after certification* | Certification failure rate % | Probability of failure after certification (Pt) ×10^-4 | Probability of failure without certification (Pnt) ×10^-4 | Probability ratio (Pt/Pnt) | Probability difference (Pnt - Pt)
0% | 1.000 (0.06) | 1.048 (0.03) | 50.3 | 0.075 (2.91) | 1.745 (1.78) | 4.304×10^-2 | 1.669×10^-4
4% | 1.126 (0.06) | 1.131 (0.06) | - | 0.053 (3.85) | 0.070 (3.56) | 7.548×10^-1 | 1.716×10^-6
8% | 1.269 (0.06) | 1.272 (0.07) | 1.2 | 0.027 (4.71) | 0.029 (4.59) | 9.147×10^-1 | 2.490×10^-7
12% | 1.431 (0.07) | 1.432 (0.07) | 0.8 | 0.049 (4.30) | 0.051 (4.23) | 9.623×10^-1 | 1.926×10^-7
16% | 1.623 (0.08) | 1.624 (0.08) | 0.5 | 0.085 (3.50) | 0.083 (3.55) | 9.781×10^-1 | 1.853×10^-7
*Average over N = 5000 models

The increase in the variability of the failure stress has a large effect on the allowable stress because A-basis properties specify an allowable that is below 99% of the sample. Increased variability reduces the allowable stress and therefore increases the design thickness. It is seen from Tables 3-8 through 3-10 that when the variability increases from 0% to 16%, the design thickness increases by more than 60%. This greatly reduces the probability of failure without certification. However, the probability of failure with certification still deteriorates. That is, the use of A-basis properties fails to fully compensate for the variability in material properties. This opposite behavior of the probability of failure before and after certification is discussed in more detail in Appendix C. The variability in the failure stress greatly changes the effect of certification tests.
Although the average design thicknesses of the components increase with increasing variability, we see that when the variability is large the value of the tests is reduced, because the tested aircraft can be greatly different from the airplanes in actual service. We indeed see from Tables 3-8, 3-9, and 3-10 that the effect of certification tests is reduced as the variability in the failure stress increases. Recall that the effect of certification tests is also reduced when the error e decreases. Indeed, Table 3-10 (10% error bound) shows a much smaller effect of the tests than Table 3-8 (50% error bound). Comparing the second and third columns of Tables 3-8, 3-9, and 3-10, we see that as the bound of the error decreases, the change in the average design thickness of the components becomes smaller, which is an indication of a loss in the efficacy of the certification tests. Up to now, both the probability difference (Pnt - Pt) and the probability ratio (Pt/Pnt) have seemed to be good indicators of the efficacy of tests. To allow easy visualization, we combined the errors and the variability in a single ratio, (Bound of e)/V_R(σ/σf), the ratio of the error bound e to the coefficient of variation of the stress ratio. The denominator accounts for the major contributors to the variability. The value in the denominator is a function of four variables: the service load P, width w, thickness t, and failure stress σf. Here, P and σf have lognormal distributions, but w and t are uniformly distributed. Since the coefficients of variation of w and t are very small, they can also be treated as lognormally distributed to make the calculation of the denominator easy while plotting the graphs.
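For the Table 3-2 distributions, this combined coefficient of variation evaluates to about 0.13. The short computation below converts the uniform bounds to coefficients of variation via half-range/(sqrt(3) × mean):

```python
import math

# coefficients of variation of the four contributors (Table 3-2)
v_load  = 0.10                     # service load P, lognormal
v_fail  = 0.08                     # failure stress, lognormal
v_width = 0.01 / math.sqrt(3.0)    # width, uniform with +/-1% bounds
v_thick = 0.03 / math.sqrt(3.0)    # thickness, uniform with +/-3% bounds

v_ratio = math.sqrt(v_load**2 + v_width**2 + v_thick**2 + v_fail**2)
# v_ratio is about 0.129, so a 50% error bound gives (Bound of e)/V_R of about 3.9
```

The load and failure-stress terms dominate; the geometric terms contribute only a few percent of the total variance, which is why treating w and t as lognormal is harmless here.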
Since the standard deviations of the variables are small, the denominator is then the square root of the sum of the squares of the coefficients of variation of the four variables mentioned above, that is

VR(σ/σf) = √( V²(P) + V²(w) + V²(t) + V²(σf) )    (3.11)

The effective safety factor is the ratio of the design thickness of the component when safety measures (such as the use of A-basis values for material properties and the safety factor) are applied to the thickness of the component when no safety measures are taken. Figures 3-4 and 3-5 present the Pt/Pnt ratio in visual formats. It can be seen that, as expected, the ratio decreases as the (Bound of e)/VR(σ/σf) ratio increases. However, these two figures do not give a clear indication of how certification tests are influenced by the effective safety factor.

[Figures 3-4 and 3-5 omitted: influence of effective safety factor, error, and variability on the probability ratio (3-D view and 2-D plot).]

Figures 3-6 and 3-7 show the probability difference, Pnt − Pt. In these cases, the dependence on the effective safety factor is monotonic. As expected, it is seen that as the effective safety factor increases, the improvement in the safety of the component decreases, meaning that the certification tests become less useful. The probability difference is more descriptive, as it is proportional to the number of aircraft failures prevented by certification testing. The probability ratio lacks such a clear physical interpretation, even though it is a more attractive measure when the probability of failure is very small. Considering the results presented in Figures 3-4 through 3-7, the probability difference (Pnt − Pt) is the more appropriate choice for expressing the effectiveness of tests.
[Figures 3-6 and 3-7 omitted: influence of effective safety factor, error, and variability on the probability difference (3-D view and 2-D plot).]

Summary

We have used a simple example of point stress design for yield to illustrate the effects of several safety measures taken in aircraft design: safety factors, conservative material properties, and certification tests. Analytical calculations and Monte Carlo simulations were performed to account for both fleet-level uncertainties (such as errors in analytical models) and individual uncertainties (such as variability in material properties). It was seen that an increase in the systemic errors in the analysis causes an increase in the probability of failure. We found that the systemic errors can be reduced by the use of certification tests, thereby reducing the probability of failure. We also found that the design thicknesses of the components increased as the bounds of the systemic errors increased. We found that the effect of certification tests is most important when errors in analytical models are high and when the variability between airplanes is low. This leads to the surprising result that in some situations larger error variability in analytical models reduces the probability of failure if certification tests are conducted. For the simple example analyzed here, the use of conservative (A-basis) material properties was equivalent to a safety factor of up to 1.6, depending on the scatter in failure stresses. The effectiveness of the certification tests is expressed by two measures of probability improvement. The ratio of the probability of failure with the test, Pt, to the probability of failure without tests, Pnt, is useful when Pt is small. The difference is more meaningful when the probability is high.
Using these measures we have shown that the effectiveness of certification tests increases when the ratio of error to variability is large and when the effective safety factor is small. The effect of building-block type tests that are conducted before certification was not assessed here. However, these tests reduce the errors in the analytical models, and on that basis we determined that they can reduce the probability of failure by one or two orders of magnitude. The calculated probabilities of failure with all the considered safety margins explain why passenger aircraft are so safe structurally. They were still somewhat high, about 10⁻⁷, compared to the probability of failure of actual aircraft structural components, about 10⁻⁸. This may be due to additional safety measures, such as conservative design loads, or to the effect of design against additional failure modes.

CHAPTER 4
COMPARING EFFECTIVENESS OF MEASURES THAT IMPROVE AIRCRAFT STRUCTURAL SAFETY

Chapter 3 explored how safety measures compensate for errors and variability. The major finding of that chapter was that certification tests are most effective when errors are large, variability is low, and the overall safety factor is low. Chapter 3 mainly focused on the effectiveness of certification testing; the relative effectiveness of the safety measures was not addressed. The present chapter takes a further step and aims to discover how measures that improve aircraft structural safety compare with one another in terms of weight effectiveness. In addition, structural redundancy (another safety measure) is included in the analysis, and the simple error model of Chapter 3 is replaced by a more detailed error model. A comparison of the effectiveness of error and variability reduction with the other safety measures is also given. The research presented in this chapter has been submitted for publication (Acar et al. 2006d). My colleague Dr. Amit Kale's contribution to this work is acknowledged.
Introduction

As noted earlier, aircraft structural design is still carried out by code-based design, rather than probabilistic design. Safety is improved through conservative design practices that include the use of safety factors and conservative material properties. Safety is also improved by testing of components, redundancy, improved modeling to reduce errors, and improved manufacturing to reduce variability. The following gives a brief description of these safety measures.

Load Safety Factor

In transport aircraft design, FAA regulations mandate the use of a load safety factor of 1.5 (FAR 25.303). That is, aircraft structures are designed to withstand 1.5 times the limit load without failure.

Conservative Material Properties

In order to account for uncertainty in material properties, FAA regulations mandate the use of conservative material properties (FAR 25.613). The conservative material properties are characterized as A-basis and B-basis material property values, and the choice between A-basis and B-basis values depends on the redundancy of the structure. If there is a single failure path in the structure, A-basis values are used, while for the case of multiple failure paths (i.e., redundant structures), B-basis values are used. Detailed information on these values is provided in Chapter 8 of Volume 1 of the Composite Materials Handbook (2000). The basis values are determined by testing a number of coupons selected at random from a material batch. The A-basis value is the value of a material property exceeded by 99% of the population with 95% confidence, while the B-basis value is the value of a material property exceeded by 90% of the population with 95% confidence. Here, we take the redundancy of the structure into account, so we use B-basis values (see Appendix A for the B-basis value calculation). The number of coupon tests is assumed to be 40.
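As an illustration, the one-sided tolerance-limit calculation behind the basis values can be sketched as follows. This is a sketch only: the dissertation's Appendix A gives the actual B-basis procedure, the coupon data here are simulated, and the tolerance factor uses the classical Natrella closed-form approximation rather than the exact noncentral-t value.

```python
import math, random

def tolerance_factor(n, p_exceed, conf):
    """Approximate one-sided normal tolerance factor k (Natrella's
    approximation): basis = mean - k*std guarantees that a fraction
    p_exceed of the population exceeds the basis value with confidence conf."""
    z = {0.90: 1.282, 0.95: 1.645, 0.99: 2.326}  # standard normal quantiles
    zp, za = z[p_exceed], z[conf]
    a = 1.0 - za**2 / (2.0 * (n - 1))
    b = zp**2 - za**2 / n
    return (zp + math.sqrt(zp**2 - a * b)) / a

# Simulated batch of 40 coupon failure stresses (mean 150, 8% c.o.v.,
# matching the variability assumptions used later in this chapter).
random.seed(1)
coupons = [random.gauss(150.0, 12.0) for _ in range(40)]
mean = sum(coupons) / len(coupons)
std = math.sqrt(sum((x - mean)**2 for x in coupons) / (len(coupons) - 1))

k_B = tolerance_factor(40, 0.90, 0.95)   # B-basis: 90% exceedance, 95% conf.
k_A = tolerance_factor(40, 0.99, 0.95)   # A-basis: 99% exceedance, 95% conf.
print(f"k_B = {k_B:.3f}, B-basis = {mean - k_B*std:.1f}")
print(f"k_A = {k_A:.3f}, A-basis = {mean - k_A*std:.1f}")
```

For n = 40 the approximation gives k_B ≈ 1.69 and k_A ≈ 2.93, close to the tabulated exact values, and it shows why the A-basis allowable is substantially more conservative than the B-basis allowable for the same batch of coupons.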
Tests

Tests of major structural components reduce the stress and material uncertainties, for given extreme loads, that are due to inadequate structural models. These tests are conducted in a building-block procedure (Composite Materials Handbook (2000), Volume 1, Chapter 2). First, individual coupons are tested, then a subassembly is tested, followed by a full-scale test of the entire structure. Here, we only consider the final certification test for an aircraft. The other tests are assumed to be error reduction measures, and their effect is analyzed indirectly by considering the effect of error reduction.

Redundancy

Transport airliners are designed with double and triple redundancy features in all major systems to minimize the probability of failure. Redundancy is intended to ensure that a single component failure does not lead to catastrophic failure of the system. In the present work, we assume that an aircraft structure will fail only if two local failures occur in the structure.

Error Reduction

Improvements in the accuracy of structural analysis and failure prediction of aircraft structures reduce errors and enhance the level of safety of the structures. These improvements may be due to better modeling techniques developed by researchers, more detailed finite element models made possible by faster computers, or more accurate failure predictions due to extensive testing.

Variability Reduction

Examples of mechanisms that reduce variability in material properties include quality control and improved manufacturing processes. Variability in damage and ageing effects is reduced through inspections and structural health monitoring. Variability in loads may be reduced by better pilot training and by information that allows pilots to avoid regions of high turbulence more effectively. Here we investigate only the effect of reduced variability in material properties.
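The redundancy assumption above (system failure requires two correlated local failures) is evaluated later in the chapter with a parallel-system formula, Eq. (4.24). A minimal sketch of that calculation, assuming the standard bivariate-normal result for two equally reliable elements, is shown below; the midpoint-rule integration and bisection inverse CDF are implementation conveniences, not the dissertation's code.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection (adequate for a sketch)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def system_pf(pf_single, rho, n=10_000):
    """Two-element parallel system with equal element failure probabilities
    and correlated normal limit-states:
    P_FS = Pf^2 + (1/2pi) * int_0^rho exp(-beta^2/(1+z)) / sqrt(1-z^2) dz."""
    beta = -phi_inv(pf_single)           # Pf = Phi(-beta)
    h = rho / n
    integral = 0.0
    for i in range(n):                   # midpoint rule on [0, rho]
        z = (i + 0.5) * h
        integral += math.exp(-beta**2 / (1.0 + z)) / math.sqrt(1.0 - z**2)
    integral *= h
    return pf_single**2 + integral / (2.0 * math.pi)

pf = 3.79e-4                             # single-part Pf (Table 4-4, k = 1)
print(f"system Pf (rho=0.5): {system_pf(pf, 0.5):.3e}")
print(f"system Pf (rho=0.0): {system_pf(pf, 0.0):.3e}")  # independent: Pf^2
```

With rho = 0 the integral vanishes and the system probability reduces to Pf squared (independent elements); increasing the correlation raises the system probability toward the single-element value.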
The next section of this chapter discusses the more detailed error model used in this chapter, along with variability and the total safety factor. Next, the effect of certification tests on the error distribution is analyzed. Then, details of the calculation of the probability of failure via separable Monte Carlo simulation (MCS) are given. Finally, the chapter closes with the results and a summary.

Errors, Variability and Total Safety Factor

The simplified uncertainty classification used in Chapter 3 is also used in this chapter: errors are uncertainties that apply equally to the entire fleet of an aircraft model, and variabilities are uncertainties that vary among the individual aircraft (see Table 3-1, Chapter 3). This section first discusses the errors in design and construction. Next, the total error factor and the total safety factor are introduced. Finally, the simulation of variability is discussed.

Errors in Design

We consider static point stress design for simplicity. Other types of failure, such as fatigue, corrosion or crack instability, are not taken into account. We assume that an aircraft structure will fail only if two local failure events occur. For example, we assume that the wing will fail structurally if two local failures occur at the wing panels. The correlation coefficient between the probabilities of these two events is assumed to be 0.5. Before starting the structural design, aerodynamic analysis needs to be performed to determine the loads acting on the aircraft. However, the calculated design load value, Pcalc, differs from the actual loading Pd under conditions corresponding to FAA design specifications (e.g., gust-strength specifications). Since each company has different design practices, the error in load calculation, ep, differs from one company to another.
The calculated design load Pcalc is expressed in terms of the true design load Pd as

Pcalc = (1 + ep) Pd    (4.1)

Besides the error in load calculation, an aircraft company may also make errors in stress calculation. We consider a small region in a structural part, characterized by a thickness t and width w, that resists the load in that region. The value of the stress in a structural part calculated by the stress analysis team, σcalc, can be expressed in terms of the load value calculated by the load team, Pcalc, the design width wdesign, and the thickness t of the structural part by introducing the term eσ, representing the error in the stress analysis:

σcalc = (1 + eσ) Pcalc / (wdesign t)    (4.2)

Equation (4.3) is used by a structural designer to calculate the design thickness tdesign required to carry the calculated design load times the safety factor SFL. That is,

tdesign = (1 + eσ) SFL Pcalc / (wdesign (σa)calc) = (1 + eσ)(1 + ep) SFL Pd / (wdesign (σa)calc)    (4.3)

where (σa)calc is the value of the allowable stress for the structure used in the design, which is calculated based on coupon tests using failure models such as Tresca or von Mises. Since these failure theories are not exact, we have

(σa)calc = (1 − ef)(σa)true    (4.4)

where ef is the error associated with failure prediction. Moreover, the errors due to the limited amount of coupon testing used to determine the allowables, and the differences between the material properties used by the designer and the average true properties of the material used in production, are also included in this error. Note that the formulation of Eq. (4.4) differs from that of Eqs. (4.1) and (4.2) in that the sign in front of the error factor ef is negative, because we consistently formulate the expressions such that positive error implies a conservative decision. Combining Eqs. (4.3) and (4.4), we can express the design value of the load carrying area as

Adesign = wdesign tdesign = (1 + eσ)(1 + ep) SFL Pd / ((1 − ef)(σa)true)    (4.5)

Errors in Construction

In addition to the above errors, there will also be construction errors in the geometric parameters. These construction errors represent the difference between the values of these parameters in an average airplane (fleet-average) built by an aircraft company and the design values of these parameters. The error in width, ew, represents the deviation of the design width of the structural part, wdesign, from the average value of the width of the structural part built by the company, w̄built. Thus,

w̄built = (1 + ew) wdesign    (4.6)

Similarly, the built thickness value will differ from its design value such that

t̄built = (1 + et) tdesign    (4.7)

Then, the built load carrying area Abuilt can be expressed using the first equality of Eq. (4.5) as

Abuilt = (1 + et)(1 + ew) Adesign    (4.8)

Table 4-1 presents the nominal values of the error bounds assumed here. In the results section of this chapter we will vary these error bounds and investigate the effects of these changes on the probability of failure. As seen in Table 4-1, the error having the largest bound in its distribution is the error in failure prediction ef, because we also use it to model the likelihood of unexpected failure modes.

Table 4-1. Distribution of error factors and their bounds
Error factor                          Distribution  Mean  Bounds
Error in stress calculation, eσ       Uniform       0.0   ±5%
Error in load calculation, ep         Uniform       0.0   ±10%
Error in width, ew                    Uniform       0.0   ±1%
Error in thickness, et                Uniform       0.0   ±2%
Error in failure prediction, ef       Uniform       0.0   ±20%

The errors here are modeled by uniform distributions, following the principle of maximum entropy. For instance, the error in the built thickness of a structural part, et, is defined in terms of the error bound (bt)built via

et = U[0, (bt)built]    (4.9)

Here 'U' indicates that the distribution is uniform and '0 (zero)' is the average value of et. Table 4-1 shows that (bt)built = 0.02.
Hence, the lower bound for the thickness value is the average value minus 2% of the average, and the upper bound is the average value plus 2% of the average. Commonly available random number generators provide random numbers uniformly distributed between 0 and 1. The error in the built thickness can then be calculated from such a random number r as

et = (2r − 1)(bt)built    (4.10)

Total Error Factor

The expression for the built load carrying area, Abuilt, of a structural part can be reformulated by combining Eqs. (4.5) and (4.8) as

Abuilt = (1 + etotal) SFL Pd / (σa)true    (4.11)

where

etotal = (1 + eσ)(1 + ep)(1 + et)(1 + ew) / (1 − ef) − 1    (4.12)

Here etotal represents the cumulative effect of the individual errors (eσ, ep, etc.) on the load carrying capacity of the structural part.

Total Safety Factor

The total safety factor, SF, of a structural part represents the effects of all safety measures and errors on the built structural part. Without safety measures and errors, we would have the load carrying area A0 required to carry the design load Pd:

A0 = Pd / σ̄f    (4.13)

where σ̄f is the average value of the failure stress. Then, the total safety factor of a built structural component can be defined as the ratio Abuilt/A0:

(SF)built = Abuilt / A0 = (1 + etotal) (σ̄f / (σa)true) SFL    (4.14)

Here we take SFL = 1.5, and the conservative material properties are based on B-basis values. Certification tests add another layer of safety. Structures with large negative (unconservative) total error fail certification, so the certification process adds safety by biasing the distribution of etotal.
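The error model above is straightforward to simulate. The sketch below (an illustration, not the dissertation's actual code) samples the five error factors from Table 4-1, combines them via Eq. (4.12), and forms the built total safety factor of Eq. (4.14); the ratio σ̄f/(σa)true is set to a plausible placeholder value of 150/132.

```python
import random

random.seed(0)
SFL = 1.5              # load safety factor (FAR 25.303)
RATIO = 150.0 / 132.0  # assumed mean failure stress / allowable (placeholder)
BOUNDS = {"es": 0.05, "ep": 0.10, "ew": 0.01, "et": 0.02, "ef": 0.20}

def sample_errors():
    # Each error factor is uniform with zero mean (Table 4-1, Eq. (4.10)).
    return {k: (2.0 * random.random() - 1.0) * b for k, b in BOUNDS.items()}

def total_error(e):
    # Eq. (4.12): cumulative effect of the individual errors.
    return (1+e["es"])*(1+e["ep"])*(1+e["et"])*(1+e["ew"])/(1-e["ef"]) - 1.0

etotals, sf_built = [], []
for _ in range(100_000):
    e_tot = total_error(sample_errors())
    etotals.append(e_tot)
    sf_built.append((1.0 + e_tot) * RATIO * SFL)   # Eq. (4.14)

mean_e = sum(etotals) / len(etotals)
mean_sf = sum(sf_built) / len(sf_built)
print(f"mean total error ~ {mean_e:+.4f}")
print(f"mean built safety factor ~ {mean_sf:.3f}")
```

Even though every individual error has zero mean, the division by (1 − ef) makes the mean of etotal slightly positive (analytically 2.5 ln 1.5 − 1 ≈ 0.0137, consistent with Table 4-3), which is the asymmetry of the built error distribution referred to below.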
Denoting the built area after certification (or certified area) by Acert, the total safety factor of a certified structural part is

(SF)cert = Acert / A0    (4.15)

Variability

In the previous sections, we analyzed the different types of errors made in the design and construction stages, representing the differences between the fleet-average values of the geometry, material and loading parameters and their corresponding design values. For a given design, these parameters vary from one aircraft to another in the fleet due to variabilities in tooling, construction, flying environment, etc. For instance, the actual value of the thickness of a structural part, tact, is defined in terms of its fleet-average built value, t̄built, by

tact = (1 + vt) t̄built    (4.16)

We assume that vt has a uniform distribution with ±3% bounds (see Table 4-2). Then, the actual load carrying area Aact can be defined as

Aact = wact tact = (1 + vt) t̄built (1 + vw) w̄built = (1 + vt)(1 + vw) Abuilt    (4.17)

where vw represents the effect of variability on the built width. Table 4-2 presents the assumed distributions of the variabilities. Note that the thickness error in Table 4-1 is uniformly distributed with bounds of ±2%, and the thickness variability has ±3% bounds; thus the difference between all thicknesses over the fleets of all companies is up to about 5%. However, the combined effect of the uniformly distributed error and variability is not uniformly distributed.

Table 4-2. Distribution of random variables having variability
Variable                                 Distribution  Mean       Scatter
Actual service load, Pact                Lognormal     Pd = 100   10% c.o.v.
Actual structural part width, wact       Uniform       w̄built     1% bounds
Actual structural part thickness, tact   Uniform       t̄built     3% bounds
Failure stress, σf                       Lognormal     150        8% c.o.v.
Variability in built width, vw           Uniform       0          1% bounds
Variability in built thickness, vt       Uniform       0          3% bounds
c.o.v. = coefficient of variation

Certification Tests

After a structural part has been built with random errors in stress, load, width, allowable stress and thickness, we simulate certification testing for the structural part. Recall that the structural part will not be manufactured with complete fidelity to the design due to variability in the geometric properties. That is, the actual values of these parameters, wact and tact, will differ from their fleet-average values w̄built and t̄built due to variability. The structural part is then loaded with the design axial force of SFL times Pcalc, and if the stress exceeds the failure stress σf of the structure, then the structure fails and the design is rejected; otherwise it is certified for use. That is, the structural part is certified if the following inequality is satisfied:

SFL Pcalc / (wact tact) − σf ≤ 0    (4.18)

The total safety factor (see Eq. (4.14)) depends on the load safety factor, the ratio of the failure stress to the B-basis allowable stress, and the total error factor. Note that the B-basis properties are affected by the number of coupon tests: as the number of tests increases, the B-basis value also increases, so a lower total safety factor is used. Amongst the terms in the total safety factor expression, the error term is subject to the largest change due to certification testing. Certification tests reduce the probability of failure mainly by changing the distribution of the error factor etotal. Without certification testing, we assume uniform distributions for all the individual errors. However, since designs based on unconservative models are more likely to fail certification, the distribution of etotal becomes conservative for structures that pass certification.
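The filtering effect of certification on the error distribution can be illustrated with a small Monte Carlo sketch. This is illustrative only: the allowable stress is a fixed placeholder value (132), and the test condition is Eq. (4.18) rewritten in the normalized form σf (1 + vt)(1 + vw)(1 + etotal) ≥ σa (1 + ep), which follows from Eqs. (4.1), (4.11) and (4.17).

```python
import math, random

random.seed(2)
SIGMA_A = 132.0            # assumed allowable stress (placeholder)
MU_F, COV_F = 150.0, 0.08  # failure stress: lognormal, Table 4-2
# Lognormal parameters giving mean MU_F and c.o.v. COV_F:
s_ln = math.sqrt(math.log(1.0 + COV_F**2))
m_ln = math.log(MU_F) - 0.5 * s_ln**2

def u(b):  # zero-mean uniform error/variability on [-b, +b]
    return (2.0 * random.random() - 1.0) * b

built, certified = [], []
for _ in range(200_000):
    es, ep, ew, et, ef = u(0.05), u(0.10), u(0.01), u(0.02), u(0.20)
    e_total = (1+es)*(1+ep)*(1+et)*(1+ew)/(1-ef) - 1.0     # Eq. (4.12)
    built.append(e_total)
    vt, vw = u(0.03), u(0.01)                               # test article
    sigma_f = random.lognormvariate(m_ln, s_ln)
    # Certification test, Eq. (4.18) in normalized form:
    if sigma_f * (1+vt) * (1+vw) * (1+e_total) >= SIGMA_A * (1+ep):
        certified.append(e_total)

mean = lambda x: sum(x) / len(x)
print(f"mean e_total, built:     {mean(built):+.4f}")
print(f"mean e_total, certified: {mean(certified):+.4f}")
print(f"certification failure rate: {1 - len(certified)/len(built):.1%}")
```

The certified mean of etotal comes out larger than the built mean, reproducing the conservative bias described above: parts designed with strongly negative (unconservative) total error are disproportionately rejected by the test.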
In order to quantify this effect, we calculated the updated distribution of the error factor etotal by Monte Carlo simulation (MCS) with a sample size of 1,000,000. In Chapter 3, we represented the overall error with a single error factor e, hereinafter termed the "single error factor (SEF) model", and we used a uniform distribution for the initial (i.e., built) distribution of this error. In the present work, we use a more complex representation of error with individual error factors, hereinafter termed the "multiple error factor (MEF) model", and we represent the initial distribution of each individual error factor with a uniform distribution. In this case, the distribution of the total error is no longer uniform. Figure 4-1 shows how certification tests update the distribution of the total error for the SEF and MEF models. For both models the initial distribution is updated such that the likelihood of conservative values of the total error is increased. This is due to the fact that structures designed with unconservative (negative) errors are likely to be rejected in certification tests. Notice that the SEF model exaggerates the effectiveness of certification testing. The reader is referred to Appendix D for a detailed comparison of the two error models.

Figure 4-1. Comparison of the distributions of the built and certified total error etotal for the SEF and MEF models. The distributions are obtained from simulation of 1,000,000 structural parts. The lower and upper bounds for the single error are taken as −22.3% and 25.0%, respectively, to match the mean and standard deviation of the total error factor in the MEF model (see Table D-1 of Appendix D).

Figure 4-2 shows the distributions of the built and certified total safety factors of the MEF model.
Notice that the structural parts designed with low total safety factors are likely to be rejected in the certification testing. The means and standard deviations of the built and certified distributions of the error factor and the total safety factor are listed in Table 4-3. Comparing the mean and standard deviation of the built and certified total error (and similarly the total safety factor), we see that the mean is increased and the standard deviation is reduced by certification testing.

Figure 4-2. Initial and updated distribution of the total safety factor SF. The distributions are obtained via Monte Carlo simulation of 1,000,000 structural part models.

Table 4-3. Means and standard deviations of the built and certified distributions of the error factor etotal and the total safety factor SF shown in Figures 4-1 and 4-2. The calculations are performed with 1,000,000 MCS.
                          Mean     Std. dev.
Built total error         0.0137   0.137
Certified total error     0.0429   0.130
Built safety factor       1.747    0.237
Certified safety factor   1.799    0.226

Probability of Failure Calculation

As noted earlier, we assume that structural failure requires the failure of two structural parts. In this section, we first describe the probability of failure calculation for a single structural part using separable MCS. Then, we discuss the calculation of the system probability of failure.

Probability of Failure Calculation by Separable MCS

To calculate the probability of failure, we first incorporate the statistical distributions of errors and variability in a Monte Carlo simulation. Errors are uncertain at the time of design, but do not change for individual realizations (in actual service) of a particular design. On the other hand, all individual realizations of a particular design are different from each other due to variability. In Chapter 3, we implemented this through a two-level Monte Carlo simulation.
At the upper level we simulated different aircraft companies by assigning random errors to each, and at the lower level we simulated variability in dimensions, material properties, and loads related to manufacturing variability and variability in service conditions. This provided not only the overall probability of failure, but also its variation from one company to another (which we measured by the standard deviation of the probability of failure). This variation is important because it is a measure of the confidence in the value of the probability of failure, given the epistemic uncertainty (lack of knowledge) in the errors. However, the process requires trillions of simulations for good accuracy. In order to address the computational burden, we turned to the separable Monte Carlo procedure (e.g., Smarslok and Haftka (2006)). This procedure applies when the failure condition can be expressed as g1(x1) > g2(x2), where x1 and x2 are two disjoint sets of random variables. To take advantage of this procedure, we need to formulate the failure condition in a separable form, so that g1 depends only on variabilities and g2 only on errors. The common formulation of the structural failure condition is in the form of a stress exceeding the material limit. This form, however, does not satisfy the separability requirement. For example, the stress depends on variability in material properties as well as on the design area, which reflects errors in the analysis process. To bring the failure condition to the right form, we instead formulate it as the required cross-sectional area Areq being larger than the built area Abuilt, as given in Eq. (4.19):

Abuilt < Areq / ((1 + vt)(1 + vw)) ≡ Aeq    (4.19)

where Areq is the cross-sectional area required to carry the actual loading conditions for a particular copy of an aircraft model, and Aeq is what the built area (fleet-average) needs to be in order for the particular copy to have the required area after allowing for variability in width and thickness.
Areq = Pact / σf    (4.20)

The required area depends only on variability, while the built area depends only on errors. When certification testing is taken into account, the built area Abuilt is replaced by the certified area Acert, which is the same as the built area for companies that pass certification; companies that fail are not included. That is, the failure condition is written as

Failure without certification tests:  Aeq − Abuilt > 0    (4.21a)
Failure with certification tests:     Aeq − Acert > 0    (4.21b)

Equation (4.21) can be normalized by dividing through by A0 (the load carrying area without errors or safety measures, Eq. (4.13)). Since Abuilt/A0 and Acert/A0 are the total safety factors, Eq. (4.21) is equivalent to the requirement that failure occurs when the required safety factor is larger than the built one:

Failure without certification tests:  (SF)req − (SF)built > 0    (4.22a)
Failure with certification tests:     (SF)req − (SF)cert > 0    (4.22b)

where (SF)built and (SF)cert are the built and certified total safety factors given in Eqs. (4.14) and (4.15), and the required total safety factor (SF)req is calculated from

(SF)req = Aeq / A0    (4.23)

For a given (SF)built we can calculate the probability of failure, Eq. (4.22a), by simulating all the variabilities with MCS. Figure 4-3 shows the dependence of the probability of failure on the total safety factor, using MCS with 1,000,000 variability samples. The zigzagging in Figure 4-3 at high safety factor values is due to the limited MCS sample size. Note that the probability of failure for a given total safety factor is one minus the cumulative distribution function (CDF) of the total required safety factor. This required safety factor depends on the four random variables Pact, σf, vt and vw. Among them, Pact and σf have larger variabilities than vt and vw (see Table 4-2). We found that (SF)req is accurately represented by a lognormal distribution, since Pact and σf follow lognormal distributions.
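The Stage-1 variability simulation can be sketched as follows (an illustration with the distributions of Table 4-2, not the dissertation's code). Combining Eqs. (4.13), (4.19), (4.20) and (4.23), the required safety factor reduces to (SF)req = (Pact/Pd)(σ̄f/σf) / ((1 + vt)(1 + vw)).

```python
import math, random

random.seed(3)
PD, MU_F = 100.0, 150.0          # design load and mean failure stress
COV_P, COV_F = 0.10, 0.08        # c.o.v. of load and failure stress

def lognormal(mean, cov):
    """Sample a lognormal variable with the given mean and c.o.v."""
    s = math.sqrt(math.log(1.0 + cov**2))
    return random.lognormvariate(math.log(mean) - 0.5 * s * s, s)

def u(b):  # zero-mean uniform variability on [-b, +b]
    return (2.0 * random.random() - 1.0) * b

N = 500_000
sf_req = []
for _ in range(N):
    p_act = lognormal(PD, COV_P)         # actual service load
    sigma_f = lognormal(MU_F, COV_F)     # actual failure stress
    vt, vw = u(0.03), u(0.01)            # thickness and width variability
    # (SF)_req = (P_act/P_d)*(mean sigma_f / sigma_f) / ((1+vt)(1+vw))
    sf_req.append((p_act / PD) * (MU_F / sigma_f) / ((1 + vt) * (1 + vw)))

# Probability of failure of a part built with total safety factor 1.5
# is P((SF)_req > 1.5), one minus the CDF of (SF)_req at 1.5:
pf_15 = sum(s > 1.5 for s in sf_req) / N
print(f"P(SF_req > 1.5) ~ {pf_15:.1e}")
```

The estimate comes out on the order of 10⁻³, consistent with the observation below that the nominal load safety factor of 1.5 is associated with a failure probability of about 10⁻³.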
Figure 4-3 also shows the probability of failure from the lognormal distribution with the same mean and standard deviation. Note that the nominal load safety factor of 1.5 is associated with a probability of failure of about 10⁻³, while the probabilities of failure observed in practice (about 10⁻⁷) correspond to a total safety factor of about two.

Figure 4-3. The variation of the probability of failure with the built total safety factor. Note that Pf is one minus the cumulative distribution function of (SF)req.

Figure 4-4 presents a flowchart of the separable MCS procedure. Stage 1 represents the simulation of the variabilities in the actual service conditions to generate the probability of failure shown in Figure 4-3. This probability of failure is one minus the cumulative distribution function (CDF) of the required safety factor (SF)req. In Stage 1, M = 1,000,000 simulations are performed and the CDF of (SF)req is assessed. A detailed discussion of the CDF assessment for (SF)req is given in Appendix E.

Figure 4-4. Flowchart for MCS of component design and failure. Stage 1: simulate M different realizations of the variabilities related to the actual service conditions; calculate the required safety factor Areq/A0; generate the CDF of the required safety factor (see Figure 4-3). Stage 2: simulate N different errors and variabilities related to the design and construction phases; calculate the built safety factor Abuilt/A0; perform certification testing, rejecting the design in case of failure; calculate the probability of failure for each design; calculate the average and coefficient of variation of the probability of failure.

In Stage 2, N = 1,000,000 designs are generated for N different aircraft companies.
For each new design, different random error factors eσ, ep, ew, et and ef are picked from their corresponding distributions to generate the built safety factor, (SF)built. Then, each design is subjected to certification testing. If it passes, we obtain its probability of failure from the distribution obtained in Stage 1 (Figure 4-3). We calculate the average and coefficient of variation (c.o.v.) of the failure probability over all designs, and explore the effects of error, variability, and safety measures on these values in the Results section. The separable Monte Carlo procedure reduces the computational burden greatly. For instance, if the probability of failure is 2.5×10⁻⁵, a million simulations varying both errors and variability simultaneously estimate this probability with 20% error. We found for our problem that the use of the separable Monte Carlo procedure requires only 20,000 simulations (10,000 for Stage 1 and 10,000 for Stage 2) for the same level of accuracy.

Including Redundancy

The requirement of two failure events is modeled here as a parallel system. We assume that the limit-states of both failure events follow normal distributions, to take advantage of the known properties of the bivariate normal distribution. For a parallel system of two elements with equal failure probabilities, Eq. (4.24) is used to calculate the system probability of failure PFS (see Appendix F for details):

PFS = Pf² + (1/2π) ∫₀^ρ [1/√(1 − z²)] exp(−β²/(1 + z)) dz    (4.24)

where Pf is the probability of failure of a single structural part, ρ is the correlation coefficient of the two limit-states, and β is the reliability index for a single structural part, which is related to Pf through

Pf = Φ(−β)    (4.25)

Results

In this section, the effectiveness of the safety measures is investigated and the results are reported. First, we discuss the effects of error reduction. Then, the relative effectiveness of error reduction and certification is compared. Next, the effectiveness of redundancy is explored.
Finally, the effectiveness of variability reduction is investigated.

Effect of Errors

We first investigate the effect of errors on the probability of failure of a single structural part. For the sake of simplicity, we scale all error components with a single multiplier, k, replacing Eq. (4.12) by

etotal = (1 + k eσ)(1 + k ep)(1 + k et)(1 + k ew) / (1 − k ef) − 1    (4.26)

and explore the effect of k on the probability of failure. Table 4-4 presents the average and coefficient of variation of the probability of failure of a single structural part. The coefficient of variation of the failure probability is computed to explore our confidence in the probability of failure estimate, since it reflects the effect of the unknown errors. Columns 5 and 6 of Table 4-4 show a very high coefficient of variation for the failure probabilities (variability in the probability of failure for different aircraft models). We see that as the error grows (i.e., as k increases), the coefficient of variation of the failure probabilities after certification also grows. Comparing the failure probabilities before certification (column 5) and after certification (column 6), we notice that even though certification tests reduce the mean failure probability, they increase the variability in the failure probability. Table 4-4 shows that for the nominal error (i.e., k = 1) the total safety factor before certification is 1.747, which translates into a probability of failure of 8.83×10⁻⁴. When certification testing is included, the safety factor increases to 1.799, which reduces the probability of failure to 3.79×10⁻⁴. Notice also that the coefficient of variation of the safety factor is reduced from 13.6% to 12.5%, a first indication that certification testing is more effective than simply increasing the safety factor with an increased built area. A detailed analysis of the effectiveness of certification testing is given in the next subsection.
Column 2 of Table 44 shows a rapid increase in the certification failure rate with increasing error. This is reflected in a rapid increase in the average safety factor of certified designs in column 4, (SF)cert. This increased safety factor manifests itself in the last column of Table 44, which presents the effect of certification tests on failure probabilities. As we can see from that column, when the error increases, the ratio of the two failure probabilities decreases, demonstrating that the certification tests become more effective. This trend in the design areas and the probability ratios is similar to the one observed in Chapter 3. Note, however, that even the average safety factor before certification ((SF)built in column 3) increases with the error, due to the asymmetry of the initial total error distribution (see Figure 41).

Table 44. Average and coefficient of variation of the probability of failure for the structural parts designed with B-basis properties and SFL = 1.5. The numbers inside the parentheses represent the coefficient of variation of the relevant quantity.

k      CFR(a) (%)   (SF)built(b)    (SF)cert(b)     Pnc(c)/10^-4   Pc(c)/10^-4    Pc/Pnc
0.25   6.4          1.725 (4.2%)    1.728 (4.1%)    0.244 (148%)   0.227 (148%)   0.930
0.50   9.3          1.730 (6.9%)    1.741 (6.7%)    0.763 (247%)   0.609 (257%)   0.798
0.75   13.4         1.737 (10.2%)   1.764 (9.7%)    2.70 (324%)    1.66 (357%)    0.616
0.82   14.7         1.740 (11.2%)   1.773 (10.6%)   3.79 (340%)    2.13 (384%)    0.561
1      18.0         1.747 (13.6%)   1.799 (12.5%)   8.83 (371%)    3.79 (450%)    0.430
1.5    26.0         1.779 (20.5%)   1.901 (17.8%)   60.0 (385%)    11.5 (583%)    0.191

(a) CFR: certification failure rate.
(b) (SF)built and (SF)cert are the total safety factors before and after certification testing, respectively.
(c) Pnc and Pc are the probabilities of failure before and after certification testing, respectively.

Table 44 shows the huge waste of weight due to errors.
For instance, for the nominal error (i.e., k=1.0), an average built total safety factor of 1.747 corresponds to a probability of failure of 8.83x10^-4 according to Table 44, but we see from Figure 43 that a safety factor of 1.747 corresponds approximately to a probability of failure of 7x10^-6, two orders of magnitude lower. This discrepancy is due to the high value of the coefficient of variation of the safety factor. For the nominal error, the coefficient of variation of the total safety factor is 14%. Two standard deviations below the mean safety factor is 1.272, and two standard deviations above the mean safety factor is 2.222. The probability of failure corresponding to a safety factor of 1.272 (from Figure 43) is about 2.98x10^-2, while at a safety factor of 1.985 the probability of failure is essentially zero. So even though only about 0.8% of the designs have a safety factor below 1.272 (Figure 42), these designs have a huge impact on the probability of failure. Reducing the error by half (i.e., k=0.50) reduces the weight by 1%, while at the same time the probability of failure is reduced by a factor of 3.

Weight Saving Due to Certification Testing and Error Reduction

We have seen in Table 44 that, since structures built with unconservative errors are eliminated by certification testing, the tests increase the average safety factor of the designs and therefore reduce the average probability of failure. Since certification testing is expensive, it is useful to check whether the same decrease in the probability of failure can be achieved by simply increasing the load carrying area (i.e., by increasing the safety factor) without certification testing. Column 2 of Table 45 shows that the required area with no certification testing, A_rnc, is greater than the certified area, A_cert (i.e., the area after certification testing), shown in column 3. The last column shows the weight saving obtained by using a certification test instead of a mere increase of the safety factor.
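Taking the weight saving as ΔA = (A_rnc - A_cert)/A_rnc, the definition consistent with the tabulated values, the last column of Table 45 can be recomputed directly from columns 2 and 3. The snippet below is a simple consistency check, not part of the dissertation's analysis; the tolerance allows for rounding of the tabulated areas:

```python
# k, A_rnc/A0, A_cert/A0, reported weight saving dA (%), from Table 45
TABLE_45 = [
    (0.25, 1.7285, 1.7283, 0.01),
    (0.50, 1.743,  1.741,  0.14),
    (0.75, 1.770,  1.764,  0.36),
    (1.00, 1.815,  1.799,  0.87),
    (1.50, 1.961,  1.901,  3.09),
]

def weight_saving_percent(a_rnc, a_cert):
    """Relative weight saved by certification testing at equal
    probability of failure: 100 * (A_rnc - A_cert) / A_rnc."""
    return 100.0 * (a_rnc - a_cert) / a_rnc
```

For k = 1 this gives 100 x (1.815 - 1.799)/1.815, about 0.88%, matching the tabulated 0.87% to rounding.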
We notice that the weight saving increases rapidly as the error increases. For instance, when k=0.25 the weight saving is very small. Columns 4 and 5 show that even though we match the average probability of failure, there are small differences in the coefficients of variation.

Table 45. Reduction of the weight of structural parts by certification testing for a given probability of failure. The numbers inside the parentheses represent the coefficient of variation of the relevant quantity.

k      A_rnc/A_0(a)    A_cert/A_0      Pnc(b)/10^-4   Pc(b)/10^-4    ΔA(c) (%)
0.25   1.7285 (4.2%)   1.7283 (4.1%)   0.227 (148%)   0.227 (148%)   0.01
0.50   1.743 (6.9%)    1.741 (6.7%)    0.609 (252%)   0.609 (257%)   0.14
0.75   1.770 (10.3%)   1.764 (9.7%)    1.66 (342%)    1.66 (357%)    0.36
1      1.815 (13.7%)   1.799 (12.5%)   3.79 (416%)    3.79 (450%)    0.87
1.5    1.961 (20.7%)   1.901 (17.8%)   11.5 (530%)    11.5 (583%)    3.09

(a) A_rnc is the required area with no certification testing, that is, the area required to achieve the same probability of failure as certification.
(b) Pnc and Pc are the probabilities of failure before and after certification testing, respectively.
(c) ΔA = (A_rnc - A_cert)/A_rnc indicates the weight saving due to testing while keeping the same level of safety.

We notice from Table 45 that, for the nominal error (i.e., k=1.0), certification testing reduces the weight by 0.87% for the same probability of failure (3.79x10^-4). The same probability of failure could have been attained by reducing the error bounds by 18%, that is, by reducing k from 1.0 to 0.82. This reduction would be accompanied by (SF)built = 1.740 (see Table 44). Compared to the average safety factor of 1.815 required to reach the same probability of failure without certification testing (Table 45), this represents a reduction of 4.13% in average weight, so error reduction is much more effective than certification testing in reducing weight.

Effect of Redundancy

To explore the effect of redundancy, we first compare the failure probability of a single structural part to that of a structural system that fails due to the failure of two structural parts.
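This comparison uses the parallel-system expression of Eq. (4.24), P_FS = P_f^2 + (1/2pi) * int_0^rho exp(-beta^2/(1+z)) / sqrt(1-z^2) dz, which is straightforward to evaluate numerically. The sketch below is a minimal illustration, not the dissertation's implementation: the example failure probability and the simple trapezoidal quadrature are choices of this sketch, and the substitution z = sin(theta) is used to remove the integrable singularity at z = 1:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def inverse_phi(p):
    """Inverse standard normal CDF by bisection (adequate for a sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def system_failure_probability(pf, rho, n=20000):
    """Eq. (4.24): failure probability of a parallel system of two parts
    with equal single-part failure probability pf and limit-state
    correlation rho.  With z = sin(theta) the integral becomes
    (1/2pi) * int_0^asin(rho) exp(-beta^2 / (1 + sin(theta))) dtheta,
    evaluated here with the trapezoidal rule."""
    beta = -inverse_phi(pf)          # Eq. (4.25): pf = Phi(-beta)
    theta_max = math.asin(rho)
    h = theta_max / n
    total = 0.0
    for i in range(n + 1):
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * math.exp(-beta**2 / (1.0 + math.sin(i * h)))
    return pf**2 + total * h / (2.0 * math.pi)
```

At rho = 0 the integral vanishes and P_FS reduces to P_f^2 (independent parts); at rho = 1 it recovers the single-part probability P_f, bracketing the possible benefit of redundancy.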
Certification testing is simulated by modeling the testing of one structural part and certifying the structural system based on this test. Table 46 shows that while the average failure probability is reduced through structural redundancy, the coefficients of variation of the failure probabilities are increased. That is, even though safety is improved, our confidence in the failure probability estimate is reduced. This behavior is similar to the