EFFECTIVE SAFETY MEASURES WITH TESTS FOLLOWED BY DESIGN CORRECTION FOR AEROSPACE STRUCTURES

By

TAIKI MATSUMURA

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2013

© 2013 Taiki Matsumura

To my wife Mayumi, and my children Misa and Kohki

ACKNOWLEDGEMENTS

First of all, I would like to thank my advisors, Dr. Raphael T. Haftka and Dr. Nam Ho Kim, for giving me the opportunity and support to complete my doctoral studies. I am very grateful for their availability and excellent guidance. All the discussions during the weekly meetings and the group meetings were more than academic interactions. I was privileged to experience their broad and deep perspectives on the research topics and their never-ending enthusiasm for guiding students.

I am very grateful to Dr. Bhavani Sankar for his advice, patience, and generous support. His thoughtful questions during the research meetings guided me in the right direction. I would also like to thank Dr. Kurtis R. Gurley for his willingness to serve as a member of my advisory committee, and Dr. Volodymyr Bilotkach for reviewing my papers. Financial support from the National Science Foundation (CMMI-0927790 and 1131103) and the U. (FA9550-09-1-0153) is gratefully acknowledged.

I wish to thank all my colleagues for their friendship and support, especially the current and former members of the Structural and Multidisciplinary Optimization Group. Our many technical discussions, as well as our daily conversations, were very fruitful for me. Finally, I am immensely thankful to my family, Mayumi, Misa, and Kohki, for having devoted time to me all these years. This thesis would not have been completed without their love, encouragement, and support.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
  1.1 Background and Motivation
  1.2 Objectives
    1.2.1 Tests for Failure Criterion Characterization
    1.2.2 Tests for Design Acceptance
    1.2.3 Accident Investigation
  1.3 Outline

2 LITERATURE REVIEW
  2.1 Probabilistic Design Approach
    2.1.1 Deterministic Design vs. Probabilistic Design
    2.1.2 Reliability-Based Design Optimization
    2.1.3 Uncertainty Classification
    2.1.4 Uncertainty Reduction
    2.1.5 Quantifying the Contribution of Tests
  2.2 Surrogate Models
    2.2.1 Surrogate Models
    2.2.2 Surrogate Models for Uncertainty Quantification and RBDO
    2.2.3 Surrogate Models for Smoothing Noisy Data

3 EFFECTIVE TEST STRATEGY FOR STRUCTURAL FAILURE CRITERION CHARACTERIZATION
  3.1 Background and Motivation
  3.2 Surrogate Models
    3.2.1 Polynomial Response Surface (PRS)
    3.2.2 Gaussian Process Regression (GPR)
    3.2.3 Support Vector Regression (SVR)
  3.3 Example Problems
    3.3.1 Support Bracket
    3.3.2 Composite Laminate Plate
    3.3.3 Test Matrix and Fitting Strategy
    3.3.4 Error Evaluation
  3.4 Results
    3.4.1 Support Bracket
    3.4.2 Composite Laminate Plate
    3.4.3 Selection of Best PRS for Composite Laminate Plate
  3.5 Summary

4 DESIGN OPTIMIZATION ACCOUNTING FOR THE EFFECTS OF TEST FOLLOWED BY REDESIGN
  4.1 Background and Motivation
  4.2 Modeling the Effects of Future Test Followed by Redesign
    4.2.1 Epistemic Uncertainty Corresponding to Future
    4.2.2 Procedure of Future Simulation
      4.2.2.1 Step 1: Initial design evaluation
      4.2.2.2 Step 2: Test observation
      4.2.2.3 Step 3: Error calibration
      4.2.2.4 Step 4: Redesign decision
      4.2.2.5 Step 5: Redesign
      4.2.2.6 Step 6: Post-simulation evaluation
    4.2.3 RBDO Incorporating Simulated Future
  4.3 Example Problem
    4.3.1 Integrated Thermal Protection System
    4.3.2 Demonstration of Future Simulation Considering Risk Allocation
    4.3.3 Design Optimization with Simulated Future
  4.4 Summary

5 COST EFFECTIVENESS OF ACCIDENT INVESTIGATION
  5.1 Background and Motivation
  5.2 Cost Effective Measures
    5.2.1 Cost Effective Measures
    5.2.2 Estimating Probability of Accident
  5.3 Demonstration of a Cost Effectiveness Study
    5.3.1 American Airlines Flight 587 Accident
    5.3.2 Alaska Airlines Flight 261 Accident
    5.3.3 Space Shuttle Accidents
  5.4 Summary

6 CONCLUSIONS

APPENDICES
  A MATLAB LEAST SQUARE FIT
  B BUCKLING CRITERION
  C PARAMETER ESTIMATE FOR SPACE SHUTTLE

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Properties of support bracket
3-2 Properties of composite laminate plate
3-3 Test matrix and total number of tests
3-4 Errors fitted to noise-free data and ε-insensitive errors (support bracket)
3-5 Ratio between the number of training data points and the number of parameters for about 50 tests (support bracket)
3-6 Best polynomial functions for PRS based on NRMSE for composite laminate plate
3-7 Errors of surrogate models fitted to noise-free data (composite laminate plate)
3-8 Ratio between the number of training data points and the number of parameters for about 50 tests (composite laminate plate)
3-9 Performance comparison between test matrices for each fitting iteration for about 100 tests (composite laminate plate)
3-10 Surrogate selection by PRESS (composite laminate plate)
4-1 Geometry and material properties of ITPS and their variability (aleatory uncertainties)
4-2 Surrogate models for structural responses
4-3 Geometry of design candidate
4-4 Error assumption for calculation
4-5 Error assumption for test observation
4-6 Risk allocation by future redesign
4-7 Surrogate models used for optimization
4-8 Optimal solutions from the standard RBDO
5-1 Fatalities per billion passenger miles and regulation cost per fatality in millions (2002-2009)
5-2 Effective cost threshold with respect to probability of accident (American Airlines)
5-3 Parameters estimated for Alaska Airlines case
5-4 Effective cost threshold with respect to probability of accident (Alaska Airlines)
5-5 Parameters estimated for the cost effectiveness study
5-6 Cost effectiveness measures
5-7 Accident investigations considering the value of vehicle

LIST OF FIGURES

2-1 Deterministic design vs. probabilistic design
2-2 Probability of failure calculation. Epistemic and aleatory uncertainties are treated equally
2-3 Probability of failure estimation. Epistemic and aleatory uncertainties are treated differently
2-4 Nested reliability-based design optimization (RBDO)
2-5 Layered/nested reliability-based design optimization (RBDO)
3-1 Tradeoff between replication and exploration given 50 tests
3-2 ε-insensitive loss function for SVR
3-3 Support bracket
3-4 Critical failure modes of support bracket
3-5 Failure load surface of support bracket initiated at point D
3-6 Composite laminate plate
3-7 Critical failure modes of composite laminate plate
3-8 Failure load surface of composite laminate plate due to axial strain
3-9 Accuracy of SVR with various combinations of the regularization parameter C and the error tolerance ε (4x4 matrix with 3 replications)
3-10 Error comparison for support bracket: NRMSE for all-at-once fitting strategy
3-11 Error comparison for support bracket: NMAE for all-at-once fitting strategy
3-12 Standard errors predicted by PRS for support bracket for about 50 tests
3-13 Performance of SVR with various combinations of C and R
3-14 Comparison of fitting strategy: NRMSE of GPR for support bracket
3-15 Comparison of fitting strategy: NRMSE of SVR for support bracket
3-16 Error comparison for composite laminate plate: NRMSE for all-at-once fitting strategy
3-17 Error comparison for composite laminate plate: NMAE for all-at-once fitting strategy
3-18 Comparison of fitting strategy: NRMSE of GPR for composite laminate plate
3-19 Comparison of fitting strategy: NRMSE of SVR for composite laminate plate
4-1 Probability of failure calculation considering epistemic uncertainty (possible error realization)
4-2 Illustration that each realization of error corresponds to different futures
4-3 Illustration of Bayesian inference
4-4 Possible effects of redesign on the distribution of probability of failure and objective function
4-5 Flowchart of future simulation
4-6 Integrated thermal protection system (ITPS)
4-7 Effects of uncertainty reduction after tests on the probability of failure estimate
4-8 Histogram of true probability of failure. (a) Before redesign, and (b) after redesign
4-9 Histograms of true probability of failure. (a) Temperature, (b) stress, and (c) buckling
4-10 Histogram of mass after redesign
4-11 Optimal designs from RBDO-FT using the mean of probability of failure
4-12 Mass penalty for conservative design (comparison between the 95th percentile design and the mean design)
4-13 Mass and probability of redesign tradeoff (using safety factor vs. using the mean of probability of failure)
4-14 Difference in error calibration between Bayesian approach and safety factor approach
5-1 Number of accidents and cost for accident investigation by NTSB in US (2002-2009)
5-2 System reliability diagram including direct accident cause
5-3 Risk progress of the Space Shuttle

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

EFFECTIVE SAFETY MEASURES WITH TESTS FOLLOWED BY DESIGN CORRECTION FOR AEROSPACE STRUCTURES

By

Taiki Matsumura

December 2013

Chair: Raphael T. Haftka
Cochair: Nam Ho Kim
Major: Aerospace Engineering

Analytical and computational prediction tools enable us to design aircraft and spacecraft components with a high degree of confidence. While the accuracy of such predictions has improved over the years, uncertainty continues to be added by new materials and new technology introduced to improve performance. This requires reality checks, such as tests, to make sure that the prediction tools are reliable enough to ensure safety. While tests can reveal unsafe designs and lead to design correction, they are very costly. Therefore, it is important to manage such a design-test-correction cycle effectively.

In this dissertation, we consider three important test stages in the lifecycle of an aviation system. First, we dealt with characterization tests that reveal failure modes of new materials or new geometrical arrangements. We investigated the challenge of obtaining the best characterization with a limited number of tests. We found that replicating tests to attenuate the effect of noise in observations is not necessary, because some surrogate models can serve as a noise filter without replicated data.
Next, we examined post design tests for design acceptance followed by possible redesign We looked at the question of how to balance the desire for better performance achieved by redesign against the cost of redesign We proposed a design optimization framework that provides tradeoff information between the expected performance improvement by redesign and the probability of redesign equivalent to the cost of redesign We also demonstrated that the proposed method can redu ce the performance loss due to a conservative reliability estimate The ultimate test, finally is whether the structures do not fail in flight. Once an accident occur s an accident investigation takes place and recommends corrective actions to prevent similar accident s from occurring in the future With a cost effectiveness study for past accident investigations of airplane s and the Space Shuttle we conclude that this reactive safety measure is very efficient for a highly safe mode of transportation, i.e., commercial aviation. PAGE 14 14 CHAPTER 1 INTRODUCTION 1.1 B ackground and Motivation Continuous evolution of analytical machinery, such as finite element method (FEM) and computational fluid dynamic (CFD) allows us to design structural components of aviation systems with a high degree of confiden ce [ 1 2 ] However, at the same time, n ew materi als and new technology continue to be introduced for the sake to boost performance [ 3 4 ] and that mak es existing analytical machinery obsolete and adds uncertainty. This process mandate s con duct ing reality check s i.e., tests, in order to ensure safety. Aircraft builders commonly deploy a hierarchical structural test procedure so called the building block test s [ 5 ] The procedure starts with lower stru ctural complexity to characteriz e material properties and failure mechanism s of structural elements. 
At the component and system levels, tests are intended to make sure that the discrepancy between analytical predictions and actual responses will not cause critical problems [6]. The ultimate test after development is whether the components function well in flight without any failure.

A key feature of those tests is that they may be followed by corrective actions. That is, safety is guaranteed not only by the initial design but also by tests and the design corrections that follow. However, since current design practice does not model the effects of a test followed by possible design correction, it is not clear how efficient the current approach is as a whole cycle. For example, current design optimization methods [7, 8] do not account for the effects of tests. Therefore, tests are customarily implemented without quantifying their contribution to safety, even though structural tests dominate the lifecycle cost [9]. Furthermore, since current methods are not capable of predicting the risk associated with future design correction, projects often suffer additional costs and schedule delays. For example, JAXA (Japan Aerospace Exploration Agency) [10] reported that the cost of redesign triggered by tests throughout the development of a liquid rocket engine was far beyond their expectations. Finally, the ultimate test, i.e., flight, can reveal design deficiencies as a result of an accident. Accident investigations will tell us the necessary design corrections, but airplane accidents often involve a large number of fatalities. Therefore, it is imperative to understand the cost effectiveness of the safety measures provided by tests and accident investigations in order to manage them effectively.

The above-mentioned difficulties may be attributed to the traditional design approach, so-called deterministic design.
Deterministic design uses factors of safety, which are intended to compensate for underlying uncertainties and have been historically established [11]. For many safety-sensitive products, like airplanes, buildings, and automobiles, regulators determine the factors of safety that would guarantee acceptable levels of safety. Thanks to the simplicity and efficiency of factors of safety, the deterministic design approach has enjoyed massive popularity. However, the use of factors of safety hinders the designers' ability to predict the risk behind the values of the factors of safety.

This dissertation investigates three important stages where reality checks provided by tests modify our analytical tools and the resulting design: (1) tests for characterizing failure criteria of structural elements, (2) post-design tests for verifying the design of structural elements, and (3) accident investigations. We tailor probabilistic and statistical techniques to each test stage in order to improve upon the qualitative nature of current practices.

1.2 Objectives

Following are brief descriptions of the research tasks associated with the three test stages.

1.2.1 Tests for Failure Criterion Characterization

To design a structural element, it is important to understand how it fails (failure modes) and when it fails (failure boundary mapping for each failure mode with respect to the design parameters). Due to the complexity of failure mechanisms and lack of knowledge about new materials or new geometrical arrangements, establishing failure criteria tends to rely on an experimental approach. Tests identify underlying failure modes, and the observed data are used to approximate the failure boundaries, such as a failure load mapping.
While testing many different configurations within the design domain is important so as not to miss critical failure modes (exploration), we typically replicate specimens of the same structural configuration to deal with noisy observations (replication). We examine this resource allocation problem (exploration vs. replication) and develop an effective test strategy by taking advantage of surrogate models, some of which are known to be capable of smoothing noisy data, equivalent to a noise filter.

1.2.2 Tests for Design Acceptance

In the aerospace community, structural designs must be verified and certified by tests. When tests show that the design is unsafe, redesign is needed to restore safety. While an initially conservative design reduces the risk of redesign, it suffers a loss of performance, e.g., increased weight. On the other hand, a less conservative design is likely to run into redesign, but redesign would yield a much better design because the test calibrates the analytical models. We propose a design optimization method that deals with this tradeoff associated with future tests: the expected performance improvement after redesign against the probability of redesign (the cost of redesign).

1.2.3 Accident Investigation

Accident investigation is the final safeguard in the lifecycle. It has played a key role in aviation safety by improving design philosophies and safety regulations. After an accident happens, an elaborate investigation identifies the probable causes and issues safety recommendations in order to prevent similar accidents from occurring in the future. We discuss whether this reactive safety measure is efficient or not. For that, we conduct a cost effectiveness study on past airplane accidents as well as the Space Shuttle disasters.

1.3 Outline

The organization of this dissertation is as follows.
Chapter 2 reviews the literature on the probabilistic and statistical techniques that we use for the research tasks. Chapters 3-5 cover the research tasks described in the previous subsection; each chapter also provides a detailed introduction. Chapter 6 presents concluding remarks.

CHAPTER 2
LITERATURE REVIEW

2.1 Probabilistic Design Approach

2.1.1 Deterministic Design vs. Probabilistic Design

A challenge in designing a structure is to ensure that it will perform its intended functions without any critical failure throughout the lifecycle. To achieve this, underlying uncertainties, such as variability in material properties, manufacturing processes, and operational conditions, as well as errors in design, need to be taken into account. There are two types of design approaches to deal with uncertainty: deterministic design and probabilistic design.

In deterministic design, factors of safety are introduced to protect against uncertainty. As the name suggests, both the state of the structure and the corresponding design margin, i.e., the safety factor, are expressed deterministically. Figure 2-1(a) illustrates the use of a safety factor for structural design. Suppose that we are designing a structure with a required safety factor of 1.5, and the strength of the material is deterministically predicted at 300 MPa. The structure is then designed such that the operational stress, which is also deterministically predicted, is smaller than the strength by the factor of 1.5, resulting in an operational stress of 200 MPa.

In probabilistic design, on the other hand, the uncertainties in both the stress and the strength are modeled in a probabilistic manner, and the associated risk is assessed quantitatively by the probability of failure. Figure 2-1(b) illustrates the calculation of the probability of failure of the same structure designed in Figure 2-1(a). The variations in the stress and strength are modeled using probability density functions. The overlapped area of the probability density functions represents the state where the stress could exceed the strength, i.e., the failure state. Statistically, the probability of failure can be calculated by integrating the density functions over the range of the failure state. In this way, the structure may also be designed such that it satisfies a required probability of failure instead of using a safety factor. Various methods to calculate the probability of failure, including moment-based techniques and sampling-based approaches, have been extensively studied and are covered by textbooks, for example Refs. [12, 13].

Deterministic design and factors of safety have been historically established [11] and have gained massive popularity in various engineering fields because of their simplicity and convenience. Safety-related regulations for engineering systems, such as airplanes, space vehicles, automobiles, and buildings, specify the factors of safety to be used. In aviation, the Federal Aviation Administration requires a safety factor of 1.5 for any structural design [14]. NASA issues a technical standard regarding required factors of safety for structures of spaceflight hardware [15]. A shortcoming of the deterministic design approach is that the risk assessment tends to be subjective and qualitative, e.g., a risk matrix [16], because it does not provide any information about the risk associated with uncertainty, as illustrated in Figure 2-1(a).
Probabilistic design, on the other hand, has been developed to overcome the drawback of deterministic design and provide quantitative insights into the risk For example, a proba bilistic risk analysis conducted by JAXA for a next generation liquid rocket engine [ 10 ] r evealed that a component that is designed with the highest safety factor ha d the highest probability of failure. This finding le d JAXA to revise the risk mitigation plan and invest more in the component design subjected to the highest failure probability. Moreover, NASA s latest probabilistic risk analysis for the Space Shuttle [ 17 ] PAGE 20 20 wh ich was designed deterministic ally reported that a fatal accident would have occurred in about 1 in 10 flights at the beginning of the Space Shuttle operation while the people involved in the Space S h uttle program believed that the risk was somewhere between 1 in 1000 and 1 in 10,000. Even on the last flight of the Space S h uttle after numerous safety remedies being applied over the 30 years of operation the estimated risk is still high er at 1 in 90. Th ese stud ies demonstrate the usefulness of probabilistic design in terms of identif ying dangerous desig ns and helping the designers conduct appropriate risk mitigation Despite many attractive aspects of probabilistic design there still are many technical and managerial challenges Zang et al. in NASA [ 18 ] addressed the obstacles that hinder the practitioners from transitioning from deterministic design to probabilistic design, including comfortableness with traditional culture, computational burdens of probabilistic methods inaccuracy and expens e of statistical modeling of uncertainty, and so on. Tanco et al. 
[19] conducted a survey on why statistical design tools are not widely used in Europe; the top two barriers listed in the paper are low managerial commitment and engineers' lack of statistical knowledge.

2.1.2 Reliability-Based Design Optimization

An important advantage of probabilistic design over deterministic design is that an explicit expression of risk, i.e., the probability of failure, allows us to trade off risk against other qualities of a product, such as performance and cost. In structural design, we typically seek the lightest structure that satisfies the intended performance as well as the required probability of failure; this is called reliability-based design optimization (RBDO). RBDO becomes more powerful when a structural system comprises multiple components (or failure modes), because RBDO can appropriately allocate the component probabilities of failure under a constraint on the system probability of failure. Deterministic design is not capable of such risk allocation, even if one of the components happens to have a substantially lower probability of failure than the others. Acar and Haftka [20] illustrated this paradigm with a simple problem considering the design tradeoff between a wing (heavier structure) and a tail (lighter structure). The study first optimized the entire system using factors of safety as a reference design, and then re-optimized the system probabilistically while maintaining the mass of the reference design. The results showed that moving a small amount of material from the wing to the tail substantially improved the system probability of failure; alternatively, a lighter design can be achieved for the system probability of failure of the reference design. There have been quite a few studies applying RBDO to structural applications. For example, Qu et al.
[21] compared deterministic and probabilistic designs of a composite laminate plate used for a cryogenic tank. Youn et al. [22] applied various RBDO methods, including the approximate moment approach, the reliability index approach, and the performance measure approach, to the crashworthiness of a car. Ramu et al. [23] investigated RBDO using inverse reliability measures with a cantilever beam application. Mahadevan and Rebba [24] explicitly incorporated the errors associated with computational models into an RBDO framework for a cantilever plate design. Yao et al. [8] provided a comprehensive review of RBDO and multidisciplinary design optimization for aerospace vehicles.

2.1.3 Uncertainty Classification

It is critical to identify all the uncertainties involved in a design. DeLaurentis and Mavris [25] carefully addressed the uncertainties relating to robust design of a supersonic transport aircraft. Oberkampf et al. [26] proposed a framework for identifying uncertainty in computational simulation throughout the process of system analysis. More importantly, it has been recognized that uncertainty should be treated and modeled according to its nature. Hoffman and Hammonds [27] addressed the importance of distinguishing between a fixed but unknown value and an unknown but distributed value. In a similar manner, uncertainty is typically classified into two categories: aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty stems from the inherent variation of a physical system, such as variability in geometry and material properties. Variation in the external environment, such as flight conditions, is also considered aleatory. Since aleatory uncertainty reflects physical variation, there is a consensus in the research community that it be modeled by a probability distribution.
For example, a design handbook issued by the Department of Defense for composite structures [28] suggests characterizing material strengths with standard probability distributions, such as the normal, lognormal, and Weibull distributions. Epistemic uncertainty is due to lack of knowledge and is also known as subjective uncertainty. For example, an error in a computer simulation is a typical type of epistemic uncertainty; Kennedy and O'Hagan [29] elaborately differentiated the uncertainties associated with computer modeling. Epistemic uncertainty does not form a distribution; it takes on a unique but unknown value. Therefore, there is a debate on how to model epistemic uncertainty [30], and various approaches have been studied, including probability theory [22, 31], Dempster-Shafer evidence theory [32, 33], and possibility theory [34, 35]. Gu et al. [36] proposed a design method that considers the worst-case scenario of epistemic uncertainty. There are two typical ways of treating aleatory and epistemic uncertainty in probability of failure calculations using probability distributions. The first approach treats epistemic and aleatory uncertainty alike: both are modeled with probability density functions, as shown in Fig. 2-2. After combining the distributions (e.g., via a convolution [37]), we obtain a single value of the failure probability P by calculating the area of the failure region above the allowable, which is assumed to be constant. The second approach treats epistemic uncertainty differently from aleatory uncertainty.
It considers realizations of the epistemic uncertainty (e1, e2, ..., eN) as possible outcomes of the response and calculates the corresponding probabilities of failure (P1, P2, ..., PN), as illustrated on the left side of Fig. 2-3. This procedure is implemented by Monte Carlo simulation, resulting in a distribution (or histogram) of the probability of failure (right side of Fig. 2-3). If a conservative estimate is preferable, we may take the highest value or the 95th percentile of the distribution; taking the mean value is essentially the same as the single probability of failure P obtained by the first approach. Selecting a reasonable level of conservativeness in the probability of failure estimate is important because an overly conservative estimate suffers poor performance, e.g., a weight penalty. Especially for airplanes and space vehicles, whose design selection is strictly governed by the so-called square-cube law [38], unreasonable conservativeness can even make the design infeasible. It is important to notice that aleatory uncertainty often includes epistemic uncertainty, because statistical parameters, e.g., the mean and standard deviation of a distribution, are subject to sampling errors. A simple example is that the standard deviation of a sample mean is inversely proportional to the square root of the sample size. The material strengths defined by the Federal Aviation Administration (FAA), called A-basis and B-basis strengths [28], take such sampling errors into account. Park et al. [39] modeled the sampling error in material property characterization and investigated the contribution of the number of tests in the context of a hierarchical test procedure. McDonald et al. [40] proposed a framework that incorporates the error due to sparse data into reliability analysis.
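A minimal sketch of the second approach, which separates the two uncertainty types, is given below. The outer loop samples realizations of an epistemic error in the stress prediction, and the inner loop handles the aleatory strength variability; all distribution parameters are hypothetical and chosen only for illustration:

```python
import random
import statistics

random.seed(1)

def pf_given_error(e, n=20_000):
    """Inner loop: aleatory-only probability of failure for a fixed epistemic error e."""
    fails = sum(
        1 for _ in range(n)
        if 300.0 * (1.0 + e) > random.gauss(400.0, 40.0)  # biased stress vs. random strength
    )
    return fails / n

# Outer loop: realizations e_1, ..., e_N of the epistemic error, assumed N(0, 0.1)
pfs = sorted(pf_given_error(random.gauss(0.0, 0.1)) for _ in range(50))

mean_pf = statistics.mean(pfs)       # comparable to the single value of the first approach
p95_pf = pfs[int(0.95 * len(pfs))]   # a conservative (95th percentile) estimate
```

The resulting list of failure probabilities plays the role of the histogram in Fig. 2-3; the designer then chooses the mean or a conservative percentile from it.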
2.1.4 Uncertainty Reduction

Epistemic uncertainty is also known to be reducible with additional knowledge. In structural design, a test is usually conducted for the purpose of design acceptance, and the observed data can be considered additional knowledge. By comparing analytical predictions with the observed responses, the analytical models (or the corresponding error models) can be calibrated. This process is called uncertainty reduction or uncertainty quantification. Bayesian inference is commonly used for uncertainty reduction. Bayes' rule is named after Thomas Bayes, who derived the well-known formula of posterior probability shown in Eq. (1.1) [41]:

P(A|B) = P(B|A) P(A) / P(B)     (1.1)

where P(A|B), called the posterior, is the probability of event A given that event B happened, P(B|A) is the conditional probability of event B given that event A happened, and P(A), the probability of event A, is called the prior. This formulation can be extended to probability distributions as

f(q|y) = f(y|q) f(q) / Integral[ f(y|q) f(q) dq ]     (1.2)

where f denotes a probability density, q is the error in the analytical model, and y is the test observation. In analogy with error calibration after a structural test, f(q) is the predicted distribution of the error in the analytical model, e.g., finite element analysis (FEA). f(y|q), called the likelihood function, represents how likely the actual response is to be observed in the test given that the true value of the error is q; usually, the probability density function of the measurement error of the test is used to calculate the likelihood. f(q|y) is the updated error distribution given the test observation. The use of Bayesian inference has been studied extensively from various aspects of structural design. An et al. [42] illustrated that a Bayesian inference technique allows us to reasonably avoid unconservative estimates of failure stresses using test observations. Mahadevan et al. [43] applied Bayesian networks to identify uncertainty in simple structures that have multiple failure modes. Arendt et al.
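As a numerical illustration of the Bayesian update in Eq. (1.2), the sketch below updates an assumed Gaussian prior on a model error with a single test observation, evaluating the posterior density on a grid. All numbers (prior spread, measurement noise, observed discrepancy) are hypothetical:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

prior_sigma = 0.10   # assumed prior on the model error: N(0, 0.10)
meas_sigma = 0.02    # assumed measurement error of the test: N(0, 0.02)
observed = 0.06      # observed discrepancy between test and prediction

# Posterior density on a grid: f(error | observation) is proportional to
# likelihood(observation | error) * prior(error)
grid = [i / 1000.0 for i in range(-400, 401)]
post = [normal_pdf(observed, e, meas_sigma) * normal_pdf(e, 0.0, prior_sigma) for e in grid]
norm = sum(post)
post = [p / norm for p in post]
post_mean = sum(e * p for e, p in zip(grid, post))
```

For this conjugate Gaussian case the posterior mean has the closed form observed * sigma0^2 / (sigma0^2 + sigma_m^2), about 0.0577 here, showing how the test observation pulls the error estimate away from the prior mean of zero.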
[44] proposed a framework that identifies both parameter uncertainty and model uncertainty simultaneously from multiple responses. Urbina et al. [45] established a Bayesian framework for hierarchical complex systems considering both epistemic and aleatory uncertainties.

2.1.5 Quantifying the Contribution of Tests

Probabilistic design and uncertainty reduction are useful tools not only for designing new structures but also for understanding how effective conventional design and development practices are. For example, a battery of tests has been integrated into the development process, such as tests for characterizing material properties, tests for verifying designs of structural elements, and tests for system certification. However, the number of tests is conventionally determined without quantitative rationale. For example, a design guideline for composite materials issued by the Department of Defense [46] suggests the number of tests for the verification of structural elements, typically from 3 to 5, but offers no specific explanation to support these numbers. Acar et al. [47, 48] pioneered the work on quantifying the contribution of post-design tests and revealed quantitatively that the safety level of airplane structures is attributable not only to the safety factor being applied but also to the material tests and certification tests, which reduce the error in design calculations. Acar et al. [49] also quantified the contribution of hierarchical tests, such as the number of material tests and the number of structural element tests, to safety. Similarly, Park et al. [39] viewed such a hierarchical test procedure as a resource allocation problem and quantified the contribution of each of the tests to the failure stress estimation.
Venter and Scotti [50] proposed a design optimization method that takes advantage of the uncertainty reduction provided by acceptance tests, which are usually required for each flight component of a space vehicle.

2.2 Surrogate Models

2.2.1 Surrogate Models

As mentioned in the previous subsection, computational expense is one of the drawbacks of probabilistic design. For example, to calculate a probability of failure of 10^-4 with an accuracy of 10%, Monte Carlo simulation requires about a million samples. This means that we would need to run a finite element analysis 1,000,000 times to account for the variations in input parameters and boundary conditions, which is not practical. To overcome this issue, the use of surrogate models has drawn considerable attention. A surrogate model, also known as a metamodel or response surface model, is used in lieu of expensive computer simulations or costly experimental data. A survey paper by Simpson et al. [51], which summarized the history of surrogate models for design optimization, notes that surrogate models can either interpolate the values of the response at certain points or provide a best fit based on some metric. Surrogates are modeled by a simple mathematical formula, e.g., a polynomial function, and the parameters that characterize the model are estimated from a set of data observed either by computer simulation or by physical tests. The surrogate model is then used to predict the responses at points where no data have been observed. The polynomial response surface (PRS) is a traditional surrogate [52-54]. It assumes that the relationship between the input variables (independent variables) and the output (dependent variable) is approximated by a polynomial, and the least squares method is deployed to determine the coefficients of the polynomial. More sophisticated surrogate models were developed in the 1990s.
Kriging, also known as Gaussian process regression (GPR) [55-57], is one of the most popular surrogates for engineering applications because of its flexibility in function approximation. Kriging assumes that the true function is a realization of a random process, i.e., a Gaussian process, and obtains the prediction function from the conditional distribution of a multivariate normal distribution. Support vector regression (SVR) [58-60] is another flexible approximation model introduced more recently. SVR typically takes the form of a radial basis function while allowing for some degree of error in the prediction, the so-called epsilon-insensitivity, in order to deal with noisy observations. Forrester et al. [61] describe various surrogate models commonly used for engineering applications. Extensive benchmark studies of surrogate models have been conducted. Giunta and Watson [62] examined the performance of PRS and Kriging for 1-D and 2-D problems. Jin et al. [63] tested multivariate adaptive regression splines (MARS) and radial basis functions (RBF), as well as PRS and Kriging, on 14 numerical problems including highly nonlinear surfaces. Clarke et al. [58] compared SVR with PRS, MARS, and RBF. Simpson and Mistree [64] investigated the use of Kriging in the context of design optimization.

2.2.2 Surrogate Models for Uncertainty Quantification and RBDO

RBDO typically deploys a very costly double-loop iteration, called the nested formulation [65]: uncertainty estimation in the inner loop and optimization in the outer loop. Figure 2-4 depicts the framework of nested RBDO. In the inner loop, for a given design, the computer simulation is run a number of times, varying the uncertain variables, until the probability distribution of the response of interest is obtained.
Based on the probability distribution, the statistics of interest, e.g., the mean, the standard deviation, or the probability of failure, are calculated and returned to the optimizer. The optimizer then searches for a new design candidate, and this iteration lasts until the solution converges. As mentioned earlier, if a probability of failure calculation is involved, millions of runs of the computer simulation are needed for each iteration of the optimization, which is practically impossible. One way of coping with this computational burden is to replace the inner-loop calculation with a surrogate model, as illustrated in Fig. 2-5, called layered/nested RBDO [65]. In this formulation, the statistics corresponding to a set of design points in the design domain are calculated in advance, and a surrogate model is fitted to the statistics to construct an approximation model. The optimizer then uses the approximated statistics instead of running a number of computer simulations at each iteration. In the same manner, surrogate models can be used as an alternative to the response itself. Giunta et al. [66] benchmarked various combinations of sampling techniques, also known as design of experiments (DOE) [67], and surrogate models for uncertainty quantification. Jin et al. [68] applied various surrogate models to structural design optimization including uncertainty quantification. Taking into account the uncertainty in the surrogate prediction is also important for RBDO. Hajela and Vittal [69] proposed a method to incorporate the confidence interval of the reliability estimate predicted by PRS into a design optimization framework. Kim and Choi [70] used the prediction interval of PRS, which is more conservative than the confidence interval, for RBDO. Picheny et al. [71] directly incorporated the Kriging uncertainty estimate into the probability of failure calculation.
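The layered idea can be sketched for a one-dimensional design variable: evaluate the probability of failure by Monte Carlo at a few design points in advance, fit a simple surrogate (here a straight line in log10 of the failure probability), and let the optimizer query the cheap surrogate instead of rerunning the inner loop. The thickness-scaled stress model and all distributions below are invented for illustration:

```python
import math
import random

random.seed(2)

def pf_by_mc(t, n=20_000):
    """Inner loop: Monte Carlo probability of failure for a design thickness t."""
    fails = sum(
        1 for _ in range(n)
        if random.gauss(300.0 / t, 30.0 / t) > random.gauss(400.0, 40.0)
    )
    return fails / n

# Statistics at a handful of design points, computed once in advance
ts = [0.9, 1.0, 1.1, 1.2]
log_pfs = [math.log10(max(pf_by_mc(t), 1e-6)) for t in ts]

# Least-squares line log10(Pf) ~ a + b*t, used as the surrogate of the statistic
n = len(ts)
sx, sy = sum(ts), sum(log_pfs)
sxx = sum(t * t for t in ts)
sxy = sum(t * y for t, y in zip(ts, log_pfs))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

def pf_hat(t):
    """Cheap surrogate the optimizer queries instead of Monte Carlo."""
    return 10.0 ** (a + b * t)
```

Increasing the thickness lowers the stress, so the fitted slope b is negative and the optimizer can search for the smallest t whose predicted pf_hat(t) meets the reliability constraint.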
2.2.3 Surrogate Models for Smoothing Noisy Data

Experimental observations and computational simulations are often subject to noise, which is a primary source of approximation error and may prevent the convergence of optimization algorithms. Some surrogate models are known to be capable of smoothing noisy data, acting as a noise filter, and some researchers have taken advantage of this property. Giunta et al. [72] and Papila and Haftka [73] successfully applied PRS to design optimization for a high-dimensional problem (a high-speed civil transport) in order to cope with noisy computational simulations. Giunta and Watson [62] compared the performance of PRS and Kriging fitted to two relatively simple noisy functions; the study illustrated that the relative accuracy of the surrogate models is problem dependent. Jin et al. [63] also examined PRS and Kriging, as well as multivariate adaptive regression splines (MARS), with a few noisy test functions. It was found that all surrogate models except Kriging performed almost as well, in terms of R-squared, as the ones fitted to noise-free data; Kriging yielded a poor prediction because it interpolates the noisy data. The selection of the DOE is as important as tuning the surrogate models. Studies on optimal DOE for PRS with noisy observations date back to the middle of the 20th century [74-76]. These early studies focused only on simple polynomial regression and sought an appropriate allocation of samples, considering repetitions, according to optimality criteria such as D-optimality. Later, an empirical investigation into optimal allocation with Kriging was conducted by Picheny [77]. The study found that, for a two-dimensional function with constant noise over the space, a larger number of distinct observations from high-noise simulations provided a better result than a smaller number of observations from a lower-noise simulation.
Note that the performance metrics used in these studies are associated with the level of confidence of the prediction models, i.e., the prediction variance [52]. However, the confidence level of the prediction does not necessarily represent the error with respect to the true function, which is the main concern of structural design. In fact, Goel et al. [78] showed that the D-optimal design, which minimizes the maximum variance of the coefficients, may have a large bias error.

Figure 2-1. Deterministic design vs. probabilistic design. (a) A structure is designed by using a safety factor of 1.5 and deterministic predictions of the stress and strength. (b) The probability of failure of the structure is assessed in the manner of probabilistic design; the uncertainties in the stress and strength are modeled by probability density functions.

Figure 2-2. Probability of failure calculation. Epistemic and aleatory uncertainties are treated equally.

Figure 2-3. Probability of failure estimation. Epistemic and aleatory uncertainties are treated differently.

Figure 2-4. Nested reliability-based design optimization (RBDO).

Figure 2-5. Layered/nested reliability-based design optimization (RBDO).

CHAPTER 3
EFFECTIVE TEST STRATEGY FOR STRUCTURAL FAILURE CRITERION CHARACTERIZATION

3.1 Background and Motivation

For structural design, appropriate characterization of failure criteria is critical. The main objectives of failure criterion characterization are (1) identifying the underlying failure modes, and (2) constructing an accurate design allowable chart for each failure mode, e.g., a failure load map with respect to geometry and load conditions. Inaccurate characterization may lead to structural designs that experience unexpected failure modes and suffer large errors. Using analytical failure theories, such as Tsai-Wu and von Mises, is a reasonable approach for well-known materials and structures.
However, analytical theories may not be reliable enough for newly introduced materials, e.g., composite materials [79], and new structural elements because of lack of knowledge. Therefore, failure criterion characterization often relies on an experimental approach, in which we conduct a series of tests on a structural element for a particular use. To discover potential failure modes, it is important to explore the design space with as many different structural configurations as possible. At the same time, failure load mapping is carried out by fitting a surrogate to the observed test data [28, 46]. Because the test results are noisy, owing to variability in material properties and test conditions and to errors in measurement devices, the same structural configurations are often replicated for statistical analysis of the observations. Since tests are costly, these processes need to be accomplished under a budgetary constraint, i.e., a limited number of tests. There then arises a resource allocation problem between exploration and replication. For example, for a two-dimensional problem, there may be two options: (1) a 4x4 matrix with 3 replications, or (2) a 7x7 matrix without replications, as shown in Fig. 3-1. This chapter explores effective resource allocation of tests for failure criterion characterization. We are particularly interested in whether we can take advantage of the smoothing effect of surrogate techniques, equivalent to a noise filter, to remove the need for replication. The same problem of exploration versus replication is also encountered in fitting surrogates to noisy simulations, such as probabilities calculated from Monte Carlo simulations.
As addressed in the literature review on surrogate models for noisy data (Section 2.2.3), little has been discussed on the effects of replicated data, which is the scope of this study. We illustrate failure criterion characterization using two example structural elements. Each structural element has two potential failure modes, one of which dominates the design space. The less dominant failure mode is considered an un-modeled failure mode when it is missed by the test matrix. The failure load map of the dominant mode is approximated using test data and surrogate models. In order to examine the noise-filtering capability of surrogate models, one of the structural examples has a simple failure load surface, and the other has a highly nonlinear surface. We test different types of surrogate models, including the polynomial response surface, Gaussian process regression, and support vector regression. With the help of the examples, we discuss effective strategies for failure criterion characterization.

3.2 Surrogate Models

In this section, we summarize the formulations of the surrogate models, including the polynomial response surface (PRS), Gaussian process regression (GPR), and support vector regression (SVR).

3.2.1 Polynomial Response Surface (PRS)

The polynomial response surface uses a polynomial function and a least squares fit to approximate the true function. Let y_hat(x) be the prediction of the output y at a location x. Then y_hat is expressed as a linear combination of polynomial basis functions as

y_hat(x) = Sum_{j=1}^{nc} c_j f_j(x)     (3.1)

where the f_j are basis functions, typically monomials, the c_j are coefficients, and nc is the number of coefficients. Let c_hat be the estimator of the coefficient vector c, and let the (i, j) component of the design matrix X be f_j(x_i) for the i-th of N observations. The errors between the observations and the predictions are expressed in vector form as e = y - X c_hat. The least squares solution, which minimizes e'e, is obtained by determining the coefficients for a given set of observations as

c_hat = (X'X)^{-1} X'y     (3.2)

Since the true function is represented by y(x) = y_hat(x) + e, where e is the error, PRS smooths the noise [62, 63]. PRS is also known for its computational tractability. Because polynomial functions are applied, however, it may cause a problem when fitted to functions not well approximated by polynomials. In this study, we also discuss the selection of the basis functions using metrics such as the standard error and the cross-validation error, i.e., leave-one-out cross-validation, called PRESS (prediction of residual error sum of squares). We used the Surrogate Tool Box [80] routine for the fitting [81].

3.2.2 Gaussian Process Regression (GPR)

Gaussian process regression was originally developed as a method for spatial statistics [55]; a special type of GPR is also known as Kriging [56]. GPR views a set of data points as a collection of random variables that follow some rule of correlation, called a random process, defined by Eq. (3.3). The name Gaussian process originates from the form of the random process, which uses a multivariate normal (Gaussian) distribution:

Y(x) = m(x) + Z(x)     (3.3)

where the mean function m(x) is also called the trend function, and Z(x) is a zero-mean Gaussian process whose covariance k(x, x') represents the correlation between points. For example, the Gaussian covariance, the most commonly used for engineering applications, is expressed as

k(x, x') = s_p^2 exp( -Sum_{l=1}^{d} q_l (x_l - x'_l)^2 )     (3.4)

where s_p^2 is the process variance (with zero mean) and q_l is the scaling parameter of the l-th component of x in d dimensions, which determines the correlation between the points. The prediction is assumed to be a realization of the random process identified by the N observations. Note that the notation here is different from that used in PRS. The first step of fitting is to choose the parameters of the trend and covariance functions, called hyperparameters. The hyperparameters are selected such that the likelihood of observing the data is maximized. In analogy with PRS, this process corresponds to the selection of the monomials.
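The least squares fit of Eq. (3.2) with replicated noisy data can be sketched as follows. The underlying quadratic and the noise level are invented for illustration, and numpy is assumed to be available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a hypothetical response y = 2 + 3x - x^2,
# at 5 locations with 3 replications each (like a test matrix with replication)
x = np.repeat(np.linspace(0.0, 1.0, 5), 3)
y = 2.0 + 3.0 * x - x**2 + rng.normal(0.0, 0.05, x.size)

# Design matrix X built from the monomial basis functions 1, x, x^2
X = np.vander(x, 3, increasing=True)

# Least squares coefficients, equivalent to c = (X'X)^{-1} X'y
c, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ c  # fitted surface at the training points
```

Because the fit minimizes the summed squared error, the replications are effectively averaged out, which is the smoothing property exploited later in this chapter.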
Next, the prediction at a new location x is obtained as the conditional mean given the training data, using the conditional distribution of the multivariate normal distribution:

y_hat(x) = m + r'(R + L)^{-1}(y - 1m)     (3.5)

where r_i = k(x, x_i), the (i, j) component of R is k(x_i, x_j), and 1 is an N-vector of ones. L is a diagonal matrix whose diagonal terms are the noise variance; the noise variance, with zero mean, is assumed to be independent of x and enables us to deal with replicated data. This process is equivalent to determining the N coefficients of the radial basis functions. In case replication exists, the total number of coefficients is reduced by a factor of the number of replications, because the radial basis functions corresponding to the replicated points are identical. The advantage of GPR is its flexibility in fitting nonlinear functions; however, the fitting process is time consuming because of the optimization of the hyperparameters. We use the Gaussian Process Regression and Classification Toolbox version 3.2 [55] for the implementation, selecting a linear model for the trend function and the Gaussian model of Eq. (3.4) for the correlation function. Since the toolbox deploys a line search method for the optimization of the hyperparameters, the optimal solution tends to depend on the starting points of the search. To avoid ending up at a local optimum, we use multiple starting points [1, 0.1, 0.01, 0.001, 0.0001, 0.00001] for both the process variance and the noise variance in the normalized output space (36 combinations of starting points). We also select the starting point of the scaling parameters such that the correlation is high for the closest two points among the training points, assuming that nearby points should be
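A bare-bones version of the GPR predictor of Eq. (3.5), with a zero trend function and a small nugget in place of a full noise model, is sketched below; numpy is assumed, and the training function is invented for illustration:

```python
import numpy as np

def corr(a, b, theta=10.0):
    """Gaussian correlation of Eq. (3.4) for 1-D inputs (unit process variance)."""
    return np.exp(-theta * (a - b) ** 2)

x_train = np.array([0.0, 0.3, 0.6, 1.0])
y_train = np.sin(2.0 * np.pi * x_train)   # hypothetical observations
nugget = 1e-8                              # tiny noise variance for numerical stability

# Correlation matrix R (plus nugget) between the training points
R = corr(x_train[:, None], x_train[None, :]) + nugget * np.eye(x_train.size)
weights = np.linalg.solve(R, y_train)      # radial-basis-function coefficients

def predict(x_new):
    """Conditional mean r' R^{-1} y at x_new, with zero trend function."""
    return corr(x_new, x_train) @ weights
```

With a negligible nugget the predictor interpolates the training data; increasing the nugget (the diagonal matrix L of Eq. (3.5)) makes the fit pass smoothly through noisy or replicated data instead.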
A unique aspect of SVR is an explicit treatment of noise by a user supplied error tolerance That is, only differences between the fit and the data that are larger than are minimized. Figure 3 2 illustrates one of the most common models for the error tolerance, so called in sensitive loss function. When the error ( ( ) is zero ; otherwise the loss is proportionally increased with the error. We use the in sensitive loss function for this study. In the case of line ar approximation, the prediction model is formulated as (3. 6 ) where is the coefficient vector and the vector of input variables and b is the base term. The regression is carried out by optimizing and b by solving the optimization problem shown in the following equations: (3. 7 ) s.t. (3. 8 ) R egularization parameter C is a user defined parameter and trade s off between the flatness of the function ( the first term in Eq. (3. 7 ) ) and the violation of the error PAGE 40 40 tolerance ( the second term in Eq. (3. 7 ) ) The prediction model at can be expressed by using the Lagrange multipliers, and of the two constraints in Eq. (3. 8 ) as (3. 9 ) where represents the i th training point There potentially are N parameters to be optimized ( N sets of Lagrange multipliers). In case we have replication, since all the replicated points have the same dot product the number of parameters to be optimized is essentially reduced by a factor of the number of replications, like GPR. Furthermore the prediction model is only determined by the training points corresponding to non zero Lagrange multipliers, called suppor t vectors. When in sensitive loss function is used, support vectors correspond to the training points being located outside of the error tolerance For nonlinear regression, the dot product may be replaced with a kernel function denoted as Kernel functions map input vectors into a feature space with a higher dimension, where the flattest function is to be found by the optimization. 
For the implementation, we use the Surrogate Tool Box [80], which uses the MATLAB code offered by Gunn [60]. One of the challenges of SVR is selecting an appropriate set of parameters. In general, for the regularization parameter, a substantially large C is suggested [83, 61], and the exact selection of C and e has only a negligible effect on the generalization performance [84]. Figure 3-9 shows the surrogate error (NRMSE, defined later in Eq. (3.11)) with respect to various combinations of C and e for the 4x4 matrix with 3 replications for both example problems tested in this chapter; note that C and e are scaled by the range of the failure loads. We can see that the accuracy of SVR is not sensitive to C as long as C is large enough, while for C = 0.1 the accuracy becomes substantially worse. These trends are also consistent with a past extensive study on parameter selection with a variety of functions and noise types [84]. For a 5x5 matrix with 2 replications and a 7x7 matrix without replication for the composite laminate plate, we observed that C of infinity is the best over the range of e; for the support bracket, C of 1 and 0.5 was slightly better than infinity. Based on these observations, we decided to use infinity for C for both examples. In terms of e, a value of zero did not necessarily provide the best accuracy, which illustrates that noise canceling with an appropriate size of e helps reduce the error. Following some papers [85], we set e to the average standard deviation of the observed noise for each of the problems. For the kernel function, we use the commonly used Gaussian model shown in Eq. (3.10), whose width is also a user-defined parameter:

k(x_i, x) = exp( -||x_i - x||^2 / (2 s^2) )     (3.10)

In the same manner as discussed for GPR, because nearby points should be smoothly connected, and after some experimentation, we selected the width such that k = 0.9 for the closest two points.
For ε, which is suggested to be close to the level of noise [85], we use the average standard deviation of the observed data from a 7x7 matrix with 7 replications. We consider this a practical assumption because the designer can get some idea about the noise level from the observations. For the selection of the parameters C, ε, and σ, cross-validation is usually suggested [59, 61, 86]. However, identifying the best parameters is outside the scope of this research, and testing a large number of parameter combinations is computationally intractable. Instead, we will discuss how the parameter selection affects the performance of the approximation.

3.3 Example Problems

In order to illustrate practical failure criterion characterization, we chose two simple examples for clarity and to allow an exhaustive study of a large number of strategies. The examples are a support bracket and a composite laminate plate. Each structure has two underlying failure modes; one is dominant in the design space and the other is rare, representing an un-modeled mode that might be missed. The composite laminate plate has a highly nonlinear failure load surface, while the support bracket has a smooth and almost linear surface. The following sub-sections describe the example structures, the test matrix, the treatment of the replicated data for approximation, and the error evaluation for analyzing the results.

3.3.1 Support Bracket

A simple support bracket mounted on a base structure is shown in Fig. 3-3. A load is applied on the handle, and the expected operational load angle is 0 to 110 deg in the x-z plane. It is also assumed that the height l and length a of the bracket are fixed due to space constraints. The diameter d of the cylindrical part is considered as a design parameter. Table 3-1 shows the geometry of the structure and its variability.
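Failure of the bracket is predicted with the von Mises criterion applied to the combined stress state, as described next. The sketch below is a hedged illustration of how such a check might look at a point on a solid circular section; the load values and function names are ours, and transverse shear is neglected for brevity, so this is not the thesis's actual model.

```python
import numpy as np

def von_mises_plane(sigma_n, tau):
    """Von Mises equivalent stress for a uniaxial normal stress
    combined with a shear stress (plane stress state)."""
    return np.sqrt(sigma_n**2 + 3.0 * tau**2)

def bracket_point_check(P, M, T, d, yield_strength):
    """Illustrative check at a point on a solid circular section of
    diameter d under axial force P, bending moment M, and torque T.
    Returns True if the point survives (hypothetical loads)."""
    A = np.pi * d**2 / 4.0       # cross-sectional area
    I = np.pi * d**4 / 64.0      # second moment of area
    J = np.pi * d**4 / 32.0      # polar moment of area
    c = d / 2.0                  # outer-fiber distance
    sigma_n = P / A + M * c / I  # additive axial + bending normal stress
    tau = T * c / J              # torsional shear stress
    return von_mises_plane(sigma_n, tau) <= yield_strength

print(bracket_point_check(P=1000.0, M=2000.0, T=1500.0, d=2.0,
                          yield_strength=43000.0))  # True
```

The additive combination of axial and bending stresses in `sigma_n` is the mechanism that makes one point of the section (point D in the thesis) more critical than the others.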
The combination of loading and geometry generates multi-axial states of stress due to axial, bending, torsional, and transverse shear stresses. Figure 3-4 illustrates the critical failure modes of the structure. Because of the additive effect of the torsional and transverse shear stresses, or of the bending and axial stresses, the stress at point D is likely to exceed the strength. However, point A can be the critical point under some conditions, as shown in Figure 3-4. If the designer fails to locate the failure mode initiated at point A, the design allowable will be underestimated. It is assumed that the yield strength of the material is normally distributed and that the geometry of the test specimens varies within manufacturing tolerances; these are the sources of the noise in the test observations. Failure is predicted by the von Mises criterion, ignoring stress concentrations. The tests seek to allow designers to predict the mean failure loads due to the dominant mode at point D. Figure 3-5 depicts the failure load surface corresponding to point D, which will be approximated by the surrogate models.

3.3.2 Composite Laminate Plate

For the second example, intended to have a more complex failure load surface, a symmetric composite laminate plate is considered (Fig. 3-6). The laminate is subjected to mechanical loading along the x and y directions through the loads N_x and N_y. The ply angle and the loading condition are selected as design parameters for the failure load; the range of the ply angle is set as [0, 90] deg. Table 3-2 shows the material properties and strain allowables, including the strain allowable along the fiber direction ε1allow, transverse to the fiber direction ε2allow, and in shear γ12allow. All the properties are assumed to be normally distributed and are the sources of noise in the test observations. The strains are predicted by Classical Lamination Theory (CLT). Figure 3-7 shows the mapping of the critical failure modes: one due to the ply axial strain, which is dominant, and the other due to the ply shear strain, which is rare.
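The ply-level failure check behind Figure 3-7 can be sketched by transforming laminate strains to the ply material axes and comparing them with the allowables of Table 3-2. This is a simplified illustration using the standard strain-transformation relations; the hypothetical strain state and the helper names `ply_strains` and `first_ply_failure` are ours, not the thesis's full CLT analysis.

```python
import numpy as np

def ply_strains(eps_x, eps_y, gam_xy, theta_deg):
    """Transform laminate (x-y) strains to ply axes (1-2) for a ply
    at angle theta; standard transformation with engineering shear."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    eps1 = eps_x * c**2 + eps_y * s**2 + gam_xy * s * c
    eps2 = eps_x * s**2 + eps_y * c**2 - gam_xy * s * c
    gam12 = 2.0 * (eps_y - eps_x) * s * c + gam_xy * (c**2 - s**2)
    return eps1, eps2, gam12

def first_ply_failure(eps_x, eps_y, gam_xy, theta_deg,
                      e1a=0.01, e2a=0.01, g12a=0.015):
    """Return which allowable (axial, transverse, shear) is most
    heavily loaded, using the strain allowables of Table 3-2."""
    e1, e2, g12 = ply_strains(eps_x, eps_y, gam_xy, theta_deg)
    ratios = {'axial': abs(e1) / e1a,
              'transverse': abs(e2) / e2a,
              'shear': abs(g12) / g12a}
    return max(ratios, key=ratios.get)

# Hypothetical strain state that is shear-critical for a 45 deg ply
print(first_ply_failure(0.004, -0.004, 0.0, 45.0))  # shear
```

Depending on the ply angle and load combination, the governing mode switches between the axial and shear allowables, which is exactly the mode mapping of Figure 3-7.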
The designer is assumed to have conducted a series of tests in order to construct an accurate approximation of the failure load map of the dominant mode as well as to spot the less dominant mode. Figure 3-8 shows the failure load surface due to ply axial strain.

3.3.3 Test Matrix and Fitting Strategy

Our test matrices range from 4x4 to 7x7 with evenly spaced test points, in order to investigate the effect of matrix density on the accuracy of the approximation. For each test matrix, we replicate the same test configuration up to seven times. Table 3-3 shows the total number of tests for the matrices. For both structural examples, a 5x5 test matrix or a denser one will detect the less dominant failure modes; therefore, a 4x4 matrix is obviously not a desirable option. We compare the following two strategies for fitting the surrogate models. Note that both strategies provide the same result for PRS.

1. All-at-once fitting strategy: The surrogate models are fitted to all test data, including the replicated points.
2. Mean fitting strategy: The mean values of the replicated data are taken first at each location in the design space. Then, the surrogate models are fitted to the means.

3.3.4 Error Evaluation

In order to evaluate the accuracy of the failure load mapping, we compare the surrogate predictions with the true values at a 20x20 matrix of test points (400 points in total). We define the true value of the failure load as the mean of an infinite number of test observations. For the support bracket, this corresponds to the mean value of the material yield strength (the only uncertainty in the problem, linearly related to the failure load). For the composite laminate plate, since multiple sources of uncertainty are involved, we estimate the true mean values from 10,000 random samples. The standard errors of the sample means are less than 0.5% of the range of the failure load, which is small compared to the surrogate errors.
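The two fitting strategies and the range-normalized error metrics used in this chapter can be illustrated in a few lines. The data values below are hypothetical, and `mean_fit_data`, `nrmse`, and `nmae` are our names for the operations described in the text.

```python
import numpy as np

def mean_fit_data(x, y):
    """Mean-fitting strategy: average replicated observations at each
    unique test location before fitting a surrogate."""
    x_unique = np.unique(x, axis=0)
    y_mean = np.array([y[(x == xu).all(axis=1)].mean() for xu in x_unique])
    return x_unique, y_mean

def nrmse(y_pred, y_true):
    """Root mean square error normalized by the range of true values."""
    rng = y_true.max() - y_true.min()
    return np.sqrt(np.mean((y_pred - y_true)**2)) / rng

def nmae(y_pred, y_true):
    """Maximum absolute error normalized by the range of true values."""
    rng = y_true.max() - y_true.min()
    return np.max(np.abs(y_pred - y_true)) / rng

# Two test locations, two replications each (hypothetical failure loads)
x = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0]])
y = np.array([10.0, 12.0, 20.0, 22.0])
xu, ym = mean_fit_data(x, y)
print(ym)  # [11. 21.]
```

The all-at-once strategy would instead pass `x` and `y` to the surrogate directly, leaving the replicated rows in place.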
To measure overall accuracy, we use the root mean square error normalized by the range of the failure loads (NRMSE), calculated by Eq. (3.11). We also evaluate the normalized maximum absolute error (NMAE), calculated by Eq. (3.12), in order to examine the robustness of the surrogate models.

NRMSE = sqrt( (1/N_t) Σᵢ (ŷ(xᵢ) − yᵢ,true)² ) / (y_max − y_min) (3.11)

NMAE = maxᵢ |ŷ(xᵢ) − yᵢ,true| / (y_max − y_min) (3.12)

We first produce the failure loads corresponding to a set of randomly generated structural and geometrical input properties for a particular matrix of experiments. Then, we fit the surrogate models to the failure loads and evaluate NRMSE and NMAE. We repeat this fitting process 100 times, each with a different set of random inputs, failure loads, and failure load approximations. The errors discussed in the following section are the mean values over the 100 fits. The standard errors of these mean values are, on average, less than 0.1% for NRMSE and less than 0.4% for NMAE, which means that only differences between surrogate models that are substantially larger than these standard errors are statistically significant.

3.4 Results

3.4.1 Support Bracket

We first discuss the results of the surrogate models fitted to the almost linear failure load surface of the support bracket (Fig. 3-5). For PRS, 1st, 2nd, and 3rd order polynomial functions were fitted, and the 2nd order PRS was selected as the best one based on the leave-one-out cross-validation error, PRESS. PRESS predicted well the best polynomial functions, those offering the smallest NRMSE, except for the cases of the 7x7 matrix with 6 and 7 replications. In other words, PRESS properly warned that the 3rd and 4th order PRSs overfitted noise. For SVR, the error tolerance ε is selected at 936 lb, the average of the noise level (one standard deviation) of the failure load, which ranges from 87 lb to 2,421 lb. Note that 936 lb corresponds to 4.1% of the range of the failure load. To examine the resource allocation (replication vs.
exploration), Figures 3-10 and 3-11 show the NRMSE and NMAE of the three surrogate models with respect to the total number of tests when the all-at-once fitting strategy is applied (for the details of the test matrix, see Table 3-3). For PRS and GPR, all four curves corresponding to the matrix densities (from the 4x4 matrix to the 7x7 matrix) form a single curve in NRMSE. This means that replication and exploration contribute equally to improving the accuracy of the approximation. For PRS, this trend is supported by the behavior of the standard error predicted by PRS, which represents the unbiased estimator of the noise variance. Figure 3-12 shows the boxplot of the standard errors of the various test strategies with about 50 tests, including a 7x7 matrix without replication, a 5x5 matrix with 2 replications, and a 4x4 matrix with 3 replications. It can be seen that the medians and variations of the standard errors were almost the same, indicating that the choice between replication and exploration did not matter from the standpoint of noise prediction. In order to identify the causes of error, Table 3-4 compares the errors of surrogates fitted to noise-free data and to noisy data with a 7x7 matrix. The NRMSE for noise-free data purely represents the modeling error of the surrogate. For example, PRS has a 0.6% error for noise-free data, but the error increased to 1.7% when the noise was introduced. Similarly, for the other surrogate models, most of the errors are due to noise rather than to modeling error. From these observations, for PRS and GPR there was no significant advantage of replication over exploration in terms of approximation accuracy. This leads us to conclude that exploration is more important than replication for this example in the context of failure criterion characterization, where we must search for potential failure modes. SVR, on the other hand, shows different trends and underperforms PRS and GPR.
This may reflect the fact that SVR, particularly as applied to this problem, is less sensitive than PRS to very large errors. While PRS minimizes an L2 error norm (i.e., the sum of squared errors), the ε-insensitive loss function minimizes an L1 error norm. This loss function does not penalize small errors at all, and it does not emphasize the effect of the largest errors by squaring them. In fact, as seen in Table 3-4, the L1 norm considering the error tolerance ε, defined by the following equation, is substantially smaller for SVR (0.6%) than for the others (more than 4%).

L1_ε = (1/N_ε) Σᵢ max(0, |ŷ(xᵢ) − yᵢ| − ε) (3.13)

where N_ε is the number of test points at which the prediction error exceeds ε. Note that ε is set to zero when SVR is fitted to noise-free data. It is also observed that the accuracy of SVR deteriorates as the test matrix becomes denser, both in NRMSE (Fig. 3-10) and NMAE (Fig. 3-11). GPR showed a similar trend in NMAE (Fig. 3-11), but it was not as significant as that of SVR. This is explained mainly by the ratio of the number of training points to the number of parameters that determine the prediction model, called the parameter ratio in this chapter. If the parameter ratio is small, e.g., the number of parameters to be tuned is larger than the number of training points, the regression model is in danger of overfitting [87]. Table 3-5 shows the numbers of parameters and the parameter ratios of all the surrogate models for about 50 tests. For example, the 2nd order PRS for two input variables has six coefficients regardless of the test matrix. If it is fitted to a 4x4 matrix with 3 replications (48 points), the ratio is 8 (= 48/6). For GPR, as discussed earlier, the number of parameters equals the number of training points; with replication, the total number of parameters is reduced by a factor of the number of replications, leading to a parameter ratio of 3 for the 4x4 matrix with 3 replications, which is substantially smaller than that of PRS. Without replication (the 7x7 matrix), the parameter ratio becomes even smaller and is 1.
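The parameter-ratio bookkeeping above can be made concrete with a one-line helper (`parameter_ratio` is our name); under the all-at-once fitting strategy, the counts reproduce the PRS and GPR entries of Table 3-5.

```python
def parameter_ratio(grid, reps, n_params):
    """Ratio of training points to model parameters for a grid x grid
    test matrix with `reps` replications (all-at-once fitting)."""
    return grid * grid * reps / n_params

# 2nd order PRS in two variables: 6 coefficients regardless of matrix
print(parameter_ratio(4, 3, 6))   # 8.0  (48 points / 6 coefficients)
# GPR with replication: one parameter per distinct location (16 here)
print(parameter_ratio(4, 3, 16))  # 3.0
# GPR on a 7x7 matrix without replication: 49 points / 49 parameters
print(parameter_ratio(7, 1, 49))  # 1.0
```

The smaller the ratio, the closer the model is to interpolating the (noisy) data, which is why the denser matrices hurt GPR and SVR but not PRS.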
Similarly, SVR, for which the number of support vectors determines the number of parameters, has small parameter ratios. These ratios account for the accuracy deterioration of GPR and SVR with respect to matrix density. The poor performance of SVR may also be attributed to its not-well-tuned parameters, i.e., C, ε, and σ, unlike GPR, which optimizes all the hyperparameters by maximum likelihood estimation. Our numerical experiments showed that tuning the parameters of SVR improved the accuracy and alleviated the accuracy deterioration with a denser matrix. Figure 3-13 illustrates that increasing the correlation coefficient for the nearest two points from 0.9 (the original setting) to 0.99 reduced the error. This makes sense because a higher correlation governs a larger area of the space and makes the fitting curve flatter and less sensitive to noise. In fact, the correlation coefficients optimized by GPR turned out to be higher than 0.9. Figure 3-13 also shows that tuning the regularization parameter C, which was originally set to infinity, improved the performance. Next, we investigated the treatment of replicated data for fitting. Figure 3-14 compares the all-at-once fitting and the mean fitting for GPR in terms of NRMSE. We found no significant difference between the two fitting strategies. From the standpoint of the parameter ratio, the mean fitting has a higher risk of overfitting: as shown in Table 3-5, the parameter ratio of the mean fitting is always 1, smaller than the ratios of the all-at-once fitting. On the other hand, taking the mean values essentially reduces the noise level, and thereby the threat of overfitting. These two effects seem to cancel each other out. Figure 3-15 shows that the mean fitting performed substantially better than the all-at-once fitting for SVR. On top of the smaller parameter ratio, the hyperparameters of SVR are not well tuned.
In such a case, reducing the noise by taking the means was more influential than the proneness to overfitting. Note that we adjusted the error tolerance ε when the mean fitting was applied, in order to account for the accuracy of the sample mean estimator: the original ε was divided by the square root of the number of replications. It is also noteworthy that using the original ε for the mean fitting gave poorer performance than the all-at-once fitting.

3.4.2 Composite Laminate Plate

The fitting performance for the complex failure load surface of the composite laminate plate is discussed in this section. For PRS, various orders of polynomial functions, ranging from 2nd order to 8th order, were tested, and the best models, listed in Table 3-6, were selected based on NRMSE. For the selection, we limited the number of coefficients of the polynomial functions to be smaller than the number of training points, as the least-squares fit typically assumes [52]. Since NRMSE is unknown in reality, we will discuss the selection of the best PRS in the next subsection. Figures 3-16 and 3-17 show the NRMSE and NMAE of all the surrogate models with the all-at-once fitting. It can be seen that the 4x4 matrix did not capture the failure load surface well for any of the surrogate models. Once a 5x5 or denser matrix was available, the accuracy improved substantially. An important observation from these two figures is that the contribution of replication to reducing the error is minuscule compared to that of increasing the density of the matrix. For example, with PRS, the error of a 7x7 matrix without replication (49 tests in total) was smaller than that of a 6x6 matrix with 7 replications (252 tests in total). In terms of the error sources, Table 3-7 compares the errors of the surrogate models fitted to the noise-free data and to the noisy data.
Clearly, most of the errors come from the modeling error rather than from the noise for all surrogate models. Even for SVR, the error for noise-free data was worse than the average error over the 100 fits to noisy data. Table 3-8 shows the parameter ratios for about 50 tests. SVR has support vectors, which are located outside of the error tolerance, at almost all test points. For example, with a 7x7 matrix without replication, there are support vectors at 46.6 locations on average out of 49. This also indicates that the modeling error is significant. In conclusion, exploring different locations was found to be more important than replicating tests at the same location when the modeling error was dominant. For PRS, this was also supported by the standard error, which followed the same trend. As for the treatment of replicated data, Fig. 3-18 compares the different fitting strategies for GPR. Unlike in the support bracket problem, the all-at-once fitting clearly outperformed the mean fitting. It seems that supplying as many observations as possible, including the replicated ones (though they suffer from noise), improved the accuracy. This result is consistent with the previous work conducted by Picheny (2009), who experimentally demonstrated that a larger number of different observations from a high-noise simulation provided a smaller prediction variance than a smaller number of observations from a lower-noise simulation. For SVR, Figure 3-19 illustrates that the mean fitting still helped to compensate for the not-well-tuned model, as discussed for the support bracket problem. Since the errors discussed in this section are averages over 100 fits, we were interested in whether exploration always outperforms replication in each individual fitting iteration.
Test strategies for about 100 tests were examined, including a 7x7 matrix with two replications (98 tests), a 6x6 matrix with three replications (108 tests), and a 5x5 matrix with four replications (100 tests). Table 3-9 shows how many times each test strategy performed better than the others. Except for the comparison for SVR between the 7x7 matrix and the 6x6 matrix, whose performances were very comparable (Fig. 3-19), a denser test matrix steadily provided a more accurate approximation, in almost all cases more than 90% of the time. Furthermore, one might wonder which surrogate model should be chosen. We examined whether PRESS predicted the best surrogate in each fitting iteration of a 7x7 matrix with 2 replications. Table 3-10 shows the selections of PRESS and the correct rate. PRESS predicted well that GPR was better than PRS, at a correct rate of 98%. As for the comparison with SVR, PRESS seemed to work well. However, we found that the PRESS of SVR was substantially higher than those of the other surrogate models, which is why PRESS always predicted that SVR was inferior. This reflects the fact that the SVR used in this study is very sensitive to noise.

3.4.3 Selection of Best PRS for Composite Laminate Plate

Finally, we discuss the selection of the best polynomial function for PRS. Table 3-11 compares the best polynomial function based on NRMSE with the predictions of the best function by PRESS and by the standard error (SE). With a 4x4 matrix, neither PRESS nor SE properly predicted the best functions; with the denser matrices, they predicted correctly. Nonetheless, PRESS always failed to predict in the case of no replication and tended to pick a lower order of polynomial function than the actual best. As shown in Table 3-8, PRS fitted to non-replicated data was prone to overfitting because of the smaller parameter ratio.
This made the PRESS of the higher-order polynomial functions larger, and in turn, PRESS preferred a lower-order polynomial function. SE, on the other hand, steadily predicted the best functions, or at least a reasonably high polynomial order, even when there was no replication. Interestingly, in many cases we observed that the best PRS models violated the requirement that the number of coefficients be smaller than the number of observations. For example, for the 5x5 matrix (25 different observations in the input space), the 6th order PRS (28 coefficients, hence an undetermined system) offered the minimum NRMSE. The Matlab regress function handles such undetermined problems by ignoring linearly dependent column vectors of the design matrix through the Householder QR decomposition. The QR decomposition tends to choose the column vectors with a higher norm as the linearly independent columns. As a result, for the 6th order PRS, after the dependent column vectors were ignored, the highest-order monomials appeared to remain, giving some flexibility to the fitting curve and offering better accuracy. It should be noted that the selection of monomials depends strongly on how the input space is normalized; we normalized the input variables from 0 to 1 for this study. For more details about the algorithms of Matlab regress, the reader should refer to APPENDIX A. Another important warning about this behavior is that it is very hard to identify such a best function for the undetermined problem, whether by PRESS, the standard error, or R-square. In other words, PRESS for an undetermined system might be misleading.

3.5 Summary

We investigated an effective test strategy for failure criterion characterization, focusing on the allocation of tests to replication or exploration. For approximating the failure load surfaces, polynomial response surface (PRS), Gaussian process regression (GPR), and support vector regression (SVR) were examined.
Through two structural element examples, we conclude that replication is not necessarily needed and that exploration is more important, both for discovering underlying failure modes and for the accuracy of the failure load approximation. For the example with an almost linear failure load surface, the noise in the observations was significant compared to the error in surrogate modeling, and replication and exploration contributed equally to reducing the error for PRS and GPR. The tests should then be used for exploration, for the purpose of discovering unexpected failure modes. On the other hand, for the example with a complicated nonlinear failure load surface, in which the error in the surrogate model is dominant, exploration was clearly more important for all surrogate models, both for capturing the behavior of the surface and for identifying unexpected failure modes. We also examined two different treatments of replicated data for surrogate fitting: (1) fitting a surrogate to all replicated data simultaneously, and (2) fitting only to the mean values of the replicated data. While the all-at-once fitting outperformed the mean fitting for GPR, the mean fitting compensated for the proneness of the not-well-tuned SVR to overfitting by reducing the noise. We also found that it was important to adjust the error tolerance ε for the mean fitting to account for the accuracy of the sample mean estimator. Finally, we addressed the issue that the least-squares fit provided by Matlab might offer a better solution when the number of coefficients is larger than the number of test points (an undetermined problem). However, the solution varies depending on how the input space is normalized, and performance metrics such as PRESS might not be reliable.
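The undetermined least-squares behavior noted in the summary can be reproduced in any numerical environment. The sketch below uses NumPy, whose SVD-based `lstsq` returns the minimum-norm solution; Matlab's QR-based regress instead zeroes the coefficients of columns it deems linearly dependent (as discussed in section 3.4.3), so the two environments can return different fitted models for the same data.

```python
import numpy as np

# Underdetermined polynomial fit: more coefficients than observations.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 5)            # 5 observations
X = np.vander(x, 8, increasing=True)    # 8 monomial columns: undetermined
y = rng.standard_normal(5)              # arbitrary responses

beta, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(rank)                      # effective rank is 5, not 8
print(np.allclose(X @ beta, y))  # the data are interpolated exactly
```

Because infinitely many coefficient vectors interpolate the data here, any single goodness-of-fit metric computed from the residuals is uninformative, which is the sense in which PRESS can mislead for undetermined systems.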
Figure 3-1. Tradeoff between replication and exploration given 50 tests.

Figure 3-2. ε-insensitive loss function for SVR.

Figure 3-3. Support bracket.

Figure 3-4. Critical failure modes of the support bracket. A blue circle indicates that failure initiates at point D, while a cross indicates initial failure at point A.

Figure 3-5. Failure load surface of the support bracket for failure initiated at point D.

Figure 3-6. Composite laminate plate.

Figure 3-7. Critical failure modes of the composite laminate plate. A solid circle indicates failure due to the strain along the fiber direction (ε1) or the strain transverse to the fiber direction (ε2), while an empty circle indicates failure due to the shear strain (γ12).

Figure 3-8. Failure load surface of the composite laminate plate due to axial strain.

Figure 3-9. Accuracy of SVR with various combinations of the regularization parameter C and the error tolerance ε (4x4 matrix with 3 replications).

Figure 3-10. Error comparison for the support bracket: NRMSE for the all-at-once fitting strategy. Markers of each line correspond to one through seven replications, from left to right. The average standard errors of the means of NRMSE are 0.05% (PRS), 0.06% (GPR), and 0.05% (SVR).

Figure 3-11. Error comparison for the support bracket: NMAE for the all-at-once fitting strategy. Markers of each line correspond to one through seven replications, from left to right. The average standard errors of the means of NMAE are 0.18% (PRS), 0.25% (GPR), and 0.30% (SVR).

Figure 3-12. Standard errors predicted by PRS for the support bracket for about 50 tests.

Figure 3-13. Performance of SVR with various combinations of C and R. C is the regularization parameter, and R is the correlation coefficient for the closest two points.

Figure 3-14. Comparison of fitting strategies: NRMSE of GPR for the support bracket. Markers of each line correspond to one through seven replications, from left to right.
The average standard error of the means of NRMSE is 0.07%.

Figure 3-15. Comparison of fitting strategies: NRMSE of SVR for the support bracket. Markers of each line correspond to one through seven replications, from left to right. The average standard error of the means of NRMSE is 0.29%.

Figure 3-16. Error comparison for the composite laminate plate: NRMSE for the all-at-once fitting strategy. Markers of each line correspond to one through seven replications, from left to right. The average standard errors of the means of NRMSE are 0.05% (PRS), 0.08% (GPR), and 0.07% (SVR).

Figure 3-17. Error comparison for the composite laminate plate: NMAE for the all-at-once fitting strategy. Markers of each line correspond to one through seven replications, from left to right. The average standard errors of the means of NMAE are 0.29% (PRS), 0.34% (GPR), and 0.27% (SVR).

Figure 3-18. Comparison of fitting strategies: NRMSE of GPR for the composite laminate plate. Markers of each line correspond to one through seven replications, from left to right. The average standard error of the means of NRMSE is 0.07%.

Figure 3-19. Comparison of fitting strategies: NRMSE of SVR for the composite laminate plate. Markers of each line correspond to one through seven replications, from left to right. The average standard error of the means of NRMSE is 0.07%.
Table 3-1. Properties of the support bracket
Property | Quantity | Variability
l [inch] | 2 | N/A
a [inch] | 4.6 | N/A
d [inch] | [1, 3] | N/A
Load angle [deg] | [0, 110] | N/A
Yield strength [psi] | 43,000 | Normal, 10% COV

Table 3-2. Properties of the composite laminate plate
Property | Mean | CV
E1 [GPa] | 150 | 5%
E2 [GPa] | 9 | 5%
ν12 | 0.34 | 5%
G12 [GPa] | 4.6 | 5%
Ply thickness [μm] | 125 | N/A
ε1allow | 0.01 | 6%
ε2allow | 0.01 | 6%
γ12allow | 0.015 | 6%

Table 3-3. Test matrix and total number of tests (columns: number of replications, 1 through 7)
Matrix | 1 | 2 | 3 | 4 | 5 | 6 | 7
4x4 | 16 | 32 | 48 | 64 | 80 | 96 | 112
5x5 | 25 | 50 | 75 | 100 | 125 | 150 | 175
6x6 | 36 | 72 | 108 | 144 | 180 | 216 | 252
7x7 | 49 | 98 | 147 | 196 | 245 | 294 | 343

Table 3-4. Errors of surrogates fitted to noise-free data, and ε-insensitive errors (support bracket)
Error type | Surrogate | Noise-free data (7x7 matrix) | Noisy data (7x7 matrix, no replication)
NRMSE | PRS | 0.6% | 1.7%
NRMSE | GPR | 0.0% | 1.9%
NRMSE | SVR | 0.1% | 4.76%
ε-insensitive error | PRS | - | 4.3%
ε-insensitive error | GPR | - | 4.0%
ε-insensitive error | SVR | - | 0.6%

Table 3-5. Ratio between the number of training data points and the number of parameters for about 50 tests (support bracket). The numbers in parentheses are the numbers of parameters to be optimized for fitting. All numbers are means over 100 fits.
Fitting strategy | Test matrix | PRS | GPR | SVR
All-at-once fitting | 4x4, 3 reps | 8.0 (6) | 3 (16) | 3.4 (13.9)
All-at-once fitting | 5x5, 2 reps | 8.3 (6) | 2 (25) | 2.5 (20.3)
All-at-once fitting | 7x7, 1 rep | 8.2 (6) | 1 (49) | 1.7 (29.1)
Mean fitting | 4x4, 3 reps | - | 1 (16) | 1.5 (8.3)
Mean fitting | 5x5, 2 reps | - | 1 (25) | 1.7 (11.4)

Table 3-6. Best polynomial functions for PRS based on NRMSE (composite laminate plate)
Replications | 4x4 matrix | 5x5 matrix | 6x6 matrix | 7x7 matrix
1 rep | 4th | 5th | 7th | 8th
2 reps | 4th | 5th | 7th | 8th
3 reps | 4th | 5th | 7th | 8th
4 reps | 4th | 5th | 7th | 8th
5 reps | 4th | 5th | 7th | 8th
6 reps | 4th | 5th | 7th | 8th
7 reps | 4th | 5th | 7th | 8th

Table 3-7. Errors of surrogate models fitted to noise-free data (composite laminate plate)
Surrogate | NRMSE, noise-free data (7x7 matrix) | NRMSE, noisy data (7x7 matrix, no replication)
PRS | 7.30% | 7.54%
GPR | 7.02% | 7.45%
SVR | 8.00% | 7.75%

Table 3-8. Ratio between the number of training data points and the number of parameters for about 50 tests (composite laminate plate). The numbers in parentheses are the numbers of parameters to be optimized for fitting. All numbers are means over 100 fits.
Fitting strategy | Test matrix | PRS | GPR | SVR
All-at-once fitting | 4x4, 3 reps | 3.2 (15) | 3 (16) | 3.0 (16.0)
All-at-once fitting | 5x5, 2 reps | 2.4 (21) | 2 (25) | 2.0 (24.9)
All-at-once fitting | 7x7, 1 rep | 1.1 (45) | 1 (49) | 1.1 (46.6)
Mean fitting | 4x4, 3 reps | - | 1 (16) | 1 (16)
Mean fitting | 5x5, 2 reps | - | 1 (25) | 1 (25)

Table 3-9. Performance comparison between test matrices for each fitting iteration for about 100 tests (composite laminate plate). Surrogate models are fitted to 100 sets of random test observations; for each set, the NRMSEs of the surrogate models are compared.
Comparison | PRS | GPR (all-at-once fitting) | SVR (mean fitting)
7x7, 2 reps offers a smaller NRMSE than 6x6, 3 reps | 93% | 89% | 69%
7x7, 2 reps offers a smaller NRMSE than 5x5, 4 reps | 99% | 90% | 93%
6x6, 3 reps offers a smaller NRMSE than 5x5, 4 reps | 99% | 86% | 93%

Table 3-10. Surrogate selection by PRESS (composite laminate plate). The surrogate models fitted to a 7x7 matrix with 2 replications are compared. The all-at-once fitting is used for GPR and the mean fitting for SVR.
Judgment | True positive | False positive | False negative | True negative | Correct rate
GPR offers a smaller NRMSE than PRS | 98 | 2 | 0 | 0 | 98%
PRS offers a smaller NRMSE than SVR | 72 | 28 | 0 | 0 | 72%
GPR offers a smaller NRMSE than SVR | 98 | 2 | 0 | 0 | 98%

Table 3-11. Best polynomial functions for PRS predicted by PRESS and SE (composite laminate plate)
Replications | Metric | 4x4 matrix | 5x5 matrix | 6x6 matrix | 7x7 matrix
1 rep | RMSE | 4th | 5th | 7th | 8th
1 rep | PRESS | 2nd | 2nd | 3rd | 5th
1 rep | SE | 3rd | 5th | 5th | 8th
2 reps | RMSE | 4th | 5th | 7th | 8th
2 reps | PRESS | 3rd | - | - | -
2 reps | SE | 3rd | - | - | -
3 reps | RMSE | 4th | 5th | 7th | 8th
3 reps | PRESS | 3rd | - | - | -
3 reps | SE | 4th | - | - | -
4 reps | RMSE | 4th | 5th | 7th | 8th
4 reps | PRESS | 3rd | - | - | -
4 reps | SE | 4th | - | - | -
5 reps | RMSE | 4th | 5th | 7th | 8th
5 reps | PRESS | - | - | - | -
5 reps | SE | - | - | - | -
6 reps | RMSE | 4th | 5th | 7th | 8th
6 reps | PRESS | - | - | - | -
6 reps | SE | - | - | - | -
7 reps | RMSE | 4th | 5th | 7th | 8th
7 reps | PRESS | - | - | - | -
7 reps | SE | - | - | - | -

CHAPTER 4
DESIGN OPTIMIZATION ACCOUNTING FOR THE EFFECTS OF TEST FOLLOWED BY REDESIGN

4.1 Background and Motivation

After conducting tests, we can reduce the error in the prediction models (epistemic uncertainty), as discussed in section 2.1.4. This allows us to refine the design with a calibrated error model (or a calibrated analytical model). Even though redesign requires additional cost, the resulting design might be much better than the initial one because of the calibrated model. In this chapter, we propose a design optimization framework that takes into account the tradeoff associated with the future test-and-redesign cycle. For example, an initially conservative design can reduce the risk of redesign, but it will suffer a performance loss, e.g., increased weight, and will miss the opportunity to refine the design with the calibrated model. On the other hand, a less conservative design will have a higher chance of redesign, but it might result in a better design thanks to the refined model. Villanueva et al.
[88] introduced a method to simulate the effects of a future test possibly followed by redesign. The study showed that considering the future may lead to significant weight savings for the thermal protection system for space vehicles examined in [88]. We extend the earlier work by incorporating the method into a design optimization framework to offer tradeoff information between the expected performance improvement and the redesign risk. We compare the proposed method to a standard design optimization framework, which does not account for the future, in order to quantify the benefits of the proposed approach. We also investigate various treatments of epistemic uncertainty. As discussed in section 2.1.4, a conservative treatment of epistemic uncertainty in the probability of failure estimate might suffer a performance loss or even result in an infeasible design. We compare different treatments of epistemic uncertainty and see how the proposed method helps to counter the performance loss. Finally, we demonstrate that the proposed approach can be applied to traditional safety-factor-based design without using probabilistic reliability assessment. As an application, we design an integrated thermal protection system (ITPS) for space vehicles, aimed at protecting the vehicle from aerodynamic heating during atmospheric reentry [89]. The error in the calculation of the bottom face sheet temperature is considered as an example of epistemic uncertainty.

4.2 Modeling the Effects of Future Test Followed by Redesign

4.2.1 Epistemic Uncertainty Corresponding to the Future

In this section, we explain how we simulate the future processes, such as test and redesign, for a given design. Since the error will be probed by a future test, we view each realization of the error as corresponding to a different possible future. For example, suppose we predict the temperature of the bottom face sheet of a thermal protection system as 400 K based on a finite element analysis.
At the same time, we are assumed to have some idea about the error in the analysis, say ±10%. Figure 4-2 illustrates that different calculation error values correspond to different futures. In this chapter, a negative error represents an unconservative error, so the worst-case scenario happens when the calculation error is -10%: a very high temperature of 440 K will be observed by the test, and then we might determine that such a high temperature is not acceptable and redesign. In reality, there will also be an error in the test observation. We ignore the error in test observation in this illustration for the sake of simplicity, but we account for it in the proposed method. On the other hand, the most optimistic future occurs if the error is +10%. The test shows that the temperature is 360 K, and we might decide not to redesign (or we might redesign to reduce the weight). In a similar way, we can simulate other possible future scenarios of a given design by accounting for realizations of the error.

4.2.2 Procedure of Future Simulation

The implementation of the future simulation includes six steps: (STEP 1) initial design evaluation, (STEP 2) test observation, (STEP 3) error calibration, (STEP 4) redesign decision, (STEP 5) redesign, and (STEP 6) post-simulation evaluation. Here, we consider a design problem with only one failure mode and a corresponding system response for the sake of simplicity of explanation. At STEP 1, we estimate the probability of failure of a given design by considering the error in the calculation of the response, in the same manner as described in section 2.1.4. STEPs 2 to 5 are associated with the future corresponding to the i-th realization of the calculation error and the error in test measurement. We repeat these STEPs N_f times, which represents the number of realizations of the error as well as the number of possible futures we simulate.
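The ±10% temperature illustration above can be checked with a few lines of code. The sign convention (a negative calculation error is unconservative, so the true response is the calculation times one minus the error) is taken from the example; the function name is ours.

```python
# Sketch of the illustrative example: a calculated temperature of 400 K with a
# +/-10% calculation error. Negative error is unconservative, so the true
# temperature is modeled here as T_true = T_calc * (1 - e_calc).
T_calc = 400.0          # predicted bottom-face-sheet temperature [K]
error_bound = 0.10      # assumed +/-10% calculation error

def true_temperature(e_calc, t_calc=T_calc):
    """True temperature for one realization of the calculation error."""
    return t_calc * (1.0 - e_calc)

worst_case = true_temperature(-error_bound)   # e = -10% -> about 440 K (unconservative)
best_case = true_temperature(+error_bound)    # e = +10% -> about 360 K (conservative)
```

Each choice of `e_calc` inside the ±10% bounds corresponds to one possible future for this design.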
This iterative process can be carried out by Monte Carlo simulation, generating random samples of the errors. After repeating STEPs 2 to 5, we obtain N_f possible design outcomes in the future, some of which go through redesign while others remain the same as the initial design. Finally, at STEP 6, we analyze all possible design outcomes in the future, such as the performance and probability of failure, as well as the probability of redesign.

In order to distinguish the designer who conducts the future simulation from the designer who makes decisions in the simulated future, we use the terms "designer present self" and "designer future self," respectively. While the designer present self knows the value of the true error assigned to each simulated future (STEPs 2-5), the designer future self acting in the simulated future does not know it.

The basic assumptions that make the future simulation practical are as follows.

1. The designer can initially estimate the distribution of the error in calculation and the error in test measurement (even if just based on lower and upper limits of these errors).
2. The distribution of the error in calculation is conservative enough to cover all possibilities.
3. The error in test measurement is small compared to the error in calculation for the purpose of design acceptance, so that the distribution of the error in calculation will be narrowed by the calibration after the test.
4. The error in calculation is independent of the design variables, so that the error distribution can be applied to any design, including the updated design after redesign.
5. Only one specimen of the structure is tested for design acceptance.
6. Once redesign is conducted, the chance of a test showing that further redesign is needed is small enough to be neglected.

4.2.2.1 STEP 1: Initial design evaluation

The designer present self starts with the calculated system response and a distribution of the error in calculation, for example a uniform distribution of ±10%.
First, the designer present self constructs an expected distribution of the test result based on the calculated response and the initial distribution of the calculation error. The designer present self can also obtain the distribution of the probability of failure, as described in Fig. 4-1 and section 2.1.3. As the initial estimate of the probability of failure, the mean of the distribution or its 95th percentile is considered here.

4.2.2.2 STEP 2: Test observation

STEPs 2 through 5 are associated with the i-th realization of the errors out of the N_f realizations. The true response in the i-th simulated future is defined by the designer's calculated response f_calc and the i-th realization of the calculation error e_calc^(i) as

    f_true^(i) = (1 - e_calc^(i)) f_calc        (4.1)

To simulate the test, we introduce an error in test observation, e_m, accounting for uncertainties in measurement and boundary conditions, which constitute a second epistemic uncertainty. We model it as a random variable following a probability distribution. Thus, the observed system response is obtained by assigning the i-th random realization of the measurement error:

    f_obs^(i) = (1 + e_m^(i)) f_true^(i)        (4.2)

Here, we assume that all the geometry and material properties of the test specimen are accurately measured, meaning that there is no uncertainty in the input of the system response calculation. Any such uncertainty, if necessary, can also be approximately represented by e_m. Note that e_calc^(i) and e_m^(i) are unknown to the designer future self, who calibrates the error distribution and makes the redesign decision in the following steps.

4.2.2.3 STEP 3: Error calibration

In this step, the designer future self calibrates the error distribution by using the test observation. The Bayesian approach [41] updates the initial distribution, called the prior, with additional knowledge, here the test observation. The updated distribution, called the posterior, is given by Bayes' formula:

    f_post(e) = l(f_obs | e) f_prior(e) / ∫ l(f_obs | e) f_prior(e) de        (4.3)

where l(f_obs | e), called the likelihood function, is the conditional probability density of the test result given the calculation error e, reflecting the uncertainty due to the measurement error. Figure 4-3 illustrates the procedure of Bayesian updating and shows that the updated distribution becomes narrower than the initial one, based on assumption 3 listed at the beginning of this subsection.

4.2.2.4 STEP 4: Redesign decision

Once the error distribution is updated, the designer future self also updates the probability of failure estimate. This is done in the same manner as in STEP 1 by replacing the initial error distribution with the updated distribution obtained at STEP 3; the superscript "up" denotes the updated estimate of the probability of failure, P_f^up. Because the error is calibrated, the estimated probability of failure after the test is more accurate than the initial estimate. If this updated probability estimate is higher than the allowable probability, redesign is needed; this criterion is intended to reject unsafe designs.

4.2.2.5 STEP 5: Redesign

If redesign is judged to be needed, it is implemented by solving the following optimization problem:

    min_x  W(x)
    s.t.   P_f^up(x) ≤ P_f,allow        (4.4)

where W is the objective function, e.g., structural weight. Note that the probability of failure for the redesign optimization is calculated using the updated error distribution. Denote the optimal solution after redesign by x_up. Recall that all the processes in the simulated future are carried out by the designer future self, who does not know the true error; that future self can therefore base decision making only on the probability estimates (the mean or the 95th percentile) obtained from the error distribution.
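The Bayesian calibration of STEP 3 can be sketched with a simple grid approximation. The numbers below (a 400 K prediction, a hypothetical 430 K test observation, a uniform ±10% prior on the calculation error, and a uniform ±5 K measurement error) are illustrative assumptions, not the dissertation's actual implementation or data.

```python
import numpy as np

# Grid-based Bayesian update (STEP 3) of a uniform prior on the calculation
# error e, with T_true = T_calc * (1 - e) and a uniform +/-5 K measurement error.
T_calc, T_obs = 400.0, 430.0        # hypothetical prediction and test result
meas_halfwidth = 5.0                # assumed measurement-error bound [K]

e_grid = np.linspace(-0.10, 0.10, 2001)   # prior support: +/-10%
de = e_grid[1] - e_grid[0]
prior = np.ones_like(e_grid)              # uniform prior density (unnormalized)

# Likelihood: T_obs must lie within +/-5 K of the true temperature implied by e.
T_true = T_calc * (1.0 - e_grid)
likelihood = (np.abs(T_obs - T_true) <= meas_halfwidth).astype(float)

posterior = prior * likelihood
posterior /= posterior.sum() * de         # normalize to a probability density

# The posterior support is much narrower than the +/-10% prior bounds.
support = e_grid[posterior > 0.0]
```

Here the observation is only consistent with errors near -7.5%, so the calibrated bounds shrink from a 20%-wide interval to one about 2.5% wide, mirroring the narrowing shown in Figure 4-3.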
However, the designer present self, who is conducting the future simulation by assigning the true error, can identify the true probability of failure of the design outcome. If redesign does not take place, the initial design remains, and its true probability of failure remains unchanged.

4.2.2.6 STEP 6: Post-simulation evaluation

Finally, repeating STEPs 2-5 for all N_f sets of the errors (sets of e_calc^(i) and e_m^(i)) in the overall Monte Carlo simulation provides the designer present self with a distribution of the true probabilities of failure after test and redesign. Similarly, a distribution of the objective function after redesign is obtained. For the objective function of the optimization discussed later, we use the mean value of the future objective function. Figure 4-4 illustrates the possible effects of future redesign on the distributions of the true probability of failure and the objective function. The unsafe tail of the distribution of the probability of failure is truncated by redesign (left side of Fig. 4-4). As a result, instances of the objective function (e.g., structural weight) subjected to redesign are relocated due to the design correction (right side of Fig. 4-4). Note that all the instances of the objective function are initially in one spot because all the simulated futures start with the same design. In addition, the probability of redesign can be obtained as

    P_re = (1/N_f) Σ_{i=1}^{N_f} I_re^(i)        (4.5)

where the indicator I_re^(i) = 1 if redesign takes place in the i-th future and I_re^(i) = 0 otherwise. For the illustrative example in Fig. 4-4, three instances are redesigned out of 27 instances, leading to a probability of redesign of 11%. A flowchart of the steps is shown in Fig. 4-5.

4.2.3 RBDO Incorporating Simulated Future

We introduce an RBDO framework using the future properties described in the previous section.
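The probability of redesign in Eq. (4.5) is a plain Monte Carlo average of a redesign indicator over the simulated futures. A minimal sketch, in which the updated failure-probability estimates are replaced by a hypothetical random stand-in rather than actual future simulations:

```python
import numpy as np

# Monte Carlo estimate of the probability of redesign, Eq. (4.5):
# P_re = (1/N_f) * sum over futures of the indicator I_re.
rng = np.random.default_rng(0)
N_f = 1000
p_allow = 6.7e-3                         # allowable probability of failure

# Hypothetical stand-in: updated failure-probability estimates, one per future.
p_f_updated = rng.lognormal(mean=np.log(4e-3), sigma=0.5, size=N_f)

I_re = p_f_updated > p_allow             # redesign indicator, one per future
P_re = I_re.mean()                       # probability of redesign
```

In the actual framework, each entry of `p_f_updated` would come from one pass through STEPs 2-4 for one realization of the errors.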
Once possible future scenarios are simulated, the probability of failure becomes less important, because all the possible design outcomes after redesign satisfy the acceptable probability of failure. Instead, there is a tradeoff between system performance and the probability of redesign. Thus, the optimization problem for the designer present self is formulated so as to optimize system performance under a constraint on the probability of redesign. The optimization problem using the mean of the probability of failure for the redesign decision is formulated in Eq. (4.6):

    min_x  E[W_future(x)]
    s.t.   P_re(x) ≤ P_re,target        (4.6)

where P_re,target is the target probability of redesign.

We also apply the proposed framework to a traditional safety factor approach in order to demonstrate that it is feasible to limit the probabilistic consideration to the tradeoff between performance and probability of redesign. To better approximate the traditional approach, for error calibration after the test we assume that the calculation model is deterministically calibrated by the ratio between the test observation and the prediction (f_obs / f_calc) instead of by statistical Bayesian inference. The redesign in the simulated future is then formulated with an allowable safety factor in Eq. (4.7). The safety factor is defined as the ratio of a deterministic allowable value to the calculated system response, meaning that a safety factor less than one is unconservative.

    min_x  W(x)
    s.t.   S^up(x) ≥ S_allow        (4.7)

We also compare the proposed method to standard reliability-based design optimization (RBDO), which does not account for the future test and redesign, in order to quantify the benefits of the proposed approach. Standard RBDO simply uses the initial estimated probability of failure, without simulating the future, as shown in Eq. (4.8):

    min_x  W(x)
    s.t.   P_f(x) ≤ P_f,target        (4.8)

Note that even with the standard RBDO we assume that regulations require a test followed by redesign if needed, using the same redesign approach as in Eq. (4.4). It is only the present design problem that is different.
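A toy one-dimensional version of this nested structure may make the two loops concrete. Everything below (the mass and response models, the 50-unit allowable, the error bound) is a made-up stand-in for the real ITPS analyses, not the dissertation's implementation:

```python
import numpy as np

# Toy sketch of RBDO-FT: an outer search over a single design variable t
# (mass = t), with an inner Monte Carlo loop over future error realizations.
# Hypothetical models: calculated response R_calc(t) = 100/t, allowable
# response 50, true response R_true = R_calc * (1 - e) with e ~ U(-10%, 10%).
rng = np.random.default_rng(1)
N_f = 500
e_c = rng.uniform(-0.10, 0.10, N_f)       # one calculation error per future

def p_redesign(t):
    """Inner loop: fraction of futures whose true response violates the allowable."""
    r_true = (100.0 / t) * (1.0 - e_c)
    return float(np.mean(r_true > 50.0))

# Outer loop (designer present self): lightest design meeting the redesign target.
p_re_target = 0.20
candidates = np.linspace(1.8, 2.6, 81)
feasible = [t for t in candidates if p_redesign(t) <= p_re_target]
t_opt = min(feasible)
```

Raising `p_re_target` admits lighter designs at the price of more frequent redesign, which is exactly the tradeoff the chapter explores.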
To distinguish the standard RBDO from the proposed RBDO that models the future test, we use the acronyms "standard RBDO" and "RBDO-FT," respectively.

4.3 Example Problem

4.3.1 Integrated Thermal Protection System

We applied the proposed method to the design of a thermal protection system for space vehicles. Thermal protection systems protect a space vehicle from extreme temperatures during atmospheric reentry. The integrated thermal protection system (ITPS) is a concept for reusable vehicles that provides the structural load-bearing function and the insulation function simultaneously, and is intended to save weight [90, 91]. The present ITPS concept is composed of a top face sheet (titanium), a bottom face sheet (beryllium), webs (titanium), and a corrugated core filled with insulation material (Saffil foam), as illustrated in Fig. 4-6. For design and redesign optimization, the thicknesses of the web (t_W), bottom face sheet (t_B), and foam (t_F) are selected as design variables. Table 4-1 lists the random variables of the ITPS and their distributions.

4.3.2 Demonstration of Future Simulation Considering Risk Allocation

We first demonstrate the future simulation described in the previous section. Villanueva [88] considered the ITPS problem with a single failure mode, a bottom face sheet temperature that exceeds its allowable value, and showed that the future simulation provides a reasonable reliability estimate, which might lead to weight saving.
In this section, we extend this previous study to a multiple-failure-mode problem by adding two other failure modes: the mechanical and thermal stress in the web, and thermal buckling due to the temperature difference between the top and bottom face sheets. Since there are three failure modes, our main interest is how the future simulation reflects the risk allocation among the three failure modes that results from the redesign criteria. Since the probabilities of failure of the three modes are small enough, the system probability of failure is defined as the sum of the three mode probabilities, ignoring the cross terms, as shown in Eq. (4.9):

    P_f,sys = P_f,temp + P_f,stress + P_f,buckling        (4.9)

Finite element models for the temperature, stress, and buckling were obtained from a past study, Ref. [89]. We ran Abaqus analyses to construct the surrogate models for temperature and stress prediction in order to reduce the computational time. The details of the surrogate construction are summarized in Table 4-2. The allowable temperature of the bottom face sheet, at which material deterioration starts, is assumed to be log-normally distributed with a mean of 707 K and a coefficient of variation (CV) of 5%. The allowable stress of the web, made of titanium alloy (Ti-6Al-4V), is assumed to follow a log-normal distribution with a mean of 660 MPa and a CV of 5%. A scaled buckling criterion derived from a reference design in Ref. [89] is used; the details of the criterion are described in Appendix C. For calculating the probability of failure, we use the conditional expectation method [92]. Table 4-3 shows the geometry of an initial design candidate, which was arbitrarily chosen for the purpose of demonstration, and its mass for the unit cell depicted in Fig. 4-6. Tables 4-4 and 4-5 summarize the assumptions on the calculation errors and test measurement errors, respectively.
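For the small mode probabilities in this problem, dropping the cross terms in Eq. (4.9) is harmless, which a quick check with the initial-design values reported later in Table 4-6 confirms:

```python
# System probability of failure, Eq. (4.9): for small mode probabilities the
# cross terms are negligible, so the plain sum closely matches the exact
# series-system value computed under an independence assumption.
p_temp, p_stress, p_buckle = 1.8e-5, 3.2e-3, 5.6e-4   # initial-design values (Table 4-6)

p_sys_approx = p_temp + p_stress + p_buckle
# Exact series-system probability under independence, for comparison:
p_sys_exact = 1.0 - (1 - p_temp) * (1 - p_stress) * (1 - p_buckle)
```

The two values differ by less than 2×10^-6, far below the 10^-3 scale of the system probability itself.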
For test observation, we assume a single thermal test that observes the bottom face sheet temperature, to evaluate the temperature failure, and the temperature difference between the top and bottom face sheets, to evaluate the buckling criterion. Three mechanical and thermal load tests are also assumed, to observe the stress strength of the web. 1000 possible futures of the design candidate (N_f = 1000) are simulated, meaning that we generate 1000 sets of random true errors in both calculation and test observation.

We first examine the effects of uncertainty reduction after the tests, described in STEP 3. Figure 4-7(a) shows the discrepancy between the initial estimate of the system probability of failure (3.8×10^-3) and the true probabilities of failure defined by the true calculation errors assigned to each simulated future. We see in Figure 4-7(b) that the discrepancy was substantially reduced by the uncertainty reduction. For example, the maximum unconservative discrepancy was reduced from 0.015 to 0.005. This illustrates that the redesign decision making in the future is carried out with a refined probability of failure estimate. On average over the 1000 simulated futures, the error bounds were reduced by the tests as follows: from ±10% to ±1.0% for the temperature prediction, from ±10% to ±1.0% for the stress prediction, and from ±5% to ±0.9% for the temperature difference prediction. Each calibrated error bound is almost equivalent to the accuracy of the corresponding test measurement.

We then simulate the redesign in the future, for which the allowable and target system probability of failure is selected as 6.7×10^-3, such that the probability of redesign is 20%. The formulation of the redesign optimization is defined by Eq. (4.10):

    min_x  m(x)
    s.t.   P_f,sys^up(x) ≤ 6.7×10^-3        (4.10)

Figure 4-8 compares the histograms of the true system probability of failure before and after redesign.
It can be seen that the unconservative designs were redesigned, and in turn the distribution of the true system probability of failure was truncated and narrowed. Note that the probabilities of failure of the redesigned instances are not exactly equal to the target value (6.7×10^-3) because of the errors that remain even after the uncertainty reduction.

In terms of risk allocation by the future redesign, Fig. 4-9 and Table 4-6 show the histograms of the true probabilities of failure of all failure modes before and after redesign and the corresponding mean values, respectively. We see that the stress failure was initially dominant (3.2×10^-3 on average over all 1000 instances) and the temperature failure was the least critical (1.7×10^-5 on average over all 1000 instances). After redesign, the temperature failure turned out to be critical (2.4×10^-3 on average over the 200 redesigned instances) and the stress failure was no longer dominant (2.8×10^-9 on average over the 200 redesigned instances), while the probability of buckling failure remained at almost the same level. In fact, the redesign optimization reduced the thickness of the foam from 72.88 mm to less than 60 mm and the thickness of the bottom face sheet from 6.90 mm to less than 6.00 mm, making the bottom face sheet temperature higher and the temperature difference between the top and bottom face sheets smaller. These geometry changes make the temperature failure, which was initially less critical, active. The reduced temperature difference also contributes to making the buckling constraint less active, but the web, slightly thinned by redesign (from 1.69 mm to about 1.4 mm), compensates for this, keeping the buckling probability at the same level. As a result of balancing the temperature failure and the buckling failure, the stress failure was no longer critical.
The redesign was intended to improve the safety of unsafe designs, which tends to be accompanied by a mass penalty. However, because the initial design did not allocate the risk optimally, the redesign actually reduced the mass. Figure 4-10 shows the histogram of mass after redesign, where we see that the mass was substantially decreased by redesign.

As demonstrated in this section, the future simulation provides the possible design outcomes of a given design candidate, accounting for the effects of a test possibly followed by redesign. This enables a more reasonable design selection, which is discussed in the next section.

4.3.3 Design Optimization with Simulated Future

In this section, we illustrate design optimization with simulated future for the ITPS problem. Here, we deal with only a single failure mode, the temperature failure of the bottom face sheet, in order to keep the results easy to analyze. The objective is to minimize the mass per unit cell of the ITPS for a given probability of redesign, as shown in Eq. (4.11). The target probability of redesign is varied from 0% to 60%.

    min_x  E[m_future(x)]
    s.t.   P_re(x) ≤ P_re,target        (4.11)

The allowable probability of failure is set at 1×10^-4; therefore, in the simulated future (inner loop), redesign is needed if the updated probability of failure estimate exceeds 1×10^-4. Redesign, if necessary, is carried out by the following formulation:

    min_x  m(x)
    s.t.   P_f^up(x) ≤ 1×10^-4        (4.12)

The same error assumptions shown in Tables 4-4 and 4-5 are applied, and 1000 possible future scenarios, each including testing and redesign, are simulated for each design candidate throughout the optimization iterations. For the standard RBDO, we use the same probability, 1×10^-4, as the target probability of failure in Eq. (4.8). For safety-factor-based design optimization, the target safety factor is chosen as 1.28, which is approximately equivalent to a probability of failure of 1×10^-4. The MATLAB fmincon function is used for the optimization.
Since a multiple-loop iteration is embedded in the entire optimization process, we deploy surrogate models [93] in order to save computational time. Table 4-7 summarizes the surrogate models used for the computational implementation.

We first examine the optimal solutions from the standard RBDO, which does not take the future into account. Table 4-8 compares the optimum obtained using the mean of the probability of failure with the one obtained using its 95th percentile. As expected, the optimal design from the conservative strategy using the 95th percentile is heavier (29.96 kg) than the one using the mean value (28.98 kg). We simulated the futures of these optimal solutions, and the results show that the 95th percentile design will require redesign 13.5% of the time, while the mean design has a 35.8% probability of redesign. Because the redesign takes place only to improve the safety of unsafe designs, a mass increase was observed after redesign in both cases. The conservative 95th percentile design ended up 2.7% heavier than the mean design. Part of that mass penalty reflects the more conservative approach, but part of it reflects the reduced probability of redesign. The reason that the probability of redesign for the 95th percentile design (13.5%) is higher than 5% is that the designer future self also used the 95th percentile value for the probability of failure estimate, which is more conservative than the true probability of failure. Note that all the probabilities of redesign and masses of the simulated futures of the optimal solutions discussed in this section are recalculated without using the surrogate models.
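The surrogate quality figures quoted in Tables 4-2 and 4-7 are PRESS (leave-one-out) cross-validation errors of polynomial response surfaces. A minimal one-dimensional sketch of that metric, using a made-up test function in place of the finite element model:

```python
import numpy as np

# Fit a quadratic polynomial response surface by least squares and score it
# with PRESS: refit with each point left out and accumulate the prediction
# errors at the held-out points. The "FE model" below is a hypothetical stand-in.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 20)
y = 400.0 + 50.0 * x**2 + rng.normal(0.0, 1.0, x.size)   # noisy samples

X = np.vander(x, 3, increasing=True)     # polynomial basis [1, x, x^2]

def press(X, y):
    """Leave-one-out (PRESS) RMS error of a linear least-squares fit."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs.append(y[i] - X[i] @ beta)
    return float(np.sqrt(np.mean(np.square(errs))))

press_rms = press(X, y)
```

Because PRESS scores each point with a model that never saw it, it penalizes overfitting, which is why it can pick a different "best" polynomial order than the fitting RMSE (Table 3-11).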
Next, we explore the optimal solutions from the proposed RBDO with simulated future (RBDO-FT). Figure 4-11 plots the future design outcomes of the optimal solutions and the corresponding initial designs for the case where the mean of the probability of failure is used for the redesign decision. It can be seen that RBDO-FT offers a Pareto front of the tradeoff between mass and probability of redesign, whereas the standard RBDO provides only a single point of the tradeoff (here corresponding to a 35.8% chance of redesign). This Pareto front allows the designer to select the balance between the expected mass in the future and the probability of redesign, which can be viewed as additional development cost. For example, if the program cannot afford redesign cost (0% redesign is required), the initially conservative and heavy design (30.1 kg) is selected. If, on the other hand, a 60% chance of redesign is acceptable, the designer initially selects the less conservative and lighter design (28.0 kg), expecting that it will end up at 28.8 kg on average.

In order to investigate the effect of the treatment of the error, Figure 4-12 compares the 95th percentile designs with the mean designs from RBDO-FT. We observe a mass penalty for the 95th percentile design at all targets of the probability of redesign. However, it is notable that the average mass penalty for the 95th percentile design of RBDO-FT at a given probability of redesign is 1.2%, which is substantially smaller than the 2.7% mass penalty of the standard RBDO. This reflects the fact that, for the standard RBDO, a more conservative error treatment (a conservative probability of failure estimate) reduces the probability of redesign, resulting in a heavier design. On the other hand, the proposed RBDO-FT can select the conservativeness of the error treatment without changing the probability of redesign.

Finally, we demonstrate that the tradeoff between mass and development cost can be achieved even with traditional safety factor design.
Figure 4-13 compares the tradeoff curve of the safety-factor-based approach with that obtained using the mean failure probability. The two tradeoff curves overlap at high probabilities of redesign, but as they approach zero probability of redesign, the mass of the safety factor design becomes heavier than that of the mean design. This reflects the difference in the ways of error calibration. Figure 4-14 depicts a case where an observed test temperature exceeds the initially predicted temperature distribution (prior). This happens when an unconservative calculation error combines with a large measurement error. For the Bayesian approach, the upper bound of the updated distribution cannot exceed the upper bound of the prior distribution. On the other hand, the safety factor approach described in the previous section simply uses the observed temperature as the calibrated temperature. Thus, in such a worst-case future scenario, the safety factor approach is more conservative than the Bayesian approach. When a 0% probability of redesign is imposed, this worst-case scenario is influential because the safety level of the worst case must satisfy the target value. As the target probability of redesign increases, this effect gradually vanishes because the worst-case scenario merely increases the probability of redesign slightly.

4.4 Summary

We proposed a reliability-based design optimization (RBDO) framework that models the effects of a future test followed by redesign. Using the design problem of a thermal protection system, we showed that with the proposed RBDO we can obtain a tradeoff between the expected mass after redesign and development cost, as measured by the probability of needing to redesign. In comparison, the standard RBDO provides only a single point on the tradeoff curve. We also showed that the tradeoff curve can be obtained even with the traditional safety factor approach, with the probabilistic optimization confined to achieving a desired probability of redesign.
We also compared the case treating epistemic uncertainty and aleatory uncertainty equally (i.e., using the mean of the probability of failure) to a conservative treatment (i.e., using the 95th percentile). The results showed that the proposed method enables us to substantially reduce the mass penalty due to conservative treatment of the epistemic uncertainty, compared to the standard RBDO, which does not take the future into account.

Figure 4-1. Probability of failure calculation considering epistemic uncertainty (possible error realizations)
Figure 4-2. Illustration that each realization of the error corresponds to a different future
Figure 4-3. Illustration of Bayesian inference. The initial distribution is set about the calculated response and is updated by the test observation. The likelihood function represents the distribution of the error in the measurement.
Figure 4-4. Possible effects of redesign on the distributions of probability of failure and objective function
Figure 4-5. Flowchart of future simulation
Figure 4-6. Integrated thermal protection system (ITPS). (a) Overview and (b) a unit cell of the ITPS
Figure 4-7. Effects of uncertainty reduction after tests on the probability of failure estimate. (a) Discrepancy between the initial probability of failure estimate and the true probabilities of failure, and (b) discrepancy between the updated probability of failure estimates and the true probabilities of failure
Figure 4-8. Histograms of true probability of failure. (a) Before redesign, and (b) after redesign
Figure 4-9. Histograms of true probability of failure. (a) Temperature, (b) stress, and (c) buckling
Figure 4-10. Histogram of mass after redesign
Figure 4-11. Optimal designs from RBDO-FT using the mean of probability of failure.
Note that the target probability of redesign was always a multiple of 10%, so the slight deviations from these values in the symbols reflect errors due to the use of surrogates.

Figure 4-12. Mass penalty for conservative design (comparison between the 95th percentile design and the mean design). Note that, for the 95th percentile design, the deviation of the optimal solution of the standard RBDO from the Pareto front reflects the error of the surrogate models.
Figure 4-13. Mass and probability of redesign tradeoff (using the safety factor vs. using the mean of probability of failure). Note that the target probability of redesign was always a multiple of 10%, so the slight deviations from these values in the symbols reflect errors due to the use of surrogates.
Figure 4-14. Difference in error calibration between the Bayesian approach and the safety factor approach

Table 4-1. Geometry and material properties of the ITPS and their variability (aleatory uncertainties)

Variable | Nominal | Variability type | CV
Top face sheet thickness | 1.2 mm | Uniform | 3%
Bottom face sheet thickness | 6.82 mm | Uniform | 3%
Web thickness | 1.87 mm | Uniform | 3%
Angle of corrugation | 80 deg | Uniform | 3%
Foam thickness | 70.5 mm | Uniform | 3%
Half unit cell length | 34.1 mm | Normal | 2.89%
Density of titanium | 4416 kg/m^3 | Normal | 2.89%
Thermal conductivity of titanium | 7.15 W/m/K | Normal | 2.89%
Specific heat of titanium | 561 J/kg/K | Normal | 2.89%
Density of beryllium | 1853 kg/m^3 | Normal | 2.89%
Thermal conductivity of beryllium | 203 W/m/K | Normal | 3.66%
Specific heat of beryllium | 1885 J/kg/K | Normal | 2.89%
Density of foam | 24 kg/m^3 | Normal | 2.89%
Thermal conductivity of foam | 0.032 W/m/K | Normal | 2.89%
Specific heat of foam | 775 J/kg/K | Normal | 2.89%
CTE of titanium | 9×10^-6 /K | Normal | 2.89%
CTE of beryllium | 11×10^-6 /K | Normal | 5.77%

Density and thermal conductivity are assumed to be correlated with a correlation coefficient of 0.95.

Table 4-2. Surrogate models for structural responses

Output | Design of experiments | Surrogate model
Temperature of bottom sheet | 180 points from LHS | 2nd order polynomial response surface (0.16% error**)
Stress of web | 179 points from LHS | 2nd order polynomial response surface (about 2.8% error**)
Temperature difference* | 179 points from LHS | 2nd order polynomial response surface (about 0.01% error**)

*The temperature difference between the top and bottom sheets. LHS: Latin hypercube sampling. **Cross-validation error (PRESS) compared to the mean value of the output.

Table 4-3. Geometry of the design candidate

Description | Quantity
Web thickness | 1.65 mm
Bottom face sheet thickness | 6.90 mm
Foam thickness | 72.88 mm
Unit mass | 33.92 kg

Table 4-4. Error assumptions for calculation

Failure mode | Calculation | Error model
Temperature | Temperature at bottom sheet | ±10% (Uniform)
Stress | Stress strength of web | ±10% (Uniform)
Stress | Stress of web | 0%
Thermal buckling | Temperature difference* | ±5% (Uniform)

*The temperature difference between the top and bottom sheets.

Table 4-5. Error assumptions for test observation

Observation | Error model | Number of tests
Temperature at bottom sheet | ±5 K (Uniform) | 1 test
Stress strength | ±19.8 MPa (Uniform)** | 3 tests
Stress | N/A | N/A
Temperature difference* | ±5 K (Uniform) | 1 test

*The temperature difference between the top and bottom sheets. **About 3% of the mean of the failure strength.

Table 4-6. Risk allocation by future redesign

Mean of true probabilities of failure | Temperature | Stress | Buckling | System
Initial design | 1.8×10^-5 (1.7×10^-5) | 3.2×10^-3 (1.1×10^-2) | 5.6×10^-4 (6.4×10^-4) | 3.8×10^-3 (1.2×10^-2)
After redesign | 5.0×10^-4 (2.4×10^-3) | 1.0×10^-3 (2.8×10^-9) | 5.2×10^-4 (4.4×10^-4) | 2.0×10^-3 (2.8×10^-3)

The numbers in parentheses are the means of the true probabilities of failure over only the redesigned instances (the probability of redesign is 20%).

Table 4-7. Surrogate models used for optimization

Output | Design of experiments | Surrogate model
 | 500 points from LHS + 8 corners of design space | 3rd order polynomial response surface (about 1% error**)
 | 500 points from LHS + 8 corners of design space | 4th order polynomial response surface (about 2% error**)

LHS: Latin hypercube sampling. **Cross-validation error (PRESS) compared to the mean value of the output.

Table 4-8. Optimal solutions from the standard RBDO

Redesign decision | Initial design mass^a | Simulated future mass (mean) | Probability of redesign | t_W | t_B | t_F
Using the mean | 28.89 kg | 29.20 kg | 35.8% | 1.42 mm | 5.65 mm | 70.50 mm
Using the 95th percentile | 29.96 kg | 29.98 kg (+2.7%) | 13.5% | 1.42 mm | 5.65 mm | 76.25 mm

^a The mass of 20 unit cells.

CHAPTER 5
COST EFFECTIVENESS OF ACCIDENT INVESTIGATION

5.1 Background and Motivation

Air travel has enjoyed many advances in safety technology since its inception. An important incentive for safety enhancement in aviation is not only technology evolution but also the incremental improvements of design philosophy and safety-related regulations triggered by accidents. There were several epoch-making accidents that facilitated the evolution of the safety system [1, 94]. One famous example is the series of De Havilland Comet accidents in the 1950s, which led to the recognition of design for metal fatigue. Accident investigation has played a central role in improving aviation safety. An elaborate investigation identifies the probable causes of an accident and issues safety recommendations to prevent similar accidents from occurring in the future.
Accident investigation serves not only as a reactive safety measure but also as a proactive one, in the sense that it provides new knowledge about a complex system that will enhance the safety of next-generation aircraft as well as existing ones [94]. Independence from other authorities and separation from blame guarantee the quality of an investigation, and aviation pioneered this practice among the civil transportation modes [95, 96]. More recently, it is anticipated that the approaches and methods of accident investigation will be extended to a wider context of social concerns, such as natural disasters, economic fraud, etc. [97, 98]. Despite the usefulness of accident investigation, the cost effectiveness of aviation accident investigation has not been discussed in the literature. Figure 5-1 shows the number of accidents from 2002 to 2009 in the US [99] and the NTSB budget for aviation accident investigations [100]. The NTSB (National Transportation Safety Board) is the independent US government agency responsible for the accident investigation of civil transportation. It can be seen that while the number of accidents is decreasing, the NTSB budget slightly increases (the correlation coefficient between the total number of accidents and the NTSB budget is 0.27). According to private communication with the Office of Aviation Safety of the NTSB, the NTSB does not even track the cost of each accident investigation. An external examination of the NTSB [101] found that the agency needs more resources to maintain its integrity amid increasingly complex accidents. Furthermore, it is reported that the entire cost of an investigation is much higher than the NTSB cost (at least by a factor of four) when the costs of other agencies and the private sector are taken into account [102]. The bottom line is that the cost of accident investigation is not negligible. In addition, a survey conducted by Waycaster et al.
[103] revealed that, while commercial air carriers are the safest mode of transportation, they have been receiving much more regulatory attention than the other modes. Table 5-1, retrieved from Ref. [103], shows the number of fatalities per passenger-mile from 2002 to 2009 and the economic impact of safety-related regulations that were enacted after accidents over the same period. Table 5-1 compares commercial air carriers, on-demand air taxis and scheduled commuter carriers, general aviation, private autos, and buses. We see that air carriers are roughly 250 times safer than private automobiles, which are the least safe mode. However, the regulation cost of air carriers is about 200 times as large as that of private autos. We view the results of the survey as an indicator that aviation safety is driven by public demand rather than through economic justification. This interpretation is also supported by past research showing that the economic losses of aircraft manufacturers and airlines due to an accident are not significant, especially for large companies [104-107]. These results lead us to conclude that accidents are the main driver of enhanced aviation safety, and it is imperative to understand the cost effectiveness of accident-related safety effort, i.e., accident investigation. In this chapter, we demonstrate a cost-effectiveness study of aviation accident investigation with past accidents. We simply use the break-even point of the investment by considering the expected monetary value of the lives to be saved by an accident investigation. We also discuss the treatment of the probability of accident, which is a challenging part of the cost-effectiveness study. In order to cover a wide spectrum of aviation accidents, we examine two types of aviation systems that have different characteristics: a commercial airplane and a space vehicle, i.e., the Space Shuttle.
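As a quick sanity check on the Table 5-1 comparison, the two headline ratios can be recomputed from the tabulated rates (a minimal sketch; the dictionary names are ours):

```python
# Recompute the two ratios quoted in the text from the Table 5-1 rates.
fatal_rate = {"air_carrier": 0.038, "private_auto": 9.09}  # per billion passenger-miles
reg_cost = {"air_carrier": 31.0, "private_auto": 0.15}     # $ millions per fatality

safety_ratio = fatal_rate["private_auto"] / fatal_rate["air_carrier"]
cost_ratio = reg_cost["air_carrier"] / reg_cost["private_auto"]
print(round(safety_ratio))  # → 239, the "roughly 250 times safer"
print(round(cost_ratio))    # → 207, the "about 200 times" regulation cost
```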
While commercial airplanes are designed to be much safer than space vehicles, an airplane accident would involve a larger number of victims. In addition, the cost of the accident investigation of each of the Space Shuttle disasters is much higher than the NTSB annual budget. Through the demonstration, we discuss potential benefits of the cost-effectiveness study of accident investigation.

5.2 Cost-Effective Measures

5.2.1 Cost-Effective Measures

This section describes the cost-effective measures that we use for the cost-effectiveness study. We deploy a simple break-even calculation of the investment in an accident investigation. The expense c is the cost of the investigation and the following safety remedies, if needed. The payoff is the expected monetary value of the lives to be saved in the future as a result of the investigation and remedies. Potential future fatalities related to an accident are calculated as the product of the expected number of fatalities n_f that would result from a similar accident, the number of airplanes n_a that have the same failure potential, and the probability of reoccurrence of the accident in the remaining lifetime. For estimating n_a, one may take into account not only existing airplanes but also not-yet-built ones that will potentially benefit in the future from the improved design and safety regulations. Accident investigation has the potential to change the probability of accident reoccurrence through implementation of the recommended safety measures. On this basis, the expected monetary value of lives to be saved can be calculated as

b = v · n_f · n_a · (p_b − p_a)    (5.1)

where v is the value of a single life, p_b is the probability of a fatal accident occurring per remaining lifetime of one airplane before the safety improvement is applied, and p_a is the probability of an accident after the improvement is applied. The break-even point occurs when the invested cost of the investigation and remedies, c, equals b. The dollar value of a fatality, v, is defined as the amount we are willing to give up in exchange for a small decrease in the probability of one fatality, called the value of a statistical life [108]. This is a common approach in economics, used to evaluate the effectiveness of policies in medicine, environment, and other areas. How much a society should invest in preventing fatalities is controversial, as seen in many ongoing discussions in different communities, e.g., health care, transportation, environment, etc. Viscusi [109] analyzed data on worker deaths across different industries and suggested that the value of a life lies in the range of $4.7 to $8.7 million. In aviation, the economic values used in investment and regulatory decisions of the U.S. Department of Transportation (DOT) were analyzed and determined. The guidance led to a value of $6.2 million per fatality averted in 2011 [110], lately updated to $9.1 million in 2013 [103]. Similarly in Europe, an aviation fatality avoided was valued at 4.05 million euros by the European Transport Safety Council in 2003 [111]. For a given investigation cost c, it is possible to calculate how much we spend to prevent the loss of one life in the future as

c_life = c / [n_f · n_a · (p_b − p_a)]    (5.2)

This measure can be compared to the DOT guideline ($6.2 million or $9.1 million) to determine whether an accident investigation is cost effective or not. On the other hand, the value of the lives to be saved can be used as the cost-effective threshold of the invested cost as

c* = v · n_f · n_a · (p_b − p_a)    (5.3)

This would help determine whether an investigation needs to be continued at some point in the investigation, once the investigators have reached a level of understanding of the causes sufficient to estimate their probabilities.

One might argue that the loss of the airplane and the loss of property should be taken into account in the break-even calculation. The value of an airplane might be negligible as compared to that of the passengers. Consider a Boeing 747, for example, using 333 passengers and $350 million for a fully equipped new airplane [112]. If all the seats are fully occupied, the value of the airplane is about 11.6% of the total value of the lives according to the DOT's value of a statistical life ($9.1 million). This figure would decrease further when the depreciation of the airplane is considered. However, the loss of a space vehicle would not be negligible; we will discuss it later in an example. Note, also, that the formulation neglects the fact that accident investigation facilitates long-term improvement of the safety system, for example the traffic collision avoidance system, the airport safety management system [94], and so on, whose benefits might be very difficult to predict and measure at the time of the accident.

5.2.2 Estimating Probability of Accident

A challenging part of the cost-effectiveness study is to estimate the improvement in the probability of accident. This section discusses how we can deal with the probability estimation. Suppose the system failure is represented by a direct cause of the accident and all the remaining causes of system failure, forming a series system in the reliability diagram (Fig. 5-2). The probability of accident during the lifetime due to the direct cause and its change are denoted as p_d and Δp_d, respectively. The probability of failure of the remaining system is p_r. Then, the system probability of accident and its change are obtained as

p_sys = 1 − (1 − p_d)(1 − p_r)    (5.4)

Δp_sys = (1 − p_r) · Δp_d    (5.5)

It is obvious that assessing the system-level improvement requires us to evaluate the probabilities of all potential accident causes, which might be impractical for complex systems like airplanes. In comparison, estimating the probability improvement of the direct accident cause is easier because of the profound knowledge of causality available after or during the investigations. For civil airplanes, the probability of a fatal accident in a lifetime is roughly estimated at 10⁻³ from the average fatal accident rate in the U.S. in 2002-2009 (1.9×10⁻⁷ per aircraft departure [99]) and a typical design lifetime of airplanes of 40,000 flight cycles. This implies that the term (1 − p_r) is almost 1. Thus, Δp_sys can be accurately approximated by Δp_d, and we do not necessarily have to assess the probabilities of all the other accident causes. On the other hand, the level of system safety of space vehicles is much lower; the probability of accident of the Space Shuttle [17] is calculated as 0.19 according to the top 10 risks listed in the report. NASA also reported that the likelihood of the direct cause (O-ring failure of the solid motor) improved from 1 in 25 missions to 1 in 1500 missions, equivalent to Δp_d = 0.66 in the lifetime, assuming that the remaining number of missions is 110. Thus, p_r cannot be neglected as compared to p_d. For such a case, the probabilities of the other potential accident causes should be carefully estimated.

5.3 Demonstration of a Cost-Effectiveness Study

In this section, we demonstrate cost-effectiveness studies of accident investigation with two past airplane accidents and two Space Shuttle accidents. It should be emphasized that the demonstration does not attempt to accurately quantify the cost effectiveness of each of the examples, since some of the data, such as the actual costs of the investigations and the probabilities of the accidents, are not publicly available. We can, however, discuss the challenges and usefulness of the cost-effectiveness study through these examples.
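The break-even quantities of Eqs. (5.1)-(5.3) and the series-system approximation of Eqs. (5.4)-(5.5) can be sketched as follows (a minimal illustration; the function and variable names are ours, and the numbers in the usage lines are the American Airlines values discussed in Section 5.3.1):

```python
def lives_saved_value(v, n_f, n_a, p_b, p_a):
    # Eq. (5.1): value of a statistical life x fatalities per accident
    # x fleet size x reduction in per-airplane lifetime accident probability
    return v * n_f * n_a * (p_b - p_a)

def cost_per_life_saved(c, n_f, n_a, p_b, p_a):
    # Eq. (5.2): invested cost divided by the expected number of lives saved
    return c / (n_f * n_a * (p_b - p_a))

def system_improvement(dp_direct, p_remaining):
    # Eqs. (5.4)-(5.5) for a series system: the system-level change is
    # (1 - p_r) * dp_d, which reduces to dp_d when p_r is small
    return (1.0 - p_remaining) * dp_direct

# American Airlines Flight 587 values: $6.2M, 213 fatalities, 215 airplanes
print(round(lives_saved_value(6.2e6, 213, 215, 1.0e-3, 0.0) / 1e6, 1))  # → 283.9
# Civil airplane: p_r ~ 1e-3, so the approximation dp_sys ~ dp_d is excellent
print(round(system_improvement(1.0e-4, 1.0e-3) / 1.0e-4, 3))  # → 0.999
# Space Shuttle-like system: p_r ~ 0.19 is no longer negligible
print(round(system_improvement(0.66, 0.19) / 0.66, 2))        # → 0.81
```

Note that `cost_per_life_saved(50e6, 213, 215, 1.8e-4, 0.0)` evaluates to about $6.1 million, which is why 1.8×10⁻⁴ appears as the break-even probability in Section 5.3.1 under the $6.2 million DOT guideline.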
5.3.1 American Airlines Flight 587 Accident

The first example is the accident of American Airlines Flight 587, which occurred on November 12, 2001. The airplane, an Airbus A300-605R, crashed into a neighborhood in Belle Harbor, New York, after taking off from the John F. Kennedy International Airport. All 260 people aboard and five people on the ground were killed in the accident [113, 114]. The NTSB determined that the probable cause was the in-flight separation of the vertical stabilizer as a result of loads beyond ultimate strength that were created by the first officer's unnecessary and excessive rudder pedal inputs (when the pilot reacted to wake turbulence). Contributing to these rudder pedal inputs were characteristics of the Airbus A300-600 rudder system design and elements of the American Airlines Advanced Aircraft Maneuvering Program (AAMP). The NTSB report concluded that the AAMP's excessive bank angle simulator exercise could have caused the first officer to have an unrealistic and exaggerated view of the effects of wake turbulence and to react to it by taking unnecessary actions, including making excessive control inputs. The report also discussed a widespread misunderstanding among pilots about the performance of the rudder limiter system; pilots believed that the limiter would prevent structural damage no matter how they moved the control. However, the limiter did not take into account structural damage caused by repetitive opposite-direction rudder inputs, which resulted in the excessive load. The FAA issued an airworthiness directive (AD) in 2011 [115] requiring a modification to the rudder control system, called the pedal travel limiter unit (PTLU).
The AD estimates the implementation cost of the PTLU for the 215 US-registered airplanes in the fleet at $42,677,500 (the worldwide fleet size is not available in the AD). For the cost-effectiveness study, the number of potential fatalities was estimated at 213, based on the typical passenger capacity of the model (266 passengers [116]), a load factor of about 80% [117], and nine crewmembers. Adding the costs of the accident investigation and other safety remedies (e.g., pilot training), which are not publicly available, the total invested cost is roughly estimated as $50 million. Since a pilot-related cause was involved, it is difficult to determine the probability of reoccurrence of the accident. Instead, we vary the probability to examine the cost-effectiveness threshold defined in Eq. (5.3). Table 5-2 shows the cost-effective threshold against the probability of accident. Here, we used $6.2 million for the value of a statistical life, which was suggested by the DOT when the FAA issued the AD. We assumed that the probability of accident reoccurrence after the remediation is zero. We can see that the safety investment can be considered cost effective if the probability of the accident reoccurring is higher than 1.8×10⁻⁴ in the remaining lifetime of a single airplane. This probability corresponds to 8.8×10⁻⁹ per flight, assuming that the remaining lifetime is roughly half the design service goal of the airplane (40,000 flight cycles [118]). In other words, if the probability that at least one airplane in the fleet of 215 airplanes runs into the same accident is higher than 3.7%, the safety remedies including the accident investigation are cost effective. If we replace $6.2 million with the actual government expense to try to reduce fatalities, i.e., the $31 million per fatality shown in Table 5-1, the probability threshold decreases further to 3.5×10⁻⁵ in lifetime (1.8×10⁻⁹ per flight).
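The Table 5-2 column can be reproduced directly from the break-even threshold (a sketch under the stated assumptions: $6.2 million per statistical life, 213 potential fatalities, 215 airplanes, and zero reoccurrence probability after remediation):

```python
V_LIFE = 6.2e6   # DOT value of a statistical life when the AD was issued
N_FATAL = 213    # estimated potential fatalities per accident
N_FLEET = 215    # A300-600 airplanes registered in the US

def effective_cost_threshold(p_accident):
    # Eq. (5.3) with zero reoccurrence probability after remediation
    return V_LIFE * N_FATAL * N_FLEET * p_accident

for p in (1e-3, 1.8e-4, 1e-4, 1e-5, 1e-6):
    print(f"{p:.1e}  ${effective_cost_threshold(p) / 1e6:.1f}M")
```

This yields $283.9M, $51.1M, $28.4M, $2.8M, and $0.3M against Table 5-2's $283.7M, $50.0M, $28.4M, $2.8M, and $0.3M; the small differences come from rounding of the reported parameters.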
These figures lead us to conclude that this accident investigation and the following safety remedies were cost effective unless the likelihood of the accident is extremely low. Another important observation from Table 5-2 is that a rough estimate of the probability (for example, an order-of-magnitude estimate) is adequate, since the corresponding cost threshold also varies by orders of magnitude. Giving an order-of-magnitude range for the probability would be practical after collecting sufficient knowledge about the accident causes in the middle of the investigation.

5.3.2 Alaska Airlines Flight 261 Accident

The next example is the crash of Alaska Airlines Flight 261, which occurred on January 31, 2000. Fatalities included two pilots, three cabin crewmembers, and 83 passengers. The airplane, an MD-83, was destroyed by impact forces [119]. The NTSB determined that the probable cause was the in-flight failure of the horizontal stabilizer trim system jackscrew assembly's acme nut threads. The thread failure was caused by excessive wear resulting from Alaska Airlines' insufficient lubrication of the jackscrew assembly. According to the NTSB report, several factors contributed to the accident. First, lubrication of the nut threads was not adequately performed. The report concluded that, given the accident airplane's jackscrew assembly lubrication practices, a similar lubrication problem could continue to happen if an appropriate improvement was not applied. In fact, the investigation found deficiencies in the lubrication process. Second, the lubrication and inspection intervals for the wear condition were inappropriately wide; because of this, wear exceeding the critical condition could not be discovered before the following lubrication or inspection point. Lastly, there was no fail-safe mechanism to prevent catastrophic effects from the loss of the nut threads. The FAA issued airworthiness directives [120-125] requiring repetitive inspections and lubrication.
These improvements were applicable not only to the MD series but also to Boeing airplanes. Table 5-3 shows the fleet sizes of the airplane models to which the ADs were applied and the passenger capacities of those airplanes obtained from the company's website [126]. We roughly assume that five inspections and lubrications would be needed in the rest of the lifetime of each airplane and that the overhaul of the nut and screw, which was applied only to the Boeing 737, is a one-time item. Based on the work hours and labor rates addressed in the ADs, we estimate that the total cost of the safety improvements is about $17.8 million in US 2011 dollars. Table 5-4 shows the cost threshold with respect to the probability of reoccurrence of the accident based on $6.2 million for the value of a statistical life. Here, the number of potential fatalities to be saved is calculated by summing up the potential fatalities of each airplane model shown in Table 5-3 with a load factor of 80%. We assume that the probability of accident reoccurrence after the remediation is zero. It can be seen that if the probability of the accident is higher than 3.7×10⁻⁶ per lifetime of an airplane (1.8×10⁻¹⁰ per flight, assuming that the remaining flight cycles of the airplane are 20,000), the accident investigation can be considered cost effective.

5.3.3 Space Shuttle Accidents

Throughout the 30-year operation of the Space Shuttle over 135 missions, NASA experienced two catastrophic failures: Challenger (STS-51L) in 1986 and Columbia
T he Columbia Accident Investigation B o ard (CAIB), an independent investigation committee d etermined that a piece of insulating foam which was separated right after the liftoff from the left bipod ramp section of the external tank made a breach in the thermal protection system on the leading edge of the left wing resulting in deterioration of h eat shielding function. The investigation cost for the Challenger disaster was reportedly $175 million [ 127 ] This was substantially beyond NASA s estimat e in the middle of the investigation (between $40 million to $65 million) [ 128 ] Columbia accident investigation was reportedly estimated by NASA at $152 million [ 129 ] For the probabilities of the accident, we use the result s of NASA probabilistic reliability assessment (PRA) that tracks the changes in risks, probabilit y of loss of crew and vehicle (LOCV), of dominant failure causes of the Space Shuttle over the missions. Figure 5 3 shows the risk history over the program period estimated in the latest report issued in 2011 [ 17 ] T he risk of the Space Shuttle at the beginning of the operati on is estimated as 1 in 12 missions. After the Challenger investigation, the risk of LOCV is reduced by about half, from 1 in 10 to 1 in 17 missions. The estimated risk of 1 in 10 missions of the Challenger corresponds to a 92% chance that an accident shou ld have happened before the 25 th flight, but it was not the case. This implies that the actual risk is likely to be smaller than the estimate. The probability of direct accident cause of the PAGE 108 108 Challenger was improved from 1 in 25 to 1 in 1500 missions. 
After the Columbia disaster, the probability of LOCV was further reduced from 1 in 47 to 1 in 73 missions, mainly by the improvement of the risk of the direct cause from 1 in 130 to 1 in 600 missions. The parameters estimated for the cost-effectiveness study are summarized in Table 5-5. Note that the safety improvement is expressed by the probability of LOCV per average remaining lifetime of a vehicle. For the details of the parameter estimation, the reader should refer to APPENDIX C. Note that we took into account the probabilities of the other accident causes to calculate the safety improvement of the system. Using the above parameters, the cost-effectiveness measures of the accident investigations were calculated and summarized in Table 5-6; $6.2 million is used for the value of life, and the invested costs were converted into US 2011 dollars. Unlike the previous airplane cases, the accident investigations are clearly not cost effective, because the cost-effective thresholds ($19.1 million and $4.9 million for the Challenger and the Columbia, respectively) are substantially lower than the actual expenses. Despite the substantial safety improvement, 0.12 for the Challenger and 0.04 for the Columbia, the smaller sizes of the crew and the fleet led to low effective cost thresholds. One might argue that the Space Shuttle is not commercial passenger transportation and it is not fair to apply the DOT guideline for civil transportation ($6.2 million) to the astronauts. However, the costs per life saved ($116.5 million and $235.0 million) are much higher than the DOT guideline. Note that the costs per life saved would be even higher if we included the costs of the corrective safety actions implemented on the remaining vehicles. As discussed in the previous section, for airplane accidents the value of a lost airplane can be negligible as compared to the value of the lives, and we ignored it in the previous cost-effectiveness study. However, for the space program, that might not be the case.
In fact, the reconstruction cost of a Space Shuttle orbiter is reportedly $2.4 billion in 1986 US dollars [128], equivalent to $4.5 billion in 2011 US dollars. It can be said that for the space program the contribution of the accident investigation is not only to save the astronauts but also to protect the vehicle. Therefore, we consider the value of the Space Shuttle orbiter as well as the monetary value of the lives. Equation (5.6) is the cost-effective threshold of Eq. (5.3) modified to add the residual value of the vehicle to the monetary value of the lives to be saved. We adjusted the value of the vehicle by accounting for the depreciation of the remaining vehicles, simply based on the ratio of the number of remaining missions to the total number of missions of each orbiter. The average residual value of the orbiters is $1,945 million and $376 million in 2011 US dollars at the times of the Challenger and Columbia accidents, respectively. Table 5-7 shows the revised cost-effective thresholds using $6.2 million for the value of life. It turned out that the threshold for the Challenger case ($286.0 million) is close to the actual spending of $359 million. If the value of an astronaut is evaluated higher than $27.7 million, the substantial expense of the investigation would be justified. For the Columbia accident, the accident investigation was still not cost effective, because the value of the vehicle had diminished by a factor of five from the time of the Challenger accident.
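Using the Table 5-5 parameters, the Table 5-6 measures can be approximately reproduced (a sketch with our own variable names; the small discrepancies from the reported values presumably reflect rounding of the published parameters):

```python
V_LIFE = 6.2e6  # value of a statistical life used in the chapter
# (investigation cost, fatalities per mission, remaining vehicles, improvement)
shuttle = {
    "Challenger": (359e6, 6.2, 4, 0.12),
    "Columbia": (185e6, 6.6, 3, 0.04),
}

results = {}
for name, (cost, n_f, n_a, dp) in shuttle.items():
    threshold = V_LIFE * n_f * n_a * dp  # Eq. (5.3) analogue for the Shuttle
    per_life = cost / (n_f * n_a * dp)   # Eq. (5.2): actual cost per life saved
    results[name] = (threshold, per_life)
    print(f"{name}: threshold ${threshold / 1e6:.1f}M, "
          f"per life saved ${per_life / 1e6:.1f}M")
```

This gives roughly $18.5M / $120.6M for the Challenger and $4.9M / $233.6M for the Columbia, against the reported $19.1M / $116.5M and $4.9M / $235.0M of Table 5-6.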
A challenging part of the cost ef fective measure is to estimate the degree of safety improvement at system level (equivalently the probability of accident) We showed that it can be accurately approximated by the probability improvement only of the direct accident cause, which would be ea sily estimated after determining the accident cause, when the risk of the remaining system is minimal, as in civil airplanes. We also demonstrat ed that an order of magnitude estimate for the probability of accident is accurate enough for decision making. Probability calculation usually requires time consuming computer simulations, such as high fidelity models and Monte Carlo simulation, and tends to perform poorly for complex systems. For the purpose of determining the cost effectiveness of accident inves tigations, a rough estimate such as expert opinions, may suffice. The case studies of the accidents of American Airlines Flight 587 and Alaska Airlines Flight 261 illustrated that those accident investigation s can be considered to be cost effective unless the pr obabilities of the accidents are extremely low These results suggest that an investigation into a fatal accident involving a number of fatalities is likely to be cost effective and that such a reactive safety measure is very efficient for a highly safe mode of transportation Furthermore, it was shown that a massive spending for the investigation into the Space Shuttle Challenger accident could also be justified, even if the number of fatalities is substantially smaller than that of a large commerci al airplane. This is PAGE 111 111 because first, the initial risk of the Space Shuttle is very high and that makes the risk reduction by the accident investigation substantial Second, the value of the Space S h uttle orbiter is considerable, which is not the case for airplane accidents. 
Another potential benefit of this work is to encourage the analysis of other past accident investigations, which may classify what types of accident causes are worth thorough investigation. This will help airlines and manufacturers as well as regulators devise an even more efficient strategy of preventive safety measures, which is relevant to the approach of the newly established joint activity of the Commercial Aviation Safety Team (CAST); CAST aims to identify effective prevention strategies for aviation safety with an emphasis on data analysis [130]. For the future, assigning the estimation of the probability of the probable accident cause as a part of the NTSB's work would be an effective and reasonable choice, since the accident investigators eventually become the most familiar with the accident causes. In addition, the independence of the NTSB from other government agencies ensures the impartiality of the estimation.

Figure 5-1. Number of accidents and cost for accident investigation by the NTSB in the US (2002-2009)

Figure 5-2. System reliability diagram including the direct accident cause

Figure 5-3. Risk progress of the Space Shuttle

Table 5-1. Fatalities per billion passenger-miles and regulation cost per fatality in millions (2002-2009)

                                         Air Carrier   Commuter & Air Taxi   General Aviation   Private Auto    Bus
Fatalities per billion passenger-miles   0.038 (21)    4-11* (42)            30-160* (560)      9.09 (41,000)   0.26 (45)
Regulation cost per fatality             $31 ($6.4)    $11 ($4.8)            $0.50 ($2.8)       $0.15 ($63)     $69 ($31)

* Load factors for commuter and general aviation are estimated at 5-10 and 1-3 passengers, respectively. The numbers in parentheses in the first row are the total numbers of fatalities during the period; those in the second row are the total costs in billions during the period.
Table 5-2. Effective cost threshold with respect to probability of accident (American Airlines)

Probability of accident   Effective cost threshold ($ in millions)
1×10⁻³                    283.7
1.8×10⁻⁴                  50.0
1×10⁻⁴                    28.4
1×10⁻⁵                    2.8
1×10⁻⁶                    0.3

Table 5-3. Parameters estimated for the Alaska Airlines case

Model        Fleet size*   Passenger size (model)
MD-80        1218          155 (MD-83)
Boeing 767   411           218 (767-300ER)
Boeing 737   1641          162 (737-800)
Boeing 747   236           416 (747-400)
Boeing 757   730           280 (757-300)
Boeing 777   203           365 (777-300ER)

* Fleet size in the US.

Table 5-4. Effective cost threshold with respect to probability of accident (Alaska Airlines)

Probability of accident   Effective cost threshold ($ in millions)
1×10⁻³                    4821.1
1×10⁻⁴                    482.1
1×10⁻⁵                    48.2
3.7×10⁻⁶                  17.8
1×10⁻⁶                    4.8

Table 5-5. Parameters estimated for the cost-effectiveness study

             Investing cost*,**   Fatalities per mission   Remaining vehicles   Safety improvement
Challenger   $359 million         6.2                      4                    0.12
Columbia     $185 million         6.6                      3                    0.04

* US 2011 dollars. ** Only the cost for accident investigation is considered.

Table 5-6. Cost-effectiveness measures of the Space Shuttle accident investigations

             Cost-effective threshold (million US 2011 dollars)   Cost per life saved (US 2011 dollars)
Challenger   $19.1                                                $116.5 million
Columbia     $4.9                                                 $235.0 million

$6.2 million is used for the value of life.

Table 5-7. Cost-effectiveness measures of the Space Shuttle accident investigations considering the value of the vehicle

             Cost-effective threshold (million US 2011 dollars)
Challenger   $286.0
Columbia     $22.7

$6.2 million is used for the value of life.

CHAPTER 6
CONCLUSIONS

Designing components of aerospace systems relies heavily on analytical and computational prediction tools. A series of tests is integrated into the lifecycle of the system to make sure that those prediction tools are reliable enough to ensure safety.
Tests can reveal unsafe designs, and the unsafe designs are then corrected based on the refined knowledge. The fact that tests are very expensive motivated us to investigate the effectiveness of such a safety improvement cycle associated with tests. In this research, we selected the following three test stages:

- Tests for failure criterion characterization, which reveal unknown failure modes
- Post-design tests, which identify the discrepancy between prediction models and actual responses and may lead to redesign
- Tests by actual flight; in case of an accident, the accident investigation identifies the causes of the accident and recommends necessary design corrections

First, we examined an efficient test strategy for structural failure criterion characterization. The goals include the discovery of potential failure modes and a better approximation of the failure boundary, e.g., failure load mapping. A challenge of the process is noise in the test observations. To identify the effect of noise, tests of the same structural configuration are typically replicated. With the demonstration of two structural elements, we found that the replication of tests is unnecessary, because some surrogate models have a noise-canceling effect without having replicated data. The results suggest that we use tests for exploring the design space in order to discover unknown failure modes and to improve the accuracy of the failure boundary approximation.

Next, we considered post-design tests for the purpose of design acceptance. There, while the design can be corrected based on a more accurate prediction model calibrated by test data, the design correction will require additional cost and schedule delays. To capture this tradeoff associated with the future test, we view the error in the prediction model as corresponding to multiple different futures, which enables us to simulate possible design outcomes after the test, possibly followed by redesign.
By incorporating the simulated futures into a design optimization framework, we proposed a method that offers tradeoff information between the expected improvement of performance after redesign and the probability of redesign, which is equivalent to the cost of redesign. Furthermore, we showed that the proposed method can reduce the performance loss due to a conservative reliability estimate. With standard reliability-based design optimization (RBDO), which does not simulate the future, a conservative reliability estimate not only penalizes the performance but also decreases the probability of redesign at the same time, which requires an additional performance loss. The proposed method, on the other hand, can provide different tradeoff curves relating to the levels of conservativeness of the reliability estimate. This allows the designer to choose any combination of the level of conservativeness and the probability of redesign without an additional performance loss.

Finally, the cost effectiveness of accident investigation was examined. For the cost-effectiveness study, we focused on the investigation of fatal accidents. With the case study of past accidents, we showed that the cost-effectiveness study can be easily implemented by a simple break-even calculation. Also, it was demonstrated that a rough estimate of the probability of accident, say an order of magnitude, is adequate for decision making. The study of the airplane accidents revealed that the investigation into an accident involving a large number of fatalities can be considered cost effective unless the probability of accident is extremely small. The important message is that even though accident investigation is a reactive safety measure, it is very efficient for a highly safe mode of transportation, i.e., commercial aviation. We expanded the study to the accidents of the Space Shuttle.
Interestingly, the large expense of the accident investigation of the Space Shuttle Challenger, reportedly hundreds of millions of dollars, could be justified because the risk was initially very high; in return, the degree of safety improvement is substantial and the value of a Space Shuttle orbiter is considerable.

APPENDIX A
MATLAB LEAST-SQUARES FIT

For least-squares fitting, Matlab provides the regress command. This appendix briefly describes the algorithm that regress deploys and how it behaves for underdetermined problems. Matlab regress uses the Householder QR decomposition [131]. Let d be the rank of the N-by-k design matrix, where N is the number of observations and k is the number of coefficients (for an underdetermined problem, N < k). To illustrate the selection of coefficients, a set of training data and a sixth-order polynomial function are considered. Since there are seven coefficients, including the constant term, four coefficients (or monomials) should be removed to make the problem solvable. The Householder QR decomposition tends to recognize the column vectors of the design matrix that have higher norms as linearly independent, especially at the first iteration of the decomposition. As a result, when the input is not normalized, the column vectors corresponding to the higher-order monomials are treated as linearly independent, and the approximation model retains those high-order terms. If the input is normalized from 0 to 1, the highest- and lowest-order monomials remain in the model. Finally, if the input is normalized from 0 to 0.1, the column vectors corresponding to the higher-order monomials have apparently smaller norms and are ignored, ending up with a model of only the lowest-order monomials. The last model is essentially the same as a least-squares fit in which we limit the number of coefficients to the number of training points.
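The scaling effect described in this appendix can be illustrated without MATLAB by simply comparing the Euclidean norms of the monomial columns of the design matrix under different input scalings: rank-revealing QR with column pivoting tends to select the largest-norm columns first, so the scaling decides which monomials survive an underdetermined fit. The three training points below are hypothetical, chosen only to show the ranking; this sketch computes the column norms, not the QR factorization itself.

```python
def monomial_column_norms(xs, degree):
    """Euclidean norms of the columns [1, x, ..., x^degree] of the design
    matrix built from the sample points xs. Rank-revealing QR with column
    pivoting tends to keep the largest-norm columns, which is why input
    scaling decides which monomials survive an underdetermined fit."""
    return [sum((x ** d) ** 2 for x in xs) ** 0.5 for d in range(degree + 1)]

# Three hypothetical training points (underdetermined for a sixth-order fit).
raw   = [1.0, 5.0, 10.0]           # not normalized: high powers dominate
unit  = [x / 10.0 for x in raw]    # normalized to [0, 1]
small = [x / 100.0 for x in raw]   # normalized to [0, 0.1]: high powers vanish

for xs in (raw, unit, small):
    print([round(v, 4) for v in monomial_column_norms(xs, 6)])
```

With the raw points, the x^6 column has by far the largest norm and would be kept first; with points scaled into [0, 0.1], the same column is vanishingly small and is dropped, so only the lowest-order monomials remain, matching the behavior described above.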
APPENDIX B
BUCKLING CRITERION

Instead of constructing a fine simulation model for buckling prediction, we deploy a simplified scaling buckling criterion that utilizes a reference design, which was optimized for similar constraints with 3-D finite element analysis. In deriving the scaling buckling criterion, we assume that the web of the ITPS is the weakest link, because thermal considerations push it to be thin, and that its buckling mode is overall Euler buckling. If the difference in operational temperature between the top face sheet and the bottom face sheet is greater than the difference that causes buckling, the structure fails. Therefore, the limit state for buckling can be written as Eq. (B-1). By using data from the reference design, the left-hand side of Eq. (B-1) can be rewritten as Eq. (B-2). Since the reference design was designed to have a safety factor of 1.2 against buckling, the first term on the right-hand side of Eq. (B-2) equals 1.2. In the last term of Eq. (B-2), the quantity for the reference design is given, and that for the current design is calculated. Assuming that buckling of the ITPS occurs mainly due to a thermal compression load, the second term is approximately equal to the ratio of the Euler buckling load to the thermal compression load, as shown in Eq. (B-3). By substituting the formulas for these loads into Eq. (B-3), we obtain Eq. (B-4). Finally, by substituting all components into Eq. (B-1), the buckling criterion can be expressed as a function of the material properties and geometry of the structure, Eq. (B-5). In the limit state function formulation, Eq. (B-5) is rewritten as Eq. (B-6).

APPENDIX C
PARAMETER ESTIMATES FOR THE SPACE SHUTTLE

For the size of the fleet, three orbiters (Atlantis, Columbia, and Discovery) remained and one orbiter (Endeavour) was newly constructed after the Challenger accident, while three orbiters remained after the Columbia accident (fleet sizes of 4 and 3, respectively).
The number of potential fatalities is determined by the average crew size of the actual missions. For the Challenger case, the average crew size was 6.2, based on a total of 685 crewmembers for the 110 remaining ascent missions. The average crew size after the Columbia accident was 6.6, calculated from a total of 145 crewmembers for 22 reentry flights.

Since the costs of the accident investigations are not publicly available, we estimated them from news sources. The investigation cost for the Challenger disaster was reportedly $175 million [127], substantially beyond NASA's estimate in the middle of the investigation (between $40 million and $65 million) [128]. The cost of the Columbia accident investigation was estimated at $152 million [129]. These costs need to be converted into 2011 US dollars by multiplying by inflation factors of 2.05 and 1.22, respectively, yielding $352 million for the Challenger and $184 million for the Columbia. The inflation rates were obtained from the website of the Bureau of Labor Statistics (http://www.bls.gov/data/inflation_calculator.htm).

The degrees of safety improvement are calculated based on NASA's latest report [17]. Since these risks are expressed as probability of LOCV per mission, we need to convert them to probability of LOCV in a lifetime to be applicable to Eq. (1). After the Challenger mission (the 25th mission), 110 missions remained and were completed by the four remaining vehicles. For simplicity, we assume that each vehicle flies an average of 27.5 missions in its remaining lifetime. Then, the probability of LOCV for each remaining vehicle is calculated as P = 1 - (1 - p)^n, where p is the probability per mission, n is the number of remaining missions, and P is the probability in a lifetime.
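The per-mission to lifetime conversion described above can be checked numerically. The formula below is the standard complement rule for independent missions, which reproduces the tabulated lifetime probabilities from the per-mission rates and the fleet figures given in this appendix.

```python
def lifetime_probability(p_per_mission, missions):
    """Probability of at least one loss of crew and vehicle (LOCV) over a
    vehicle's remaining missions, assuming an independent and constant
    per-mission probability: P = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_per_mission) ** missions

# Post-Challenger: 110 remaining missions over 4 orbiters -> 27.5 each.
p_challenger = lifetime_probability(1 / 10.4, 27.5)   # about 0.94
# Post-Columbia: 22 remaining missions over 3 orbiters -> about 7.3 each.
p_columbia = lifetime_probability(1 / 57.4, 22 / 3)   # about 0.12
```

Plugging in the post-improvement per-mission rates (1 in 16.8 and 1 in 87.2 missions) recovers the improved lifetime probabilities in the same way.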
In the same manner, the average number of missions per vehicle after the Columbia accident (the 113th mission) was obtained from three vehicles with 22 total remaining missions, resulting in an average of about 7.3 missions per vehicle. Table B-1 shows the system-level probabilities of LOCV in the remaining lifetime. The improvement in system-level risk is estimated based on the top 10 risks addressed in the NASA report.

Table B-1: Safety improvement at the system level (probability of LOCV in lifetime)

             Before improvement          After improvement           Improvement
Challenger   0.939 (1 in 10.4 missions)  0.815 (1 in 16.8 missions)  0.124
Columbia     0.121 (1 in 57.4 missions)  0.081 (1 in 87.2 missions)  0.040

LIST OF REFERENCES

[1] Mohaghegh, M., "Evolution of structures design philosophy and criteria," Journal of Aircraft, Vol. 42, No. 4, 2005, pp. 814-831. doi: 10.2514/1.11717
[2] Venkataraman, S., and Haftka, R. T., "Structural optimization complexity: what has Moore's law done for us?," Structural and Multidisciplinary Optimization, Vol. 28, No. 6, 2004, pp. 375-387. doi: 10.1007/s00158-004-0415-y
[3] Renton, J., Olcott, D., Roeseler, B., Batzer, R., Baron, B., and Velicki, A., "Future of Flight Vehicle Structures (2000 to 2023)," Journal of Aircraft, Vol. 41, No. 5, 2004, pp. 986-998. doi: 10.2514/1.4039
[4] Paul, D., Kelly, L., Venkayya, V., and Hess, T., "Evolution of U.S. Military Aircraft Structures Technology," Journal of Aircraft, Vol. 39, No. 1, 2002, pp. 18-29. doi: 10.2514/2.2920
[5] Rouse, M., Jegley, D. C., McGowan, D. M., Bush, H. G., and Waters, W. A., "Utilization of the Building Block Approach in Structural Mechanics Research," 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics & Materials Conference, Austin, Texas, 18-21 April, 2005.
[6] Krueger, R., Cvitkovich, M. K., O'Brien, T. K., and Minguet, P. J., "Testing and analysis of composite skin/stringer debonding under multi-axial loading," Journal of Composite Materials, Vol. 34, No. 15, 2000, pp. 1263-1300.
[7] Sobieszczanski-Sobieski, J., and Haftka, R.
T., "Multidisciplinary aerospace design optimization: survey of recent developments," Structural Optimization, Vol. 14, No. 1, 1997, pp. 1-23. doi: 10.1007/BF01197554
[8] Yao, W., Chen, X. Q., Luo, W. C., van Tooren, M., and Guo, J., "Review of uncertainty-based multidisciplinary design optimization methods for aerospace vehicles," Progress in Aerospace Sciences, Vol. 47, No. 6, 2011, pp. 450-479. doi: 10.1016/j.paerosci.2011.05.001
[9] Fox, B., Boito, M., Graser, J. C., and Younossi, O., "Test and Evaluation Trends and Costs for Aircraft and Guided Weapons," RAND Corporation, 2004.
[10] Fujimoto, K., Kobayashi, T., Okita, K., Sunakawa, H., Kurosu, A., Taguchi, H., Miyoshi, H., Nishimoto, M., Yamanishi, N., Ogawara, A., Kumada, N., and Manako, H., "Quantitative risk analysis for next generation rocket development based on high-fidelity numerical simulations," International Workshop on Future of CFD and Aerospace Sciences, Kobe, Japan, April 23-25, 2012.
[11] Elishakoff, I., Safety Factors and Reliability: Friends or Foes?, Kluwer Academic Publishers, 2004.
[12] Choi, S. K., Grandhi, R. V., and Canfield, R. A., Reliability-based Structural Design, London, Springer-Verlag, 2007.
[13] Haldar, A., and Mahadevan, S., Probability, Reliability and Statistical Methods in Engineering Design, New York, Wiley, 2000.
[14] "Airworthiness Standards: Transport Category Airplanes," Federal Aviation Administration, Department of Transportation, CFR 14 Part 25.
[15] "NASA Technical Standard: Structural Design and Test Factors of Safety for Spaceflight Hardware," NASA, 2008, NASA-STD-5001A.
[16] "NASA Goddard Technical Standard: Risk Management Reporting," NASA, 2009, GSFC-STD-0002.
[17] Hamlin, T. L., Thigpen, E., Kahn, J., and Lo, Y., "Shuttle Risk Progression: Use of the Shuttle Probabilistic Risk Assessment (PRA) to Show Reliability Growth," AIAA SPACE 2011 Conference & Exposition, Long Beach, California, 27-29 September, 2011.
[18] Zang, T.
A., Hemsch, M. J., Hilburger, M. W., Kenny, S. P., Luckring, J. M., Maghami, P., Padula, S. L., and Stroud, L., "Needs and Opportunities for Uncertainty-Based Multidisciplinary Design Methods for Aerospace Vehicles," NASA Langley Research Center, NASA/TM-2002-211462, 2002.
[19] Tanco, M., Viles, E., Álvarez, M. J., and Ilzarbe, L., "Why is not design of experiments widely used by engineers in Europe?," Journal of Applied Statistics, Vol. 37, No. 12, 2010, pp. 1961-1977.
[20] Acar, E., and Haftka, R. T., "Reliability-based aircraft structural design pays, even with limited statistical data," Journal of Aircraft, Vol. 44, No. 3, 2007, pp. 812-823. doi: 10.2514/1.25335
[21] Qu, X. Y., Haftka, R. T., Venkataraman, S., and Johnson, T. F., "Deterministic and reliability-based optimization of composite laminates for cryogenic environments," AIAA Journal, Vol. 41, No. 10, 2003, pp. 2029-2036.
[22] Youn, B. D., and Choi, K. K., "Selecting probabilistic approaches for reliability-based design optimization," AIAA Journal, Vol. 42, No. 1, 2004, pp. 124-131.
[23] Ramu, P., Qu, X., Youn, B. D., Haftka, R. T., and Choi, K. K., "Inverse reliability measures and reliability-based design optimisation," International Journal of Reliability and Safety, Vol. 1, No. 1/2, 2006, pp. 187-205.
[24] Mahadevan, S., and Rebba, R., "Inclusion of model errors in reliability-based optimization," Journal of Mechanical Design, Vol. 128, No. 4, 2006, pp. 936-944. doi: 10.1115/1.2204973
[25] DeLaurentis, D. A., and Mavris, D. N., "Uncertainty modeling and management in multidisciplinary analysis and synthesis," 38th Aerospace Sciences Meeting and Exhibit, Reno, NV, January 10-13, 2000.
[26] Oberkampf, W. L., DeLand, S. M., Rutherford, B. M., Diegert, K. V., and Alvin, K. F., "Error and uncertainty in modeling and simulation," Reliability Engineering & System Safety, Vol. 75, No. 3, 2002, pp. 333-357.
[27] Hoffman, F.
O., and Hammonds, J. S., "Propagation of uncertainty in risk assessments: the need to distinguish between uncertainty due to lack of knowledge and uncertainty due to variability," Risk Analysis, Vol. 14, No. 5, 1994, pp. 707-712.
[28] DoD, "Department of Defense Handbook, Composite Materials Handbook, Volume 1, Chapter 8," Department of Defense, 2002, MIL-HDBK-17-3F.
[29] Kennedy, M. C., and O'Hagan, A., "Bayesian calibration of computer models," Journal of the Royal Statistical Society, Series B (Statistical Methodology), Vol. 63, No. 3, 2001, pp. 425-464.
[30] Ferson, S., Joslyn, C. A., Helton, J. C., Oberkampf, W. L., and Sentz, K., "Summary from the epistemic uncertainty workshop: consensus amid diversity," Reliability Engineering & System Safety, Vol. 85, No. 1-3, 2004, pp. 355-369. doi: 10.1016/j.ress.2004.03.023
[31] Acar, E., Haftka, R. T., and Johnson, T. F., "Tradeoff of uncertainty reduction mechanisms for reducing weight of composite laminates," Journal of Mechanical Design, Vol. 129, No. 3, 2007, pp. 266-274. doi: 10.1115/1.2406097
[32] Shafer, G., A Mathematical Theory of Evidence, Princeton University Press, 1976.
[33] Gogu, C., Qiu, Y., Segonds, S., and Bes, C., "Optimization Based Algorithms for Uncertainty Propagation Through Functions With Multidimensional Output Within Evidence Theory," Journal of Mechanical Design, Vol. 134, No. 10, 2012, pp. 100914-1-8.
[34] Nikolaidis, E., Chen, S., Cudney, H., Haftka, R. T., and Rosca, R., "Comparison of probability and possibility for design against catastrophic failure under uncertainty," Journal of Mechanical Design, Vol. 126, No. 3, 2004, pp. 386-394. doi: 10.1115/1.1701878
[35] Du, L., Choi, K. K., Youn, B. D., and Gorsich, D., "Possibility-based design optimization method for design problems with both statistical and fuzzy input data," Journal of Mechanical Design, Vol. 128, No. 4, 2006, pp. 928-935.
[36] Gu, X. Y., Renaud, J. E., and Penninger, C.
L., "Implicit uncertainty propagation for robust collaborative optimization," Journal of Mechanical Design, Vol. 128, No. 4, 2006, pp. 1001-1013.
[37] Varadhan, S. R. S., Probability Theory, American Mathematical Society, 2001.
[38] Cleaveland, F. A., "Size effects in conventional aircraft design," Journal of Aircraft, Vol. 7, No. 6, 1970, pp. 48-512. doi: 10.2514/3.44204
[39] Park, C., Matsumura, T., Haftka, R. T., Kim, N. H., and Acar, E., "Modeling the effect of structural tests on uncertainty in estimated failure stress," 13th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Fort Worth, Texas, September 13-15, 2010.
[40] McDonald, M., Zaman, K., and Mahadevan, S., "Probabilistic Analysis with Sparse Data," AIAA Journal, Vol. 51, No. 2, 2013, pp. 281-290. doi: 10.2514/1.J050337
[41] Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B., Bayesian Data Analysis, New York, Chapman and Hall, 2004.
[42] An, J., Acar, E., Haftka, R. T., Kim, N. H., Ifju, P. G., and Johnson, T. F., "Being Conservative with a Limited Number of Test Results," Journal of Aircraft, Vol. 45, No. 6, 2008, pp. 1969-1975. doi: 10.2514/1.35551
[43] Mahadevan, S., Zhang, R. X., and Smith, N., "Bayesian networks for system reliability reassessment," Structural Safety, Vol. 23, No. 3, 2001, pp. 231-251. doi: 10.1016/S0167-4730(01)00017-0
[44] Arendt, P. D., and Chen, W., "Improving Identifiability in Model Calibration Using Multiple Responses," ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Washington, DC, August 28-31, 2011.
[45] Urbina, A., Mahadevan, S., and Paez, T. L., "Quantification of margins and uncertainties of complex systems in the presence of aleatoric and epistemic uncertainty," Reliability Engineering & System Safety, Vol. 96, No. 9, 2011, pp. 1114-1125.
doi: 10.1016/j.ress.2010.08.010
[46] DoD, "Department of Defense Handbook, Composite Materials Handbook, Volume 3, Chapter 4," Department of Defense, 2002, MIL-HDBK-17-3F.
[47] Acar, E., Kale, A., Haftka, R. T., and Stroud, W. J., "Structural safety measures for airplanes," Journal of Aircraft, Vol. 43, No. 1, 2006, pp. 30-38. doi: 10.2514/1.14381
[48] Acar, E., Kale, A., and Haftka, R. T., "Comparing effectiveness of measures that improve aircraft structural safety," Journal of Aerospace Engineering, Vol. 20, No. 3, 2007, pp. 186-199.
[49] Acar, E., Haftka, R. T., and Kim, N. H., "Effects of Structural Tests on Aircraft Safety," AIAA Journal, Vol. 48, No. 10, 2010, pp. 2235-2248.
[50] Venter, G., and Scotti, S. J., "Accounting for Proof Test Data in a Reliability-Based Design Optimization Framework," AIAA Journal, Vol. 50, No. 10, 2012, pp. 2159-2167. doi: 10.2514/1.J051495
[51] Simpson, T. W., Toropov, V., Balabanov, V., and Viana, F. A. C., "Design and Analysis of Computer Experiments in Multidisciplinary Design Optimization: A Review of How Far We Have Come or Not," 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Victoria, British Columbia, Canada, 10-12 September, 2008.
[52] Myers, R. H., and Montgomery, D. C., Response Surface Methodology: Process and Product Optimization Using Designed Experiments, New York, Wiley, 1995.
[53] Johnson, R. A., and Wichern, D. W., Applied Multivariate Statistical Analysis (6th Edition), Upper Saddle River, NJ, Pearson, 2007.
[54] Box, G. E. P., Hunter, W. G., and Hunter, J. S., Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, 1978.
[55] Rasmussen, C. E., and Williams, C. K. I., Gaussian Processes for Machine Learning, Cambridge, The MIT Press, 2005.
[56] Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P., "Design and Analysis of Computer Experiments," Statistical Science, Vol. 4, No. 4, 1989, pp. 409-423.
[57] Martin, J.
D., and Simpson, T. W., "Use of kriging models to approximate deterministic computer models," AIAA Journal, Vol. 43, No. 4, 2005, pp. 853-863. doi: 10.2514/1.8650
[58] Clarke, S. M., Griebsch, J. H., and Simpson, T. W., "Analysis of support vector regression for approximation of complex engineering analyses," Journal of Mechanical Design, Vol. 127, No. 6, 2005, pp. 1077-1087. doi: 10.1115/1.1897403
[59] Smola, A. J., and Scholkopf, B., "A tutorial on support vector regression," Statistics and Computing, Vol. 14, No. 3, 2004, pp. 199-222. doi: 10.1023/B:STCO.0000035301.49549.88
[60] Gunn, S. R., "Support Vector Machines for Classification and Regression," University of Southampton, 1998.
[61] Forrester, A. I. J., Sóbester, A., and Keane, A. J., Engineering Design via Surrogate Modelling: A Practical Guide, Wiley, 2008.
[62] Giunta, A. A., and Watson, L. T., "A comparison of approximation modeling techniques: Polynomial versus interpolating models," 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, St. Louis, MO, September 2-4, 1998.
[63] Jin, R., Chen, W., and Simpson, T. W., "Comparative studies of metamodelling techniques under multiple modelling criteria," Structural and Multidisciplinary Optimization, Vol. 23, No. 1, 2001, pp. 1-13.
[64] Simpson, T. W., and Mistree, F., "Kriging models for global approximation in simulation-based multidisciplinary design optimization," AIAA Journal, Vol. 39, No. 12, 2001, pp. 2233-2241.
[65] Eldred, M. S., Giunta, A. A., Wojtkiewicz, S. F., and Trucano, T. G., "Formulations for surrogate-based optimization under uncertainty," 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Atlanta, GA, 4-6 September, 2002.
[66] Giunta, A. A., McFarland, J. M., Swiler, L. P., and Eldred, M. S., "The promise and peril of uncertainty quantification using response surface approximations," Structure and Infrastructure Engineering, Vol. 2, No. 3-4, 2006, pp. 175-189.
doi: 10.1080/15732470600590507
[67] Montgomery, D. C., Design and Analysis of Experiments, New York, Wiley, 2004.
[68] Jin, R., Du, X., and Chen, W., "The use of metamodeling techniques for optimization under uncertainty," Structural and Multidisciplinary Optimization, Vol. 25, No. 2, 2003, pp. 99-116. doi: 10.1007/s00158-002-0277-0
[69] Hajela, P., and Vittal, S., "Optimal design in the presence of modeling uncertainties," Journal of Aerospace Engineering, Vol. 19, No. 4, 2006, pp. 204-216. doi: 10.1061/(ASCE)0893-1321(2006)19:4(204)
[70] Kim, C., and Choi, K. K., "Reliability-Based Design Optimization Using Response Surface Method With Prediction Interval Estimation," Journal of Mechanical Design, Vol. 130, No. 12, 2008, pp. 121401-1-12. doi: 10.1115/1.2988476
[71] Picheny, V., Ginsbourger, D., Roustant, O., Haftka, R. T., and Kim, N. H., "Adaptive Designs of Experiments for Accurate Approximation of a Target Region," Journal of Mechanical Design, Vol. 132, No. 7, 2010, pp. 071008-1-9. doi: 10.1115/1.4001873
[72] Giunta, A. A., Dudley, J. M., Narducci, R., Grossman, B., Haftka, R. T., Mason, W. H., and Watson, L. T., "Noisy aerodynamic response and smooth approximations in HSCT design," 5th Symposium on Multidisciplinary Analysis and Optimization, Panama City Beach, FL, September 7-9, 1994.
[73] Papila, M., and Haftka, R. T., "Response surface approximations: Noise, error repair, and modeling errors," AIAA Journal, Vol. 38, No. 12, 2000, pp. 2336-2343. doi: 10.2514/2.903
[74] Elfving, G., "Optimum Allocation in Linear Regression Theory," Annals of Mathematical Statistics, Vol. 23, No. 2, 1952, pp. 255-262. doi: 10.1214/aoms/1177729442
[75] Kiefer, J., "Optimum Designs in Regression Problems, II," Annals of Mathematical Statistics, Vol. 32, No. 1, 1961, pp. 298-325. doi: 10.1214/aoms/1177705160
[76] St. John, R. C., and Draper, N.
R., "D-Optimality for Regression Designs: A Review," Technometrics, Vol. 17, No. 1, 1975, pp. 15-23.
[77] Picheny, V., "Improving accuracy and compensating for uncertainty in surrogate modeling," PhD diss., University of Florida, 2009.
[78] Goel, T., Haftka, R. T., Shyy, W., and Watson, L. T., "Pitfalls of using a single criterion for selecting experimental designs," International Journal for Numerical Methods in Engineering, Vol. 75, No. 2, 2008, pp. 127-155. doi: 10.1002/nme.2242
[79] Ilcewicz, L., and Murphy, B., "Safety & Certification Initiatives for Composite Airframe Structure," 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Austin, Texas, April 18-21, 2005.
[80] Viana, F. A. C., "Surrogates Toolbox User's Guide (Ver. 2.1)," Gainesville, FL, 2010.
[81] "MATLAB and Statistics Toolbox Release 2012a," The MathWorks, Inc., Natick, Massachusetts, United States.
[82] Vapnik, V. N., Statistical Learning Theory, New York, Wiley-Interscience, 1998.
[83] Jordaan, E. M., and Smits, G. F., "Estimation of the regularization parameter for support vector regression," World Conference on Computational Intelligence 2002, Honolulu, Hawaii, May 12-17, 2002.
[84] Cherkassky, V., and Ma, Y. Q., "Practical selection of SVM parameters and noise estimation for SVM regression," Neural Networks, Vol. 17, No. 1, 2004, pp. 113-126. doi: 10.1016/S0893-6080(03)00169-2
[85] Cherkassky, V., and Mulier, F., Learning From Data: Concepts, Theory, and Methods, Hoboken, New Jersey, John Wiley & Sons, Inc., 2006.
[86] Basudhar, A., "Selection of anisotropic kernel parameters using multiple surrogate information," 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Indianapolis, Indiana, 17-19 September, 2012.
[87] Hawkins, D. M., "The problem of overfitting," Journal of Chemical Information and Computer Sciences, Vol. 44, No. 1, 2004, pp. 1-12. doi: 10.1021/ci0342472
[88] Villanueva, D., Haftka, R. T., and Sankar, B.
V., "Including the Effect of a Future Test and Redesign in Reliability Calculations," AIAA Journal, Vol. 49, No. 12, 2011, pp. 2760-2769. doi: 10.2514/1.J051150
[89] Bapanapalli, S. K., "Design of an Integral Thermal Protection System for Future Space Vehicles," PhD diss., University of Florida, 2007.
[90] Martinez, O. A., Sharma, A., Sankar, B. V., Haftka, R. T., and Blosser, M. L., "Thermal Force and Moment Determination of an Integrated Thermal Protection System," AIAA Journal, Vol. 48, No. 1, 2010, pp. 119-128. doi: 10.2514/1.40678
[91] Gogu, C., Bapanapalli, S. K., Haftka, R. T., and Sankar, B. V., "Comparison of Materials for an Integrated Thermal Protection System for Spacecraft Reentry," Journal of Spacecraft and Rockets, Vol. 46, No. 3, 2009, pp. 501-513. doi: 10.2514/1.35669
[92] Ayyub, B. M., and Chia, C. Y., "Generalized Conditional Expectation for Structural Reliability Assessment," Structural Safety, Vol. 11, No. 2, 1992, pp. 131-146.
[93] Queipo, N. V., Haftka, R. T., Shyy, W., Goel, T., Vaidyanathan, R., and Tucker, P. K., "Surrogate-based analysis and optimization," Progress in Aerospace Sciences, Vol. 41, No. 1, 2005, pp. 1-28. doi: 10.1016/j.paerosci.2005.02.001
[94] Stoop, J., and Dekker, S., "Are safety investigations proactive?," Safety Science, Vol. 50, No. 6, 2012, pp. 1422-1430. doi: 10.1016/j.ssci.2011.03.004
[95] Stoop, J. A., and Kahan, J. P., "Flying is the safest way to travel: How aviation was a pioneer in independent accident investigation," European Journal of Transport and Infrastructure Research, Vol. 5, No. 2, 2005, pp. 115-128.
[96] Stoop, J. A., "Independent accident investigation: a modern safety tool," Journal of Hazardous Materials, No. 111, 2004, pp. 39-44. doi: 10.1016/j.jhazmat.2004.02.006
[97] Roed-Larsen, S., and Stoop, J., "Modern accident investigation - Four major challenges," Safety Science, Vol. 50, No. 6, 2012, pp. 1392-1397.
doi: 10.1016/j.ssci.2011.03.005
[98] Dien, Y., Dechy, N., and Guillaume, E., "Accident investigation: From searching direct causes to finding in-depth causes - Problem of analysis or/and of analyst?," Safety Science, Vol. 50, No. 6, 2012, pp. 1398-1407. doi: 10.1016/j.ssci.2011.12.010
[99] "National Transportation Statistics," Research and Innovative Technology Administration (RITA), U.S. Department of Transportation, [online database], http://www.bts.gov/publications/national_transportation_statistics/ [retrieved August 6, 2013].
[100] "Budget of the United States Government: Appendix, Other Independent Agencies," Office of Management and Budget, [online database], http://www.gpo.gov/fdsys/browse/collectionGPO.action?collectionCode=BUDGET [retrieved 30 August, 2013].
[101] Sarsfield, L. P., Stanley, W. L., Lebow, C. C., Ettedgui, E., and Henning, G., "Safety in the Skies, Personnel and Parties in NTSB Aviation Accident Investigations: Master Volume," RAND Corporation, 2000.
[102] "Economic Values for FAA Investment and Regulatory Decisions, a Guide," GRA, Inc., 3 October 2007.
[103] "Guidance on Treatment of the Economic Value of a Statistical Life in U.S. Department of Transportation Analyses (Memorandum to: Secretarial Officers, Modal Administrators)," U.S. Department of Transportation, 2013.
[104] Chalk, A. J., "Market Forces and Commercial Aircraft Safety," Journal of Industrial Economics, Vol. 36, No. 1, 1987, pp. 61-81.
[105] Rose, N. L., "Profitability and Product Quality: Economic Determinants of Airline Safety Performance," Journal of Political Economy, Vol. 98, No. 5, 1990, pp. 944-964.
[106] Kaplanski, G., and Levy, H., "Sentiment and stock prices: The case of aviation disasters," Journal of Financial Economics, Vol. 95, No. 2, 2010, pp. 174-201.
[107] Borenstein, S., and Zimmerman, M. B., "Market Incentives for Safe Commercial Airline Operation," American Economic Review, Vol. 78, No. 5, 1988, pp. 913-935.
[108] Ashenfelter, O., "Measuring the value of a statistical life: Problems and prospects," Economic Journal, Vol. 116, No. 510, 2006, pp. C10-C23. doi: 10.1111/j.1468-0297.2006.01072.x
[109] Viscusi, W. K., "The value of life: Estimates with risks by occupation and industry," Economic Inquiry, Vol. 42, No. 1, 2004, pp. 29-48. doi: 10.1093/ei/cbh042
[110] "Treatment of the Economic Value of a Statistical Life in Departmental Analyses - 2011 Interim Adjustment (Memorandum to: Secretarial Officers, Modal Administrators)," Department of Transportation, July 29, 2011.
[111] "Cost-effective EU transport safety measures," European Transport Safety Council, 2003.
[112] "Commercial Airplanes, Jet Prices," Boeing, [online database], http://www.boeing.com/commercial/prices/index.html [retrieved 30 August 2013].
[113] "In-Flight Separation of Vertical Stabilizer, American Airlines Flight 587, Airbus Industrie A300-605R, N14053, Belle Harbor, New York, November 12, 2001," National Transportation Safety Board, 26 October 2004, NTSB/AAR-04/04.
[114] "Overview of the Structure Investigation for the American Airlines Flight 587 Investigation," 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Austin, TX, April 18-21, 2005.
[115] "Airworthiness Directives; Airbus Airplanes (Final rule)," Federal Aviation Administration, 2012, Docket No. FAA-2011-0518 (AD 2012-21-15).
[116] "Airbus Aircraft Family, A300-600," Airbus, [online database], http://www.airbus.com/aircraftfamilies/out-of-production-aircraft/a300-600/ [retrieved 16 August 2013].
[117] "Airline Activity: National Summary (U.S. Flights)," The Bureau of Transportation Statistics, [online database], http://www.transtats.bts.gov/ [retrieved 16 August 2013].
[118] "FAST 45, December 2009 (Airbus Technological Magazine)," Airbus, 2009.
[119] "Airplane Accident Report.
Loss of Control and Impact with Pacific Ocean, Alaska Airlines Flight 261, McDonnell Douglas MD-83, N963AS, About 2.7 Miles North of Anacapa Island, California, January 31, 2000," National Transportation Safety Board, 30 December 2002, NTSB/AAR-02/01.
[120] "Airworthiness Directives; Boeing Model 747 Airplanes," Federal Aviation Administration, 2005, Docket No. FAA-2005-22624.
[121] "Airworthiness Directives; Boeing Model 767 Airplanes (Final rule)," Federal Aviation Administration, 2008, Docket No. FAA-2005-22623 (AD 2008-06-06).
[122] "Airworthiness Directives; Boeing Model 777 Airplanes," Federal Aviation Administration, 2006, Docket No. FAA-2006-24270.
[123] "Airworthiness Directives; The Boeing Company Airplanes (Final rule) (Boeing 757)," Federal Aviation Administration, 2011, Docket No. FAA-2011-1093 (AD 2012-16-16).
[124] "Airworthiness Directives; The Boeing Company Airplanes (Final rule) (Boeing 737)," Federal Aviation Administration, 2012, Docket No. FAA-2008-0415 (AD 2011-27-03).
[125] "Airworthiness Directives; McDonnell Douglas Model DC-9-10, DC-9-20, DC-9-30, DC-9-40, and DC-9-50 Series Airplanes; Model DC-9-81 (MD-81), DC-9-82 (MD-82), DC-9-83 (MD-83), and DC-9-87 (MD-87) Airplanes; Model MD-88 Airplanes; Model MD-90-30 Airplanes; and Model 717-200 Airplanes," Federal Aviation Administration, 2006, Docket No. FAA-2005-22254 (AD 2006-10-14).
[126] "Commercial Airplanes (Boeing's product website)," Boeing, [online database], http://www.boeing.com/boeing/commercial/products.page? [retrieved 10 September 2013].
[127] "Cost of Columbia Accident Inquiry Is Soaring," Los Angeles Times, 15 March 2003, http://articles.latimes.com/2003/mar/15/nation/na-probe15
[128] "Budget Effects of the Challenger Accident," The Congress of the United States, Congressional Budget Office (Staff working paper), March 1986.
[129] "CAIB recommendations to cost $280 million," Spacetoday.net, 25 November 2003, http://www.spacetoday.net/Summary/2046
[130] "Fact Sheet - Commercial Aviation Safety Team," Federal Aviation Administration, 1 December 2011.
[131] Chan, T. F., "Rank revealing QR factorizations," Linear Algebra and its Applications, Vol. 88-89, 1987, pp. 67-82.
[132] Goodall, C. R., "Computation using the QR decomposition (Chapter 13)," Handbook of Statistics, Vol. 9, Elsevier, 1993, pp. 467-508.

BIOGRAPHICAL SKETCH

Taiki Matsumura was born in Kobe, Japan, in 1973. He received B.E. and M.E. degrees in aerospace engineering from Nagoya University, Japan, in 1996 and 1998, respectively. He then joined the National Space Development Agency of Japan (NASDA, currently the Japan Aerospace Exploration Agency (JAXA)) as a rocket engineer. He was in charge of on-site quality and reliability assessment of industrial suppliers, and of system design, project management, and launch operations for the H-IIA and H-IIB rocket projects. From 2007 to 2008, he worked as a visiting researcher at the University of Florida on multidisciplinary optimization of a space plane. After returning to JAXA, he was responsible for managing the long-term and short-term strategies of the Space Transportation Mission Directorate as a deputy manager in the program office. In 2010, he enrolled in the doctoral program of the Department of Mechanical and Aerospace Engineering at the University of Florida. He worked as a graduate research assistant under the supervision of Prof. Raphael T. Haftka and Prof. Nam Ho Kim.
His research topic was design under uncertainty and reliability-based design optimization for aerospace structures, accounting for the effects of post-design tests. He received his Ph.D. from the University of Florida in December 2013. He is a member of the American Institute of Aeronautics and Astronautics (AIAA), the American Society of Mechanical Engineers (ASME), the International Society for Structural and Multidisciplinary Optimization (ISSMO), and the International Council on Systems Engineering (INCOSE). He is a certified Project Management Professional (PMP) by the Project Management Institute (PMI).