RELIABILITY-BASED STRUCTURAL OPTIMIZATION USING RESPONSE SURFACE APPROXIMATIONS AND PROBABILISTIC SUFFICIENCY FACTOR

By

XUEYONG QU

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA
2004

Copyright 2004 by Xueyong Qu

This dissertation is dedicated to my lovely wife, Guiqin Wang.

ACKNOWLEDGMENTS

I want to thank Dr. Raphael T. Haftka for offering me the opportunity to complete my Ph.D. study under his exceptional guidance. He provided the necessary funding to complete my doctoral studies and supported my attendance at many academic conferences. Without his patience, guidance, knowledge, and constant encouragement, this work would not have been possible. Dr. Haftka made an immense contribution to this dissertation and my academic growth, as well as to my professional and personal life.

I would also like to thank the members of my supervisory committee: Dr. Peter G. Ifju, Dr. Theodore F. Johnson, Dr. Andre I. Khuri, and Dr. Bhavani V. Sankar. I am grateful for their willingness to review my research and provide constructive comments that helped me to complete this dissertation. Special thanks go to Dr. David Bushnell for his help with the PANDA2 program and stiffened panel analysis and design. Special thanks go to Dr. Vicente J. Romero for many helpful discussions and collaboration in writing papers.

Financial support provided by grant NAG12177, contract L9889, and the URETI grant from NASA is gratefully acknowledged.

My colleagues in the Structural and Multidisciplinary Optimization Research Group at the University of Florida also deserve thanks for their help and many fruitful discussions. Special thanks go to Palaniappan Ramu, Thomas Singer, and Dr. Satchi Venkataraman for their collaboration in publishing papers.

My parents deserve my deepest appreciation for their constant love and support.
Lastly, I would like to thank my beautiful and loving wife, Guiqin Wang. Without her love, patience, and support I would not have completed this dissertation.

TABLE OF CONTENTS

ACKNOWLEDGMENTS  iv
LIST OF TABLES  ix
LIST OF FIGURES  xiii
ABSTRACT

CHAPTER

1 INTRODUCTION  1
    Focus  2
    Objectives and Scope  4

2 LITERATURE SURVEY: METHODS FOR RELIABILITY ANALYSIS AND RELIABILITY-BASED DESIGN OPTIMIZATION  6
    Review of Methods for Reliability Analysis  7
        Problem Definition  7
        Monte Carlo Simulation
        Monte Carlo Simulation Using Variance Reduction Techniques  8
        Moment-Based Methods  9
        Response Surface Approximations
    Reliability-Based Design Optimization Frameworks  12
        Double Loop Approach
        Inverse Reliability Approach  14
        Design potential approach  15
        Partial safety factor approach (Partial SF)
    Summary  17

3 RESPONSE SURFACE APPROXIMATIONS FOR RELIABILITY-BASED DESIGN OPTIMIZATION
    Stochastic Response Surface (SRS) Approximation for Reliability Analysis  20
    Analysis Response Surface (ARS) Approximation for Reliability-Based Design Optimization
    Design Response Surface (DRS) Approximation
    Analysis and Design Response Surface Approach  24
    Statistical Design of Experiments for Stochastic and Analysis Response Surfaces  25

4 DETERMINISTIC DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC ENVIRONMENTS  28
    Introduction
    Composite Laminates Analysis under Thermal and Mechanical Loading  29
    Properties of IM600/133 Composite Materials  30
    Deterministic Design of Angle-Ply Laminates  34
        Optimization Formulation  35
        Optimizations without Matrix Cracking  36
        Optimizations Allowing Partial Matrix Cracking
        Optimizations with Reduced Axial Load Ny  39

5 RELIABILITY-BASED DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC ENVIRONMENTS  41
    Reliability-Based Design Optimization  41
        Problem Formulation
        Response Surface Approximation for Reliability-Based Optimization  43
            Analysis Response Surfaces  43
            Design Response Surfaces  45
    Refining the Reliability-Based Design  46
    Quantifying Errors in Reliability Analysis  47
    Effects of Quality Control on Laminate Design  48
        Effects of Quality Control on Probability of Failure  49
        Effects of Quality Control on Optimal Laminate Thickness  50
    Effects of Other Improvements in Material Properties  51
    Summary  54

6 PROBABILISTIC SUFFICIENCY FACTOR APPROACH FOR RELIABILITY-BASED DESIGN OPTIMIZATION  56
    Introduction  56
    Probabilistic Sufficiency Factor
    Using Probabilistic Sufficiency Factor to Estimate Additional Structural Weight to Satisfy the Reliability Constraint  62
    Reliability Analysis Using Monte Carlo Simulation  64
        Calculation of Probabilistic Sufficiency Factor by Monte Carlo Simulation  66
        Monte Carlo Simulation Using Response Surface Approximation  68
    Beam Design Example  71
        Design with Strength Constraint
        Design with Strength and Displacement Constraints
    Summary  79

7 RELIABILITY-BASED DESIGN OPTIMIZATION USING DETERMINISTIC OPTIMIZATION AND MULTI-FIDELITY TECHNIQUE  80
    Introduction
    Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor
    Reliability-Based Design Optimization Using Multi-Fidelity Technique with Probabilistic Sufficiency Factor  82
    Beam Design Example
        Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor
        Reliability-Based Design Optimization Using Coarse MCS with Probabilistic Sufficiency Factor  87
    Summary  88

8 RELIABILITY-BASED DESIGN OPTIMIZATION OF STIFFENED PANELS USING PROBABILISTIC SUFFICIENCY FACTOR  90
    Introduction  90
    Aluminum Isogrid Panel Design Example  92
        Reliability-Based Design Problem Formulation
        Uncertainties  94
        Analysis Response Surface Approximation  95
        Design Response Surfaces  97
        Optimum Panel Design  98
    Composite Isogrid Panel Design Example  98
        Deterministic Design  100
        Analysis Response Surface Approximation  101
        Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor  103
    Reliability-Based Design Optimization Using DIRECT Optimization  106
        DIRECT Global Optimization Algorithm  106
        Reliability-Based Design Optimization Using DIRECT Optimization with Safety Factor Corrected by Probabilistic Sufficiency Factor  108
    Summary  108

APPENDIX

A MATERIAL PROPERTIES OF IM600/133  110
B CONTOUR PLOTS OF THREE DESIGN RESPONSE SURFACE APPROXIMATIONS AND TEST POINTS ALONG THE CURVE OF TARGET RELIABILITY  113

LIST OF REFERENCES  115
BIOGRAPHICAL SKETCH  120

LIST OF TABLES

Table  Page

4-1 Transverse strains calculated for conditions corresponding to the onset of matrix cracking in the 90° plies of a quasi-isotropic (45/0/-45/90)2S laminate in Aoki et al. (2000)  33
4-2 Transverse strains of an angle-ply laminate (±25)4S under the same loading condition as Table 4-1  34
4-3 Strain allowables for IM600/133 at -423°F  34
4-4 Optimal laminates for different operational temperatures: ε2u of 0.0110  37
4-5 Optimal laminates for temperature-dependent material properties with ε2u of 0.0110 (optimized for 21 temperatures)
4-6 Optimal laminate for temperature-dependent material properties allowing partial matrix cracking: ε2u of 0.011 for uncracked plies and 0.0154 for cracked plies  39
4-7 Optimal laminates for reduced axial load of 1,200 lb/inch by using load shunting cables (equivalent laminate thickness of 0.005 inch)  40
5-1 Strain allowables for IM600/133 at -423°F  42
5-2 Coefficients of variation (CV) of the random variables  42
5-3 Range of design variables for analysis response surfaces  44
5-4 Quadratic analysis response surfaces of strains (millistrain)  44
5-5 Design response surfaces for probability of failure (probability calculated by Monte Carlo simulation with a sample size of 1,000,000)
5-6 Comparison of reliability-based optimum with deterministic optima  46
5-7 Refined reliability-based design [±θ]s (Monte Carlo simulation with a sample size of 10,000,000)  47
5-8 Comparison of probability of failure from MCS based on ARS and CLT  47
5-9 Accuracy of MCS  48
5-10 Effects of quality control of ε2u on probability of failure for 0.12-inch-thick (±θ)s laminates
5-11 Effects of quality control of ε1t, ε1c, ε2c, and γ12 on probability of failure of 0.12-inch-thick (±θ)s laminates  50
5-12 Effects of quality control of E1, E2, G12, ν12, Tzero, α1, and α2 on probability of failure of 0.12-inch-thick (±θ)s laminates  50
5-13 Effects of quality control of ε2u on probability of failure for 0.1-inch-thick (±θ)s laminates
5-14 Effects of quality control of ε2u on probability of failure for 0.08-inch-thick (±θ)s laminates
5-15 Sensitivity of failure probability to the mean value of ε2u (CV = 0.09) for 0.12-inch-thick (±θ)s laminates  52
5-16 Sensitivity of failure probability to the CV of ε2u (E(ε2u) = 0.0154) for 0.12-inch-thick (±θ)s laminates  52
5-17 Maximum ε2 (millistrain) induced by the change of material properties E1, E2, G12, ν12, Tzero, α1, and α2 for 0.12-inch-thick [±25°]s laminate  54
5-18 Probability of failure for 0.12-inch-thick [±25°]s laminate with improved average material properties (Monte Carlo simulation with a sample size of 10,000,000)  54
6-1 Random variables in the beam design problem  62
6-2 Range of design variables for design response surface  72
6-3 Comparison of cubic design response surface approximations of probability of failure, safety index, and probabilistic sufficiency factor for a single strength failure mode (based on Monte Carlo simulation of 100,000 samples)  73
6-4 Averaged errors in cubic design response surface approximations of probabilistic sufficiency factor, safety index, and probability of failure at 11 points on the curves of target reliability  74
6-5 Comparisons of optimum designs based on cubic design response surface approximations of probabilistic sufficiency factor, safety index, and probability of failure  75
6-6 Comparison of cubic design response surface approximations of the first design iteration for probability of failure, safety index, and probabilistic sufficiency factor for system reliability (strength and displacement)
6-7 Averaged errors in cubic design response surface approximations of probabilistic sufficiency factor, safety index, and probability of failure at 51 points on the curves of target reliability  76
6-8 Comparisons of optimum designs based on cubic design response surface approximations of the first design iteration for probabilistic sufficiency factor, safety index, and probability of failure  77
6-9 Range of design variables for design response surface approximations of the second design iteration  78
6-10 Comparison of cubic design response surface approximations of the second design iteration for probability of failure, safety index, and probabilistic sufficiency factor for system reliability (strength and displacement)
6-11 Comparisons of optimum designs based on cubic design response surfaces of the second design iteration for probabilistic sufficiency factor, safety index, and probability of failure  78
7-1 Random variables in the beam design problem  85
7-2 Optimum designs for strength failure mode obtained from double-loop RBDO  86
7-3 Design history of RBDO based on sequential deterministic optimization with probabilistic sufficiency factor under strength constraint for target probability of failure of 0.00135  86
7-4 Design history of RBDO based on sequential deterministic optimization with probabilistic sufficiency factor under strength constraint for target probability of failure of 0.0000135  87
7-5 RBDO using variable-fidelity technique with probabilistic sufficiency factor under strength constraint  88
7-6 Range of design variables for design response surface  88
8-1 Amplitudes of geometric imperfection handled by PANDA2 software  94
8-2 Uncertainties in material properties (Al 2219-T87) modeled as normal random variables  94
8-3 Uncertainties in manufacturing process modeled as uniformly distributed random design variables around the design (mean) value  94
8-4 Deterministic optimum  95
8-5 Range of analysis response surface approximations (inch)  96
8-6 Quadratic analysis response surface approximation to the most critical margins using Latin hypercube sampling of 72 points  96
8-7 Probabilities of failure calculated by Monte Carlo simulation with 1×10^6 samples  97
8-8 Range of design response surface approximations (inch)  97
8-9 Cubic design response surface approximation to the probability of failure and probabilistic sufficiency factor (calculated by Monte Carlo sampling of 1×10^6 samples)  97
8-10 Optimum panel design  98
8-11 Probabilities of failure calculated by Monte Carlo simulation of 1×10^6 samples  98
8-12 Uncertainties in material elastic properties (AS4) modeled as normal distributions with coefficient of variation of 0.03  100
8-13 Uncertainties in material strength properties (AS4) modeled as normal distributions with coefficient of variation of 0.05  100
8-14 Variation of the random design variables around the nominal design value  100
8-15 Safety factors used in deterministic design  101
8-16 Deterministic optimum (inch, degree, lb)  101
8-17 Quadratic analysis response surface approximation to the worst margins using Latin hypercube sampling of 342 points  102
8-18 Probabilities of failure calculated by Monte Carlo simulation of 10^6 samples (material and manufacturing uncertainties)  102
8-19 Design history of RBDO based on sequential deterministic optimization using probabilistic sufficiency factor to correct the safety factor directly by Equation (8-3)  105
8-20 Design history of RBDO based on sequential deterministic optimization using probabilistic sufficiency factor to correct the safety factor by the actual safety margin using Equation (8-4)  105
8-21 Design history of RBDO based on DIRECT deterministic optimization with probabilistic sufficiency factor correcting the safety factor by the actual safety margin using Equation (8-4)  108

LIST OF FIGURES

Figure  Page

2-1 Double loop approach: reliability analysis coupled inside design optimization  12
2-2 Design potential approach: reliability constraints approximated at design potential point dpk; reliability analyses still coupled inside design optimization  15
2-3 Partial safety factor approach: decouple reliability analysis and design optimization  17
3-1 Analysis response surface and design response surface approach: decouple reliability analysis and design optimization  25
3-2 Latin hypercube sampling to generate 5 samples from two random variables  27
4-1 Polynomials fit to elastic properties: E1, E2, G12, and ν12  31
4-2 Polynomials fit to coefficients of thermal expansion: α1 and α2  32
4-3 Geometry and loads for laminates  35
4-4 The change of optimal thickness (inch) with temperature for variable and constant material properties (ε2u of 0.0110)  37
4-5 Strains in optimal laminate for temperature-dependent material properties with ε2u of 0.0110 (second design in Table 4-3)  38
5-1 Trade-off plot of probability of failure, cost, and weight (laminate thickness) for [±25]s  53
6-1 Probability density of the safety factor. The area under the curve to the left of s = 1 measures the actual probability of failure, while the shaded area is equal to the target probability of failure, indicating that the probabilistic sufficiency factor = 0.8  61
6-2 Cantilever beam subject to vertical and lateral bending  62
6-3 Monte Carlo simulation of a problem with two random variables  66
7-1 Cantilever beam subject to vertical and lateral bending  84
8-1 Isogrid-stiffened cylindrical shell with internal isogrid and external rings, with the isogrid pattern oriented along the circumferential direction for increased bending stiffness in the hoop direction  93
8-2 Isogrid-stiffened cylindrical shell with internal isogrid and the isogrid pattern oriented along the circumferential direction for increased bending stiffness in the hoop direction; the zero-degree directions for the composite laminates in the isogrid and skin panel are shown  99
8-3 (a) First iteration of DIRECT windowing for a two-dimensional example, the Goldstein-Price (GP) function (Finkel, 2003), and (b) further iterations on the GP function with potentially optimal boxes shaded and subsequently divided along the longest dimension(s)  107
A-1 Quadratic fit to α1 (1.0E-6/°F)  110
A-2 Sixth-order fit to α2 (1.0E-4/°F)  110
A-3 Quadratic fit to E1 (Mpsi)  111
A-4 Quartic fit to E2 (Mpsi)  111
A-5 Cubic fit to G12 (Mpsi)  112
A-6 Quartic fit to ν12  112
B-1 Contour plot of probabilistic safety factor design response surface approximation and test points along the curve of target reliability  113
B-2 Contour plot of probability of failure design response surface approximation and test points along the curve of target reliability. The negative values of probability of failure are due to the interpolation errors of the design response surface approximation
B-3 Contour plot of safety index design response surface approximation and test points along the curve of target reliability  114

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

RELIABILITY-BASED STRUCTURAL OPTIMIZATION USING RESPONSE SURFACE APPROXIMATIONS AND PROBABILISTIC SUFFICIENCY FACTOR

By

Xueyong Qu

August 2004

Chair: Raphael T. Haftka
Major Department: Mechanical and Aerospace Engineering

Uncertainties exist practically everywhere, from structural design to manufacturing, product lifetime service, and maintenance. Uncertainties can be introduced by errors in modeling and simulation; by manufacturing imperfections (such as variability in material properties and structural geometric dimensions); and by variability in loading. Structural design by safety factors using nominal values without considering uncertainties may lead to designs that are either unsafe, or too conservative and thus not efficient.

The focus of this dissertation is reliability-based design optimization (RBDO) of composite structures. Uncertainties are modeled by the probabilistic distributions of random variables. Structural reliability is evaluated in terms of the probability of failure. RBDO minimizes cost, such as structural weight, subject to reliability constraints. Since engineering structures usually have multiple failure modes, Monte Carlo simulation (MCS) was used to calculate the system probability of failure.
Response surface (RS) approximation techniques were used to overcome the difficulties associated with MCS. The high computational cost of a large number of MCS samples was alleviated by analysis RS, and the numerical noise in the results of MCS was filtered out by design RS.

RBDO of composite laminates is investigated for use in hydrogen tanks in cryogenic environments. The major challenge is to reduce the large residual strains that develop due to the thermal mismatch between matrix and fibers while maintaining the load-carrying capacity. RBDO is performed to provide laminate designs, quantify the effects of uncertainties on the optimum weight, and identify the parameters that have the largest influence on the optimum design. Studies of weight and reliability trade-offs indicate that the most cost-effective measure for reducing weight and increasing reliability is quality control.

A probabilistic sufficiency factor (PSF) approach was developed to improve the computational efficiency of RBDO, to design for low probabilities of failure, and to estimate the additional resources required to satisfy the reliability requirement. The PSF is the safety factor needed to meet the reliability target. The methodology is applied to the RBDO of composite stiffened panels for the fuel tank design of reusable launch vehicles. Examples are used to demonstrate the advantages of the PSF over other RBDO techniques.

CHAPTER 1
INTRODUCTION

Aerospace structures are designed under stringent weight requirements. Structural optimization is usually employed to minimize the structural weight subject to performance constraints such as strength and deflection. Deterministically optimized structures can be sensitive to uncertainties such as variability in material properties. Uncertainties exist practically everywhere, from engineering design to product manufacturing, product lifetime service conditions, and maintenance.
Uncertainties can be introduced by the manufacturing process, such as variability in material properties and structural geometric dimensions; by errors in modeling and simulation; and by service conditions such as loading changes. Deterministic optimization can use large safety factors to accommodate uncertainties, but the safety and performance of the optimized structure under uncertainties (such as its reliability) are then not known, and the resulting structural design may be too conservative and thus not efficient. To address this problem, reliability-based design optimization (RBDO) became popular in the last decade (Rackwitz 2000). The safety of the design is evaluated in terms of the probability of failure, with uncertainties modeled by the probabilistic distributions of random variables. RBDO minimizes costs such as structural weight subject to reliability constraints, which are usually expressed as limits on the probability of failure of performance measures.

Focus

The focus of this dissertation is reliability-based structural optimization for use in reusable launch vehicles (RLVs). The RLV is being developed as a safer and cheaper replacement for the space shuttle, which suffers from a high probability of failure and high operating cost. For example, the probability of failure per mission launch is about 0.01 based on the shuttle launch history. The catastrophic failures of both space shuttles, Challenger and Columbia, were initiated by structural failure. The limited reuse of rocket boosters and fuel tanks also increases the operating cost of the space shuttle. In order to reduce the operating cost, the cryogenic fuel tank must be structurally integrated into the RLV, which motivates the use of composite materials. Composite materials are widely used in aerospace structures because of their high stiffness-to-weight ratio and the flexibility to tailor the design to the application.
This extra flexibility can render deterministically optimized composite laminates very sensitive to uncertainties in material properties and load conditions (e.g., Gürdal et al. 1999). For example, the ply angles of a composite laminate deterministically optimized under unidirectional loading are all aligned with the loading direction, which leads to a poor design for even a small load transverse to the fiber direction. The design of composite structures for RLVs poses a major challenge because the feasibility of these vehicles depends critically on structural weight. With traditional deterministic design based on safety factors, it is possible to achieve a safe design, but it may be too heavy for the RLV to take off. Therefore, reliability-based design optimization is required for the structural design of RLV structures in order to satisfy both safety and weight constraints. The advantages of reliability-based design over deterministic design have been demonstrated (e.g., Ponslet et al., 1995). For designs with stringent weight requirements, it is also important to provide guidelines for controlling the magnitudes of uncertainties for the purpose of reducing structural weight.

Deterministic structural optimization is computationally expensive due to the need to perform multiple structural analyses. Reliability-based optimization, however, adds an order of magnitude to the computational expense, because a single reliability analysis requires many structural analyses. Commonly used reliability analysis methods are based either on simulation techniques such as Monte Carlo simulation, or on moment-based methods such as the first-order reliability method (e.g., Melchers, 1999). Monte Carlo simulation is easy to implement, robust, and accurate with sufficiently large samples, but it requires a large number of analyses to obtain a good estimate of a low failure probability.
Monte Carlo simulation also produces a noisy estimate of the probability and hence is difficult to use with gradient-based optimization. Moment-based methods do not have these problems, but they are not well suited for problems with many competing critical failure modes. Response surface approximations solve the two problems of Monte Carlo simulation, namely simulation cost and noise from random sampling. Response surface approximations (Khuri and Cornell 1996) typically fit low-order polynomials to a number of response simulations in order to approximate the response. For reliability analysis, response surface approximations usually fit a structural response, such as a stress, in terms of the random variables; the probability of failure can then be calculated inexpensively by Monte Carlo simulation using the fitted response surfaces. Response surface approximations can also be fitted to the probability of failure in terms of the design variables; these replace the reliability constraints in RBDO, filtering out the numerical noise in the probability of failure induced by Monte Carlo simulation and reducing the computational cost. Different ways of using response surface approximations for reliability analysis and reliability-based design optimization are presented in subsequent chapters.

Objectives and Scope

The main purpose of this dissertation is to address the challenges associated with the reliability-based design of composite panels for reusable launch vehicles. The problems encountered include the high computational cost of calculating probabilities of failure and of performing reliability-based design optimization, and the control of the structural weight penalty due to uncertainties. Therefore the main objectives are:

1. Investigate response surface approximations for use in reliability analysis and design optimization. Analysis and design response surface approximations are developed.
2.
Develop methods that allow more efficient reliability-based design optimization when the probability of failure must be low. This motivates the development of a probabilistic sufficiency factor approach. 3. Explore the potential of uncertainty control for reducing structural weight for unstiffened and stiffened panels. 4. Provide reliability-based designs of selected composite panels. A literature survey of methods for reliability analysis and reliability-based design optimization is presented in Chapter 2. Chapter 3 introduces the response surface approximation techniques developed for efficient RBDO (objective 1). Chapter 4 presents deterministic design optimization for composite laminates in cryogenic environments. Chapter 5 demonstrates the reliability-based design of the composite laminate for use in cryogenic environments, and the trade-offs of weight and reliability via the control of uncertainty (objective 3). Chapter 6 proposes a probabilistic sufficiency factor approach for more efficient reliability-based design optimization (objective 2). Chapter 7 demonstrates the use of the probabilistic sufficiency factor for RBDO. Chapter 8 provides reliability-based designs of selected composite stiffened panels for the fuel tank of reusable launch vehicles (objective 4). CHAPTER 2 LITERATURE SURVEY: METHODS FOR RELIABILITY ANALYSIS AND RELIABILITY-BASED DESIGN OPTIMIZATION The basic conceptual structure of the reliability-based design optimization (RBDO) problem, called the RBDO framework, can be formulated as minimize F = F(d) such that P_j(d) <= P_j^a, j = 1, ..., n (2-1) where F is the objective function, d is a vector of design variables, P_j is the probability of failure of the jth failure mode, P_j^a is the allowable probability of failure of the jth failure mode, n is the total number of failure modes, and x is a vector of random variables.
To perform RBDO, reliability analyses must be performed to evaluate the probability of failure, which requires multiple evaluations of system performance (such as stresses in a structure). Depending on the specific reliability analysis method, the computational cost of a single reliability analysis is usually comparable to or higher than the cost of performing a deterministic local optimization. Furthermore, RBDO requires multiple reliability analyses, so the computational cost of performing RBDO by directly coupling design optimization with reliability analysis is at least an order of magnitude higher than that of deterministic optimization. Efficient frameworks must be developed to overcome this computational burden. This chapter presents a literature review of state-of-the-art reliability analysis methods and RBDO frameworks, and concludes with the motivation to develop the methodologies in Chapters 3, 6, and 7. Review of Methods for Reliability Analysis The most common techniques for reliability analysis are Monte Carlo simulation, approaches based on the most probable point (MPP), and "decoupled" Monte Carlo sampling of a response surface approximation fit to samples from some experimental design. Different techniques are preferable under different circumstances. Problem Definition The limit state function of the reliability analysis problem is defined as G(x), where G(x) represents a performance criterion and x is a vector of random variables. Failure occurs when G(x) < 0, so the failure surface, or limit state of interest, can be described as G(x) = 0. The probability of failure can be calculated as P_f = ∫_{G(x)<0} f_X(x) dx (2-2) where f_X(x) is the joint probability density function (JPDF). This integral is hard to evaluate because the integration domain defined by G(x) < 0 is usually unknown and integration in high dimensions is difficult.
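In practice this integral is evaluated numerically. A minimal sketch of the direct Monte Carlo estimate (described in the next section) for a hypothetical linear limit state G = R − S, with normal capacity R and load S, might look as follows; the distributions and sample size are illustrative assumptions:

```python
import random

def mcs_failure_probability(limit_state, sample, n_samples, seed=0):
    """Estimate P_f = P(G(x) < 0) by direct Monte Carlo simulation."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_samples) if limit_state(sample(rng)) < 0)
    return failures / n_samples

# Hypothetical limit state G = R - S: capacity R ~ N(10, 1), load S ~ N(6, 1).
def sample(rng):
    return (rng.gauss(10.0, 1.0), rng.gauss(6.0, 1.0))

def g(x):
    r, s = x
    return r - s

pf = mcs_failure_probability(g, sample, 100_000)
# The exact value here is Phi(-4/sqrt(2)), roughly 2.3e-3; the MCS
# estimate fluctuates around it with a sampling error of order 1.5e-4.
```

For the low failure probabilities typical of structural design, the required sample count grows roughly as 100/P_f for 10% relative accuracy, which is the cost problem that motivates the variance reduction and response surface approaches reviewed below.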
Commonly used probabilistic analysis methods are based on either simulation techniques, such as Monte Carlo simulation, or moment-based methods, such as the first-order reliability method (FORM) and second-order reliability method (SORM) (Melchers 1999). Monte Carlo Simulation Monte Carlo simulation (MCS) (e.g., Rubinstein 1981) generates a number of samples of the random variables x by using a random number generator. The number of samples required is usually determined by confidence interval analysis. Simulations (e.g., structural analyses) are then performed for each of these samples. Statistics such as the mean, variance, and probability of failure can then be calculated from the results of the simulations. This method is also called direct MCS, or MCS with simple random sampling (SRS). Direct MCS is simple to implement, robust, and accurate with sufficiently large samples, but its usefulness in reliability analysis is quite limited because of its relatively low efficiency. For example, the probability of failure in engineering applications is usually very small, so the number of limit state function evaluations required to obtain acceptable accuracy with direct MCS is very large (Chapter 5), which makes direct MCS very time-consuming. Direct MCS is usually used as a benchmark to verify the accuracy and compare the efficiency of other methods based on approximation concepts. To improve the accuracy and efficiency of simple random sampling, various simulation methods using variance reduction techniques (VRT) have been developed to reduce the variance of the output random variables. Monte Carlo Simulation Using Variance Reduction Techniques Rubinstein (1981) and Melchers (1999) gave good overviews of VRTs for general Monte Carlo sampling. VRTs can be classified into several categories: sampling methods, correlation methods, conditional expectation methods, and specific methods.
Sampling methods reduce the variance of the output by constraining samples to be representative of the performance function (or by distorting them to emphasize its important region). Commonly used sampling methods include importance sampling (Harbitz 1986), adaptive sampling, stratified sampling, Latin Hypercube sampling, and spherical sampling. Correlation methods use techniques that achieve correlation among random observations, functions, or different simulations to improve the accuracy of the estimators. Some commonly used techniques are antithetic variates, common random numbers, control variates, and rotation sampling. Conditional expectation methods exploit the independence of random variables to reduce the order of the probabilistic integration and thus achieve higher efficiency. Common techniques are conditional expectation, generalized conditional expectation, and adaptive conditional expectation. Specific methods include the response surface method and internal control variable techniques. VRTs can be further combined to increase the efficiency of simulation. A comparison of the accuracy and efficiency of several common VRT methods can be found in Kamal and Ayyub (2000). Latin Hypercube sampling and response surface methods are studied in this dissertation. VRTs require fewer limit state function evaluations to achieve a desired level of accuracy, but the simplicity of simulation is lost and the computational complexity of each simulation cycle increases. Moment-Based Methods Besides VRTs, moment-based methods also reduce the computational cost drastically compared to MCS. The first-order reliability method (FORM) and second-order reliability method (SORM) are well-established methods that can solve many practical applications (Rackwitz 2000). FORM and SORM first transform the random variables from the original space (X-space) to the uncorrelated standard normal space (U-space).
An optimization problem is then solved to find the minimum-distance point (the most probable point, MPP) on the limit state surface G = 0 from the origin of the U-space. The minimum distance, β, is called the safety index. The probability of failure is then calculated from the standard normal cumulative distribution function as P_f = Φ(−β) in FORM (Rackwitz and Fiessler 1978), or with a second-order correction in SORM (Breitung 1984). Thus the safety index can be used directly as a measure of reliability. One disadvantage of FORM and SORM is that no error estimate is readily available; their accuracy must be verified by other methods, such as MCS. The errors of FORM and SORM may come from errors in the MPP search and from the nonlinearity of the limit state. The search for the MPP requires solving a nonlinear optimization problem, which is difficult for some problems. A wrong MPP usually leads to poor probability estimates, a common problem for MPP-based reliability analysis methods. FORM and SORM are also not well suited for problems with many competing critical failure modes (i.e., multiple MPPs). Due to the limitations of first-order and second-order approximations, FORM and SORM do not perform well when the limit state surface is highly nonlinear around the MPP. This nonlinearity may come from the inherent nonlinearity of the problem or may be induced by the transformation from X-space to U-space (Thacker et al. 2001). For example, transforming a uniform random variable to a standard normal variable usually increases the nonlinearity of the problem. When FORM and SORM encounter difficulties, sampling methods with VRTs such as importance sampling can be employed to obtain or improve results at a reasonable computational cost compared to direct MCS.
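The FORM relation between the safety index and the probability of failure can be sketched with the standard normal CDF; this is a generic illustration of P_f = Φ(−β), not tied to any particular limit state:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def form_pf(beta):
    """FORM estimate P_f = Phi(-beta), beta being the distance to the MPP."""
    return phi(-beta)

pf3 = form_pf(3.0)   # a safety index of 3 gives P_f of about 1.35e-3
pf5 = form_pf(5.0)   # a safety index of 5 gives P_f of about 2.9e-7
```

The one-line mapping is exact only when the limit state is linear in U-space with normal variables; otherwise it is the first-order approximation whose errors are discussed above.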
Response Surface Approximations Response surface approximations (RSA) (Khuri and Cornell 1996) can be used to obtain a closed-form approximation to the limit state function to facilitate reliability analysis. Response surface approximations usually fit low-order polynomials to the structural response in terms of the random variables. The probability of failure can then be calculated inexpensively by Monte Carlo simulation, or by FORM and SORM, using the fitted polynomials. Therefore, RSA is particularly attractive for computationally expensive problems (such as those requiring complex finite element analyses). The design points where the response is evaluated are chosen by statistical design of experiments (DOE) so as to maximize the information that can be extracted from the resulting simulations. Response surface approximations can be applied in different ways. One approach is to construct local response surfaces around the MPP region that contributes most to the probability of failure of the structure. The DOE of this approach is iteratively adjusted to approach the MPP. Typical DOEs for this approach are the central composite design (CCD) and saturated designs. For example, Bucher and Bourgund (1990) and Rajashekhar and Ellingwood (1993) constructed progressively refined local response surfaces around the MPP. This local RSA approach can produce good probability estimates given enough iterations. Another approach is to construct a global RSA over the entire range of the random variables (i.e., a DOE around the mean values of the random variables). Fox (1993, 1994, 1996) used Box-Behnken DOEs to construct global RSAs and summarized 12 criteria to evaluate the accuracy of response surfaces. Romero and Bankston (1998) used progressive lattice sampling, in which the initial DOE is progressively supplemented by new design points, as the statistical design of experiments to construct global response surfaces.
With the global approach, the accuracy of the RSA around the MPP is usually unknown, so caution should be taken to avoid extrapolation around the MPP. Both the global and local approaches provide substantial savings in the total number of function evaluations. Reliability-Based Design Optimization Frameworks This section summarizes several popular RBDO frameworks. These frameworks are based on design sensitivity analysis, approximated limit state functions, approximated reliability constraints, partial safety factor concepts that convert reliability constraints to approximately equivalent deterministic constraints, and RSA. Double Loop Approach Figure 2-1. Double loop approach: reliability analysis coupled inside design optimization. The traditional approach to RBDO is to perform a double loop optimization: an outer loop for the design optimization (DO) and an inner sub-optimization that performs reliability analyses using methods such as FORM or SORM. This nested approach is rigorous and popular, but it is computationally expensive and sometimes troubled by convergence problems (Tu et al. 2000). The computational cost of RBDO with nested MPP searches may be reduced by sensitivity analysis. The sensitivity of the safety index to the design variables can be obtained with little extra computation as a by-product of reliability analysis (Kwak and Lee 1987). A simplified formula that ignores the higher-order terms in the estimation equation was proposed by Sorensen (1987). Yang and Nikolaidis (1991) used this sensitivity analysis and optimized an aircraft wing with FORM subject to a system reliability constraint.
Figure 2-1 shows the typical procedure for the double loop approach. With this approach, the reliability constraints are approximated at the current design point (DP) d_k. For problems requiring expensive finite element analysis, this approach may still be computationally prohibitive, and FORM (e.g., the classical Hasofer-Lind method) may converge very slowly (Rackwitz 2000). Wang and Grandhi (1994) developed an efficient safety index calculation procedure for RBDO that expands the limit state function in terms of intermediate design variables to obtain a more accurate approximation. Reliability constraints can also be approximated to reduce the computational cost of RBDO. Wang and Grandhi (1994) approximated reliability constraints with multipoint splines within a double loop RBDO. Another way of improving the efficiency of multilevel optimization is to integrate the iterative procedures of reliability analysis and design optimization into one, where the iterative reliability analysis stops before full convergence at each step of the optimization, as suggested by Haftka (1989). Maglaras and Nikolaidis (1990) proposed such an integrated analysis and design approach for stochastic optimization, where the reliability constraints are approximated to different levels of accuracy during the optimization. Even when combined with the above approaches, the nested MPP approach still suffers from high computational cost and convergence problems. Several RBDO approaches have been developed to address these problems. Inverse Reliability Approach Recently, there has been interest in using alternative measures of safety in RBDO. These measures are based on the margin of safety or on safety factors that are commonly used as measures of safety in deterministic design. The safety factor is generally expressed as the quotient of allowable over response, such as the commonly used central safety factor, defined as the ratio of the mean value of the allowable to the mean value of the response.
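The point that a central safety factor alone does not fix the reliability can be illustrated with a hypothetical normal capacity R and response S: both designs below share the same central safety factor of 1.5, yet have very different failure probabilities because of their different scatter. All numbers are illustrative assumptions:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pf_normal(mu_r, sig_r, mu_s, sig_s):
    """P(R - S < 0) for independent normal capacity R and response S."""
    beta = (mu_r - mu_s) / math.hypot(sig_r, sig_s)
    return phi(-beta)

# Same central safety factor mu_R / mu_S = 15 / 10 = 1.5 in both cases:
pf_low_scatter = pf_normal(15.0, 0.75, 10.0, 0.50)   # 5% COVs
pf_high_scatter = pf_normal(15.0, 2.25, 10.0, 1.50)  # 15% COVs
# pf_low_scatter is of order 1e-8 while pf_high_scatter exceeds 3e-2.
```

Six orders of magnitude separate the two designs, even though a deterministic safety-factor check treats them identically; this is the motivation for the reliability-linked safety factors discussed next.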
The selection of a safety factor for a given problem involves both objective knowledge (such as data on the scatter of material properties) and subjective knowledge (such as expert opinion). Given a safety factor, the reliability of the design is generally unknown, which may lead to unsafe or inefficient designs. Therefore, using a safety factor in reliability-based design optimization may seem counterproductive. Freudenthal (1962), however, showed that reliability can be expressed in terms of the probability distribution function of the safety factor. Elishakoff (2001) surveyed the relationship between the safety factor and reliability, and showed that in some cases the safety factor can be expressed explicitly in terms of the reliability. The standard safety factor is defined with respect to the response obtained with the mean values of the random variables. Thus a safety factor of 1.5 implies that, with the mean values of the random variables, there is a 50% margin between the response (e.g., stress) and the capacity (e.g., failure stress). However, the value of the safety factor does not tell us what the reliability is. Birger (1970), as reported by Elishakoff (2001), introduced a factor, called here the Birger safety factor, that is more closely related to the target reliability. A Birger safety factor of 1.0 implies that the reliability is equal to the target reliability; a Birger safety factor larger than 1.0 means that the reliability exceeds the target reliability; and a Birger safety factor less than 1.0 means that the system is not as safe as we wish. Design potential approach Tu et al. (2000) used the probabilistic performance measure, which is closely related to the Birger safety factor, for RBDO using FORM. Figure 2-2 summarizes the design potential approach.
Figure 2-2. Design potential approach: reliability constraints approximated at the design potential point d_k; reliability analyses still coupled inside design optimization. They showed that the search for the optimum design converged faster by driving the probabilistic performance measure to zero than by driving the probability of failure to its target value. Another major difference between the design potential approach and the double loop approach is that the reliability constraints are approximated at the design potential point d_k (DPP), defined as the design that renders the probabilistic constraint active, instead of at the current design point. Since the DPP is located on the limit-state surface of the probabilistic constraint, the constraint approximation of the design potential method (DPM) becomes exact at the DPP. Thus the DPM provides a better constraint approximation without additional costly limit state function evaluations, and a faster rate of convergence can be achieved. Partial safety factor approach (Partial SF) Wu et al. (1998 and 2001) developed a partial safety factor, similar to the Birger safety factor, that replaces the RBDO with a series of deterministic optimizations by converting reliability constraints to equivalent deterministic constraints. After a reliability analysis is performed, the random variables x are replaced by safety-factor-based values x*_k, the MPP of the previous reliability analysis. The shift s of the limit state function G needed to satisfy the reliability constraints satisfies P(G(x) + s < 0) = P_t, where P_t is the target probability of failure. Both x*_k and s can be obtained as by-products of the reliability analysis.
Since in the design optimization the random variables x are replaced by x* (just as in traditional deterministic design, where the random variables are replaced by deterministic values after applying a safety factor), the method is called the partial safety factor approach (Figure 2-3). The target reliability is achieved by adjusting the limit state function via design optimization. The required shift s is similar to the target probabilistic performance measure g*. The significant difference between the partial SF approach and the DPM or nested MPP approaches is that the reliability analysis (FORM in the paper, but it can be any MPP-based method) is decoupled from, and driven by, the design optimization to improve the efficiency of RBDO. If n iterations are needed for convergence, the approach needs n deterministic optimizations and n probabilistic analyses. However, the convergence rate of subsequent probabilistic analyses is expected to increase after a reasonable MPP has been obtained. Wu et al. (2001) demonstrated the efficiency of this approach by optimizing a beam subject to multiple reliability constraints. Figure 2-3. Partial safety factor approach: reliability analysis decoupled from design optimization. Summary Since the reliability analyses involved in this study are for the system probability of failure, MCS was used to perform the reliability analyses. We developed an analysis response surface approach to reduce the high computational cost of MCS, and a design response surface approach to filter noise in RBDO (Chapter 3). The current RBDO frameworks mostly deal with the probability of failure of individual failure modes; an efficient framework must be developed to address RBDO for the system probability of failure.
Chapter 6 develops an inverse reliability measure, the probabilistic sufficiency factor, to improve the computational efficiency of RBDO, to design for low probabilities of failure, and to estimate the additional resources needed to satisfy the reliability requirement. Chapter 7 demonstrates the use of the probabilistic sufficiency factor with multi-fidelity techniques for RBDO and for converting RBDO to sequential deterministic optimization. The methodology is applied to the RBDO of stiffened panels in Chapter 8. CHAPTER 3 RESPONSE SURFACE APPROXIMATIONS FOR RELIABILITY-BASED DESIGN OPTIMIZATION Response surface approximation (RSA) methods are used to construct an approximate relationship between a dependent variable f (the response) and a vector x of n independent variables (the predictor variables). The response is generally evaluated experimentally (these experiments may be numerical in nature), in which case f denotes the mean or expected response value. It is assumed that the true model of the response can be written as a linear combination of basis functions Z(x) with unknown coefficients β, in the form Z(x)^T β. The response surface model can then be expressed as Y(x) = Z(x)^T b (3-1) where Z(x) is the assumed basis function vector, which usually consists of monomial functions, and b is the least-squares estimate of β. For example, if a linear response surface model is employed to approximate the response in terms of two independent variables, x1 and x2, the response surface approximation is Y(x) = b0 + b1 x1 + b2 x2 (3-2) The three major steps of response surface approximation, as summarized by Khuri and Cornell (1996), are * Selecting design points where responses must be evaluated. The points are chosen by statistical design of experiments (DOE), which is performed in such a way that the input parameters are varied in a structured pattern so as to maximize the information that can be extracted from the resulting simulations.
A typical DOE for quadratic RSA is the central composite design (CCD, Khuri and Cornell 1996). * Determining the mathematical model that best fits the data generated at the design points of the DOE by performing statistical tests of hypotheses on the model parameters (Khuri and Cornell 1996, Myers et al. 2002). * Predicting the response for given sets of experimental factors or variables with the constructed response surface approximation. Due to the closed-form nature of the approximation, RSA is particularly attractive for engineering problems that require a large number of computationally expensive analyses, such as structural optimization and reliability analysis. The accuracy of an RSA is measured by error statistics such as the adjusted coefficient of multiple determination (R²_adj), the root mean square error (RMSE) predictor, and the coefficient of variation (COV = RMSE/mean of response). An R²_adj close to one and a COV close to zero usually indicate good accuracy. The RSAs in this dissertation were all constructed with the JMP software (SAS Institute, 2000); the above error statistics are readily available from JMP after RSA construction. Khuri and Cornell (1996) presented a detailed discussion of response surface approximation. This chapter presents the response surface approaches developed for reliability-based design optimization. Stochastic Response Surface (SRS) Approximation for Reliability Analysis Among the available methods for reliability analysis, moment-based methods (e.g., FORM/SORM) are not well suited for composite structures in cryogenic environments because of the existence of multiple failure modes. Direct Monte Carlo simulation requires a relatively large number of analyses to calculate the probability of failure, which is computationally expensive. Stochastic response surface approximation is employed here to solve both problems.
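The least-squares construction of Eqs. (3-1) and (3-2) can be sketched in a few lines; the linear basis and the hypothetical plane used to generate the data are illustrative choices so that the fit can be checked against known coefficients:

```python
import numpy as np

def fit_response_surface(basis, X, y):
    """Least-squares estimate b of the coefficients in Y(x) = Z(x)^T b."""
    Z = np.array([basis(x) for x in X])   # design matrix, one row per point
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return b

# Linear basis Z(x) = [1, x1, x2] as in Eq. (3-2); responses generated
# from a known hypothetical plane y = 2 + 3*x1 - x2 so the fit is checkable.
linear_basis = lambda x: [1.0, x[0], x[1]]
X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2)]
y = np.array([2.0 + 3.0 * x1 - 1.0 * x2 for x1, x2 in X])
b = fit_response_surface(linear_basis, X, y)
# b recovers [2, 3, -1] up to round-off.
```

For real structural responses the data do not lie on the assumed polynomial, and the residual of this fit is what the R²_adj, RMSE, and COV statistics above summarize.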
To apply RSA to a reliability analysis problem, the limit state function g(x) (usually a stress or displacement in the structure) is approximated by G(x) = Z(x)^T b (3-3) where x is the vector of input random variables. With the polynomial approximation G(x), the probability of failure can then be calculated inexpensively by Monte Carlo simulation or FORM/SORM. Since the RSA is constructed in the random variable space, this approach is called the stochastic response surface approach. Stochastic RSA can be applied in different ways. One approach is to construct a local RSA around the most probable point (MPP), the region that contributes most to the probability of failure of the structure. The statistical design of experiments (DOE) of this approach is iteratively adjusted to approach the MPP. Another approach is to construct global response surfaces over the entire range of the random variables, where the mean values of the random variables are usually chosen as the center of the DOE. The selection of the RSA approach depends on the limit state function of the problem: a global RSA is simpler and more efficient than a local RSA for problems whose limit state function can be well approximated globally. Analysis Response Surface (ARS) Approximation for Reliability-Based Design Optimization In reliability-based design optimization (RBDO), the SRS approach needs to construct response surfaces for the limit state functions at each point encountered in the optimization process, which requires a fairly large number of limit state function evaluations and RS constructions. The local SRS approach is more computationally expensive than the global SRS approach because of the multiple iterations involved in the RSA construction. This dissertation (see also Qu et al., 2000) develops an analysis response surface (ARS) approach in the unified system space (x, d) to reduce the cost of RBDO, where x is the vector of random variables and d is the vector of design variables.
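A minimal sketch of the ARS idea, with one hypothetical random variable x and one design variable d: a single surface is fitted in the unified (x, d) space, after which the probability of failure at any candidate design is estimated by cheap Monte Carlo sampling of the surface, with no further structural analyses. The linear limit state and the distributions are illustrative assumptions, not the dissertation's application:

```python
import random
import numpy as np

# Hypothetical "expensive" analysis: limit state G(x, d) = d - x,
# failure when the load x exceeds the design capacity d.
def expensive_analysis(x, d):
    return d - x

# Step 1: one DOE and one least-squares fit in the unified (x, d) space.
pts = [(x, d) for x in np.linspace(0.0, 10.0, 6) for d in np.linspace(4.0, 8.0, 5)]
Z = np.array([[1.0, x, d] for x, d in pts])
g = np.array([expensive_analysis(x, d) for x, d in pts])
b, *_ = np.linalg.lstsq(Z, g, rcond=None)

# Step 2: MCS on the fitted surface at any candidate design, x ~ N(5, 1);
# no further calls to the expensive analysis are needed.
def pf_at_design(d, n=200_000, seed=1):
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if b[0] + b[1] * rng.gauss(5.0, 1.0) + b[2] * d < 0.0)
    return fails / n

pf = pf_at_design(7.0)   # exact value is Phi(-2), about 0.0228, for this linear case
```

The key economy is in step 2: once the surface is built, evaluating the probability of failure at a new design costs only polynomial evaluations, which is what lets the optimizer query many designs cheaply.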
By including the design variables in the response surface formulation, the efficiency of the RBDO is improved drastically for certain problems. The ARS is fitted to the response (limit state function) in terms of both the design variables and the random variables: G(x, d) = Z(x, d)^T b (3-4) The ARS approach combines probabilistic analyses with design optimization. Using the ARS, the probability of failure at every design point in the design optimization process can be calculated inexpensively by Monte Carlo simulation based on the fitted polynomials. The number of analyses required for an ARS depends on the total number of random variables and design variables. Because the ARS fits an approximation in terms of both random variables and design variables, it requires more analyses than an SRS. For our applications, where the number of random variables is large (around ten) and the number of design variables is small (around four), this typically results in an ARS that is less than three times as expensive to construct as an SRS, thanks to the use of Latin Hypercube sampling, which can generate an arbitrary number of design points for RSA construction (explained in the last section of this chapter and demonstrated in Chapter 5). This compares with the large number (of order 10 to 100) of SRS approximations required in the course of an optimization. For a large number of variables (more than 20 to 30), the construction of an ARS is hindered by the curse of dimensionality; SRS might then be used to reduce the dimensionality of the problem. Besides the computational cost issue, the inclusion of design variables may increase the nonlinearity of the response surface approximation. It may be necessary to use an RSA of order higher than quadratic, for which a proper DOE must be employed. The DOE issues are discussed in the last section of this chapter. Design Response Surface (DRS) Approximation Direct Monte Carlo simulation introduces noise in the computed probability of failure because of the limited number of samples.
The noise can be reduced by using a relatively large number of samples, which is made computationally affordable by the response surface approximation. The noise can also be filtered out by using another response surface approximation, the design response surface (DRS), fitted to the probability of failure P_f as a function of the design variables d: P_f(d) = Z(d)^T b (3-5) The use of the DRS also reduces the computational cost of RBDO by approximating the reliability constraint with a closed-form function. The probability of failure is found to change by several orders of magnitude over narrow bands in the design space, especially when the random variables have small coefficients of variation (Chapter 5). This steep variation requires the DRS to use high-order polynomials, such as quintic polynomials, increasing the required number of probability calculations (Qu et al. 2000). An additional problem arises when Monte Carlo simulations (MCS) are used for calculating the probabilities: for a given number of simulations, the accuracy of the probability estimates deteriorates as the probability of failure decreases. The numerical problems associated with the steep variation of the probability of failure led to the consideration of alternative measures of safety. The most common one is the safety index β, which replaces the probability by using the inverse standard normal transformation, β = −Φ⁻¹(P_f) (3-6) The safety index is the distance, measured in standard deviations from the mean of a normal distribution, that gives the same probability. Fitting the DRS to the safety index showed limited improvement in accuracy (Chapter 6), and the safety index has the same accuracy problems as the probability of failure when based on Monte Carlo simulations. A Box-Cox transformation (Myers and Montgomery 1995) of the probability of failure was also tested, but showed very limited improvement.
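The inverse normal transformation of Eq. (3-6) can be sketched with a simple bisection on the standard normal CDF; the bracket and iteration count are implementation choices:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def safety_index(pf):
    """Beta = -Phi^{-1}(P_f), via bisection on the monotone normal CDF."""
    lo, hi = -12.0, 12.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < pf:
            lo = mid
        else:
            hi = mid
    return -0.5 * (lo + hi)

# P_f spanning four orders of magnitude maps into a narrow beta range,
# which is why the safety index is easier to fit with a polynomial DRS
# than P_f itself.
betas = [safety_index(p) for p in (1e-2, 1e-3, 1e-4, 1e-5)]
# betas is approximately [2.33, 3.09, 3.72, 4.27]
```

The compression is evident: four decades of probability collapse into a beta range of about two, although, as noted above, the transformation does not remove the sampling error inherited from MCS estimates of P_f.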
A probabilistic sufficiency factor approach is developed as an inverse reliability measure to improve the accuracy of the DRS, estimate the additional resources required to satisfy the reliability constraint, and convert RBDO to sequential deterministic optimization (Chapters 6 and 7). Analysis and Design Response Surface Approach Figure 3-1 summarizes the ARS/DRS-based RBDO approach. First the DOE of the ARS is performed and the ARS is constructed. Then the DOE of the DRS is performed, which should stay within the range of design variables of the DOE for the ARS, and the DRS is constructed. Design optimization is then performed on the DRS. If the design does not converge, the DOE of the DRS can be moved toward the intermediate optimum and its range can be shrunk to improve the accuracy of the DRS. If the intermediate optimum is near the boundary of the ARS, the DOE of the ARS needs to be moved to better cover the potential optimum region. The entire process is repeated until the optimization converges and the reliability of the optimum stabilizes. Figure 3-1. Analysis response surface and design response surface approach: reliability analysis decoupled from design optimization. Statistical Design of Experiments for Stochastic and Analysis Response Surfaces Statistical design of experiments selects design points for response surface approximation in such a manner that the required accuracy is achieved with a minimum number of design points. However, the exact functional form of the structural response to be approximated is rarely known, so the errors in SRS and ARS usually include both variance and bias errors. Structural responses are usually computationally expensive to evaluate. Therefore, the selection of the DOE for the ARS is primarily based on the following two considerations: * The number of design points in the DOE is flexible, since we want to limit the number of analyses. * The points have a good space-filling distribution in the design space.
The DOE is often used to provide a sampling of the problem space, and polynomials of order higher than quadratic may be needed to provide a good approximation of the response; both favor a space-filling DOE. The ARS must include both the design and the random variables, so the number of variables is relatively large, often exceeding 15. This excludes many DOEs, such as the Central Composite Design (CCD). The CCD has 2ⁿ vertices, 2n axial points, and one center point, so the required number of design points is 2ⁿ + 2n + 1, where n is the number of variables involved. A full polynomial of mth order in n variables has L coefficients, where

L = (n+1)(n+2)···(n+m) / m! (3-7)

For n = 15, the CCD requires 32,799 analyses. On the other hand, a quadratic polynomial in 15 variables has 136 coefficients. From our experience, the number of analyses needed to estimate these coefficients is only about twice the number of coefficients, which is less than one percent of the number of vertices of the 15-variable space. Therefore, other DOEs, such as the CCD based on a fractional factorial design (Myers and Montgomery 1995), need to be used. The fractional factorial CCD is intended for the construction of quadratic RSA. Orthogonal arrays (Myers and Montgomery 1995) are used for the construction of higher-order RS (Balabanov 1997, Padmanabhan et al. 2000). Isukapalli (1999) employed orthogonal arrays to construct SRS. For problems where only a very limited number of analyses is computationally affordable, Box-Behnken designs or saturated designs can be used (Khuri and Cornell 1996). Qu et al. (2000) showed that Latin Hypercube sampling is more efficient and flexible than orthogonal arrays. The idea of Latin Hypercube sampling can be explained as follows: assume that we want n samples of k random variables. First, the range of each random variable is divided into n non-overlapping intervals of equal probability.
Then one value is selected randomly from each interval. Finally, by randomly pairing the values of the different random variables, the n input vectors, each of dimension k, for Monte Carlo simulation are generated. Figure 3-2 illustrates a two-dimensional Latin Hypercube sampling. Figure 3-2. Latin Hypercube sampling used to generate 5 samples from two random variables (one uniform, one normal) CHAPTER 4 DETERMINISTIC DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC ENVIRONMENTS This chapter presents deterministic designs of composite laminates for hydrogen tanks in cryogenic environments. The traditional approach of designing the laminate deterministically with safety factors is employed in this chapter in order to investigate the design issues. Reliability-based design, which explicitly accounts for uncertainties in material properties, is presented in Chapter 5. Introduction The use of composite materials for liquid hydrogen tanks at cryogenic temperatures poses many challenges. The coefficient of thermal expansion (CTE) along the fiber direction is usually two orders of magnitude smaller than that transverse to the fiber direction. In typical composite laminates the ply angles differ in order to carry load efficiently, which results in a mismatch of the coefficients of thermal expansion. When the laminate is cooled during manufacturing from the stress-free temperature, which is near the curing temperature, this mismatch induces large thermal strains. Cooling to cryogenic temperatures substantially increases the thermal stresses. The residual thermal strains may cause matrix cracking, leading to a reduction in the stiffness and strength of the laminate and possible initiation of delamination. A more detrimental effect of matrix cracking in hydrogen tanks is hydrogen leakage through the tank wall. Park and McManus (1996) proposed a micromechanical model based on fracture mechanics and verified the model by experiments.
Kwon and Berner (1997) studied matrix damage of cross-ply laminates by combining a simplified micromechanics model with finite element analysis and showed that the prediction of damage improves substantially when residual stresses are incorporated. Aoki et al. (2000) modeled and successfully predicted the leakage through matrix cracks. The present objective is to investigate the options available to minimize the increase in thickness due to thermal residual strains for laminates designed subject to thermal and mechanical loads. Deterministic designs were performed to investigate the following effects: (i) temperature-dependent material properties in the strain analysis, (ii) laminates designed to allow partial ply failure (matrix cracking), and (iii) auxiliary stiffening solutions that reduce the axial mechanical load on the tank wall laminates. Composite Laminate Analysis under Thermal and Mechanical Loading Since the properties of composite materials, such as the coefficients of thermal expansion and the elastic moduli, change substantially with temperature, classical lamination theory (CLT) (e.g., Gürdal et al. 1999) is modified to account for temperature-dependent material properties. The stress-free strain of a lamina is defined as ε^F = αΔT, where α is the coefficient of thermal expansion (CTE). When α is a function of the temperature T, the stress-free strain is given by

ε^F = ∫_{T_zero}^{T_service} α(T) dT (4-1)

where T_zero is the stress-free temperature of the material and T_service is the service temperature. From the equilibrium equation and the vanishing of the residual stress resultant, the equilibrium of a symmetric laminate subjected to a pure thermal load with a uniform temperature profile through the thickness can be expressed as

A(T)ε₀^N = ∫ Q̄(T)ε^F(T) dz ≡ N^N(T) (4-2)

where ε₀^N is the non-mechanical strain induced by the thermal load. The right-hand side of Equation 4-2 is defined as the thermal load N^N.
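Equation 4-1 can be evaluated numerically once α(T) is available as a fitted function. The sketch below uses simple trapezoidal integration and a hypothetical linear CTE model; the dissertation instead uses polynomials fitted to measured IM600/133 data:

```python
def stress_free_strain(alpha, t_zero, t_service, steps=1000):
    """Trapezoidal evaluation of Eq. 4-1: integral of alpha(T) dT from the
    stress-free temperature t_zero to the service temperature t_service."""
    h = (t_service - t_zero) / steps
    total = 0.5 * (alpha(t_zero) + alpha(t_service))
    total += sum(alpha(t_zero + i * h) for i in range(1, steps))
    return total * h

# Hypothetical linear CTE (strain/degF), standing in for the fitted polynomial.
def alpha2(temp):
    return 1.5e-5 + 5.0e-9 * (temp - 77.0)

eps_f = stress_free_strain(alpha2, t_zero=300.0, t_service=-423.0)
print(eps_f)  # negative: the ply contracts on cooling from 300 F to -423 F
```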
From Equation 4-2, the non-mechanical strain induced by the thermal load is

ε₀^N(T) = A⁻¹(T)N^N(T) (4-3)

The residual thermal stress follows from the constitutive equation,

σ^R(T) = Q̄(T)(ε₀^N(T) − ε^F(T)) (4-4)

The mechanical strain is

ε^M(T) = A⁻¹(T)N^M(T) (4-5)

and the corresponding mechanical stress is

σ^M(T) = Q̄(T)ε^M(T) (4-6)

By the principle of superposition, the total strain and the total stress in the laminate are

ε^total(T) = ε^M(T) + ε₀^N(T) − ε^F(T) (4-7)

σ^total(T) = σ^R(T) + σ^M(T) (4-8)

Properties of the IM600/133 Composite Material The composite material used in the present study is the IM600/133 graphite-epoxy system, which has a glass-transition temperature of 356°F. Aoki et al. (2000) tested the IM600/133 system (material A in their paper) at various temperatures, ranging from 356°F to -452.2°F (180°C to -269°C), under mechanical tensile loads. The material properties of IM600/133 were taken from Aoki et al. (2000) and fitted with smooth polynomials as functions of temperature for use in the calculations (Figures 4-1 and 4-2). The data points used in the fitting and the individual polynomials are given in Appendix A. Figure 4-1. Polynomials fit to the elastic properties: E1, E2, G12, and ν12 Aoki et al. (2000) showed that the fracture toughness of the material increased at lower temperatures; however, the increased strain energy due to the mismatch in the thermal expansion coefficients also increased the critical energy release rate. They also applied the micromechanics model proposed by Park and McManus (1996) for predicting microcracking and showed good correlation with experiments. Aoki et al.
(2000) found that at cryogenic temperatures, quasi-isotropic laminates exhibited a large reduction in the transverse mechanical strain ε2 that initiates microcracking (from 0.702% at room temperature to 0.325% at cryogenic temperatures). Experimental data from Aoki et al. (2000) were used to determine the strain allowables. They tested a 16-ply quasi-isotropic (45/0/-45/90)2s symmetric laminate in tension in the 0° direction at cryogenic temperatures. The nominal specimen thickness and width were 2.2 mm and 15 mm. The mechanical loads corresponding to matrix cracking (Table 4-1) were extracted from Figure 5 of Aoki et al. (2000). The strain transverse to the fiber direction, ε2, is assumed to be the strain that induces matrix cracking in the laminate. Based on the load condition and the configuration of the laminate, the transverse strain ε2 in the 90° plies is the most critical strain in the laminate. Figure 4-2. Polynomials fit to the coefficients of thermal expansion: α1 and α2 Normally, strain allowables are calculated by loading laminates at room temperature. For microcracking, however, the residual stresses are of primary importance, so all strains are calculated from the stress-free temperature, assumed to be 300°F. The calculations are made by integrating the thermal strains from the stress-free temperature to the operational temperature, as described in the next section. Table 4-1 shows the transverse strains ε2 in the 90° plies corresponding to the loading at the onset of matrix cracking at selected temperatures. Aoki et al. (2000) found that the maximum mechanical strain before matrix cracking is reduced from 0.7% at room temperature to 0.325% at -452°F. Older results (Aoki et al. 1999) (Table 4-1) indicated that the maximum mechanical strain at cryogenic temperature may be as low as 0.082%.
However, the calculation indicates that the total strain (including the residual thermal strain) may vary anywhere from 1.5 to 1.9%, depending on the temperature and the measurement. These values appear high, but this is because they include the residual strains that are usually not counted. For the quasi-isotropic laminate, these residual strains are very high at room temperature, 0.86%, and are higher still at lower temperatures.

Table 4-1. Transverse strains calculated for conditions corresponding to the onset of matrix cracking in the 90° plies of a quasi-isotropic (45/0/-45/90)2s laminate in Aoki et al. (2000)

                        Room temperature   LN2 temperature      LHe temperature      LHe temperature
                        (77°F or 25°C)     (-320°F or -196°C)   (-452°F or -269°C)   (-452°F or -269°C)a
Mechanical load (MPa)   390                330                  200                  50a
Total ε2                0.01564            0.01909              0.01760              0.01517a
Thermal ε2              0.00864            0.01365              0.01435              0.01435a
Mechanical ε2           0.00700            0.00544              0.00325              0.00082a
a Older data obtained from Aoki et al. (1999)

The importance of working with strains measured from the stress-free temperature is demonstrated in Table 4-2, which shows ε2 in the angle-ply laminate (±25)4s under the same loading conditions as Table 4-1. At room temperature, the residual (thermal) strains are only about 0.4%, compared to 0.86% for the quasi-isotropic laminate. An analysis based on strains measured from room temperature would not show the additional 0.46% strain that the (±25)4s laminate can carry compared to a quasi-isotropic laminate. Based on the data of Table 4-1, we selected the allowable strain to be 1.54% for the probabilistic design and 1.1% (a 1.4 safety factor) for the deterministic design. Table 4-2.
Transverse strains of an angle-ply laminate (±25)4s under the same loading conditions as Table 4-1

                        Room temperature   LN2 temperature      LHe temperature      LHe temperature
                        (77°F or 25°C)     (-320°F or -196°C)   (-452°F or -269°C)   (-452°F or -269°C)a
Mechanical load (MPa)   390                330                  200                  50a
Total ε2                -0.00261           0.00360              0.00527              0.00656a
Thermal ε2              0.00393            0.00669              0.00699              0.00699a
Mechanical ε2           -0.00654           -0.00309             -0.00172             -0.00043a
a Older data obtained from Aoki et al. (1999)

Table 4-3 shows the strain allowables for the lamina; the allowables other than ε2u were provided to us by NASA. The strain allowables may appear high, but this is because they apply to strains that include the residual strains developed during cooling from the stress-free temperature of 300°F. A quasi-isotropic laminate uses up its entire transverse strain allowable of 0.011 when cooled to -452°F. Thus, this value is conservative in view of the experiments by Aoki et al. (2000), which indicated that the laminate can still carry 0.325% mechanical strain at cryogenic temperature.

Table 4-3. Strain allowables for IM600/133 at -423°F
Strain        ε1u      ε1l      ε2u                 ε2l      γ12u
Allowablesa   0.0103   0.0109   0.0110 or 0.0154b   0.0130   0.0138
a Strains include residual strains calculated from the stress-free temperature of 300°F
b The value 0.0110 is obtained from the extreme value 0.0154 divided by a safety factor of 1.4

Deterministic Design of Angle-Ply Laminates It is estimated that the minimum thickness needed to prevent hydrogen leakage is 0.04 inch, so it may be acceptable to permit matrix cracking if the undamaged part of the laminate has a minimum thickness of 0.04 inch. For the cracked part of the laminate, the elastic modulus transverse to the fiber direction, E2, and the shear modulus, G12, are reduced by 20 percent and the transverse strain allowable, ε2u, is increased. The rest of the laminate must not have matrix cracking and must provide at least 8 contiguous intact plies (0.04 inch) in order to prevent hydrogen leakage. Optimization Formulation Laminates with two ply angles, [±θ1/±θ2]s (see Figure 4-3), were optimized. Figure 4-3. Geometry and loads for the laminates The x direction here corresponds to the hoop direction on a cryogenic propellant tank, while the y direction corresponds to the axial direction. The laminates are made of IM600/133 graphite-epoxy material with a ply thickness of 0.005 inch and are subjected to mechanical loads and an operating temperature of -423°F. Nx is 4,800 lb./inch and Ny is 2,400 lb./inch. The design problem was formulated as (thicknesses in inches)

minimize  h = 4(t1 + t2)
such that -ε1^l ≤ ε1 ≤ ε1^u
          -ε2^l ≤ ε2 ≤ ε2^u
          |γ12| ≤ γ12^u
          0.005 ≤ t1, t2
          0.040 ≤ h          (4-9)

where h is the laminate thickness, superscripts u and l denote the upper and lower limits of the associated quantities, and ε1, ε2, and γ12 are the ply strains along the fiber direction, transverse to the fiber direction, and in shear, respectively. The stack thickness of the plies with ply angle θ1, which are allowed to have matrix cracking, is t1. The stack thickness of the plies with ply angle θ2, which are not allowed to crack and must provide in total a minimum intact thickness of 0.04 inch to prevent hydrogen leakage, is t2. The four design variables are the ply angles θ1 and θ2 and their stack thicknesses t1 and t2. The individual stack thicknesses from a continuous optimizer (SQP in MATLAB) are rounded up to the nearest multiple of 0.005 inch. Optimizations without Matrix Cracking In order to see the effect of the mechanical and thermal loads, it is instructive to compare designs for different operational temperatures. Table 4-4 shows the optimum laminates at these temperatures.
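The rounding of the continuous optimizer output to manufacturable stack thicknesses can be sketched as follows (a hypothetical helper; the dissertation used SQP in MATLAB, and the continuous values below are made up for illustration):

```python
import math

PLY_THICKNESS = 0.005  # inch

def round_up_stack(t: float) -> float:
    """Round a continuous stack thickness up to the nearest multiple of 0.005 inch."""
    return math.ceil(round(t / PLY_THICKNESS, 9)) * PLY_THICKNESS

def laminate_thickness(t1: float, t2: float) -> float:
    """Total thickness h = 4(t1 + t2) of the symmetric [+-theta1/+-theta2]s laminate."""
    return 4.0 * (t1 + t2)

# Hypothetical continuous optimum with h = 4(0.0093 + 0.0140) = 0.093 inch:
t1 = round_up_stack(0.0093)  # -> 0.010
t2 = round_up_stack(0.0140)  # -> 0.015
print(laminate_thickness(t1, t2))  # rounded design thickness, 0.1 inch
```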
In the last row of Table 4-4, the numbers in parentheses are the continuous thicknesses before rounding. Without thermal strains, a cross-ply laminate with a thickness of 0.04 inch can easily (with 0.1% transverse strain as the margin of safety) carry the mechanical loads. When thermal strains are taken into account, the angle between the ±θ plies must decrease in order to reduce the thermal strains. The ply angles do not vary monotonically because both the residual strains and the stiffness of the laminate increase as the temperature decreases. At cryogenic temperatures the angle decreases to 25.5°; at that angle the axial loads cannot be carried efficiently, and the thickness increases to 0.1 inch. Figure 4-4 shows that the thickness of the optimum laminates, for temperature-dependent material properties and for constant material properties evaluated at 77°F, changes substantially with the working temperature for a strain limit ε2u of 0.0110. Using temperature-dependent material properties avoided the very conservative designs obtained with constant material properties.

Table 4-4. Optimal laminates for different operational temperatures: ε2u of 0.0110

                   Mechanical only   Mechanical and thermal loads
Temperature (°F)   77.0              77.0            -61.5           -242.0          -423.0
θ1 (degree)        90.00             34.82           38.13           33.57           25.50
θ2 (degree)        0.00              33.93           38.13           33.57           25.50
t1 (inch)          0.005             0.005           0.010           0.010           0.015
t2 (inch)          0.005             0.005           0.005           0.010           0.010
ha (inch)          0.040 (0.040)     0.040 (0.040)   0.060 (0.048)   0.080 (0.079)   0.100 (0.093)
a Numbers in parentheses indicate unrounded thickness

Figure 4-4.
The change of optimal thickness (inch) with temperature for variable and constant material properties (ε2u of 0.0110) Designs must be feasible over the entire range of temperatures, so for all designs discussed in the rest of the dissertation the strain constraints were applied at 21 temperatures uniformly distributed from 77°F to -423°F. Table 4-5 shows that the design problem has multiple optima. Figure 4-5 shows that the tensile strain limit ε2u is the active constraint at -423°F for the second optimal design of Table 4-5.

Table 4-5. Optimal laminates for temperature-dependent material properties with ε2u of 0.0110 (optimized for 21 temperatures)

θ1 (degree)   θ2 (degree)   t1 (inch)   t2 (inch)   ha (inch)       Probability of failureb
0.00          28.16         0.005       0.020       0.100 (0.103)   0.019338 (0.014541)
27.04         27.04         0.010       0.015       0.100 (0.095)   0.000479 (0.001683)
25.16         27.31         0.005       0.020       0.100 (0.094)   0.000592 (0.001879)
a Numbers in parentheses indicate unrounded thickness
b The probabilities were calculated by the methodology described in Chapter 5

Figure 4-5. Strains in the optimal laminate for temperature-dependent material properties with ε2u of 0.0110 (second design in Table 4-5) These optimal laminates have similar thicknesses but different ply angles. The failure probabilities of the continuous designs are shown in parentheses. The high failure probabilities of the first design (continuous and discrete) clearly show a smaller safety margin than the other two. The second and third designs show that a slight rounding can change the failure probability significantly. Designs with two similar ply angles have much lower failure probabilities than designs with two substantially different ply angles. The failure probabilities of these laminates are too high (compared with 10⁻⁴ to 10⁻⁶), and this provides an incentive to conduct reliability-based design.
Optimizations Allowing Partial Matrix Cracking The plies of angle θ1 are those allowed to develop matrix cracking in the optimizations allowing partial matrix cracking. The ε2u of the θ1 plies was increased to 0.0154, while the rest of the laminate still used an ε2u of 0.011. The lower limit of t2 was increased to 0.010 inch (a total ±θ2 thickness of 0.04 inch) to prevent hydrogen leakage. Table 4-6 shows the optimal design allowing partial matrix cracking. Its thickness is the same as that of the design without partial matrix cracking (Table 4-5), and the ply angle of the cracked plies increased due to the increased strain limit ε2u. However, the failure probability is higher than that of the design that does not allow matrix cracking, which indicates that this option does not help. The active constraint is still the tensile strain limit ε2u of 0.011 at cryogenic temperatures for the uncracked plies.

Table 4-6. Optimal laminate for temperature-dependent material properties allowing partial matrix cracking: ε2u of 0.011 for uncracked plies and 0.0154 for cracked plies

θ1 (degree)   θ2 (degree)   t1 (inch)   t2 (inch)   ha (inch)       Probability of failure
36.07         25.24         0.015       0.010       0.100 (0.097)   0.003716 (0.004582)
a Numbers in parentheses indicate unrounded thickness.

Optimizations with Reduced Axial Load Ny With small ply angles, the critical component of the load is the axial load Ny, induced by the pressure on the caps of the propellant tank. A smaller axial load may be obtained by using an auxiliary structure to carry part of this load, such as axial stiffeners or a cable connecting the caps. If the auxiliary structure is not directly connected to the wall of the hydrogen tank (for example, if it is attached to the caps of the tank), it is not affected by the mismatch of the thermal expansion coefficients, i.e., by the residual thermal strains. Here the possibility of reducing the axial load by half, by carrying 1,200 lb./inch of the axial load with a cable made of unidirectional material, was explored.
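The equivalence between a cable cross-section and a laminate thickness follows from spreading the cable area over the tank circumference; a quick check using the figures quoted in the next paragraph (a 5.05 inch² cable and a 160-inch tank radius):

```python
from math import pi

TANK_RADIUS = 160.0  # inch
CABLE_AREA = 5.05    # inch^2

# The cable area spread over the circumference gives the equivalent wall thickness.
equivalent_thickness = CABLE_AREA / (2 * pi * TANK_RADIUS)
print(round(equivalent_thickness, 4))  # ~0.005 inch
```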
The required cross-sectional area of the composite cable is 5.05 inch², which is equivalent to a laminate thickness of 0.005 inch for a tank with a 160-inch radius. Table 4-7 lists the designs optimized with half of the axial load. The results indicate that reducing the axial load is an effective way to reduce the laminate thickness. The higher probabilities of failure reflect rounding down of the thickness.

Table 4-7. Optimal laminates for a reduced axial load of 1,200 lb./inch obtained by using load-shunting cables (equivalent laminate thickness of 0.005 inch)

θ1 (degree)   θ2 (degree)   t1 (inch)   t2 (inch)   ha (inch)       Probability of failure
0.00          29.48         0.005       0.005       0.040 (0.043)   0.010311 (0.001156)
27.98         26.20         0.005       0.005       0.040 (0.043)   0.585732 (0.473536)
30.62         11.31         0.005       0.005       0.040 (0.042)   0.008501 (0.008363)
a Numbers in parentheses indicate unrounded thickness.

It is seen that the traditional approach of designing the laminate deterministically with safety factors did not work well for this problem, owing to the various uncertainties and the matrix-cracking failure mode. Uncertainties in the material properties are introduced by the fabrication process, the temperature dependence of the material properties, the cure reference temperature, and the acceptable crack density for design. These uncertainties indicate a need to use reliability-based optimization to design laminates for use at cryogenic temperatures. CHAPTER 5 RELIABILITY-BASED DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC ENVIRONMENTS This chapter presents reliability-based designs of composite laminates for hydrogen tanks in cryogenic environments, a comparison between deterministic and reliability-based designs, the identification of the uncertainty parameters that have the largest influence on the optimum design, and a quantification of the weight penalty associated with the level of uncertainty in those parameters. The results indicate that the most effective measure for reducing the thickness is quality control (see also Qu et al., 2001).
The reliability-based optimization is carried out using the response surface approximations combined with Monte Carlo simulation described in Chapter 3. Reliability-Based Design Optimization Problem Formulation The reliability-based optimization is formulated as

minimize  h = 4(t1 + t2)
such that P ≤ Pu
          0.005 ≤ t1
          0.005 ≤ t2          (5-1)

where h is the laminate thickness, t1 is the stack thickness of the lamina with ply angle θ1, with a lower limit of 0.005 inch, and t2 is the stack thickness of the lamina with ply angle θ2, also with a lower limit of 0.005 inch. The limits on t1 and t2 also ensure that the laminate has a minimum thickness of 0.04 inch to prevent hydrogen leakage. The reliability constraint is expressed as a limit Pu (e.g., Pu = 10⁻⁴) on the probability of failure P. The probability of failure is based on first-ply failure according to the maximum strain failure criterion. The four design variables are the ply angles θ1 and θ2 and their stack thicknesses t1 and t2. Reliability-based optimization seeks the lightest structure satisfying the reliability constraint. The twelve random variables are the four elastic properties (E1, E2, G12, and ν12), the two coefficients of thermal expansion (α1 and α2), the five ply strain allowables (ε1u, ε1l, ε2u, ε2l, and γ12u), and the stress-free temperature of the material (Tzero). The mean values of the strain limits are shown in Table 5-1, except for ε2u, which is 0.0154. Table 5-2 shows the coefficients of variation (CV) of the random variables, which are assumed to be normally distributed and uncorrelated. These CVs are based on limited test data provided to us by the manufacturers and are intended only for illustration. The mean value of the stress-free temperature is 300°F. The mean values of the other random variables change as functions of temperature and are given in Chapter 4. Table 5-1.
Strain allowablesa for IM600/133 at -423°F
Strain        ε1u      ε1l      ε2u                 ε2l      γ12u
Allowables    0.0103   0.0109   0.0110 or 0.0154b   0.0130   0.0138
a Strains include residual strains calculated from the stress-free temperature of 300°F
b The value 0.0110 is obtained from the extreme value 0.0154 divided by a safety factor of 1.4

Table 5-2. Coefficients of variation (CV) of the random variables
Random variables   E1, E2, G12, ν12   α1, α2   Tzero   ε1u, ε1l, ε2l, γ12u   ε2u
CV                 0.035              0.035    0.030   0.06                  0.09

Response Surface Approximation for Reliability-Based Optimization For the present work, response surface approximations of two types were created. The first type is the analysis response surface (ARS), which is fitted to the strains in the laminate in terms of both the design variables and the random variables. Using the ARS, the probability of failure at every design point can be calculated inexpensively by Monte Carlo simulation based on the fitted polynomials. The second type is the design response surface (DRS), which is fitted to the probability of failure as a function of the design variables. The DRS is created in order to filter out the noise induced by the Monte Carlo simulation and is used to evaluate the reliability constraint in the design optimization. The details of the ARS/DRS approach are given in Chapter 3. Analysis Response Surfaces Besides the design and random variables described in the problem formulation, the service temperature was treated as a variable ranging from 77°F to -423°F in order to avoid constructing analysis response surfaces at each selected temperature. Therefore, the total number of variables was seventeen. However, the strains in the laminate do not depend on the five strain allowables, so the ARS were fitted to the strains in terms of twelve variables: the four design variables, the four elastic properties, the two coefficients of thermal expansion, the stress-free temperature, and the service temperature.
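The probability calculation over an ARS can be sketched as follows. The quadratic surrogate, its coefficients, and the distribution parameters here are all hypothetical stand-ins for the fitted twelve-variable ARS, chosen only to show the mechanics of counting failures against a random allowable:

```python
import random

random.seed(1)

def surrogate_strain(u1: float, u2: float) -> float:
    """Hypothetical quadratic surrogate for the transverse strain eps2,
    in terms of two standardized random inputs (a stand-in for the ARS)."""
    return 0.0100 + 0.0006 * u1 - 0.0004 * u2 + 0.0001 * u1 * u1

def mcs_failure_probability(n_samples: int) -> float:
    """Monte Carlo estimate of P(eps2 > eps2_allowable)."""
    failures = 0
    for _ in range(n_samples):
        eps2 = surrogate_strain(random.gauss(0, 1), random.gauss(0, 1))
        allowable = random.gauss(0.0110, 0.0110 * 0.09)  # illustrative 9% CV
        if eps2 > allowable:
            failures += 1
    return failures / n_samples

print(mcs_failure_probability(100_000))
```

Because each sample is only a polynomial evaluation, millions of samples are affordable even though each exact CLT analysis is not.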
The ranges of the design variables for the ARS (Table 5-3) were chosen based on the values of the optimal deterministic designs. The ranges of the random variables are handled automatically, as explained below. Using the ARS and the five strain allowables, probabilities of failure are calculated by Monte Carlo simulation, with the strain constraints evaluated at 21 uniformly distributed service temperatures between 77°F and -423°F.

Table 5-3. Ranges of the design variables for the analysis response surfaces
Design variables   Range
θ1                 20° to 30°
θ2                 20° to 30°
t1                 0.0125 to 0.03 inch
t2                 0.0125 to 0.03 inch

The accuracy of the ARS is evaluated by the statistical measures provided by the JMP software (Anon. 2000), which include the adjusted coefficient of multiple determination (R²adj) and the root mean square error (RMSE) predictor. To improve the accuracy of the response surface approximation, polynomial coefficients that were not well characterized were eliminated from the response surface model by mixed stepwise regression (Myers and Montgomery 1995). The statistical design of experiments for the ARS was Latin Hypercube sampling (LHS, e.g., Wyss and Jorgensen 1998), where the design variables were treated as uniformly distributed variables in order to generate the design points (presented in Chapter 3). Since the laminate has two ply angles and each ply has three strains, six ARS were needed in the optimization. A quadratic polynomial in twelve variables has 91 coefficients. The number of sampling points generated by LHS was selected to be twice the number of coefficients. Table 5-4 shows that the quadratic response surfaces constructed from LHS with 182 points offer good accuracy. Table 5-4.
Quadratic analysis response surfaces of the strains (millistrain)

Analysis response surfaces based on 182 LHS points
Error statistics   ε1 in θ1   ε2 in θ1   γ12 in θ1   ε1 in θ2   ε2 in θ2   γ12 in θ2
R²adj              0.9977     0.9956     0.9991      0.9978     0.9961     0.9990
RMSE predictor     0.017      0.06       0.055       0.017      0.055      0.06
Mean of response   1.114      8.322      3.13        1.108      8.328      3.14

Design Response Surfaces The six quadratic ARS were used to calculate the probabilities of failure by Monte Carlo simulation. Because the fitting errors in the design response surfaces (DRS) are generally larger than the random errors from finite sampling in the probability calculation, the Monte Carlo simulation needs to be performed only until the estimated confidence intervals are relatively small. A sample size of 1,000,000 was therefore employed. The design points of the DRS combine a Face-Centered Central Composite Design (FCCCD) and LHS. Table 5-5 compares three DRS.

Table 5-5. Design response surfaces for the probability of failure (probabilities calculated by Monte Carlo simulation with a sample size of 1,000,000)

Error statistics   FCCCD 25 points,   LHS 252 points,   LHS 252 points + FCCCD 25 points,
                   quadratic          5th order         5th order
R²adj              0.6855             0.9926            0.9982
RMSE predictor     0.00053            0.000003          0.000012
Mean of response   0.00032            0.000016          0.000044

The accuracy of the quadratic response surface approximation is unacceptable. The accuracy of the fifth-order response surface (with 126 unknown coefficients before stepwise regression) was improved by using a reciprocal transformation of the thicknesses t1 and t2, since the probability of failure, like most structural responses, is inversely correlated with the stack thickness. We found that LHS may fail to sample points near some corners of the design space, leading to poor accuracy around those corners. We therefore combined LHS with an FCCCD, which includes all the vertices of the design space.
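The combined design of experiments can be sketched as follows (an illustrative implementation: the face-centered CCD supplies the corners that plain Latin Hypercube sampling may miss, and a minimal LHS generator is included for comparison):

```python
import random
from itertools import product

def fcccd(n: int):
    """Face-centered central composite design on [-1, 1]^n:
    2^n corners, 2n face centers, and one center point."""
    corners = [list(p) for p in product((-1.0, 1.0), repeat=n)]
    faces = []
    for i in range(n):
        for v in (-1.0, 1.0):
            point = [0.0] * n
            point[i] = v
            faces.append(point)
    return corners + faces + [[0.0] * n]

def lhs(n_vars: int, n_points: int, rng=None):
    """Minimal Latin Hypercube sample on [0, 1]^n_vars: one value per
    equal-probability interval in each variable, randomly paired."""
    rng = rng or random.Random(0)
    columns = []
    for _ in range(n_vars):
        column = [(i + rng.random()) / n_points for i in range(n_points)]
        rng.shuffle(column)
        columns.append(column)
    return [list(row) for row in zip(*columns)]

# Four design variables: 16 corners + 8 face centers + 1 center = 25 points,
# matching the 25-point FCCCD combined with 252 LHS points in Table 5-5.
print(len(fcccd(4)), len(lhs(4, 252)))  # 25 252
```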
The accuracy of the DRS based on LHS combined with the FCCCD is slightly worse than that of the DRS based on LHS alone, because the probabilities at the corners of the design space are usually extremely low or extremely high, presenting a greater fitting difficulty than without the FCCCD. But the extrapolation problem was solved, and the side constraints were set to the range of the ARS shown in Table 5-3. The error of 0.000012 is much lower than the allowable failure probability of 0.0001. Table 5-6 compares the reliability-based optimum with the three deterministic optima from Chapter 4 and their failure probabilities. The optimal thickness increased from 0.100 to 0.120 inch, while the failure probability decreased by about one order of magnitude.

Table 5-6. Comparison of the reliability-based optimum with the deterministic optima

Optimal design [θ1, θ2, t1, t2]   Laminate thickness   Failure probability from MCS    Allowable probability
(degree and inch)                 (inch)               of ARS, 1,000,000 samples       of failure
RBDO optimum
[24.89, 25.16, 0.015, 0.015]      0.120 (0.120)        0.000055                        0.0001
Deterministic optima
[0.00, 28.16, 0.005, 0.020]       0.100 (0.103)        0.019338a
[27.04, 27.04, 0.010, 0.015]      0.100 (0.095)        0.000479
[25.16, 27.31, 0.005, 0.020]      0.100 (0.094)        0.000592
a This deterministic optimum is outside the range of the analysis response surfaces; its probability of failure was calculated by Monte Carlo simulation based on another set of analysis response surfaces.

Refining the Reliability-Based Design The reliability-based designs in Table 5-6 show that ply angles close to 25° give designs with low failure probability. Furthermore, good designs require only a single ply angle, allowing the configuration of the laminate to be simplified from [±θ1/±θ2]s to [±θ]s. Table 5-7 shows the failure probabilities of selected designs calculated by Monte Carlo simulation using the ARS. The laminates with ply angles of 24°, 25°, and 26° offer lower probabilities of failure than the rest. These three laminates will be studied further. Table 5-7.
Refined reliability-based designs [±θ]s (Monte Carlo simulation with a sample size of 10,000,000)

θ (degree)   h (inch)   Probability of failure
21.00        0.120      0.0001832
22.00        0.120      0.0001083
23.00        0.120      0.0000718
24.00        0.120      0.0000605
25.00        0.120      0.0000565
26.00        0.120      0.0000607
27.00        0.120      0.0000792

Quantifying Errors in the Reliability Analysis The reliability analysis has errors due to MCS with a limited sample size and due to the approximation of the CLT analysis by the analysis response surfaces. To evaluate the magnitude of these errors, the probability of failure of the rounded design was evaluated by MCS with the exact analysis (classical lamination theory, CLT); only one million analyses were performed because of the computational cost. Table 5-8 compares the results of MCS based on the ARS with those based on CLT. The difference is about 1.25×10⁻⁵.

Table 5-8. Comparison of the probability of failure from MCS based on the ARS and on CLT

Optimal design [θ1, θ2, t1, t2]   Laminate thickness   Failure probability from     Failure probability from
(degree and inch)                 (inch)               MCS of ARS, 1×10⁷ samples    MCS of CLT, 1×10⁶ samples
[25, 25, 0.015, 0.015]            0.120 (0.120)        0.0000565                    0.000069

By treating each simulation as a Bernoulli trial, so that the number of failures in N trials follows a binomial distribution, the coefficient of variation (COV) of the probability Pf obtained by MCS can be estimated as

COV(Pf) = √((1 − Pf) / (N·Pf)) (5-2)

where N is the sample size of the MCS. The accuracy of the MCS can also be expressed as the percentage error corresponding to a 95% confidence interval,

ε% = √((1 − Pf^t) / (N·Pf^t)) × 196% (5-3)

where Pf^t is the true probability of failure. Table 5-9 shows the accuracy and error bounds of the MCS. Together with Table 5-8, the error calculation indicates that the probability of failure of the rounded design is still below the target probability of failure of 0.0001. The errors can be reduced by more accurate approximations and advanced Monte Carlo simulation techniques.
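Equations 5-2 and 5-3 can be verified numerically against Table 5-9 (a quick check, not code from the dissertation):

```python
from math import sqrt

def mcs_cov(p_true: float, n: int) -> float:
    """Coefficient of variation of an MCS probability estimate (Eq. 5-2)."""
    return sqrt((1.0 - p_true) / (n * p_true))

def mcs_error_95(p_true: float, n: int) -> float:
    """Percentage error at 95% confidence (Eq. 5-3), i.e. 1.96 standard errors."""
    return 1.96 * mcs_cov(p_true, n) * 100.0

print(round(mcs_cov(0.0000565, 10_000_000) * 100, 1))  # 4.2 percent COV
print(round(mcs_error_95(0.0000565, 10_000_000), 1))   # 8.2 percent error
print(round(mcs_error_95(0.000069, 1_000_000), 1))     # 23.6 percent error
```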
Another reliability-based design cycle in a reduced-size design region could be performed to obtain a more accurate result.

Table 5-9. Accuracy of MCS

                        Coefficient of variation   Percentage error (absolute
                        (COV)                      error) for 95% CI
MCS of 1×10⁷ samples    4.2%                       8.2% (±4.66×10⁻⁶)
MCS of 1×10⁶ samples    12.05%                     23.6% (±1.63×10⁻⁵)

Effects of Quality Control on Laminate Design

Comparing the deterministic designs to the reliability-based design, there is an increase of 20% in the thickness. In addition, the design failure probability of 10⁻⁴ is quite high. To improve the design, the possibility of limiting the variability in material properties through quality control (QC) is considered. Here, quality control means that materials will be tested by the manufacturer and/or fabricator, and that extremely poor batches will not be accepted. Normal distributions admit the possibility (though very small) of unbounded variation. In practice, quality control truncates the low end of the distribution; that is, specimens with extremely poor properties are rejected. It is also assumed that specimens with exceptional properties are scarcer than those with poor properties. The normal distribution will be truncated on the high side at 3σ (excluding 14 out of 10,000 specimens) and on the low side at different values corresponding to different levels of QC. The tradeoff between QC, failure probability, and laminate thickness (weight) will be explored.

Effects of Quality Control on Probability of Failure

Since the primary failure mode of the laminate is microcracking, the tensile strain limit ε2u is the first quantity to be improved by quality control. The normal distribution of ε2u is truncated at 3σ to exclude unrealistically strong specimens, and on the low side QC at 4σ, 3σ, and 2σ was checked, which corresponds to rejecting 3 specimens out of 100,000, 14 specimens out of 10,000, and 23 specimens out of 1,000, respectively.
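The quality-control model above, a normal distribution truncated on both sides, can be sketched with simple rejection sampling. The snippet below is illustrative only; it uses the nominal ε2u statistics quoted later in the chapter (mean 0.0154, CV 0.09) with QC at 2σ on the weak side and 3σ on the strong side:

```python
import random

def truncated_normal(mean, std, lo_sigma, hi_sigma, rng):
    """Draw a normal variate, rejecting values below mean - lo_sigma*std
    (quality control on the weak side) or above mean + hi_sigma*std
    (excluding unrealistically strong specimens)."""
    while True:
        x = rng.gauss(mean, std)
        if mean - lo_sigma * std <= x <= mean + hi_sigma * std:
            return x

rng = random.Random(0)
mean, std = 0.0154, 0.09 * 0.0154   # nominal ε2u mean and standard deviation
samples = [truncated_normal(mean, std, 2.0, 3.0, rng) for _ in range(20000)]
print(min(samples) >= mean - 2.0 * std)   # True: the weak tail is removed by QC
```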
Table 5-10 shows the change in the failure probability for selected reliability-based designs. Quality control on ε2u is a very effective way to reduce the probability of failure. A relatively low-cost QC of ε2u at 3σ reduces the failure probability by more than two orders of magnitude.

Table 5-10. Effects of quality control of ε2u on probability of failure for 0.12-inch-thick [±θ]s laminates

             Probability of failure from MCS, 10,000,000 samples
θ (degree)   Untruncated   Truncated at 4σ   Truncated at 3σ   Truncated at 2σ
             normal        (3/100,000)       (14/10,000)       (23/1,000)
24.00        60.5×10⁻⁶     30.5×10⁻⁶         0.0               0.0
25.00        56.5×10⁻⁶     29.9×10⁻⁶         0.1×10⁻⁶          0.0
26.00        60.7×10⁻⁶     31.0×10⁻⁶         0.5×10⁻⁶          0.0

Table 5-11 shows that truncating the other strain limits even at 2σ does not change the laminate failure probability substantially. This confirms that the primary failure mode of the laminate is microcracking; therefore, ε2u is the critical parameter for further study. Table 5-12 shows that this conclusion also applies to the elastic moduli, the coefficients of thermal expansion, and the stress-free temperature. Comparing with Table 5-10, we see that truncating any of the other parameters at 2σ does not change the failure probability as significantly as truncating ε2u at 4σ. Note that some probabilities from truncated distributions are slightly larger than those from untruncated distributions, which is due to sampling errors.

Table 5-11. Effects of quality control of ε1u, ε1l, ε2l, and γ12 on probability of failure of 0.12-inch-thick [±θ]s laminates

             Probability of failure from MCS, 10,000,000 samples
θ (degree)   Untruncated   Truncated     Truncated     Truncated     Truncated
             normal        ε1u at 2σ     ε1l at 2σ     ε2l at 2σ     γ12 at 2σ
24.00        60.5×10⁻⁶     58.6×10⁻⁶     61.5×10⁻⁶     54.4×10⁻⁶     55.4×10⁻⁶
25.00        56.5×10⁻⁶     53.0×10⁻⁶     52.3×10⁻⁶     54.0×10⁻⁶     53.2×10⁻⁶
26.00        60.7×10⁻⁶     63.4×10⁻⁶     62.0×10⁻⁶     60.1×10⁻⁶     61.0×10⁻⁶

Table 5-12. Effects of quality control of E1, E2, G12, ν12, Tzero, α1, and α2 on probability of failure of 0.12-inch-thick [±θ]s laminates

             Probability of failure from MCS, 10,000,000 samples, truncating at 2σ
θ (degree)   E1          E2          G12         ν12         Tzero       α1          α2
24.00        62.2×10⁻⁶   52.1×10⁻⁶   57.8×10⁻⁶   51.8×10⁻⁶   54.6×10⁻⁶   52.7×10⁻⁶   58.2×10⁻⁶
25.00        52.5×10⁻⁶   48.1×10⁻⁶   55.1×10⁻⁶   49.7×10⁻⁶   55.1×10⁻⁶   56.8×10⁻⁶   54.4×10⁻⁶
26.00        54.5×10⁻⁶   59.1×10⁻⁶   60.4×10⁻⁶   59.4×10⁻⁶   59.8×10⁻⁶   63.0×10⁻⁶   60.4×10⁻⁶

Effects of Quality Control on Optimal Laminate Thickness

Quality control (QC) can be used to reduce the laminate thickness instead of the probability of failure. Table 5-13 shows that QC of ε2u at 3σ allows 0.1-inch-thick laminates with a failure probability below the required 0.0001.

Table 5-13. Effects of quality control of ε2u on probability of failure for 0.1-inch-thick [±θ]s laminates

             Probability of failure from MCS, 1,000,000 samples
θ (degree)   Untruncated   Truncated at 4σ   Truncated at 3σ   Truncated at 2.5σ
             normal        (3/100,000)       (14/10,000)       (6/1,000)
24.00        0.002224      0.002163          0.001054          0.000071
25.00        0.001030      0.000992          0.000229          0.000007
26.00        0.000615      0.000629          0.000092          0.000003

Table 5-14 shows that QC of ε2u at 1.6σ, which corresponds to rejecting 55 specimens out of 1,000, reduces the thickness to 0.08 inch with a failure probability below 0.0001. Therefore, the laminate thickness can be reduced to 0.08 inch if QC is able to find and reject 55 specimens out of 1,000.

Table 5-14. Effects of quality control of ε2u on probability of failure for 0.08-inch-thick [±θ]s laminates

             Probability of failure from MCS, 1,000,000 samples
θ (degree)   Untruncated   Truncated at 3σ   Truncated at 2σ   Truncated at 1.6σ
             normal        (14/10,000)       (23/1,000)        (55/1,000)
24.00        0.061204      0.060264          0.039804          0.015017
25.00        0.028289      0.027103          0.008820          0.001019
26.00        0.013595      0.012154          0.001243          0.000071

Effects of Other Improvements in Material Properties

Instead of quality control, it is possible to improve the design by using a better material. Table 5-15 shows the effects of changing the mean value of ε2u by ±10 percent of the nominal value of 0.0154.
Comparison with Table 5-10 shows that a 10% improvement has a big influence on the failure probability but is not as powerful as quality control at the 3σ level.

Table 5-15. Sensitivity of failure probability to the mean value of ε2u (CV = 0.09) for 0.12-inch-thick [±θ]s laminates

             Probability of failure from MCS, 10,000,000 samples
θ (degree)   E(ε2u) = 0.0154   E(ε2u) = 0.01694   E(ε2u) = 0.01386
24.00        60.5×10⁻⁶         2.5×10⁻⁶           1082.3×10⁻⁶
25.00        56.5×10⁻⁶         3.4×10⁻⁶           996.7×10⁻⁶
26.00        60.7×10⁻⁶         3.4×10⁻⁶           1115.7×10⁻⁶

The failure probability also depends on the coefficient of variation (CV) of ε2u. The CV can be improved if the manufacturing process is made more consistent. Table 5-16 shows that the failure probabilities are not as sensitive to changes in the coefficient of variation as to changes in the mean value of ε2u, but a 10 percent reduction in the coefficient of variation can still reduce the failure probability by about a factor of five.

Table 5-16. Sensitivity of failure probability to the CV of ε2u (E(ε2u) = 0.0154) for 0.12-inch-thick [±θ]s laminates

             Probability of failure from MCS, 10,000,000 samples
θ (degree)   CV = 0.09    CV = 0.099    CV = 0.081
24.00        60.5×10⁻⁶    209.5×10⁻⁶    9.8×10⁻⁶
25.00        56.5×10⁻⁶    208.2×10⁻⁶    10.8×10⁻⁶
26.00        60.7×10⁻⁶    224.2×10⁻⁶    11.1×10⁻⁶

Figure 5-1 combines several effects discussed earlier in a tradeoff plot of probability of failure, cost (truncating and changing the distribution of ε2u), and weight (thickness) for a [±25]s laminate. For probabilities of failure below 10⁻³, quality control at the 2σ level is more effective for reducing the probability of failure than increasing the mean value by 10 percent or decreasing the coefficient of variation by 10 percent, because small failure probabilities are heavily affected by the tails of the distributions. For large failure probabilities, increasing the mean value of ε2u is more effective. Increasing the mean value of ε2u by 10 percent or truncating ε2u at 2σ can reduce the laminate thickness to 0.10 inch for a safety level of 10⁻⁴.
Combining all three measures, the laminate thickness can be reduced to 0.08 inch with a safety level of 10⁻⁷. Table 5-17 shows the change in the maximum ε2 calculated by the laminate analyses. Ten percent changes in the mean values of E2, Tzero, and α2 (same CVs) lead to about a 5% change in the maximum ε2, which indicates that further study should focus on these three quantities. Table 5-18 shows that the probabilities of failure are reduced by a factor of five by a 10 percent change in the mean values of E2, Tzero, and α2 (same CVs). This reduction shows the potential for further improvement via improvements in all three material properties.

Figure 5-1. Tradeoff plot of probability of failure, cost, and weight (laminate thickness) for [±25]s laminates (curves: nominal; quality control to 2σ; 10% increase in allowable; 10% reduction in variability)

Table 5-17. Maximum ε2 (millistrain) induced by changes in the material properties E1, E2, G12, ν12, Tzero, α1, and α2 for the 0.12-inch-thick [±25°]s laminate (nominal maximum ε2 from the deterministic analyses: 9.859)

              E1      E2               G12     ν12     Tzero            α1      α2
0.9×Nominal   9.901   10.469           9.763   9.909   9.320 (−5.47%)   9.857   9.399 (−4.67%)
1.1×Nominal   9.824   9.313 (−5.54%)   9.960   9.981   10.584           9.861   10.333

Table 5-18. Probability of failure for the 0.12-inch-thick [±25°]s laminate with improved average material properties (Monte Carlo simulation with a sample size of 10,000,000)

                         Nominal     1.1×E(E2)   0.9×E(Tzero)   0.9×E(α2)   All three measures
Probability of failure   0.0000605   0.0000117   0.0000116      0.0000110   0.0000003

Summary

The design of hydrogen tanks for cryogenic environments poses a challenge because of large thermal strains that can cause matrix cracking, which may lead to hydrogen leakage.
The laminate design must use ply angles that are not too far apart in order to reduce the thermal residual strains, compromising the ability of the laminate to carry loads in two directions. These small ply angles can cause the laminate thickness to more than double compared with what is needed to carry only the mechanical loads in the application studied here. Satisfying the reliability constraints increased the thickness further, and improving the probability of failure required additional thickness. The most influential uncertainty was the variability in the tensile strain allowable in the direction transverse to the fibers, ε2u. Limiting this variability can reduce the required thickness. Of the different options studied in the chapter, quality control on the transverse tensile allowable ε2u proved to be the most effective. Quality control of ε2u at the 1.6σ level, corresponding to rejection of about 5.5% of the specimens, can reduce the required thickness by a third. Reductions in the coefficient of variation of ε2u, or an increase in its mean value, also reduce the failure probability substantially. Increasing the transverse modulus E2, decreasing the coefficient of thermal expansion α2, and reducing the stress-free temperature Tzero can also help considerably.

CHAPTER 6
PROBABILISTIC SUFFICIENCY FACTOR APPROACH FOR RELIABILITY-BASED DESIGN OPTIMIZATION

A probabilistic sufficiency factor approach is proposed that combines the safety factor and the probability of failure for use in reliability-based design optimization. The probabilistic sufficiency factor represents a factor of safety relative to a target probability of failure. It provides a measure of safety that designers can use more readily than the probability of failure or the safety index to estimate the weight increase required to reach a target safety level. The probabilistic sufficiency factor can be calculated from the results of Monte Carlo simulation with little extra computation.
The chapter presents the use of the probabilistic sufficiency factor with a design response surface approximation, which fits it as a function of the design variables. It is shown that the design response surface approximation for the probabilistic sufficiency factor is more accurate than that for the probability of failure or for the safety index. Unlike the probability of failure or the safety index, the probabilistic sufficiency factor does not suffer from accuracy problems in regions of low probability of failure when calculated by Monte Carlo simulation. The use of the probabilistic sufficiency factor accelerates the convergence of reliability-based design optimization.

Introduction

Recently, there has been interest in using alternative measures of safety in reliability-based design optimization. These measures are based on the margin of safety or on safety factors, which are commonly used as measures of safety in deterministic design. A safety factor is generally expressed as the quotient of allowable over response; for example, the commonly used central safety factor is defined as the ratio of the mean value of the allowable to the mean value of the response. The selection of a safety factor for a given problem involves both objective knowledge, such as data on the scatter of material properties, and subjective knowledge, such as expert opinion. Given a safety factor, the reliability of the design is generally unknown, which may lead to unsafe or inefficient designs. Therefore, the use of safety factors in reliability-based design optimization may seem counterproductive. However, Freudenthal (1962) showed that reliability can be expressed in terms of the probability distribution function of the safety factor. Elishakoff (2001) surveyed the relationship between the safety factor and reliability, and showed that in some cases the safety factor can be expressed explicitly in terms of reliability. The standard safety factor is defined with respect to the response obtained with the mean values of the random variables.
Thus a safety factor of 1.5 implies that, with the mean values of the random variables, there is a 50% margin between the response (e.g., stress) and the capacity (e.g., failure stress). However, the value of the safety factor does not tell us what the reliability is. Therefore, Birger (1970), as reported by Elishakoff (2001), introduced a factor, which we call here the probabilistic sufficiency factor, that is more closely related to the target reliability. A probabilistic sufficiency factor of 1.0 implies that the reliability is equal to the target reliability, a probabilistic sufficiency factor larger than one means that the reliability exceeds the target reliability, and a probabilistic sufficiency factor less than one means that the system is not as safe as we wish. Specifically, a probabilistic sufficiency factor of 0.9 means that we need to multiply the response by 0.9, or increase the capacity by a factor of 1/0.9, to achieve the target reliability. Tu et al. (2000) used the probabilistic performance measure, which is closely related to Birger's safety factor, for RBDO with most probable point (MPP) methods (e.g., the first-order reliability method). They showed that the search for the optimum design converged faster by driving the safety margin to zero than by driving the probability of failure to its target value. Wu et al. (1998, 2001) used probabilistic sufficiency factors to replace RBDO with a series of deterministic optimizations by converting the reliability constraints into equivalent deterministic constraints. The probabilistic sufficiency factor gives a designer a quantitative measure of the resources needed to satisfy the safety requirements. For example, if the requirement is that the probability of failure be below 10⁻⁶ and the designer finds that the actual probability is 10⁻⁴, he or she cannot tell how much change is required to satisfy the requirement.
If instead the designer finds that the target probability of 10⁻⁶ corresponds to a probabilistic sufficiency factor of 0.9, it is easier to estimate the required resources. For a stress-dominated linear problem, raising the probabilistic sufficiency factor from 0.9 to 1 typically requires a weight increase of about 10 percent of the weight of the overstressed components. Reliability analysis of systems with multiple failure modes often employs Monte Carlo simulation, which generates numerical noise due to the limited sample size. Noise in the probability of failure or the safety index may cause reliability-based design optimization (RBDO) to converge to a spurious optimum. The accuracy of MCS with a given number of samples deteriorates with decreasing probability of failure. For RBDO problems with a small target probability of failure, the accuracy of MCS around the optimum is therefore not as good as in regions with a high probability of failure. Furthermore, the probability of failure in some regions may be so low that it is calculated to be zero by MCS. This flat zero probability of failure provides no gradient information to guide the optimization procedure. The probabilistic sufficiency factor, in contrast, is readily available from the results of MCS at little extra computational cost. The noise problems of MCS motivate the use of response surface approximations (RSA; e.g., Khuri and Cornell 1996). Response surface approximations typically employ low-order polynomials to approximate the probability of failure or the safety index in terms of the design variables in order to filter out noise and facilitate design optimization. These approximations are called design response surface approximations (DRS) and are widely used in RBDO (e.g., Sues et al. 1996). The probability of failure often changes by several orders of magnitude over narrow bands of the design space, especially when the random variables have small coefficients of variation.
The steep variation of the probability of failure requires the DRS to use high-order polynomials for the approximation, such as quintic polynomials (Chapter 5), increasing the required number of probability calculations (Qu et al. 2000). An additional problem arises when Monte Carlo simulation (MCS) is used to calculate the probabilities: for a given number of simulations, the accuracy of the probability estimates deteriorates as the probability of failure decreases. The numerical problems associated with the steep variation of the probability of failure led to the consideration of alternative measures of safety. The most common one is the safety index, which replaces the probability by a distance, measured as the number of standard deviations from the mean of a normal distribution that gives the same probability. The safety index does not suffer from steep changes in magnitude, but it has the same accuracy problems as the probability of failure when based on Monte Carlo simulation. The accuracy of the probabilistic sufficiency factor, however, is maintained in regions of low probability. The probabilistic sufficiency factor also exhibits less variation than the probability of failure or the safety index. Thus the probabilistic sufficiency factor can be used to improve design response surface approximations for RBDO. The next section introduces the probabilistic sufficiency factor, followed by its computation by Monte Carlo simulation. The methodology is demonstrated on a reliability-based beam design problem.

Probabilistic Sufficiency Factor

The deterministic equivalent of a reliability constraint in RBDO can be formulated as

    g_r(x̄, d) ≤ g_c(x̄, d)    (6-1)

where g_r denotes a response quantity, g_c represents a capacity (e.g., a strength allowable), x̄ is usually the mean value vector of the random variables, and d is the design variable vector.
The traditional safety factor is defined as

    s(x, d) = g_c(x, d) / g_r(x, d)    (6-2)

and the deterministic design problem requires

    s(x̄, d) ≥ s_r    (6-3)

where s_r is the required safety factor, usually 1.4 or 1.5 in aerospace applications. The reliability constraint can be formulated as a requirement on the safety factor,

    Prob(s < 1) ≤ P_r    (6-4)

where P_r is the required probability of failure. Birger's probabilistic sufficiency factor P_sf is the solution of

    Prob(s < P_sf) = P_r    (6-5)

It is the safety factor that is violated with the required probability P_r. Figure 6-1 shows the probability density of the safety factor for a given design. The area under the curve to the left of s = 1 represents the probability that s < 1; hence it is equal to the actual probability of failure. The shaded area in the figure represents the target probability of failure, P_t. For this example, since the shaded area lies to the left of the line s = 0.8, P_sf = 0.8. The value of 0.8 indicates that the target probability will be achieved if we reduce the response by 20% or increase the capacity by 25% (1/0.8 = 1.25). For many problems this provides sufficient information for a designer to estimate the additional structural weight. For example, raising the safety factor from 0.8 to 1 in a stress-dominated linear problem typically requires a weight increase of about 20% of the weight of the overstressed components.

Figure 6-1. Probability density of the safety factor. The area under the curve to the left of s = 1 measures the actual probability of failure, while the shaded area is equal to the target probability of failure, indicating that the probabilistic sufficiency factor is 0.8

Using the Probabilistic Sufficiency Factor to Estimate the Additional Structural Weight Needed to Satisfy the Reliability Constraint

The following cantilever beam example (Figure 6-2), with beam length L = 100 in., is taken from Wu et al. (2001) to demonstrate the use of the probabilistic sufficiency factor. Figure 6-2.
Cantilever beam subject to vertical and lateral bending loads

There are two failure modes in the beam design problem. One failure mode is yielding, which is most critical at the corner of the rectangular cross section at the fixed end of the beam:

    g_s(R, X, Y, w, t) = R − σ = R − (600 / (w t²)) Y − (600 / (w² t)) X    (6-6)

where R is the yield strength and X and Y are the independent horizontal and vertical loads. The other failure mode is the tip deflection exceeding the allowable displacement D0:

    g_D(E, X, Y, w, t) = D0 − D = D0 − (4 L³ / (E w t)) sqrt( (Y / t²)² + (X / w²)² )    (6-7)

where E is the elastic modulus. The random variables are defined in Table 6-1.

Table 6-1. Random variables in the beam design problem

Random variable   X                      Y                       R                            E
Distribution      Normal (500, 100) lb   Normal (1000, 100) lb   Normal (40,000, 2,000) psi   Normal (29×10⁶, 1.45×10⁶) psi

The cross-sectional area is minimized subject to two reliability constraints, which require the safety indices for the strength and deflection constraints to be larger than three (probability of failure less than 0.00135). The reliability-based design optimization problem, with the width w and thickness t of the beam as deterministic design variables, can be formulated as

    minimize A = wt    such that    p − 0.00135 ≤ 0    (6-8)

based on the probability of failure, or

    minimize A = wt    such that    3 − β ≤ 0    (6-9)

based on the safety index β, or

    minimize A = wt    such that    1 − P_sf ≤ 0    (6-10)

based on the probabilistic sufficiency factor. These three forms of the reliability constraint are equivalent in terms of safety. The details of the beam design are given later in the chapter. In order to demonstrate the utility of P_sf for estimating the weight required to correct a safety deficiency, it is useful to see how the stresses and displacements depend on the weight (or cross-sectional area) for this problem.
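The two limit states and the random variables above are enough to run a crude reliability analysis. The sketch below is illustrative only; it assumes the distributions of Table 6-1 and an allowable tip deflection D0 of 2.25 in., a value not stated in this excerpt:

```python
import math, random

L, D0 = 100.0, 2.25  # beam length (in) and assumed allowable tip deflection (in)

def g_strength(R, X, Y, w, t):
    # yield strength minus the corner stress at the fixed end, Eq. (6-6)
    return R - 600.0 * Y / (w * t**2) - 600.0 * X / (w**2 * t)

def g_deflection(E, X, Y, w, t):
    # allowable deflection minus the tip deflection, Eq. (6-7)
    return D0 - (4.0 * L**3 / (E * w * t)) * math.sqrt((Y / t**2)**2 + (X / w**2)**2)

def pf_mcs(w, t, n=200_000, rng=random.Random(1)):
    """Crude MCS estimate of the system failure probability (either mode fails)."""
    fails = 0
    for _ in range(n):
        X = rng.gauss(500.0, 100.0)
        Y = rng.gauss(1000.0, 100.0)
        R = rng.gauss(40000.0, 2000.0)
        E = rng.gauss(29.0e6, 1.45e6)
        if g_strength(R, X, Y, w, t) < 0 or g_deflection(E, X, Y, w, t) < 0:
            fails += 1
    return fails / n
```

For the scaled design quoted later (w = 2.7123, t = 3.5315), `pf_mcs` returns a value on the order of 10⁻³, consistent with the target of 0.00135.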
If we have a given design with dimensions w0 and t0 and a probabilistic sufficiency factor P_sf0 smaller than one, we can make the structure safer by scaling both w and t uniformly by a constant c:

    w = c w0,    t = c t0    (6-11)

It is easy to check from (6-6) and (6-7) that the stress and the displacement then decrease by a factor of c³, while the area grows by a factor of c². Since P_sf is inversely proportional to the most critical stress or displacement, it is easy to obtain the relationship

    P_sf = P_sf0 (A / A0)^(3/2)    (6-12)

where A0 = w0 t0. This indicates that a one percent increase in area (corresponding to a 0.5 percent increase in w and t) improves P_sf by about 1.5 percent. Since nonuniform increases in the width and thickness may be more efficient than uniform scaling, we may be able to do better than 1.5 percent. Thus, if we have P_sf = 0.97, we can expect to make the structure safe with a weight increase under two percent. The probabilistic sufficiency factor therefore gives a designer a measure of safety that can be used more readily than the probability of failure or the safety index to estimate the weight increase required to reach a target safety level. The P_sf of a beam design, presented in detail in section 4, is 0.9733 for a target probability of failure of 0.00135; (6-12) indicates that the deficiency in P_sf can be corrected by scaling up the area by a factor of 1.0182. Since the area scales as c², the dimensions should be scaled by a factor c of 1.0091 (= 1.0182^0.5) to w = 2.7123 and t = 3.5315, so the objective function of the scaled design is 9.5785. The probability of failure of the scaled design, evaluated by MCS with 1,000,000 samples, is 0.001302 (safety index of 3.0110 and probabilistic sufficiency factor of 1.0011). Such an estimate is not readily available from the probability of failure (0.00314) or the safety index (2.7328) of the original design.
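The scaling argument of Eq. (6-12) can be checked in a few lines. This small illustrative sketch recovers the area factor 1.0182 quoted above from the P_sf deficiency of 0.9733:

```python
def area_scale_for_target(psf0, target_psf=1.0):
    """Invert Psf = Psf0 * (A/A0)**1.5 (Eq. 6-12) for the area ratio A/A0
    that raises the probabilistic sufficiency factor to target_psf."""
    return (target_psf / psf0) ** (2.0 / 3.0)

scale = area_scale_for_target(0.9733)
print(round(scale, 4))         # 1.0182, the area factor used for the scaled beam design
print(round(scale ** 0.5, 4))  # 1.0091, the factor applied to both w and t
```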
Reliability Analysis Using Monte Carlo Simulation

Let g(x) denote the limit state function of a performance criterion (such as the strength allowable being larger than the stress), so that the failure event is defined as g(x) < 0, where x is a vector of random variables. The probability of failure of a system can be calculated as

    P_f = ∫_{g(x) ≤ 0} f_X(x) dx    (6-13)

where f_X(x) is the joint probability density function (JPDF). This integral is hard to evaluate, because the integration domain defined by g(x) ≤ 0 is usually unknown and integration in high dimensions is difficult. Commonly used probabilistic analysis methods are either moment-based methods, such as the first-order reliability method (FORM) and the second-order reliability method (SORM), or simulation techniques such as Monte Carlo simulation (MCS) (e.g., Melchers 1999). Monte Carlo simulation is a good method for system reliability analysis with multiple failure modes. The present chapter focuses on the use of MCS with response surface approximation in RBDO. Monte Carlo simulation uses samples generated randomly according to the statistical distributions of the random variables, and the probability of failure is obtained from the statistics of the sampled simulations. Figure 6-3 illustrates Monte Carlo simulation for a problem with two random variables. The probability of failure is calculated as the ratio of the number of samples in the unsafe region to the total number of samples. A small probability requires a large number of samples for MCS to achieve low relative error; therefore, for a fixed number of simulations, the accuracy of MCS deteriorates as the probability of failure decreases. For example, with 10⁶ simulations, a probability estimate of 10⁻³ has a relative error of a few percent, while a probability estimate of 10⁻⁶ has a relative error of the order of 100 percent.
In RBDO, the required probability of failure is often very low, so the probability (or safety index) calculated by MCS is inaccurate near the optimum. Furthermore, the probabilities of failure in some design regions may be so low that they are calculated as zero by MCS. This flat zero probability of failure (or infinite safety index) cannot provide useful gradient information to the optimization.

Figure 6-3. Monte Carlo simulation of a problem with two random variables: the limit state g(x) = 0 separates the safe region (g(x) > 0) from the unsafe region (g(x) < 0)

Calculation of the Probabilistic Sufficiency Factor by Monte Carlo Simulation

Here we propose the use of the probabilistic sufficiency factor to solve the problems associated with probability calculation by MCS. P_sf can be estimated by MCS as follows. Define the nth safety factor of MCS as

    s_n = nth min_{i = 1, ..., M} s(x_i)    (6-14)

where M is the sample size of the MCS and "nth min" denotes the nth smallest safety factor among the M safety factors from the MCS. Thus s_n is the nth order statistic of the M safety factors, which corresponds to a probability of n/M that s(x) ≤ s_n. That is, we seek the safety factor that is violated with the required probability P_r. The probabilistic sufficiency factor is then given as

    P_sf = s_n    for n = P_r M    (6-15)

For example, if the required probability P_r is 10⁻⁴ and the sample size M of the Monte Carlo simulation is 10⁶, P_sf is equal to the highest safety factor among the n = P_r M = 100 samples with the lowest safety factors. The calculation of P_sf requires only sorting the lowest safety factors in the Monte Carlo samples. While the probability of failure changes by several orders of magnitude, the probabilistic sufficiency factor usually varies by less than one order of magnitude in a given design space. For problems with k reliability constraints, the most critical safety factor is calculated first for each Monte Carlo sample,

    s(x_i) = min_{j = 1, ..., k} s_j(x_i)    (6-16)

Then the sorting for the nth minimum safety factor proceeds as in (6-14).
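Equations (6-14) and (6-15) amount to a sort and an index. The sketch below (illustrative only, with synthetic safety factors) computes P_sf as the nth smallest of M sampled safety factors:

```python
import random

def psf_from_mcs(safety_factors, p_req):
    """Probabilistic sufficiency factor from MCS results (Eqs. 6-14, 6-15):
    the n-th smallest safety factor, with n = p_req * M."""
    m = len(safety_factors)
    n = max(1, int(p_req * m))
    return sorted(safety_factors)[n - 1]

# Synthetic check: for safety factors uniform on (0.5, 1.5), the factor
# violated with probability 0.1 is close to 0.6.
rng = random.Random(0)
s = [rng.uniform(0.5, 1.5) for _ in range(100_000)]
print(round(psf_from_mcs(s, 0.1), 2))   # ≈ 0.6
```

In practice only the smallest n values need to be kept, e.g. with a partial sort, since n is a small fraction of M.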
When n is small, it may be more accurate to calculate P_sf as the average of the nth and (n+1)th lowest safety factors in the Monte Carlo samples. The probabilistic sufficiency factor provides more information than the probability of failure or the safety index. Even in regions where the probability of failure is so small that it cannot be estimated accurately by MCS with a given sample size M, the accuracy of P_sf is maintained. Using the probabilistic sufficiency factor also gives designers useful insight into how to change the design to satisfy safety requirements, as shown in section 2.1; such an estimate is not readily available from the probability of failure or the safety index. The probabilistic sufficiency factor is based on the ratio of allowable to response, which exhibits much less variation than the probability of failure or the safety index. Therefore, approximating the probabilistic sufficiency factor in design optimization is easier than approximating the probability of failure or the safety index, as discussed in the next section.

Monte Carlo Simulation Using Response Surface Approximation

Monte Carlo simulation is easy to implement, robust, and accurate with sufficiently large samples, but it requires a large number of analyses to obtain a good estimate of small failure probabilities. Monte Carlo simulation also produces a noisy response and hence is difficult to use in optimization. Response surface approximations solve both problems, namely simulation cost and noise from random sampling. Response surface approximations fit a closed-form approximation to the limit state function to facilitate reliability analysis; they are therefore particularly attractive for computationally expensive problems such as those requiring complex finite element analyses.
Response surface approximations usually fit low-order polynomials to the structural response in terms of the random variables,

    ĝ(x) = Z(x)ᵀ b    (6-17)

where ĝ(x) denotes the approximation to the limit state function g(x), Z(x) is the basis function vector, which usually consists of monomials, and b is the coefficient vector estimated by least-squares regression. The probability of failure can then be calculated inexpensively by Monte Carlo simulation or moment-based methods using the fitted polynomials. Response surface approximations (RSA) can be used in different ways. One approach is to construct a local RSA around the most probable point (MPP), which contributes most to the probability of failure of the structure. The statistical design of experiments (DOE) of this approach is performed iteratively to approach the MPP on the failure boundary. For example, Bucher and Bourgund (1990) and Sues (1996, 2000) constructed progressively refined local RSA around the MPP by an iterative method. This local RSA approach can produce satisfactory results given enough iterations. Another approach is to construct a global RSA over the entire range of the random variables, i.e., with the design of experiments centered on the mean values of the random variables. Fox (1993, 1994, 1996) used Box-Behnken designs to construct global response surfaces and summarized 12 criteria for evaluating the accuracy of RSA. Romero and Bankston (1998) employed progressive lattice sampling as the design of experiments to construct global RSA. With this approach, the accuracy of the response surface approximation around the MPP is unknown, and caution must be taken to avoid extrapolation near the MPP. Both approaches can be used to perform reliability analysis for computationally expensive problems. The selection of the RSA approach depends on the limit state function of the problem.
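As a concrete illustration of Eq. (6-17) (not code from the dissertation), the least-squares fit of a full quadratic response surface in two variables, with the six-term monomial basis used for the DRS later in the chapter, can be sketched as:

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Least-squares fit of a full quadratic in two variables:
    g ≈ b0 + b1*x1 + b2*x2 + b3*x1² + b4*x1*x2 + b5*x2²  (six coefficients)."""
    x1, x2 = X[:, 0], X[:, 1]
    Z = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return b

# Sanity check on a response that is exactly quadratic: the fit recovers it.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = 1 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
b = fit_quadratic_rs(X, y)
print(np.round(b, 3))   # ≈ [1, 2, -1, 0, 0.5, 0]
```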
The global RSA is simpler and more efficient to use than the local response surface approximation for problems whose limit state function can be approximated well globally. However, the reliability analysis must be performed, and hence the RSA constructed, at every design point visited by the optimizer, which requires a fairly large number of response surface constructions and thus of limit state evaluations. The local RSA approach is even more computationally expensive than the global approach in a design environment. Qu et al. (2000) developed a global analysis response surface (ARS) approach in the unified space of design and random variables to reduce the number of RSA substantially and achieve higher efficiency than the previous approaches. This analysis response surface can be written as

ĝ(x, d) = Z(x, d)^T b    (6-18)

where x and d are the random variable and design variable vectors, respectively. They recommended Latin hypercube sampling as the statistical design of experiments. The number of response surface approximations constructed in the optimization process is reduced substantially by introducing the design variables into the response surface approximation formulation.

The selection of the RSA approach depends on the limit state function of the problem and on the target probability of failure. The global RSA approach is more efficient than the local RSA, but it is limited to problems with a relatively high probability of failure or with limit state functions that can be approximated well by regression on simple basis functions. To avoid extrapolation problems, the RSA generally needs to be constructed around the important region or the MPP, so that fitting errors in the RSA do not induce large errors in the results of MCS. Therefore, an iterative RSA is desirable for general reliability analysis problems. Design response surface approximations (DRS) are fitted to the probability of failure to filter out the noise in MCS and facilitate optimization.
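A sketch of the ARS idea of Eq. (6-18) with one hypothetical random variable x and one design variable d: a single surface fitted over the unified (x, d) space supports MCS at any design point without refitting. The limit state and sampling ranges are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical limit state g(x, d) in one random and one design variable
def g_exact(x, d):
    return d - x**2

def basis(x, d):
    return np.column_stack([np.ones_like(x), x, d, x**2, x * d, d**2])

# One DOE over the unified (x, d) space (Latin hypercube sampling is the
# recommended DOE in the text; plain uniform sampling is used here for brevity)
x_doe = rng.uniform(-3.0, 3.0, 60)
d_doe = rng.uniform(1.0, 4.0, 60)
b, *_ = np.linalg.lstsq(basis(x_doe, d_doe), g_exact(x_doe, d_doe), rcond=None)

def p_fail(d, M=100_000):
    """MCS at any design point d reusing the single ARS -- no refitting."""
    x = rng.normal(0.0, 1.0, M)
    g_hat = basis(x, np.full(M, d)) @ b
    return np.mean(g_hat < 0.0)
```

The optimizer can now call `p_fail` at every candidate design while the expensive limit state was evaluated only at the 60 DOE points.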
Based on past experience, high-order DRS (such as quintic polynomials) are needed to obtain a reasonably accurate approximation of the probability of failure. Constructing a highly accurate DRS is difficult because the probability of failure changes by several orders of magnitude over small distances in design space. Fitting to the safety index β = −Φ⁻¹(p), where p is the probability of failure and Φ is the cumulative distribution function of the standard normal distribution, improves the accuracy of the DRS to a limited extent. The probabilistic sufficiency factor can be used to improve the accuracy of the DRS approximation further.

Beam Design Example

The details of the beam design problem mentioned in Section 2 are presented here. Since the limit state of the problem is available in closed form, as shown by (6-6) and (6-7), direct Monte Carlo simulation with a sufficiently large number of samples is used here (without an analysis response surface) in order to better demonstrate the advantage of the probabilistic sufficiency factor over the probability of failure or the safety index. By using the exact limit state function, the errors in the results of the Monte Carlo simulation are purely convergence errors, which can easily be controlled by changing the sample size. In applications where an analysis response surface approximation must be used, the errors introduced by the approximation can be reduced by sequentially improving the approximation as the optimization progresses. The reliability constraints, shown by (6-8) to (6-10), are approximated by design response surface approximations fitted to the probability of failure, the safety index, and the probabilistic sufficiency factor, and the accuracy of the resulting design response surface approximations is compared. The design response surface approximations are in the two design variables w and t. A quadratic polynomial in two variables has six coefficients to be estimated.
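The conversion β = −Φ⁻¹(p) used above is available in the Python standard library; a small sketch:

```python
from statistics import NormalDist

def safety_index(p_fail):
    """beta = -Phi^{-1}(p), with Phi the standard normal CDF."""
    return -NormalDist().inv_cdf(p_fail)

def failure_probability(beta):
    """Inverse map: p = Phi(-beta)."""
    return NormalDist().cdf(-beta)

# The target reliability used throughout this chapter, p = 0.00135,
# corresponds to a safety index of about 3
beta_target = safety_index(0.00135)
```

Because β varies far more smoothly than p, fitting a DRS to β (or, better, to the PSF) is numerically easier than fitting to p directly.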
Since the face-centered central composite design (FCCCD, Khuri and Cornell 1996) is often used to construct quadratic response surface approximations, an FCCCD with 9 points was employed here first, with poor results. Based on our previous experience, higher-order design response surface approximations are needed to fit the probability of failure or the safety index, and the number of points of a typical design of experiments should be about twice the number of coefficients. A cubic polynomial in two variables has 10 coefficients, which require about 20 design points. Latin hypercube sampling can be used to construct higher-order response surfaces (Qu et al. 2000). We found that Latin hypercube sampling may fail to sample points near some corners of the design space, leading to poor accuracy around those corners. To deal with this extrapolation problem, all four vertices of the design space were added to 16 Latin hypercube sampling points for a total of 20 points. Mixed stepwise regression (Myers and Montgomery 1995) was employed to eliminate poorly characterized terms in the response surface models.

Design with Strength Constraint

The range for the design response surface, shown in Table 6-2, was selected based on the mean-based deterministic design, w = 1.9574" and t = 3.9149". The probability of failure was calculated by direct Monte Carlo simulation with 100,000 samples based on the exact stress in (6-6).

Table 6-2. Range of design variables for design response surface
System variables    w               t
Range               1.5" to 3.0"    3.5" to 5.0"

Cubic design response surfaces with 10 coefficients were constructed; their statistics are shown in Table 6-3. An R^2_adj close to one and an average percentage error (defined as the ratio of the root mean square error (RMSE) predictor to the mean of the response) close to zero indicate good accuracy of the response surfaces.
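The 20-point DOE described above (16 Latin hypercube points plus the four design-space vertices) can be sketched as follows. The LHS routine here is a minimal generic one, not the specific design used in the dissertation:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Design box of Table 6-2: rows are (low, high) for w and t
bounds = np.array([[1.5, 3.0], [3.5, 5.0]])

# 16-point Latin hypercube sample: one point per equal stratum in each
# dimension, with the strata randomly paired across dimensions
n = 16
u = (rng.random((n, 2)) + np.arange(n)[:, None]) / n
for j in range(2):
    rng.shuffle(u[:, j])
lhs = bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# Append the four vertices of the design box to guard against extrapolation
vertices = np.array(list(itertools.product(*bounds)))
doe = np.vstack([lhs, vertices])     # 20 points = 16 LHS + 4 vertices
```

Adding the vertices guarantees that the fitted surface never extrapolates at the corners of the design box, which is where plain LHS tends to leave gaps.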
It is seen that the design response surface for the probabilistic sufficiency factor has the highest R^2_adj and the smallest average percentage error. The standard error in a probability calculated by Monte Carlo simulation can be estimated as

σ_p = sqrt( p(1 − p) / M )    (6-19)

where p is the probability of failure and M is the sample size of the Monte Carlo simulation. If a probability of failure of 0.2844 (the mean probability of failure in Table 6-3) is calculated by Monte Carlo simulation with 100,000 samples, the standard error due to the limited sampling is 0.00143. The RMSE of the probability design response surface is 0.1103. Thus the error induced by the limited sampling (100,000 samples) is much smaller than the error of the response surface approximation to the probability of failure.

Table 6-3. Comparison of cubic design response surface approximations of probability of failure, safety index, and probabilistic sufficiency factor for the single strength failure mode (based on Monte Carlo simulation of 100,000 samples; 16 Latin hypercube sampling points + 4 vertices)
Error statistics                      Probability RS   Safety index RS   Probabilistic sufficiency factor RS
R^2_adj                               0.9228           0.9891            0.9999
RMSE predictor                        0.1103           0.3027            0.002409
Mean of response                      0.2844           1.9377            1.0331
APE (average percentage error =
RMSE predictor/mean of response)      38.78%           15.62%            0.23%
APE in Pof (= RMSE predictor of
Pof/mean of Pof)                      38.78%           12.04%            N/A

The probabilistic sufficiency factor design response surface has an average error of less than one percent, while the safety index design response surface has an average error of about 15.6 percent. It must be noted, however, that the average percentage errors of the three design response surfaces cannot be compared directly, because a one percent error in the probabilistic sufficiency factor does not correspond to a one percent error in the probability of failure or the safety index. Errors in the safety index design response surface were therefore transformed to errors in terms of probability, as shown in Table 6-3.
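Equation (6-19) is easy to check against the numbers quoted in the text:

```python
import math

def mcs_standard_error(p, M):
    """Standard error of an MCS probability estimate, Eq. (6-19)."""
    return math.sqrt(p * (1.0 - p) / M)

# Absolute error at the mean probability of Table 6-3, about 0.00143
abs_err = mcs_standard_error(0.2844, 100_000)

# Relative error at the target reliability p = 0.00135, about 8.6%
rel_err = mcs_standard_error(0.00135, 100_000) / 0.00135
```

The second number shows why sampling error is negligible at moderate probabilities but becomes comparable to the response surface error near the target reliability.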
It is seen that the safety index design response surface approximation is more accurate than the probability design response surface approximation. Besides the average errors over the design space, it is instructive to compare errors measured in probability of failure in the important region of the design space. For optimization problems, the important region is the region containing the optimum; here it is the curve of target reliability according to each design response surface, on which the reliability constraint is critically satisfied and the probability of failure would be 0.00135 if the design response surface approximation had no errors. For each design response surface approximation, 11 test points were selected along the curve of target reliability; they are given in the Appendix. The average percentage errors at these test points, shown in Table 6-4, demonstrate the accuracy advantage of the probabilistic sufficiency factor approach. At the target reliability, the standard error due to Monte Carlo simulation with 100,000 samples is 8.6%, which is comparable to the response surface error for the Psf. For the other two response surfaces, the errors are clearly dominated by the modeling errors of the cubic polynomial approximation.

Table 6-4. Average errors in cubic design response surface approximations of probability of failure, safety index, and probabilistic sufficiency factor at 11 points on the curves of target reliability
Design response surface of     Probability of failure   Safety index   Probabilistic sufficiency factor
Average percentage error in
probability of failure         213.86%                  92.38%         10.32%

The optima found by using the design response surface approximations of Table 6-3 are compared in Table 6-5. The probabilistic sufficiency factor design response surface clearly led to a better design, which has a safety index of 3.02 according to Monte Carlo simulation.
It is seen that the design from the probabilistic sufficiency factor design response surface approximation is very close to the exact optimum. Note that the values of Psf for the probability-based and safety-index-based optima provide a good estimate of the required weight increment. For example, with Psf = 0.9663 the safety-index-based design has a safety factor shortfall of 3.37 percent, indicating that it should not require more than a 2.25 percent weight increment to remedy the problem; indeed, the optimum design is 2.08 percent heavier. This would have been difficult to infer from a probability of failure of 0.00408, which is three times larger than the target probability of failure.

Table 6-5. Comparison of optimum designs based on cubic design response surface approximations of probabilistic sufficiency factor, safety index, and probability of failure (minimize objective function F = wt such that β ≥ 3, i.e., Pof ≤ 0.00135)
Design response surface of          Optima                 Objective function F = wt   Pof/safety index/safety factor from MCS of 100,000 samples
Probability                         w=2.6350, t=3.5000     9.2225                      0.00690/2.4624/0.9481
Safety index                        w=2.6645, t=3.5000     9.3258                      0.00408/2.6454/0.9663
Probabilistic sufficiency factor    w=2.4526, t=3.8884     9.5367                      0.00128/3.0162/1.0021
Exact optimum (Wu et al. 2001)      w=2.4484, t=3.8884     9.5204                      0.00135/3.00/1.00

Design with Strength and Displacement Constraints

For the system reliability problem with strength and displacement constraints, the probability of failure is calculated by direct Monte Carlo simulation with 100,000 samples based on the exact stress and exact displacement in (6-6) and (6-7). The allowable tip displacement D0 is chosen to be 2.25" in order to have two competing constraints (Wu et al. 2001). The three cubic design response surface approximations over the range of design variables shown in Table 6-2 were constructed; their statistics are shown in Table 6-6. Table 6-6.
Comparison of cubic design response surface approximations of the first design iteration for probability of failure, safety index, and probabilistic sufficiency factor for system reliability (strength and displacement); 16 Latin hypercube sampling points + 4 vertices
Error statistics                      Probability RS   Safety index RS   Probabilistic sufficiency factor RS
R^2_adj                               0.9231           0.9887            0.9996
RMSE predictor                        0.1234           0.3519            0.01055
Mean of response                      0.3839           1.3221            0.9221
APE (average percentage error =
RMSE predictor/mean of response)      32.14%           26.62%            1.14%
APE in Pof (= RMSE predictor of
Pof/mean of Pof)                      32.14%           10.51%            N/A

It is seen that the R^2_adj of the probabilistic sufficiency factor response surface approximation is the highest among the three, which implies that the probabilistic sufficiency factor design response surface approximation is the most accurate in terms of average error over the entire design space, as shown in Table 6-6. The critical errors of the three design response surfaces are also compared. For each design response surface approximation, 51 test points were selected along the curve of target reliability (probability of failure = 0.00135). The average percentage errors at these test points, shown in Table 6-7, demonstrate that the probabilistic sufficiency factor design response surface approximation is more accurate than the probability of failure and safety index response surface approximations.

Table 6-7. Average errors in cubic design response surface approximations of probability of failure, safety index, and probabilistic sufficiency factor at 51 points on the curves of target reliability
Design response surface of     Probability of failure   Safety index   Probabilistic sufficiency factor
Average percentage error in
probability of failure         334.78%                  96.49%         39.11%

The optima found by using the design response surface approximations of Table 6-6 are compared in Table 6-8.
The probabilistic sufficiency factor design response surface led to a better design than the probability or safety index design response surfaces in terms of reliability. The probability of failure of the Psf design is 0.00314 as evaluated by Monte Carlo simulation, which is higher than the target probability of failure of 0.00135. The reliability deficiency of the Psf design is induced by the errors in the probabilistic sufficiency factor design response surface approximation. The probabilistic sufficiency factor can be used to estimate the additional weight needed to satisfy the reliability constraint. A scaled design of w = 2.7123 and t = 3.5315 was obtained in Section 2.1; its objective function is 9.5785, and its probability of failure is 0.001302 (safety index of 3.0110 and probabilistic sufficiency factor of 1.0011) as evaluated by MCS with 1,000,000 samples.

Table 6-8. Comparison of optimum designs based on cubic design response surface approximations of the first design iteration for probabilistic sufficiency factor, safety index, and probability of failure (minimize objective function F = wt such that β ≥ 3, i.e., Pof ≤ 0.00135)
Design response surface of          Optima                 Objective function F = wt   Pof/safety index/safety factor from MCS of 100,000 samples
Probability                         w=2.6591, t=3.5000     9.3069                      0.00522/2.5609/0.9589
Safety index                        w=2.6473, t=3.5000     9.2654                      0.00630/2.4949/0.9519
Probabilistic sufficiency factor    w=2.6881, t=3.5000     9.4084                      0.00314/2.7328/0.9733

The design can be improved by performing another design iteration, which reduces the errors in the design response surfaces by shrinking the design space around the current design. The reduced range of the design response surface approximations for the next design iteration is shown in Table 6-9. The design response surface approximations of the second iteration are shown in Table 6-10, and the resulting optima are compared in Table 6-11.
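As a numerical check of the scaling argument above: assuming the corner bending stress of this beam varies as 1/(w t^2) and 1/(w^2 t) (the standard form for this example), uniformly scaling w and t by c = Psf^(-1/3) removes a PSF shortfall. The sketch below reproduces the scaled design quoted in the text:

```python
psf = 0.9733                 # PSF of the Psf-based optimum in Table 6-8
c = psf ** (-1.0 / 3.0)      # both stress terms scale as 1/c^3 under a
                             # uniform scaling of w and t, so c^3 = 1/psf
w_scaled = 2.6881 * c        # about 2.7123, as quoted above
t_scaled = 3.5000 * c        # about 3.5315
area = w_scaled * t_scaled   # about 9.58, the scaled objective quoted above
```

The resulting weight penalty, c^2 - 1, is roughly two thirds of the PSF shortfall, which is why the shortfall gives a direct estimate of the required weight increment.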
It is seen that the design converges in two iterations with the probabilistic sufficiency factor design response surface, owing to its superior accuracy over the probability of failure and safety index design response surfaces.

Table 6-11. Comparison of optimum designs based on cubic design response surfaces of the second design iteration for probabilistic sufficiency factor, safety index, and probability of failure (minimize objective function F = wt such that β ≥ 3, i.e., Pof ≤ 0.00135)
Design response surface of          Optima                 Objective function F = wt   Pof/safety index/safety factor from MCS of 100,000 samples
Probability                         w=2.7923, t=3.3438     9.3368                      0.00511/2.5683/0.9658
Safety index                        w=2.6878, t=3.5278     9.4821                      0.00177/2.9165/0.9920
Probabilistic sufficiency factor    w=2.6041, t=3.6746     9.5691                      0.00130/3.0115/1.0009

The design response surface approximations constructed for the second design iteration are compared in Table 6-10. It is observed again that the probabilistic sufficiency factor response surface approximation is the most accurate.

Table 6-9. Range of design variables for design response surface approximations of the second design iteration
System variables    w               t
Range               2.2" to 3.0"    3.2" to 4.0"

Table 6-10.
Comparison of cubic design response surface approximations of the second design iteration for probability of failure, safety index, and probabilistic sufficiency factor for system reliability (strength and displacement); 16 Latin hypercube sampling points + 4 vertices
Error statistics                      Probability RS   Safety index RS   Probabilistic sufficiency factor RS
R^2_adj                               0.9569           0.9958            0.9998
RMSE predictor                        0.06378          0.1329            0.003183
Mean of response                      0.1752           2.2119            0.9548
APE (average percentage error =
RMSE predictor/mean of response)      36.40%           6.01%             0.33%

The optima based on the design response surface approximations of the second design iteration are compared in Table 6-11.

Summary

This chapter presented the probabilistic sufficiency factor as a measure of the safety level relative to a target safety level, obtainable from the results of Monte Carlo simulation with little extra computation. It was shown that a design response surface approximation can be fitted more accurately to the probabilistic sufficiency factor than to the probability of failure or the safety index. Using the beam design example with single and system reliability constraints, it was demonstrated that the design response surface approximation based on the probabilistic sufficiency factor has superior accuracy and accelerates the convergence of reliability-based design optimization. The probabilistic sufficiency factor also provides more information in regions of such low probability that the probability of failure or the safety index cannot be estimated by Monte Carlo simulation with a given sample size, which helps guide the optimizer. Finally, it was shown that the probabilistic sufficiency factor can be employed by the designer to estimate the additional weight required to achieve a target safety level, which would be difficult with the probability of failure or the safety index.
CHAPTER 7
RELIABILITY-BASED DESIGN OPTIMIZATION USING DETERMINISTIC OPTIMIZATION AND A MULTI-FIDELITY TECHNIQUE

Introduction

The probabilistic sufficiency factor (PSF) developed in Chapter 6 is integrated into a reliability-based design optimization (RBDO) framework in this chapter. Classical RBDO is performed in a coupled, double-loop fashion, where the inner loop performs reliability analysis and the outer loop performs design optimization. RBDO in the double-loop framework requires many reliability analyses and is computationally expensive. Wu et al. (1998, 2001) developed a safety-factor-based approach for performing RBDO in a decoupled, single-loop fashion, where the reliability constraints are converted to equivalent deterministic constraints by using the concept of a safety factor. The similarity between Wu's approach and the probabilistic sufficiency factor approach indicates that it may be worthwhile to study the use of the probabilistic sufficiency factor for converting RBDO to sequential deterministic optimization.

For many problems the required probability of failure is very low, so that good estimates require a very large MCS sample. In addition, the design response surface (DRS) must be extremely accurate in order to estimate a very low probability of failure well. Thus an expensive MCS may be required at a large number of design points in order to construct the DRS. A multi-fidelity technique using the probabilistic sufficiency factor is investigated here to alleviate this computational cost. The two approaches for reducing the computational cost of RBDO for low probabilities of failure are then compared.

Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor

Wu et al. (1998, 2001) proposed a decoupled approach using partial safety factors to replace the reliability constraints by equivalent deterministic constraints.
After performing a reliability analysis, the random variables x are replaced by safety-factor-based values x*, the most probable point (MPP) of the previous reliability analysis. The required shift s of the limit state function g needed to satisfy the reliability constraint satisfies P(g(x) + s < 0) = Pt. Both x* and s are obtained as by-products of the reliability analysis. The target reliability is achieved by adjusting the limit state function via design optimization. The required shift s is similar to the probabilistic sufficiency factor (Qu and Haftka 2003) presented in Chapter 6. The significant difference between Wu's partial safety factor approach and coupled RBDO is that the reliability analysis is decoupled from, and driven by, the design optimization to improve the efficiency of RBDO: the optimization is performed deterministically and corrected by a reliability analysis afterward.

The PSF is employed in this chapter to convert RBDO to equivalent deterministic optimization. This conversion enables further exploration of the design space for problems that have multiple local optima and where only a limited number of analyses are affordable because of high computational cost, such as the design of stiffened panels addressed in Chapter 8. Starting from a mean-value-based design, for which the deterministic safety factor is one, an initial design is found by deterministic optimization. Reliability analysis using Monte Carlo simulation then reveals the deficiency in the probability of failure and the probabilistic sufficiency factor. In the next design iteration, the safety factor of the next deterministic optimization is chosen as

s(x, d)^(k+1) = s(x, d)^(k) / Psf^(k)    (7-1)

which is used to reduce the yield strength of the material, R.
The optimization problem is formulated as

minimize A = wt
such that σ − R / s(x, d)^(k+1) ≤ 0    (7-2)

The process is repeated until the optimum converges and the reliability constraint is satisfied.

Reliability-Based Design Optimization Using a Multi-Fidelity Technique with Probabilistic Sufficiency Factor

For problems with a very low probability of failure, a good probability estimate requires a very large MCS sample. In addition, the DRS must be extremely accurate in order to estimate a very low probability of failure well. Thus an expensive MCS may be required at a large number of design points in order to construct the DRS. Deterministic optimization may be used to reduce the computational cost associated with RBDO for very low probabilities of failure; however, since it does not use any derivative information for the probabilities, it is not likely to converge to the optimum design when competing failure modes are disparate in terms of the cost of improving their safety. A compromise between deterministic optimization and full probabilistic optimization is afforded by the Psf through an intermediate target probability Pi, which is higher than the required probability Pt and can be estimated via a less expensive MCS and a less accurate DRS. The Psf can then be recalibrated by a single expensive MCS. This is a variable-fidelity technique, combining a large number of inexpensive MCS with a small number of expensive MCS.
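A minimal sketch of the sequential loop of Eqs. (7-1) and (7-2) for the cantilever beam. The load and strength statistics and the crude grid-search optimizer are assumptions for illustration only; they stand in for the statistics of the cited beam example and for a real optimizer:

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed statistics: loads X ~ N(500, 100), Y ~ N(1000, 100),
# yield strength R ~ N(40000, 2000) -- illustrative stand-ins
R_MEAN = 40_000.0

def stress(w, t, X, Y):
    # Corner bending stress of the cantilever: 600*(Y/(w t^2) + X/(w^2 t))
    return 600.0 * (Y / (w * t**2) + X / (w**2 * t))

def psf_mcs(w, t, p_target=0.00135, M=100_000):
    X = rng.normal(500.0, 100.0, M)
    Y = rng.normal(1_000.0, 100.0, M)
    R = rng.normal(R_MEAN, 2_000.0, M)
    s = np.sort(R / stress(w, t, X, Y))
    return s[int(round(p_target * M))]   # PSF: n-th lowest safety factor

def deterministic_opt(s):
    # Minimize A = w*t s.t. sigma(mean loads) <= R/s  -- Eq. (7-2),
    # solved here by a crude grid search for illustration
    w, t = np.meshgrid(np.linspace(1.5, 4.0, 200), np.linspace(3.0, 5.0, 200))
    feasible = stress(w, t, 500.0, 1_000.0) <= R_MEAN / s
    i = np.argmin(np.where(feasible, w * t, np.inf))
    return w.flat[i], t.flat[i]

s = 1.0                                  # start from the mean-value design
for _ in range(4):                       # sequential deterministic optimization
    w, t = deterministic_opt(s)
    s = s / psf_mcs(w, t)                # Eq. (7-1): inflate s by the PSF deficit
```

Each cycle is a cheap deterministic optimization followed by one reliability analysis; at convergence the PSF of the current design is one and the reliability constraint is met.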
For the beam example we illustrate the process by setting a low required probability of 0.0000135 and using as the intermediate probability 0.00135, the value used as the required probability in the previous examples. We start by finding an initial optimum design with the intermediate probability as the required probability. This involves generating a response surface approximation of Psf for the intermediate probability and finding the optimum based on this response surface. We then perform an expensive MCS that is adequate for estimating the required probability; here we use MCS with 10^7 samples. We now calculate the Psf from this accurate MCS and denote it Psf*. At that design, the Psf predicted by the response surface approximation is about 1, because the initial optimization was performed with a lower limit of 1 on the Psf. In contrast, the accurate Psf* will in general be different, for several reasons: the higher accuracy of the MCS, the response surface errors, and, most important, the lower probability requirement. For example, with 10^7 samples, at this initial design we may get Psf = 1.01 for the intermediate probability (based on the 13,500 lowest safety factors) and Psf* = 0.89 for the required probability (based on the 135 lowest safety factors). With values of Psf and Psf* at the same point, we can define a scale factor f as the ratio of these two numbers

f = Psf* / Psf    (7-3)

This ratio can be used to correct the response surface approximation during the optimization process. Once an optimum design is found with a given f, a new accurate MCS can be performed at that optimum, a new value of f can be calculated from Equation (7-3) at the new point, and the process repeated until convergence. As a further refinement, we have also updated the response surface for the intermediate probability, centering it about the new optimum.

Beam Design Example

The following cantilever beam example (Figure 7-1) is taken from Wu et al.
(2001) to demonstrate the use of the probabilistic sufficiency factor.

Figure 7-1. Cantilever beam subject to vertical and lateral bending (L = 100")

There are two failure modes in the beam design problem. One failure mode is yielding, which is most critical at the corner of the rectangular cross section at the fixed end of the beam
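The recalibration of Eq. (7-3) can be sketched from a single accurate MCS. The lognormal safety-factor sample below is hypothetical, with parameters chosen only to mimic the Psf = 1.01 and Psf* = 0.89 values quoted above:

```python
import numpy as np

rng = np.random.default_rng(5)

def psf_from_samples(safety_factors, p_target):
    """PSF as the n-th lowest safety factor, n = p_target * M."""
    s = np.sort(safety_factors)
    return s[int(round(p_target * s.size)) - 1]

# Hypothetical MCS sample of safety factors at the initial optimum
M = 10_000_000                     # one expensive MCS with 10^7 samples
sf = rng.lognormal(mean=0.31, sigma=0.1, size=M)

psf_interm = psf_from_samples(sf, 0.00135)     # 13,500th lowest, about 1.01
psf_star = psf_from_samples(sf, 0.0000135)     # 135th lowest, about 0.89
f = psf_star / psf_interm                      # Eq. (7-3) scale factor
```

Multiplying the inexpensive intermediate-probability DRS by f corrects it toward the required low probability, so the expensive MCS is needed only once per cycle rather than at every DRS point.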