
Reliability-Based Structural Optimization Using Response Surface Approximations and Probabilistic Sufficiency Factor



RELIABILITY-BASED STRUCTURAL OPTIMIZATION USING RESPONSE SURFACE APPROXIMATIONS AND PROBABILISTIC SUFFICIENCY FACTOR

By

XUEYONG QU

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2004


Copyright 2004 by Xueyong Qu


This dissertation is dedicated to my lovely wife, Guiqin Wang.


ACKNOWLEDGMENTS

I want to thank Dr. Raphael T. Haftka for offering me the opportunity to complete my Ph.D. study under his exceptional guidance. He provided the necessary funding for my doctoral studies and supported my attendance at many academic conferences. Without his patience, guidance, knowledge, and constant encouragement, this work would not have been possible. Dr. Haftka made an immense contribution to this dissertation and my academic growth, as well as to my professional and personal life. I would also like to thank the members of my supervisory committee: Dr. Peter G. Ifju, Dr. Theodore F. Johnson, Dr. André I. Khuri, and Dr. Bhavani V. Sankar. I am grateful for their willingness to review my research and provide constructive comments that helped me to complete this dissertation. Special thanks go to Dr. David Bushnell for his help with the PANDA2 program and stiffened panel analysis and design. Special thanks also go to Dr. Vicente J. Romero for many helpful discussions and collaboration in writing papers. Financial support provided by grant NAG-1-2177, contract L-9889, and grant URETI from NASA is gratefully acknowledged. My colleagues in the Structural and Multidisciplinary Optimization Research Group at the University of Florida also deserve thanks for their help and many fruitful discussions. Special thanks go to Palaniappan Ramu, Thomas Singer, and Dr. Satchi Venkataraman for their collaboration in publishing papers. My parents deserve my deepest appreciation for their constant love and support.


Lastly, I would like to thank my beautiful and loving wife, Guiqin Wang. Without her love, patience, and support I would not have completed this dissertation.


TABLE OF CONTENTS

page

ACKNOWLEDGMENTS ............................................................................................ iv
LIST OF TABLES ....................................................................................................... ix
LIST OF FIGURES .................................................................................................... xiii
ABSTRACT ................................................................................................................ xv

CHAPTER

1 INTRODUCTION ..................................................................................................... 1
    Focus ...................................................................................................................... 2
    Objectives and Scope ............................................................................................. 4

2 LITERATURE SURVEY: METHODS FOR RELIABILITY ANALYSIS AND RELIABILITY-BASED DESIGN OPTIMIZATION ................................................ 6
    Review of Methods for Reliability Analysis ......................................................... 7
        Problem Definition .......................................................................................... 7
        Monte Carlo Simulation .................................................................................. 7
        Monte Carlo Simulation Using Variance Reduction Techniques ................... 8
        Moment-Based Methods .................................................................................. 9
        Response Surface Approximations ................................................................ 10
    Reliability-Based Design Optimization Frameworks .......................................... 12
        Double Loop Approach ................................................................................. 12
        Inverse Reliability Approach ........................................................................ 14
            Design potential approach ...................................................................... 15
            Partial safety factor approach (Partial SF) .............................................. 16
    Summary ............................................................................................................... 17

3 RESPONSE SURFACE APPROXIMATIONS FOR RELIABILITY-BASED DESIGN OPTIMIZATION ..................................................................................... 19
    Stochastic Response Surface (SRS) Approximation for Reliability Analysis ..... 20
    Analysis Response Surface (ARS) Approximation for Reliability-Based Design Optimization ............................................................................................ 21
    Design Response Surface (DRS) Approximation ................................................ 23


    Analysis and Design Response Surface Approach ............................................. 24
    Statistical Design of Experiments for Stochastic and Analysis Response Surfaces ................................................................................................................ 25

4 DETERMINISTIC DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC ENVIRONMENTS .......................................................................... 28
    Introduction .......................................................................................................... 28
    Composite Laminates Analysis under Thermal and Mechanical Loading ......... 29
    Properties of IM600/133 Composite Materials ................................................... 30
    Deterministic Design of Angle-Ply Laminates .................................................... 34
        Optimization Formulation ............................................................................. 35
        Optimizations without Matrix Cracking ....................................................... 36
        Optimizations Allowing Partial Matrix Cracking ......................................... 39
        Optimizations with Reduced Axial Load Ny ................................................ 39

5 RELIABILITY-BASED DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC ENVIRONMENTS .......................................................................... 41
    Reliability-Based Design Optimization ............................................................... 41
        Problem Formulation ..................................................................................... 41
        Response Surface Approximation for Reliability-Based Optimization ........ 43
        Analysis Response Surfaces ......................................................................... 43
        Design Response Surfaces ............................................................................ 45
        Refining the Reliability-Based Design ......................................................... 46
        Quantifying Errors in Reliability Analysis ................................................... 47
    Effects of Quality Control on Laminate Design .................................................. 48
        Effects of Quality Control on Probability of Failure ..................................... 49
        Effects of Quality Control on Optimal Laminate Thickness ........................ 50
    Effects of Other Improvements in Material Properties ....................................... 51
    Summary ............................................................................................................... 54

6 PROBABILISTIC SUFFICIENCY FACTOR APPROACH FOR RELIABILITY-BASED DESIGN OPTIMIZATION ............................................. 56
    Introduction .......................................................................................................... 56
    Probabilistic Sufficiency Factor .......................................................................... 60
    Using Probabilistic Sufficiency Factor to Estimate Additional Structural Weight to Satisfy the Reliability Constraint ........................................................ 62
    Reliability Analysis Using Monte Carlo Simulation ........................................... 64
        Calculation of Probabilistic Sufficiency Factor by Monte Carlo Simulation ..................................................................................................... 66
        Monte Carlo Simulation Using Response Surface Approximation .............. 68
    Beam Design Example ......................................................................................... 71
        Design with Strength Constraint ................................................................... 72
        Design with Strength and Displacement Constraints ................................... 75
    Summary ............................................................................................................... 79


7 RELIABILITY-BASED DESIGN OPTIMIZATION USING DETERMINISTIC OPTIMIZATION AND MULTI-FIDELITY TECHNIQUE .................................. 80
    Introduction .......................................................................................................... 80
    Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor ........................................... 81
    Reliability-Based Design Optimization Using Multi-Fidelity Technique with Probabilistic Sufficiency Factor .......................................................................... 82
    Beam Design Example ......................................................................................... 84
        Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor .................................... 86
        Reliability-Based Design Optimization Using Coarse MCS with Probabilistic Sufficiency Factor .................................................................. 87
    Summary ............................................................................................................... 88

8 RELIABILITY-BASED DESIGN OPTIMIZATION OF STIFFENED PANELS USING PROBABILISTIC SUFFICIENCY FACTOR ........................................... 90
    Introduction .......................................................................................................... 90
    Aluminum Isogrid Panel Design Example .......................................................... 92
        Reliability-Based Design Problem Formulation ........................................... 92
        Uncertainties .................................................................................................. 94
        Analysis Response Surface Approximation .................................................. 95
        Design Response Surfaces ............................................................................ 97
        Optimum Panel Design ................................................................................. 98
    Composite Isogrid Panel Design Example .......................................................... 98
        Deterministic Design ................................................................................... 100
        Analysis Response Surface Approximation ................................................ 101
        Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor .................................. 103
    Reliability-Based Design Optimization Using DIRECT Optimization ............ 106
        DIRECT Global Optimization Algorithm ................................................... 106
        Reliability-Based Design Optimization Using DIRECT Optimization with Safety Factor Corrected by Probabilistic Sufficiency Factor ............ 108
    Summary ............................................................................................................. 108

APPENDIX

A MATERIAL PROPERTIES OF IM600/133 ........................................................ 110

B CONTOUR PLOTS OF THREE DESIGN RESPONSE SURFACE APPROXIMATIONS AND TEST POINTS ALONG THE CURVE OF TARGET RELIABILITY ...................................................................................... 113

LIST OF REFERENCES ........................................................................................... 115

BIOGRAPHICAL SKETCH ..................................................................................... 120


LIST OF TABLES

Table  page

4-1  Transverse strains calculated for conditions corresponding to the onset of matrix cracking in the 90° plies of a quasi-isotropic (45/0/-45/90)2s laminate in Aoki et al. (2000) ........ 33
4-2  Transverse strains of an angle-ply laminate (±25)4S under the same loading condition as Table A1 ........ 34
4-3  Strain allowables for IM600/133 at F ........ 34
4-4  Optimal laminates for different operational temperatures: ε2u of 0.0110 ........ 37
4-5  Optimal laminates for temperature-dependent material properties with ε2u of 0.0110 (optimized for 21 temperatures) ........ 38
4-6  Optimal laminate for temperature-dependent material properties allowing partial matrix cracking: ε2u of 0.011 for uncracked plies and 0.0154 for cracked plies ........ 39
4-7  Optimal laminates for reduced axial load of 1,200 lb./inch by using load shunting cables (equivalent laminate thickness of 0.005 inch) ........ 40
5-1  Strain allowables for IM600/133 at F ........ 42
5-2  Coefficients of variation (CV) of the random variables ........ 42
5-3  Range of design variables for analysis response surfaces ........ 44
5-4  Quadratic analysis response surfaces of strains (millistrain) ........ 44
5-5  Design response surfaces for probability of failure (probability calculated by Monte Carlo simulation with a sample size of 1,000,000) ........ 45
5-6  Comparison of reliability-based optimum with deterministic optima ........ 46
5-7  Refined reliability-based design [±25]S (Monte Carlo simulation with a sample size of 10,000,000) ........ 47
5-8  Comparison of probability of failure from MCS based on ARS and CLT ........ 47
5-9  Accuracy of MCS ........ 48


5-10  Effects of quality control of ε2u on probability of failure for 0.12 inch-thick (±25)S laminates ........ 49
5-11  Effects of quality control of ε1u, ε1l, ε2l, and γ12 on probability of failure of 0.12 inch-thick (±25)S laminates ........ 50
5-12  Effects of quality control of E1, E2, G12, ν12, Tzero, α1, and α2 on probability of failure of 0.12 inch-thick (±25)S laminates ........ 50
5-13  Effects of quality control of ε2u on probability of failure for 0.1 inch-thick (±25)S laminates ........ 51
5-14  Effects of quality control of ε2u on probability of failure for 0.08 inch-thick (±25)S laminates ........ 51
5-15  Sensitivity of failure probability to mean value of ε2u (CV = 0.09) for 0.12 inch-thick (±25)S laminates ........ 52
5-16  Sensitivity of failure probability to CV of ε2u (E(ε2u) = 0.0154) for 0.12 inch-thick (±25)S laminates ........ 52
5-17  Maximum ε2 (millistrain) induced by the change of material properties E1, E2, G12, ν12, Tzero, α1, and α2 for 0.12 inch-thick [±25]S laminate ........ 54
5-18  Probability of failure for 0.12 inch-thick [±25]S laminate with improved average material properties (Monte Carlo simulation with a sample size of 10,000,000) ........ 54
6-1  Random variables in the beam design problem ........ 62
6-2  Range of design variables for design response surface ........ 72
6-3  Comparison of cubic design response surface approximations of probability of failure, safety index, and probabilistic sufficiency factor for single strength failure mode (based on Monte Carlo simulation of 100,000 samples) ........ 73
6-4  Averaged errors in cubic design response surface approximations of probabilistic sufficiency factor, safety index, and probability of failure at 11 points on the curves of target reliability ........ 74
6-5  Comparisons of optimum designs based on cubic design response surface approximations of probabilistic sufficiency factor, safety index, and probability of failure ........ 75
6-6  Comparison of cubic design response surface approximations of the first design iteration for probability of failure, safety index, and probabilistic sufficiency factor for system reliability (strength and displacement) ........ 76


6-7  Averaged errors in cubic design response surface approximations of probabilistic sufficiency factor, safety index, and probability of failure at 51 points on the curves of target reliability ........ 76
6-8  Comparisons of optimum designs based on cubic design response surface approximations of the first design iteration for probabilistic sufficiency factor, safety index, and probability of failure ........ 77
6-9  Range of design variables for design response surface approximations of the second design iteration ........ 78
6-10  Comparison of cubic design response surface approximations of the second design iteration for probability of failure, safety index, and probabilistic sufficiency factor for system reliability (strength and displacement) ........ 78
6-11  Comparisons of optimum designs based on cubic design response surfaces of the second design iteration for probabilistic sufficiency factor, safety index, and probability of failure ........ 78
7-1  Random variables in the beam design problem ........ 85
7-2  Optimum designs for strength failure mode obtained from double loop RBDO ........ 86
7-3  Design history of RBDO based on sequential deterministic optimization with probabilistic sufficiency factor under strength constraint for target probability of failure of 0.00135 ........ 86
7-4  Design history of RBDO based on sequential deterministic optimization with probabilistic sufficiency factor under strength constraint for target probability of failure of 0.0000135 ........ 87
7-5  RBDO using variable fidelity technique with probabilistic sufficiency factor under strength constraint ........ 88
7-6  Range of design variables for design response surface ........ 88
8-1  Amplitudes of geometric imperfection handled by PANDA2 software ........ 94
8-2  Uncertainties in material properties (Al 2219-T87) modeled as normal random variables ........ 94
8-3  Uncertainties in manufacturing process modeled as uniformly distributed random design variables around design (mean) value ........ 94
8-4  Deterministic optimum ........ 95
8-5  Range of analysis response surface approximations (inch) ........ 96


8-6  Quadratic analysis response surface approximation to the most critical margins using Latin Hypercube sampling of 72 points ........ 96
8-7  Probabilities of failure calculated by Monte Carlo simulation with 1×10^6 samples ........ 97
8-8  Range of design response surface approximations (inch) ........ 97
8-9  Cubic design response surface approximation to the probability of failure and probabilistic sufficiency factor (calculated by Monte Carlo sampling of 1×10^6 samples) ........ 97
8-10  Optimum panel design ........ 98
8-11  Probabilities of failure calculated by Monte Carlo simulation of 1×10^6 samples ........ 98
8-12  Uncertainties in material elastic properties (AS4) modeled as normal distribution with coefficient of variation of 0.03 ........ 100
8-13  Uncertainties in material strength properties (AS4) modeled as normal distribution with coefficient of variation of 0.05 ........ 100
8-14  Variation of the random design variables around nominal design value ........ 100
8-15  Safety factors used in deterministic design ........ 101
8-16  Deterministic optimum (inch, degree, lb) ........ 101
8-17  Quadratic analysis response surface approximation to the worst margins using Latin Hypercube sampling of 342 points ........ 102
8-18  Probabilities of failure calculated by Monte Carlo simulation of 10^6 samples (material and manufacturing uncertainties) ........ 102
8-19  Design history of RBDO based on sequential deterministic optimization using probabilistic sufficiency factor to correct safety factor directly by Equation (8-3) ........ 105
8-20  Design history of RBDO based on sequential deterministic optimization using probabilistic sufficiency factor to correct safety factor by actual safety margin using Equation (8-4) ........ 105
8-21  Design history of RBDO based on DIRECT deterministic optimization with probabilistic sufficiency factor correcting safety factor by actual safety margin using Equation (8-4) ........ 108


LIST OF FIGURES

Figure  page

2-1  Double loop approach: reliability analysis coupled inside design optimization ........ 12
2-2  Design potential approach: reliability constraints approximated at design potential point dpk; reliability analyses still coupled inside design optimization ........ 15
2-3  Partial safety factor approach: decouple reliability analysis and design optimization ........ 17
3-1  Analysis response surface and design response surface approach: decouple reliability analysis and design optimization ........ 25
3-2  Latin Hypercube sampling to generate 5 samples from two random variables ........ 27
4-1  Polynomials fit to elastic properties: E1, E2, G12, and ν12 ........ 31
4-2  Polynomials fit to coefficients of thermal expansion: α1 and α2 ........ 32
4-3  Geometry and loads for laminates ........ 35
4-4  The change of optimal thickness (inch) with temperature for variable and constant material properties (ε2u of 0.0110) ........ 37
4-5  Strains in optimal laminate for temperature-dependent material properties with ε2u of 0.0110 (second design in Table 4-3) ........ 38
5-1  Tradeoff plot of probability of failure, cost, and weight (laminate thickness) for [±25]S ........ 53
6-1  Probability density of the safety factor. The area under the curve to the left of s = 1 measures the actual probability of failure, while the shaded area is equal to the target probability of failure, indicating that the probabilistic sufficiency factor = 0.8 ........ 61
6-2  Cantilever beam subject to vertical and lateral bending ........ 62
6-3  Monte Carlo simulation of problem with two random variables ........ 66
7-1  Cantilever beam subject to vertical and lateral bending ........ 84


8-1  Isogrid-stiffened cylindrical shell with internal isogrid and external rings, with the isogrid pattern oriented along the circumferential direction for increased bending stiffness in the hoop direction ........ 93
8-2  Isogrid-stiffened cylindrical shell with internal isogrid and the isogrid pattern oriented along the circumferential direction for increased bending stiffness in the hoop direction; the zero-degree direction for the composite laminates in the isogrid and skin panel is shown ........ 99
8-3  (a) First iteration of DIRECT windowing for a two-dimensional example, the Goldstein-Price (GP) function (Finkel, 2003), and (b) further iterations on the GP function with potentially optimal boxes shaded and subsequently divided along the longest dimension(s) ........ 107
A-1  Quadratic fit to α1 (1.0E-6/°F) ........ 110
A-2  Sixth-order fit to α2 (1.0E-4/°F) ........ 110
A-3  Quadratic fit to E1 (Mpsi) ........ 111
A-4  Quartic fit to E2 (Mpsi) ........ 111
A-5  Cubic fit to G12 (Mpsi) ........ 112
A-6  Quartic fit to ν12 ........ 112
B-1  Contour plot of probabilistic safety factor design response surface approximation and test points along the curve of target reliability ........ 113
B-2  Contour plot of probability of failure design response surface approximation and test points along the curve of target reliability. The negative values of probability of failure are due to the interpolation errors of the design response surface approximation ........ 114
B-3  Contour plot of safety index design response surface approximation and test points along the curve of target reliability ........ 114


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

RELIABILITY-BASED STRUCTURAL OPTIMIZATION USING RESPONSE SURFACE APPROXIMATIONS AND PROBABILISTIC SUFFICIENCY FACTOR

By

Xueyong Qu

August 2004

Chair: Raphael T. Haftka
Major Department: Mechanical and Aerospace Engineering

Uncertainties exist practically everywhere, from structural design to manufacturing, product lifetime service, and maintenance. Uncertainties can be introduced by errors in modeling and simulation; by manufacturing imperfections (such as variability in material properties and structural geometric dimensions); and by variability in loading. Structural design by safety factors using nominal values without considering uncertainties may lead to designs that are either unsafe, or too conservative and thus not efficient.

The focus of this dissertation is reliability-based design optimization (RBDO) of composite structures. Uncertainties are modeled by the probabilistic distributions of random variables. Structural reliability is evaluated in terms of the probability of failure. RBDO minimizes cost, such as structural weight, subject to reliability constraints. Since engineering structures usually have multiple failure modes, Monte Carlo simulation (MCS) was used to calculate the system probability of failure. Response surface (RS) approximation techniques were used to solve the difficulties associated with


MCS. The high computational cost of a large number of MCS samples was alleviated by analysis RS, and numerical noise in the results of MCS was filtered out by design RS.

RBDO of composite laminates is investigated for use in hydrogen tanks in cryogenic environments. The major challenge is to reduce the large residual strains that develop due to the thermal mismatch between matrix and fibers while maintaining the load-carrying capacity. RBDO is performed to provide laminate designs, quantify the effects of uncertainties on the optimum weight, and identify those parameters that have the largest influence on the optimum design. Studies of weight and reliability tradeoffs indicate that the most cost-effective measure for reducing weight and increasing reliability is quality control.

A probabilistic sufficiency factor (PSF) approach was developed to improve the computational efficiency of RBDO, to design for low probability of failure, and to estimate the additional resources required to satisfy the reliability requirement. The PSF is the safety factor needed to meet the reliability target. The methodology is applied to the RBDO of composite stiffened panels for the fuel tank design of reusable launch vehicles. Examples are used to demonstrate the advantages of the PSF over other RBDO techniques.


CHAPTER 1
INTRODUCTION

Aerospace structures are designed under stringent weight requirements. Structural optimization is usually employed to minimize the structural weight subject to performance constraints such as strength and deflection. Deterministically optimized structures can be sensitive to uncertainties such as variability in material properties. Uncertainties exist practically everywhere, from engineering design to product manufacturing, product lifetime service conditions, and maintenance. Uncertainties can be introduced by the manufacturing process (such as variability in material properties and structural geometric dimensions); by errors in modeling and simulation; and by service conditions such as loading changes. Deterministic optimization can use large safety factors to accommodate uncertainties, but the safety and performance of the optimized structure under uncertainties (such as its reliability) are then not known, and the resulting structural design may be too conservative and thus not efficient. To address this problem, reliability-based design optimization (RBDO) became popular in the last decade (Rackwitz 2000). The safety of the design is evaluated in terms of the probability of failure, with uncertainties modeled by probabilistic distributions of random variables. RBDO minimizes costs such as structural weight subject to reliability constraints, which are usually expressed as limits on the probability of failure of performance measures.
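In generic form, the RBDO problem just described can be stated as follows. The notation here is illustrative rather than taken from a specific chapter: d denotes the design variables, x the random variables, W the structural weight, and g_i the limit-state function of the i-th failure mode, with g_i ≤ 0 denoting failure of that mode.

```latex
\begin{aligned}
\min_{\mathbf{d}} \quad & W(\mathbf{d})\\
\text{such that} \quad & P_f(\mathbf{d}) \;=\;
\Pr\!\left[\, g_i(\mathbf{d},\mathbf{x}) \le 0 \ \text{for some mode } i \,\right]
\;\le\; P_f^{\,\mathrm{target}},\\
& \mathbf{d}^{L} \le \mathbf{d} \le \mathbf{d}^{U}.
\end{aligned}
```

The reliability constraint replaces the deterministic safety-factor constraint; evaluating P_f at every candidate design is what makes RBDO an order of magnitude more expensive than deterministic optimization.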


Focus

The focus of this dissertation is reliability-based structural optimization for use in reusable launch vehicles (RLV). The RLV is being developed as a safer and cheaper replacement for the Space Shuttle, which suffers from a high probability of failure and high operating cost. For example, the probability of failure per mission is about 0.01 based on the Shuttle launch history. The catastrophic failures of both Space Shuttles, Challenger and Columbia, were initiated by structural failure. The limited reusability of rocket boosters and fuel tanks also increases the operating cost of the Space Shuttle. In order to reduce operating cost, the cryogenic fuel tank must be structurally integrated into the RLV, which motivates the use of composite materials. Composite materials are widely used in aerospace structures because of their high stiffness-to-weight ratio and the flexibility to tailor the design to the application. This extra flexibility can render deterministically optimized composite laminates very sensitive to uncertainties in material properties and loading conditions (e.g., Gürdal et al. 1999). For example, the ply angles of a composite laminate deterministically optimized under unidirectional loading are all aligned with the loading direction, which leads to a poor design for even small loading transverse to the fiber direction. Design of composite structures for the RLV poses a major challenge because the feasibility of these vehicles depends critically on structural weight. With traditional deterministic design based on safety factors, it is possible to achieve a safe design, but it may be too heavy for the RLV to take off. Therefore, reliability-based design optimization is required for RLV structures in order to satisfy both safety and weight constraints. The advantages of reliability-based design over deterministic design have been demonstrated (e.g., Ponslet et al., 1995). For designs with


stringent weight requirements, it is also important to provide guidelines for controlling the magnitudes of the uncertainties for the purpose of reducing structural weight. Deterministic structural optimization is computationally expensive due to the need to perform multiple structural analyses. Reliability-based optimization adds an order of magnitude to this computational expense, because a single reliability analysis requires many structural analyses. Commonly used reliability analysis methods are based either on simulation techniques, such as Monte Carlo simulation, or on moment-based methods, such as the first-order reliability method (e.g., Melchers, 1999). Monte Carlo simulation is easy to implement, robust, and accurate with sufficiently large sample sizes, but it requires a large number of analyses to obtain a good estimate of a low failure probability. Monte Carlo simulation also produces a noisy estimate of the probability and hence is difficult to use with gradient-based optimization. Moment-based methods do not have these problems, but they are not well suited to problems with many competing critical failure modes. Response surface approximations address the two problems of Monte Carlo simulation, namely simulation cost and noise from random sampling. Response surface approximations (Khuri and Cornell 1996) typically fit low-order polynomials to a number of response simulations. For reliability analysis, they usually fit a structural response, such as a stress, in terms of the random variables. The probability of failure can then be calculated inexpensively by Monte Carlo simulation using the fitted response surfaces. Response surface approximations can also be fitted to the probability of failure in terms of the design variables; these replace the reliability constraints in RBDO to filter out numerical noise in the


probability of failure induced by Monte Carlo simulation and to reduce the computational cost. Different ways of using response surface approximations for reliability analysis and reliability-based design optimization are presented in subsequent chapters.

Objectives and Scope

The main purpose of this dissertation is to address the challenges associated with the reliability-based design of composite panels for reusable launch vehicles. The problems encountered include the high computational cost of calculating probabilities of failure and of performing reliability-based design optimization, and the control of the structural weight penalty due to uncertainties. The main objectives are therefore to

1. Investigate response surface approximations for use in reliability analysis and design optimization. Analysis and design response surface approximations are developed.
2. Develop methods that allow more efficient reliability-based design optimization when the probability of failure must be low. This motivates the development of a probabilistic sufficiency factor approach.
3. Explore the potential of uncertainty control for reducing structural weight for unstiffened and stiffened panels.
4. Provide reliability-based designs of selected composite panels.

A literature survey of methods for reliability analysis and reliability-based design optimization is presented in Chapter 2. Chapter 3 introduces the response surface approximation techniques developed for efficient RBDO (objective 1). Chapter 4 presents deterministic design optimization of composite laminates in cryogenic environments. Chapter 5 demonstrates the reliability-based design of composite laminates for use in cryogenic environments, and tradeoffs of weight and reliability via the control of uncertainty (objective 3). Chapter 6 proposes a probabilistic sufficiency factor


approach for more efficient reliability-based design optimization (objective 2). Chapter 7 demonstrates the use of the probabilistic sufficiency factor for RBDO. Chapter 8 provides reliability-based designs of selected composite stiffened panels for the fuel tank design of reusable launch vehicles.


CHAPTER 2
LITERATURE SURVEY: METHODS FOR RELIABILITY ANALYSIS AND RELIABILITY-BASED DESIGN OPTIMIZATION

The basic conceptual structure of the reliability-based design optimization (RBDO) problem, called the RBDO framework, can be formulated as

    minimize    F(d)
    such that   P_j(x) ≤ P_j^t,    j = 1, ..., n_p     (2-1)

where F is the objective function, d is the vector of design variables, P_j is the probability of failure of the jth failure mode, P_j^t is the allowable probability of failure of the jth failure mode, n_p is the total number of failure modes, and x is the vector of random variables. To perform RBDO, reliability analyses must be performed to evaluate the probabilities of failure, which requires multiple evaluations of the system performance (such as the stresses in a structure). Depending on the specific reliability analysis method, the computational cost of a single reliability analysis is usually comparable to or higher than that of a deterministic local optimization. Furthermore, RBDO requires multiple reliability analyses, so the computational cost of performing RBDO by directly coupling design optimization with reliability analysis is at least an order of magnitude higher than that of deterministic optimization. Efficient frameworks must be developed to overcome this computational burden. This chapter presents a literature review of state-of-the-art reliability analysis methods and RBDO frameworks, and concludes with the motivation for developing the methodologies of Chapters 3, 6, and 7.
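To make the cost argument concrete, the following minimal Python sketch evaluates one reliability constraint of Eq. 2-1 by Monte Carlo simulation. The limit state, the input distributions, and the sample size are hypothetical illustrations, not taken from this dissertation; the point is only that each constraint evaluation consumes an entire batch of limit state evaluations, and an optimizer repeats this at every design iterate.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_failure(d, n_samples=100_000):
    """One reliability analysis = n_samples limit state evaluations.

    Hypothetical limit state G(x; d) = d[0]*capacity - load; failure when G < 0.
    """
    capacity = rng.normal(1.0, 0.1, n_samples)  # assumed random capacity factor
    load = rng.normal(0.7, 0.1, n_samples)      # assumed random load
    g = d[0] * capacity - load
    return np.mean(g < 0.0)

# A single constraint check for one candidate design d = [1.0].
pf = prob_failure([1.0])
print(pf)
```

With 10^5 samples per constraint evaluation and tens of optimizer iterates, the total number of limit state evaluations quickly reaches the millions, which is the burden the frameworks reviewed in this chapter try to reduce.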


Review of Methods for Reliability Analysis

The most common techniques for reliability analysis are Monte Carlo simulation, approaches based on the most probable point (MPP), and decoupled Monte Carlo sampling of a response surface approximation fitted to samples from an experimental design. Different techniques are preferable under different circumstances.

Problem Definition

The limit state function of the reliability analysis problem is defined as G(x), where G(x) represents a performance criterion and x is a vector of random variables. Failure occurs when G(x) < 0, so the failure surface, or limit state of interest, can be described as G(x) = 0. The probability of failure can be calculated as

    P_f = ∫_{G(x) < 0} f_X(x) dx     (2-2)

where f_X(x) is the joint probability density function (JPDF). This integral is hard to evaluate because the integration domain defined by G(x) < 0 is usually unknown and integration in high dimensions is very difficult. Commonly used probabilistic analysis methods are based either on simulation techniques, such as Monte Carlo simulation, or on moment-based methods, such as the first-order reliability method (FORM) and the second-order reliability method (SORM) (Melchers 1999).

Monte Carlo Simulation

Monte Carlo simulation (MCS) (e.g., Rubinstein 1981) generates a number of samples of the random variables x using a random number generator. The number of samples required is usually determined by confidence interval analysis. Simulations (e.g., structural analyses) are then performed for each of these samples. Statistics such as the mean,


variance, and probability of failure can then be calculated from the results of the simulations. This method is also called direct MCS, or MCS with simple random sampling (SRS). Direct MCS is simple to implement, robust, and accurate with sufficiently large sample sizes, but its usefulness in reliability analysis is quite limited because of its relatively low efficiency. For example, the probability of failure in engineering applications is usually very small, so the number of limit state function evaluations required to obtain acceptable accuracy with direct MCS is very large (Chapter 5), which makes direct MCS very time-consuming. Direct MCS is usually used as a benchmark to verify the accuracy, and to compare the efficiency, of other methods that use approximation concepts. To improve on the accuracy and efficiency of simple random sampling, various simulation methods using variance reduction techniques (VRT) have been developed to reduce the variance of the output random variables.

Monte Carlo Simulation Using Variance Reduction Techniques

Rubinstein (1981) and Melchers (1999) gave good overviews of VRT for general Monte Carlo sampling. VRT can be classified into categories such as sampling methods, correlation methods, conditional expectation methods, and specific methods. Sampling methods reduce the variance of the output by constraining samples to be representative of the performance function (or by distorting the samples to emphasize its important region). Commonly used sampling methods include importance sampling (Harbitz 1986), adaptive sampling, stratified sampling, Latin hypercube sampling, and spherical sampling. Correlation methods use techniques that achieve correlation among random observations, functions, or different simulations to improve the accuracy of the estimators. Some commonly used techniques are antithetic variates, common random


numbers, control variates, and rotation sampling. Conditional expectation methods use the independence of the random variables to reduce the order of the probabilistic integration and thereby achieve higher efficiency. Common techniques are conditional expectation, generalized conditional expectation, and adaptive conditional expectation. Specific methods include the response surface method and internal control variable techniques. VRTs can be combined to further increase the efficiency of the simulation. A comparison of the accuracy and efficiency of several common VRT methods can be found in Kamal and Ayyub (2000). Latin hypercube sampling and response surface methods are studied in this dissertation. VRT requires fewer limit state function evaluations to achieve a desired level of accuracy, but the simplicity of the simulation is lost, and the computational complexity of each simulation cycle is increased.

Moment-Based Methods

Besides VRT, moment-based methods also reduce the computational cost drastically compared to MCS. The first-order reliability method (FORM) and the second-order reliability method (SORM) are well-established methods that can solve many practical applications (Rackwitz 2000). FORM and SORM first transform the random variables from the original space (X-space) to the uncorrelated standard normal space (U-space). An optimization problem is then solved to find the point on the limit state surface (Z = 0) at minimum distance from the origin of the U-space (the most probable point, MPP). The minimum distance, β, is called the safety index. The probability of failure is then calculated using the normal cumulative distribution function, P_f = Φ(−β), in


FORM (Rackwitz and Fiessler 1978), or by using a second-order correction in SORM (Breitung 1984). Thus the safety index can be used directly as a measure of reliability. One disadvantage of FORM and SORM is that no error estimate is readily available; their accuracy must be verified by other methods, such as MCS. The errors of FORM and SORM may come from errors in the MPP search and from the nonlinearity of the limit state. The MPP search requires solving a nonlinear optimization problem, which is difficult for some problems. A wrong MPP usually leads to poor probability estimates, a common problem for MPP-based reliability analysis methods. FORM and SORM are also not well suited to problems with many competing critical failure modes (i.e., multiple MPPs). Due to the limitations of first-order and second-order approximations, FORM and SORM do not perform well when the limit state surface is highly nonlinear around the MPP. This nonlinearity may come from the inherent nonlinearity of the problem or may be induced by the transformation from X-space to U-space (Thacker et al. 2001). For example, transforming a uniform random variable to a standard normal variable usually increases the nonlinearity of the problem. When FORM and SORM encounter difficulties, sampling methods with VRT, such as importance sampling, can be employed to obtain or improve results at a reasonable computational cost compared to direct MCS.

Response Surface Approximations

Response surface approximations (RSA) (Khuri and Cornell 1996) can be used to obtain a closed-form approximation to the limit state function to facilitate reliability analysis. Response surface approximations usually fit low-order polynomials to the structural response in terms of the random variables. The probability of failure can then be


calculated inexpensively by Monte Carlo simulation, or by FORM and SORM, using the fitted polynomials. RSA is therefore particularly attractive for computationally expensive problems (such as those requiring complex finite element analyses). The design points where the response is evaluated are chosen by statistical design of experiments (DOE) so as to maximize the information that can be extracted from the resulting simulations. Response surface approximations can be applied in different ways. One approach is to construct local response surfaces around the MPP region, which contributes most to the probability of failure of the structure. The DOE of this approach is iteratively adjusted to approach the MPP. Typical DOEs for this approach are the central composite design (CCD) and saturated designs. For example, Bucher and Bourgund (1990) and Rajashekhar and Ellingwood (1993) constructed progressively refined local response surfaces around the MPP. This local RSA approach can produce good probability estimates given enough iterations. Another approach is to construct a global RSA over the entire range of the random variables (i.e., a DOE centered at the mean values of the random variables). Fox (1993, 1994, 1996) used the Box-Behnken DOE to construct global RSA and summarized 12 criteria to evaluate the accuracy of response surfaces. Romero and Bankston (1998) used progressive lattice sampling, where the initial DOE is progressively supplemented by new design points, as the statistical design of experiments to construct global response surfaces. With the global approach, the accuracy of the RSA around the MPP is usually unknown, so caution should be taken to avoid extrapolation around the MPP. Both the


global and local approaches provide substantial savings in the total number of function evaluations.

Reliability-Based Design Optimization Frameworks

This section summarizes several popular RBDO frameworks. These frameworks are based on design sensitivity analysis, approximation of the limit state function, approximation of the reliability constraints, partial safety factor concepts that convert reliability constraints to approximately equivalent deterministic constraints, and RSA.

Double Loop Approach

[Figure 2-1. Double loop approach: reliability analysis coupled inside design optimization.]

The traditional approach to RBDO is to perform a double loop optimization: an outer loop for the design optimization (DO) and an inner sub-optimization that performs reliability analyses using methods such as FORM or SORM. This nested approach is rigorous and


popular, but it is computationally expensive and sometimes troubled by convergence problems (Tu et al. 2000). The computational cost of RBDO with a nested MPP search may be reduced by sensitivity analysis. The sensitivity of the safety index to the design variables can be obtained with little extra computation as a by-product of the reliability analysis (Kwak and Lee 1987). A simplified formula that ignores the higher order terms in the estimation equation was proposed by Sorensen (1987). Yang and Nikolaidis (1991) used this sensitivity analysis and optimized an aircraft wing with FORM subject to a system reliability constraint. Figure 2-1 shows the typical procedure of the double loop approach. With this approach, the reliability constraints are approximated at the current design point (DP) d_k. For problems requiring expensive finite element analysis, this approach may still be computationally prohibitive, and FORM (e.g., classical FORM such as the Hasofer-Lind method) may converge very slowly (Rackwitz 2000). Wang and Grandhi (1994) developed an efficient safety index calculation procedure for RBDO that expands the limit state function in terms of intermediate design variables to obtain a more accurate approximation. The reliability constraints can also be approximated to reduce the computational cost of RBDO; Wang and Grandhi (1994) approximated them with multi-point splines within a double loop RBDO. Another way of improving the efficiency of multi-level optimization is to integrate the iterative procedures of reliability analysis and design optimization into one, where the iterative reliability analysis stops before full convergence at each step of the optimization, as suggested by Haftka (1989). Maglaras and Nikolaidis (1990) proposed an integrated analysis and design approach for stochastic optimization, where the reliability constraints are


approximated to different levels of accuracy during the optimization. Even combined with the above approaches, the nested MPP approach still suffers from high computational cost and convergence problems. Several RBDO approaches have been developed to solve these problems.

Inverse Reliability Approach

Recently, there has been interest in using alternative measures of safety in RBDO. These measures are based on the margin of safety or on safety factors, which are commonly used as measures of safety in deterministic design. A safety factor is generally expressed as the quotient of allowable over response; for example, the commonly used central safety factor is defined as the ratio of the mean value of the allowable to the mean value of the response. The selection of a safety factor for a given problem involves both objective knowledge (such as data on the scatter of material properties) and subjective knowledge (such as expert opinion). Given a safety factor, the reliability of the design is generally unknown, which may lead to an unsafe or inefficient design. Therefore, using safety factors in reliability-based design optimization may seem counterproductive. However, Freudenthal (1962) showed that reliability can be expressed in terms of the probability distribution function of the safety factor. Elishakoff (2001) surveyed the relationship between the safety factor and reliability, and showed that in some cases the safety factor can be expressed explicitly in terms of the reliability. The standard safety factor is defined with respect to the response obtained at the mean values of the random variables. Thus a safety factor of 1.5 implies that, at the mean values of the random variables, there is a 50% margin between the response (e.g., stress) and the capacity (e.g., failure stress). However, the value of the safety factor does not tell us what the reliability is. Birger (1970), as reported by Elishakoff (2001), introduced a factor, which


we call here Birger's safety factor, that is more closely related to the target reliability. A Birger safety factor of 1.0 implies that the reliability is equal to the target reliability; a Birger safety factor larger than 1.0 means that the reliability exceeds the target reliability; and a Birger safety factor less than 1.0 means that the system is not as safe as we wish.

Design potential approach

Tu et al. (2000) used the probabilistic performance measure, which is closely related to Birger's safety factor, for RBDO using FORM. Figure 2-2 summarizes the design potential approach.

[Figure 2-2. Design potential approach: reliability constraints approximated at the design potential point d_k^p; reliability analyses still coupled inside design optimization.]

They showed that the search for the optimum design converged faster by driving the probabilistic performance measure to zero than by driving the probability of failure to


its target value. Another major difference between the design potential approach and the double loop approach is that the reliability constraints are approximated at the design potential point d_k^p (DPP), defined as the design that renders the probabilistic constraint active, instead of at the current design point. Since the DPP is located on the limit-state surface of the probabilistic constraint, the constraint approximation of the design potential method (DPM) becomes exact at the DPP. Thus the DPM provides a better constraint approximation without additional costly limit state function evaluations, and a faster rate of convergence can therefore be achieved.

Partial safety factor approach (Partial SF)

Wu et al. (1998, 2001) developed a partial safety factor, similar to Birger's, in order to replace the RBDO with a series of deterministic optimizations by converting the reliability constraints to equivalent deterministic constraints. After performing a reliability analysis, the random variables x are replaced by safety-factor-based values x*, the MPP of the previous reliability analysis. The shift s of the limit state function G needed to satisfy the reliability constraint satisfies P(G(x) + s < 0) = P_t. Both x* and s can be obtained as by-products of the reliability analysis. Since in the design optimization the random variables x are replaced by x* (just as in traditional deterministic design, where random variables are replaced by deterministic values after applying a safety factor), the method is called the partial safety factor approach (Figure 2-3). The target reliability is achieved by adjusting the limit state function via design optimization. It is seen that the required shift s is similar to the target probabilistic performance measure g*. The significant difference between the Partial SF approach and the DPM or nested MPP approaches is that the reliability analysis (FORM in the paper, but it can be any MPP-based method) is decoupled


from the optimization and driven by the design optimization to improve the efficiency of RBDO. If n iterations are needed for convergence, the approach needs n deterministic optimizations and n probabilistic analyses. However, the convergence rate of the subsequent probabilistic analyses is expected to increase after a reasonable MPP has been obtained. Wu et al. (2001) demonstrated the efficiency of this approach by optimizing a beam subject to multiple reliability constraints.

[Figure 2-3. Partial safety factor approach: decouple reliability analysis and design optimization.]

Summary

Since the reliability analyses involved in our study are for the system probability of failure, MCS was used to perform the reliability analyses. We developed an analysis RS approach to reduce the high computational cost of MCS and a design response surface approach to filter noise in RBDO (Chapter 3).
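The analysis-RS idea mentioned in this summary can be sketched in a few lines: evaluate the expensive limit state at a small design of experiments, fit a quadratic polynomial by least squares, and then run the large Monte Carlo simulation on the cheap polynomial instead of the expensive analysis. The limit state function, input distributions, and sample counts below are hypothetical, chosen only to make the sketch self-contained; they are not the dissertation's panel analyses.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_limit_state(x1, x2):
    # Stand-in for a costly structural analysis; failure when g < 0.
    return 3.0 - x1**2 - 0.5 * x2

# 1) Evaluate the "expensive" analysis at a small sampled DOE.
n_doe = 30
x1 = rng.uniform(0.0, 2.0, n_doe)
x2 = rng.uniform(0.0, 2.0, n_doe)
g = expensive_limit_state(x1, x2)

# 2) Fit a quadratic response surface g ~ Z(x)^T b by least squares.
Z = np.column_stack([np.ones(n_doe), x1, x2, x1 * x2, x1**2, x2**2])
b, *_ = np.linalg.lstsq(Z, g, rcond=None)

# 3) Run a large MCS on the cheap polynomial (assumed normal inputs).
n_mcs = 200_000
s1 = rng.normal(1.2, 0.2, n_mcs)
s2 = rng.normal(1.0, 0.2, n_mcs)
Zs = np.column_stack([np.ones(n_mcs), s1, s2, s1 * s2, s1**2, s2**2])
pf = np.mean(Zs @ b < 0.0)
print(pf)
```

Here only 30 expensive evaluations support a 200,000-sample Monte Carlo simulation; the same division of labor underlies the ARS approach of Chapter 3.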


The current RBDO frameworks mostly deal with the probabilities of failure of individual failure modes; an efficient framework must be developed to address RBDO for the system probability of failure. Chapter 6 develops an inverse reliability measure, the probabilistic sufficiency factor, to improve the computational efficiency of RBDO, to design for low probabilities of failure, and to estimate the additional resources needed to satisfy the reliability requirement. Chapter 7 demonstrates the use of the probabilistic sufficiency factor with multi-fidelity techniques for RBDO and for converting RBDO to sequential deterministic optimization. The methodology is applied to the RBDO of stiffened panels in Chapter 8.
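Taking the probabilistic sufficiency factor to be the safety factor needed to meet the reliability target, one plausible Monte Carlo reading is the target-probability quantile of the capacity-to-response safety factor: a PSF of 1.0 then means the target is exactly met, and a PSF below 1.0 means it is not. The sketch below uses that reading with hypothetical capacity and response distributions; it is an illustration of the concept, not the dissertation's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical capacity (e.g., failure stress) and response (e.g., applied
# stress) distributions, chosen only for illustration.
n = 500_000
capacity = rng.normal(100.0, 8.0, n)
response = rng.normal(70.0, 7.0, n)

safety_factor = capacity / response

# PSF as the target-probability quantile of the safety factor:
# P(safety_factor <= psf) = p_target, so psf >= 1 means the design meets
# the reliability target and psf < 1 means it does not.
p_target = 1.0e-3
psf = np.quantile(safety_factor, p_target)
print(psf)

# The same samples give the actual probability of failure for comparison.
pf = np.mean(safety_factor < 1.0)
print(pf)
```

For these assumed distributions the PSF comes out below 1.0 and the estimated failure probability exceeds the 1.0e-3 target, consistently signaling that the design falls short; the shortfall of the PSF from 1.0 also suggests how much extra capacity would be needed, which is the "additional resources" estimate mentioned above.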


CHAPTER 3
RESPONSE SURFACE APPROXIMATIONS FOR RELIABILITY-BASED DESIGN OPTIMIZATION

Response surface approximation (RSA) methods are used to construct an approximate relationship between a dependent variable f (the response) and a vector x of n independent variables (the predictor variables). The response is generally evaluated experimentally (these experiments may be numerical in nature), in which case f denotes the mean or expected response value. It is assumed that the true model of the response may be written as a linear combination of basis functions with some unknown coefficients β in the form

    f(x) = Z(x)^T β     (3-1)

The response surface model can then be expressed as

    Y(x) = Z(x)^T b     (3-2)

where Z(x) is the assumed basis function vector, which usually consists of monomial functions, and b is the least squares estimate of β. For example, if a linear response surface model is employed to approximate the response in terms of two independent variables, x1 and x2, the response surface approximation is Y(x) = b0 + b1 x1 + b2 x2. The three major steps of response surface approximation, as summarized by Khuri and Cornell (1996), are

- Selecting the design points where the responses must be evaluated. The points are chosen by statistical design of experiments (DOE), which is performed in such a way that the input parameters are varied in a structured pattern so as to maximize the information that can be extracted from the resulting simulations. A typical DOE for quadratic RSA is the central composite design (CCD, Khuri and Cornell 1996).


- Determining the mathematical model that best fits the data generated at the design points of the DOE, by performing statistical tests of hypotheses on the model parameters (Khuri and Cornell 1996; Myers et al. 2002).
- Predicting the response for given sets of experimental factors or variables using the constructed response surface approximation.

Due to the closed-form nature of the approximation, RSA is particularly attractive for engineering problems that require a large number of computationally expensive analyses, such as structural optimization and reliability analysis. The accuracy of an RSA is measured by error statistics such as the adjusted coefficient of multiple determination (R^2_adj), the root mean square error predictor (RMSE), and the coefficient of variation (COV = RMSE/mean of response). An R^2_adj close to one and a COV close to zero usually indicate good accuracy. The RSAs in this dissertation were all constructed with the JMP software (SAS Institute, 2000); these error statistics are readily available from JMP after RSA construction. Khuri and Cornell (1996) present a detailed discussion of response surface approximation. This chapter presents the response surface approaches developed for reliability-based design optimization.

Stochastic Response Surface (SRS) Approximation for Reliability Analysis

Among the available methods for reliability analysis, moment-based methods (e.g., FORM/SORM) are not well suited to the composite structures in cryogenic environments because of the existence of multiple failure modes. Direct Monte Carlo simulation requires a relatively large number of analyses to calculate the probability of failure, which is computationally expensive. Stochastic response surface approximation is employed here to solve these problems. To apply RSA to a reliability analysis problem, the limit state function g(x) (usually a stress or displacement in the structure) is approximated by


    G(x) = Z(x)^T b     (3-3)

where x is the vector of input random variables. With the polynomial approximation G(x), the probability of failure can then be calculated inexpensively by Monte Carlo simulation or FORM/SORM. Since the RSA is constructed in the space of the random variables, this approach is called the stochastic response surface approach. Stochastic RSA can be applied in different ways. One approach is to construct a local RSA around the most probable point (MPP), the region that contributes most to the probability of failure of the structure. The statistical design of experiments (DOE) of this approach is iteratively adjusted to approach the MPP. Another approach is to construct global response surfaces over the entire range of the random variables, with the mean values of the random variables usually chosen as the center of the DOE. The selection of the RSA approach depends on the limit state function of the problem: the global RSA is simpler and more efficient than the local RSA for problems whose limit state function can be well approximated globally.

Analysis Response Surface (ARS) Approximation for Reliability-Based Design Optimization

In reliability-based design optimization (RBDO), the SRS approach needs to construct response surfaces to the limit state functions at each point encountered in the optimization process, which requires a fairly large number of limit state function evaluations and RS constructions. The local SRS approach is more computationally expensive than the global SRS approach due to the multiple iterations involved in the RSA construction. This dissertation (see also Qu et al., 2000) developed an analysis response surface (ARS) approach in the unified system space (x, d) to reduce the cost of RBDO, where x


is the vector of random variables and d is the vector of design variables. By including design variables in the response surface formulation, the efficiency of the RBDO is improved drastically for certain problems. The ARS is fitted to the response (limit state function) in terms of both the design variables and the random variables

\hat{G}(x, d) = Z^T(x, d)\, b    (3-4)

The ARS approach combines probabilistic analysis with design optimization: using the ARS, the probability of failure at every design point encountered during the optimization can be calculated inexpensively by Monte Carlo simulation based on the fitted polynomials. The number of analyses required for an ARS depends on the total number of random and design variables. Because the ARS fits an approximation in terms of both random variables and design variables, it requires more analyses than an SRS. For our applications, where the number of random variables is large (around ten) and the number of design variables is small (around four), the ARS is typically less than three times as expensive to construct as an SRS, thanks to the use of Latin Hypercube sampling, which can generate an arbitrary number of design points for RSA construction (explained in the last section of this chapter and demonstrated in chapter 5). This compares with the large number (of the order of 10 to 100) of SRS approximations required in the course of an optimization. For a large number of variables (more than 20 to 30), the construction of an ARS is hindered by the curse of dimensionality, and SRS might instead be used to reduce the dimensionality of the problem. Besides the computational cost, the inclusion of design variables may increase the nonlinearity of the response surface approximation. It might be necessary to use an RSA of order higher than quadratic, for


which proper DOE must be employed. The DOE issues are discussed in the last section of this chapter.

Design Response Surface (DRS) Approximation

Direct Monte Carlo simulation introduces noise into the computed probability of failure because of the limited number of samples. The noise can be reduced by using a relatively large number of samples, which is made computationally affordable by the response surface approximation. The noise can also be filtered out by another response surface approximation, the design response surface (DRS), fitted to the probability of failure P as a function of the design variables d

\hat{P}(d) = Z^T(d)\, b    (3-5)

The use of a DRS also reduces the computational cost of RBDO by approximating the reliability constraint with a closed-form function. The probability of failure is found to change by several orders of magnitude over narrow bands of the design space, especially when the random variables have small coefficients of variation (Chapter 5). This steep variation requires the DRS to use high-order polynomials, such as quintics, which increases the required number of probability calculations (Qu et al. 2000). An additional problem arises when Monte Carlo simulation (MCS) is used to calculate the probabilities: for a given number of simulations, the accuracy of the probability estimates deteriorates as the probability of failure decreases.

The numerical problems associated with the steep variation of the probability of failure led to consideration of alternative measures of safety. The most common one is to use the


safety index β, which replaces the probability by the inverse standard normal transformation

\beta = -\Phi^{-1}(P)    (3-6)

The safety index is the distance, measured in standard deviations from the mean of a standard normal distribution, that gives the same probability. Fitting the DRS to the safety index showed limited improvement in accuracy (Chapter 6), and when the safety index is based on Monte Carlo simulation it suffers the same accuracy problems as the probability of failure. A Box-Cox transformation (Myers and Montgomery 1995) of the probability of failure was also tested, but showed very limited improvement. A probabilistic sufficiency factor approach is therefore developed as an inverse reliability measure to improve the accuracy of the DRS, to estimate the additional resources required to satisfy the reliability constraint, and to convert RBDO into sequential deterministic optimization (Chapters 6 and 7).

Analysis and Design Response Surface Approach

Figure 3-1 summarizes the ARS/DRS-based RBDO approach. First the DOE for the ARS is performed and the ARS is constructed. Then the DOE for the DRS is performed, which should stay within the range of design variables of the ARS DOE, and the DRS is constructed. Design optimization is then performed on the DRS. If the design does not converge, the DOE of the DRS can be moved toward the intermediate optimum and its range shrunk to improve the accuracy of the DRS. If the intermediate optimum is near the boundary of the ARS, the DOE of the ARS needs to be moved to better cover the potential optimum region. The entire process is repeated until the optimization converges and the reliability of the optimum stabilizes.
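As a quick numerical illustration (ours, not the dissertation's code), Eq. 3-6 can be evaluated with the Python standard library's inverse normal CDF; the function name safety_index is an assumption for this sketch:

```python
from statistics import NormalDist

# Safety index of Eq. 3-6: beta = -Phi^{-1}(P), where Phi is the
# standard normal CDF and inv_cdf is its inverse.
def safety_index(p_fail: float) -> float:
    return -NormalDist().inv_cdf(p_fail)

# A target failure probability of 1e-4 corresponds to beta near 3.72.
beta_target = safety_index(1e-4)
print(beta_target)
```

The transformation compresses the orders-of-magnitude variation of P into a gently varying quantity, which is why fitting a DRS to β rather than P was attempted.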


Figure 3-1. Analysis response surface and design response surface approach: decoupling reliability analysis and design optimization

Statistical Design of Experiments for Stochastic and Analysis Response Surfaces

Statistical design of experiments selects the design points for a response surface approximation so that the required accuracy is achieved with a minimum number of design points. However, since the exact functional form of the structural response to be approximated is rarely known, the errors in SRS and ARS usually include both variance and bias errors. Structural responses are also usually computationally expensive to evaluate. Therefore, the selection of the DOE for the ARS is based primarily on two considerations:

The number of design points in the DOE is flexible, since we want to reduce the number of analyses.

The points have a good space-filling distribution in the design space, because the DOE is often used to provide a sampling of the problem space, and because polynomials of order higher than quadratic may be needed to approximate the response well. Both considerations favor a space-filling DOE.
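To make the decoupled loop of Figure 3-1 concrete, here is a minimal sketch with made-up numbers (ours, not the dissertation's ARS): a single surrogate limit state is reused across a sweep of one design variable, so each probability estimate costs only cheap function evaluations; these estimates would then be the data to which a DRS is fitted.

```python
import numpy as np

# Stand-in surrogate: "strain" = random load factor x over thickness d.
# A real ARS would be a fitted polynomial in (x, d); the point is only
# that changing d requires no new structural analyses.
def g_hat(x, d):
    return x / d

rng = np.random.default_rng(2)
x = rng.normal(1.0, 0.2, 100_000)      # samples of the random variable
allowable = 1.5                        # assumed strain allowable

# Sweep the design variable, reusing the same random samples each time.
p_at = {d: float(np.mean(g_hat(x, d) > allowable)) for d in (0.7, 0.8, 0.9, 1.0)}
for d, p in p_at.items():
    print(d, p)                        # p drops steeply as d grows
```

The steep drop of p over a modest change in d illustrates why the DRS fitted to these values needs high-order polynomials or a transformed response.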


Since the ARS must include both the design and the random variables, the number of variables is relatively large, often exceeding 15. This excludes many DOEs, such as the central composite design (CCD). The CCD has 2^n vertices, 2n axial points, and one center point, so the required number of design points is 2^n + 2n + 1, where n is the number of variables. A polynomial of mth order in n variables has L coefficients, where

L = (n+1)(n+2)\cdots(n+m)/m!    (3-7)

For n = 15, the CCD requires 32,799 analyses, whereas a quadratic polynomial in 15 variables has only 136 coefficients. From our experience, the number of analyses needed to estimate these coefficients is only about twice the number of coefficients, which is less than one percent of the number of CCD vertices in a 15-variable space. Therefore, other DOEs, such as the CCD based on a fractional factorial design (Myers and Montgomery 1995), need to be used. The fractional factorial CCD is intended for the construction of quadratic RSAs; orthogonal arrays (Myers and Montgomery 1995) are used for the construction of higher-order RSs (Balabanov 1997; Padmanabhan et al. 2000), and Isukapalli (1999) employed orthogonal arrays to construct SRSs. For problems where only a very limited number of analyses is computationally affordable, Box-Behnken designs or saturated designs can be used (Khuri and Cornell 1996). Qu et al. (2000) showed that Latin Hypercube sampling is more efficient and flexible than orthogonal arrays.

The idea of Latin Hypercube sampling can be explained as follows. Assume that we want n samples of k random variables. First, the range of each random variable is divided into n nonoverlapping intervals of equal probability. Then one value is selected randomly from each interval. Finally, by


randomly pairing the values of the different random variables, the n input vectors, each of dimension k, are generated for Monte Carlo simulation. Figure 3-2 illustrates a two-dimensional Latin Hypercube sample.

Figure 3-2. Latin Hypercube sampling used to generate 5 samples of two random variables (one normal, one uniform)
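The three steps can be sketched as follows (our minimal implementation, working in [0,1) probability space; mapping each column through a distribution's inverse CDF then yields samples such as the normal/uniform pair of Figure 3-2):

```python
import numpy as np

def latin_hypercube(n: int, k: int, rng) -> np.ndarray:
    """n samples of k variables: one draw inside each of the n
    equal-probability strata per variable, then random pairing."""
    # one uniform draw inside each stratum [i/n, (i+1)/n)
    u = (np.arange(n)[:, None] + rng.uniform(size=(n, k))) / n
    for j in range(k):
        rng.shuffle(u[:, j])           # random pairing across variables
    return u

rng = np.random.default_rng(3)
samples = latin_hypercube(5, 2, rng)   # the 5 x 2 case of Figure 3-2
print(samples)
```

Each column hits every stratum exactly once, which is the stratification property that makes LHS efficient for a flexible, user-chosen number of points.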


CHAPTER 4
DETERMINISTIC DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC ENVIRONMENTS

This chapter presents deterministic designs of composite laminates for hydrogen tanks in cryogenic environments. The traditional approach of designing the laminate deterministically with safety factors is employed in this chapter in order to investigate the design issues. Reliability-based design, which explicitly takes account of uncertainties in material properties, is presented in chapter 5.

Introduction

The use of composite materials for liquid hydrogen tanks at cryogenic temperatures poses many challenges. The coefficient of thermal expansion (CTE) along the fiber direction is usually two orders of magnitude smaller than that transverse to the fiber direction. In typical composite laminates the ply angles differ in order to carry load efficiently, which results in a mismatch of the coefficients of thermal expansion between plies. When the laminate is cooled during manufacturing from the stress-free temperature, which is near the curing temperature, this mismatch induces large thermal strains, and cooling to cryogenic temperatures increases the thermal stresses substantially. The residual thermal strains may cause matrix cracking, leading to a reduction in the stiffness and strength of the laminate and possible initiation of delamination. A more detrimental effect of matrix cracking in hydrogen tanks is hydrogen leakage through the tank wall. Park and McManus (1996) proposed a micro-mechanical model based on fracture mechanics and verified the model by experiments. Kwon and


Berner (1997) studied matrix damage of cross-ply laminates by combining a simplified micro-mechanics model with finite element analysis and showed that the prediction of damage improves substantially when residual stresses are incorporated. Aoki et al. (2000) modeled and successfully predicted the leakage through matrix cracks.

The present objective is to investigate the options available for minimizing the increase in thickness due to thermal residual strains for laminates designed for combined thermal and mechanical loads. Deterministic designs were performed to investigate the following effects: (i) temperature-dependent material properties in the strain analysis, (ii) laminates designed to allow partial ply failure (matrix cracking), and (iii) auxiliary stiffening solutions that reduce the axial mechanical load on the tank-wall laminates.

Composite Laminate Analysis under Thermal and Mechanical Loading

Since the properties of composite materials, such as the coefficients of thermal expansion and the elastic moduli, change substantially with temperature, classical lamination theory (CLT) (e.g., Gürdal et al. 1999) is modified to account for temperature-dependent material properties. For a temperature-independent CTE α, the stress-free strain of a lamina is ε^F = α ΔT. When α is a function of the temperature T, the stress-free strain is given by

\epsilon^F(T) = \int_{T_{zero}}^{T_{service}} \alpha(T)\, dT    (4-1)

where T_zero is the stress-free temperature of the material and T_service is the service temperature. From the equilibrium equation and the vanishing of the residual stress resultant, the equilibrium of a symmetric laminate subjected to a pure thermal load with a uniform temperature profile through the thickness can be expressed by


A(T)\, \epsilon_0^N(T) = \int_{-h/2}^{h/2} \bar{Q}(T)\, \epsilon^F(T)\, dz = N^N(T)    (4-2)

where \epsilon_0^N is the non-mechanical strain induced by the thermal load; the right-hand side of Equation 4-2 is defined as the thermal load N^N. From Equation 4-2, the non-mechanical strain induced by the thermal load is

\epsilon_0^N(T) = A^{-1}(T)\, N^N(T)    (4-3)

The residual thermal stress is given by the constitutive equation

\sigma^R(T) = \bar{Q}(T)\, [\epsilon_0^N(T) - \epsilon^F(T)]    (4-4)

The mechanical strain is

\epsilon^M(T) = A^{-1}(T)\, N^M    (4-5)

and the mechanical stress is therefore

\sigma^M(T) = \bar{Q}(T)\, \epsilon^M(T)    (4-6)

By the principle of superposition, the residual strain and the total stress in the laminate are

\epsilon^{Residual}(T) = \epsilon^M(T) + \epsilon_0^N(T) - \epsilon^F(T)    (4-7)

\sigma^{Total}(T) = \sigma^R(T) + \sigma^M(T)    (4-8)

Properties of the IM600/133 Composite Material

The composite material used in the present study is the IM600/133 graphite-epoxy system, which has a glass-transition temperature of 356°F. Aoki et al. (2000) tested the IM600/133 composite material system (material Aa in their paper) under mechanical tensile loads at temperatures ranging from 356°F to -452.2°F (180°C to -269°C). The material properties of IM600/133 were taken from Aoki et al. (2000)


and fitted with smooth polynomials as functions of temperature for use in the calculations (Figures 4-1 and 4-2). The data points used in the fitting and the individual polynomials are given in Appendix A.

Figure 4-1. Polynomials fit to the elastic properties E1, E2, G12, and ν12 as functions of temperature

Aoki et al. (2000) showed that the fracture toughness of the material increases at lower temperatures; however, the increased strain energy due to the mismatch in the thermal expansion coefficients also increases the critical energy release rate. They also applied the micro-mechanics model proposed by Park and McManus (1996) for predicting micro-cracking and showed good correlation with experiments. Aoki et al. (2000) found that at cryogenic temperatures quasi-isotropic laminates exhibit a large reduction in the transverse mechanical strain ε2 that initiates micro-cracking (from 0.702% at room temperature to 0.325% at cryogenic temperatures).

Experimental data from Aoki et al. (2000) were used to determine the strain allowables. They tested a 16-ply quasi-isotropic (45/0/-45/90)2s symmetric laminate in


tension in the 0° direction at cryogenic temperatures. The nominal specimen thickness and width were 2.2 mm and 15 mm. The mechanical loads corresponding to the onset of matrix cracking (Table 4-1) were extracted from Figure 5 of Aoki et al. (2000). The strain transverse to the fiber direction, ε2, is assumed to be the strain that induces matrix cracking in the laminate; based on the load condition and the configuration of the laminate, the transverse strain ε2 in the 90° plies is the most critical strain in the laminate.

Figure 4-2. Polynomials fit to the coefficients of thermal expansion α1 and α2 as functions of temperature

Normally, strain allowables are determined by loading laminates at room temperature. For micro-cracking, however, the residual stresses are of primary importance, so all strains are calculated from the stress-free temperature, assumed to be 300°F. The calculations are made by integrating the thermal strains from the stress-free temperature to the operational temperature, as described in the next section.
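The integration of Eq. 4-1 can be sketched numerically. The linear CTE below is a made-up stand-in for the fitted curves of Figure 4-2, and the trapezoidal rule stands in for whatever quadrature was actually used:

```python
import numpy as np

def stress_free_strain(alpha, t_zero, t_service, n=2001):
    """Eq. 4-1: integral of alpha(T) dT from T_zero down to T_service,
    evaluated with the trapezoidal rule on n points."""
    t = np.linspace(t_zero, t_service, n)
    a = alpha(t)
    return float(0.5 * np.sum((a[1:] + a[:-1]) * np.diff(t)))

# Assumed linear CTE in 1/degF -- NOT the fitted IM600/133 polynomial.
alpha = lambda t: 1.5e-5 + 1.0e-8 * t

eps_f = stress_free_strain(alpha, 300.0, -423.0)
print(eps_f)   # negative: the lamina contracts on cooling
```

Because the lower integration limit is the stress-free temperature, the computed strain directly contains the residual part that the chapter emphasizes.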


Table 4-1 shows the transverse strains ε2 in the 90° plies corresponding to the loading at the onset of matrix cracking at selected temperatures. Aoki et al. (2000) found that the maximum mechanical strain before matrix cracking is reduced from 0.7% at room temperature to 0.325% at -452°F. Older results (Aoki et al. 1999) (Table 4-1) indicated that the maximum mechanical strain at cryogenic temperature may be as low as 0.082%. However, the calculation indicates that the total strain (including the residual thermal strain) may vary anywhere from 1.5 to 1.9%, depending on the temperature and the measurement. These values appear high because they include the residual strains, which are usually not counted. For the quasi-isotropic laminate, these residual strains are very high already at room temperature, at 0.86%, and are higher at lower temperatures.

Table 4-1. Transverse strains calculated for conditions corresponding to the onset of matrix cracking in the 90° plies of a quasi-isotropic (45/0/-45/90)2s laminate in Aoki et al. (2000)

                        Room temperature   LN2 temperature      LHe temperature      LHe temperaturea
                        (77°F or 25°C)     (-320°F or -196°C)   (-452°F or -269°C)   (-452°F or -269°C)
Mechanical load (MPa)   390                330                  200                  50
Total ε2                0.01564            0.01909              0.01760              0.01517
Thermal ε2              0.00864            0.01365              0.01435              0.01435
Mechanical ε2           0.00700            0.00544              0.00325              0.00082

a Older data obtained from Aoki et al. (1999)

The importance of working with strains measured from the stress-free temperature is demonstrated in Table 4-2, which shows ε2 in the angle-ply laminate (±25)4S under the same loading conditions as in Table 4-1. At room temperature, the residual (thermal) strains are only about 0.4%, compared to 0.86% for the quasi-isotropic laminate. An analysis based on strains measured from room temperature would not show the additional 0.46% strain that the (±25)4S laminate can carry compared to a quasi-isotropic laminate.
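The thermal/mechanical split reported in Table 4-1 follows the pipeline of Eqs. 4-2 through 4-8. The sketch below uses assumed diagonal stiffness matrices at a single temperature, purely to show how the quantities chain together; none of the numbers are real CLT values:

```python
import numpy as np

# All numbers below are placeholders for illustration only.
A = np.diag([8.0e5, 8.0e5, 3.0e5])         # laminate in-plane stiffness (lb/in)
Qbar = np.diag([2.0e7, 1.5e6, 8.0e5])      # transformed ply stiffness (psi)
h = 0.04                                    # laminate thickness (in)
eps_F = np.array([-0.010, -0.002, 0.0])    # stress-free strains from Eq. 4-1
N_mech = np.array([4800.0, 2400.0, 0.0])   # applied loads Nx, Ny, Nxy (lb/in)

N_thermal = h * (Qbar @ eps_F)             # Eq. 4-2 integral, uniform through h
eps0_N = np.linalg.solve(A, N_thermal)     # Eq. 4-3: non-mechanical strain
sigma_R = Qbar @ (eps0_N - eps_F)          # Eq. 4-4: residual thermal stress
eps_M = np.linalg.solve(A, N_mech)         # Eq. 4-5: mechanical strain
sigma_M = Qbar @ eps_M                     # Eq. 4-6: mechanical stress
eps_residual = eps_M + eps0_N - eps_F      # Eq. 4-7
sigma_total = sigma_R + sigma_M            # Eq. 4-8
print(eps_residual, sigma_total)
```

A useful consistency check falls out of the superposition: the total stress equals the ply stiffness acting on the residual strain of Eq. 4-7.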


Based on the data in Table 4-1, we selected the allowable strain to be 1.54% for the probabilistic design and 1.1% (a 1.4 safety factor) for the deterministic design.

Table 4-2. Transverse strains of an angle-ply laminate (±25)4S under the same loading conditions as Table 4-1

                        Room temperature   LN2 temperature      LHe temperature      LHe temperaturea
                        (77°F or 25°C)     (-320°F or -196°C)   (-452°F or -269°C)   (-452°F or -269°C)
Mechanical load (MPa)   390                330                  200                  50
Total ε2                -0.00261           0.00360              0.00527              0.00656
Thermal ε2              0.00393            0.00669              0.00699              0.00699
Mechanical ε2           -0.00654           -0.00309             -0.00172             -0.00043

a Older data obtained from Aoki et al. (1999)

Table 4-3 shows the strain allowables for the lamina; all allowables except ε2u were provided to us by NASA. The strain allowables may appear high, but this is because they are applied to strains that include the residual strains developed in cooling from the stress-free temperature of 300°F. A quasi-isotropic laminate will use up its entire transverse strain allowable of 0.011 when cooled to -452°F. Thus, this value is conservative in view of the experiments by Aoki et al. (2000), which indicated that the laminate can carry 0.325% mechanical strain at cryogenic temperature.

Table 4-3. Strain allowables for IM600/133 at -423°F

Strain        ε1u       ε1l       ε2u                  ε2l       γ12u
Allowablesa   0.0103    -0.0109   0.0110 or 0.0154b    -0.0130   0.0138

a Strains include residual strains calculated from the stress-free temperature of 300°F
b The value 0.0110 is obtained from the extreme value 0.0154 divided by a safety factor of 1.4

Deterministic Design of Angle-Ply Laminates

It is estimated that the minimum thickness needed to prevent hydrogen leakage is 0.04 inch, so it may be acceptable to permit matrix cracking as long as the undamaged part of the laminate has a minimum thickness of 0.04 inch. For the cracked part of the laminate, the


elastic modulus transverse to the fiber direction, E2, and the shear modulus, G12, are reduced by 20 percent, and the transverse strain allowable, ε2u, is increased. The rest of the laminate must not have matrix cracking and must provide at least 8 contiguous intact plies (0.04 inch) in order to prevent hydrogen leakage.

Optimization Formulation

Laminates with two ply angles, [±θ1/±θ2]s (see Figure 4-3), were optimized. The x direction here corresponds to the hoop direction of a cryogenic propellant tank and the y direction to the axial direction. The laminates are made of IM600/133 graphite-epoxy with a ply thickness of 0.005 inch and are subjected to mechanical loads of Nx = 4,800 lb./inch and Ny = 2,400 lb./inch at an operating temperature of -423°F.

Figure 4-3. Geometry and loads of the laminates

The design problem was formulated as (thicknesses in inches)

minimize   h = 4(t_1 + t_2)
such that  \epsilon_1^l \le \epsilon_1 \le \epsilon_1^u,  \epsilon_2^l \le \epsilon_2 \le \epsilon_2^u,  |\gamma_{12}| \le \gamma_{12}^u
           t_1, t_2 \ge 0.005,  h \ge 0.040    (4-9)


where h is the laminate thickness, superscripts u and l denote upper and lower limits of the associated quantities, and ε1, ε2, and γ12 are the ply strains along the fiber direction, transverse to the fiber direction, and in shear, respectively. The stack thickness of the plies with ply angle θ1, which are allowed to have matrix cracking, is t1. The stack thickness of the plies with ply angle θ2, which are not allowed to crack and must provide in total a minimum intact thickness of 0.04 inch to prevent hydrogen leakage, is t2. The four design variables are the ply angles θ1 and θ2 and the stack thicknesses t1 and t2. The individual stack thicknesses from a continuous optimizer (SQP in MATLAB) are rounded up to the nearest multiple of 0.005 inch.

Optimizations without Matrix Cracking

In order to see the effects of the mechanical and thermal loads, it is instructive to compare designs for different operational temperatures. Table 4-4 shows the optimum laminates at these temperatures; in the last row, the numbers in parentheses are the continuous thicknesses before rounding. Without thermal strains, a cross-ply laminate with a thickness of 0.04 inch can easily (with 0.1% transverse strain as the margin of safety) carry the mechanical loads. When thermal strains are taken into account, the angle between the plies must decrease in order to reduce the thermal strains. The ply angles do not vary monotonically because both the residual strains and the stiffness of the laminate increase as the temperature decreases. At cryogenic temperatures the angle decreases to 25.5°; at that angle the axial loads cannot be carried efficiently, and the thickness increases to 0.1 inch. Figure 4-4 shows that the thickness of the optimum laminates, for both temperature-dependent and constant (77°F) material properties, changes substantially with the working temperature for a strain limit


ε2u of 0.0110. Using temperature-dependent material properties avoided the very conservative design obtained with constant material properties.

Table 4-4. Optimal laminates for different operational temperatures (ε2u of 0.0110)

                    Mechanical only   Mechanical and thermal load
Temperature (°F)    77.00             77.00     -61.50    -242.00   -423.00
θ1 (degree)         90.00             34.82     38.13     33.57     25.50
θ2 (degree)         0.00              33.93     38.13     33.57     25.50
t1 (inch)           0.005             0.005     0.005     0.010     0.010
t2 (inch)           0.005             0.005     0.010     0.010     0.015
ha (inch)           0.040 (0.040)     0.040 (0.040)  0.060 (0.048)  0.080 (0.079)  0.100 (0.093)

a Numbers in parentheses indicate unrounded thickness

Figure 4-4. The change of optimal thickness (inch) with temperature for variable and constant material properties (ε2u of 0.0110)

Designs must be feasible for the entire range of temperatures, so for all designs discussed in the rest of the dissertation, strain constraints were applied at 21 temperatures uniformly distributed from 77°F to -423°F. Table 4-5 shows that the design problem has multiple optima. Figure 4-5 shows that the tensile strain limit ε2u is the active constraint at -423°F for the second optimal design of Table 4-5.


Table 4-5. Optimal laminates for temperature-dependent material properties with ε2u of 0.0110 (optimized for 21 temperatures)

θ1 (degree)   θ2 (degree)   t1 (inch)   t2 (inch)   ha (inch)       Probability of failureb
 0.00          28.16         0.005       0.020       0.100 (0.103)   0.019338 (0.014541)
27.04          27.04         0.010       0.015       0.100 (0.095)   0.000479 (0.001683)
25.16          27.31         0.005       0.020       0.100 (0.094)   0.000592 (0.001879)

a Numbers in parentheses indicate unrounded thickness
b The probabilities were calculated by the methodology described in chapter 5

Figure 4-5. Strains in the optimal laminate for temperature-dependent material properties with ε2u of 0.0110 (second design in Table 4-5)

These optimal laminates have similar thicknesses but different ply angles. The failure probabilities of the continuous designs are shown in parentheses. The high failure probabilities of the first design (continuous and discrete) clearly indicate a smaller safety margin than for the other two designs. The second and third designs show that a slight rounding can change the failure probability significantly. Designs with two similar ply angles have much lower failure probabilities than designs with two substantially different ply angles. The failure probabilities of these laminates are too high (compared with 10^-4 to 10^-6), and this provides an incentive to conduct reliability-based design.
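The rounding step whose effect the table exposes (continuous stack thicknesses rounded up to whole 0.005-inch ply stacks) can be sketched as follows; the continuous values are invented:

```python
import math

PLY = 0.005   # ply (stack increment) thickness in inches

def round_up_stack(t: float) -> float:
    """Round a stack thickness UP to the nearest multiple of PLY.
    The inner round() guards against floating-point noise such as
    t / PLY evaluating to 3.0000000000000004."""
    return math.ceil(round(t / PLY, 9)) * PLY

t1_cont, t2_cont = 0.0121, 0.0117          # assumed continuous optimum
t1, t2 = round_up_stack(t1_cont), round_up_stack(t2_cont)
h = 4 * (t1 + t2)                          # laminate thickness as in Eq. 4-9
print(t1, t2, h)
```

Because the probability of failure varies steeply near the optimum, even this one-increment rounding can shift the failure probability by an order of magnitude, as the second and third designs illustrate.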


Optimizations Allowing Partial Matrix Cracking

Plies with angle θ1 are the plies allowed to develop matrix cracking in the optimizations allowing partial matrix cracking. The ε2u of the θ1 plies was increased to 0.0154, while the rest of the laminate still used an ε2u of 0.011. The lower limit of t2 was increased to 0.010 inch (a total θ2 thickness of 0.04 inch) to prevent hydrogen leakage. Table 4-6 shows the optimal design allowing partial matrix cracking. Its thickness is the same as that of the design without partial matrix cracking (Table 4-5), and the ply angle of the cracked plies increased because of the increased strain limit ε2u. However, the failure probability is higher than for the design that does not allow matrix cracking, which indicates that this option does not help. The active constraint is still the tensile strain limit ε2u of 0.011 at cryogenic temperatures for the uncracked plies.

Table 4-6. Optimal laminate for temperature-dependent material properties allowing partial matrix cracking: ε2u of 0.011 for uncracked plies and 0.0154 for cracked plies

θ1 (degree)   θ2 (degree)   t1 (inch)   t2 (inch)   ha (inch)       Probability of failure
36.07          25.24         0.015       0.010       0.100 (0.097)   0.003716 (0.004582)

a Numbers in parentheses indicate unrounded thickness.

Optimizations with Reduced Axial Load

With small ply angles, the critical component of the load is the axial load Ny, induced by the pressure on the caps of the propellant tank. A smaller axial load may be obtained by using an auxiliary structure to carry part of this load, such as axial stiffeners or a cable connecting the caps. If the auxiliary structure is not directly connected to the wall of the hydrogen tank (e.g., it is attached to the caps of the tank), it is not affected by the mismatch of the thermal expansion coefficients, i.e., by the residual thermal strains. Here we explored the possibility of reducing the axial load by half by carrying 1200 lb./inch of the


axial load by a cable made of unidirectional material. The required cross-sectional area of the composite cable is 5.05 inch², which is equivalent to a laminate thickness of 0.005 inch for a tank with a 160-inch radius. Table 4-7 lists designs optimized with half of the axial load. The results indicate that reducing the axial load is an effective way to reduce the laminate thickness. The higher probabilities of failure reflect the rounding down of the thickness.

Table 4-7. Optimal laminates for a reduced axial load of 1,200 lb./inch obtained by using load-shunting cables (equivalent laminate thickness of 0.005 inch)

θ1 (degree)   θ2 (degree)   t1 (inch)   t2 (inch)   ha (inch)       Probability of failure
 0.00          29.48         0.005       0.005       0.040 (0.043)   0.010311 (0.001156)
30.62          26.20         0.005       0.005       0.040 (0.043)   0.585732 (0.473536)
27.98          11.31         0.005       0.005       0.040 (0.042)   0.008501 (0.008363)

a Numbers in parentheses indicate unrounded thickness.

It is seen that the traditional way of deterministically designing the laminate with safety factors did not work well for this problem because of the various uncertainties and the matrix cracking failure mode. Uncertainties in the material properties are introduced by the fabrication process, the temperature dependence of the material properties, the cure reference temperature, and the acceptable crack density for design. These uncertainties indicate a need to use reliability-based optimization to design laminates for use at cryogenic temperatures.


CHAPTER 5
RELIABILITY-BASED DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC ENVIRONMENTS

This chapter presents reliability-based designs of composite laminates for hydrogen tanks in cryogenic environments, a comparison between deterministic and reliability-based designs, the identification of the uncertainty parameters that have the largest influence on the optimum design, and a quantification of the weight penalty associated with the level of uncertainty in those parameters. The results indicate that the most effective measure for reducing the thickness is quality control (see also Qu et al., 2001). The reliability-based optimization is carried out using response surface approximations combined with the Monte Carlo simulation described in chapter 3.

Reliability-Based Design Optimization

Problem Formulation

The reliability-based optimization is formulated as

minimize   h = 4(t_1 + t_2)
such that  P \le P_u,  t_1 \ge 0.005,  t_2 \ge 0.005    (5-1)

where h is the laminate thickness, t1 is the stack thickness of the lamina with ply angle θ1, and t2 is the stack thickness of the lamina with ply angle θ2; both stack thicknesses have a lower limit of 0.005 inch. The limits on t1 and t2 also ensure that the laminate has a minimum thickness of 0.04 inch to prevent hydrogen leakage. The reliability


constraint is expressed as a limit Pu (here Pu = 10^-4) on the probability of failure P. The probability of failure is based on first-ply failure according to the maximum strain failure criterion. The four design variables are the ply angles θ1 and θ2 and the stack thicknesses t1 and t2. The reliability-based optimization seeks the lightest structure satisfying the reliability constraint.

The twelve random variables are the four elastic properties (E1, E2, G12, and ν12), the two coefficients of thermal expansion (α1 and α2), the five ply strain allowables (ε1u, ε1l, ε2u, ε2l, and γ12u), and the stress-free temperature of the material (Tzero). The mean values of the strain limits are shown in Table 5-1, except for ε2u, which is 0.0154. Table 5-2 shows the coefficients of variation (CV) of the random variables, which are assumed to be normally distributed and uncorrelated. These CVs are based on limited test data provided to us by the manufacturers and are intended only for illustration. The mean value of the stress-free temperature is 300°F. The mean values of the other random variables change as functions of temperature and are given in chapter 4.

Table 5-1. Strain allowablesa for IM600/133 at -423°F

Strain        ε1u       ε1l       ε2u                  ε2l       γ12u
Allowables    0.0103    -0.0109   0.0110 or 0.0154b    -0.0130   0.0138

a Strains include residual strains calculated from the stress-free temperature of 300°F
b The value 0.0110 is obtained from the extreme value 0.0154 divided by a safety factor of 1.4

Table 5-2. Coefficients of variation (CV) of the random variables

Random variables: E1, E2, G12, ν12, α1, α2, Tzero, ε1u, ε1l, ε2u, ε2l, γ12u
CV: 0.035, 0.035, 0.030, 0.06, 0.09
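For intuition on how the reliability constraint is evaluated, here is a toy Monte Carlo estimate of a first-ply-style failure probability. The strain statistics are invented, and only one strain component is checked rather than the full maximum-strain criterion over all five allowables:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000   # same sample size as used for the DRS probabilities

# Assumed normal, uncorrelated strain and allowable (NOT Table 5-1/5-2 data).
eps2 = rng.normal(0.0140, 0.06 * 0.0140, n)        # transverse strain, CV = 0.06
eps2_allow = rng.normal(0.0154, 0.09 * 0.0154, n)  # allowable, CV = 0.09

# Failure: the realized strain exceeds the realized allowable.
p_fail = float(np.mean(eps2 > eps2_allow))
print(p_fail)
```

With a million samples the sampling noise on a probability of this size is well below one percent of its value, which is the regime in which the DRS fitting error dominates.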


Response Surface Approximation for Reliability-Based Optimization

For the present work, response surface approximations of two types were created. The first type is the analysis response surface (ARS), fitted to the strains in the laminate in terms of both the design variables and the random variables. Using the ARS, the probability of failure at every design point can be calculated inexpensively by Monte Carlo simulation based on the fitted polynomials. The second type is the design response surface (DRS), fitted to the probability of failure as a function of the design variables. The DRS is created in order to filter out the noise induced by the Monte Carlo simulation and is used to evaluate the reliability constraint in the design optimization. The details of the ARS/DRS approach are given in chapter 3.

Analysis Response Surfaces

Besides the design and random variables described in the problem formulation, the service temperature was treated as a variable ranging from 77°F to -423°F in order to avoid constructing analysis response surfaces at each selected temperature, bringing the total number of variables to seventeen. However, the strains in the laminate do not depend on the five strain allowables, so the ARS were fitted to the strains in terms of twelve variables: the four design variables, the four elastic properties, the two coefficients of thermal expansion, the stress-free temperature, and the service temperature. The range of the design variables for the ARS (Table 5-3) was chosen based on the values of the optimal deterministic design; the ranges of the random variables are handled automatically, as explained below. Using the ARS and the five strain allowables, probabilities of failure were calculated by Monte Carlo simulation, with the strain constraints evaluated at 21 uniformly distributed service temperatures between 77°F and -423°F.
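The "failure at any of 21 temperatures" check can be sketched as follows. The surrogate strain model and its random coefficient are invented, but the structure, an n × 21 strain matrix with failure declared if any column violates the limit, mirrors the procedure described above:

```python
import numpy as np

rng = np.random.default_rng(5)
temps = np.linspace(77.0, -423.0, 21)          # 21 service temperatures (degF)

def strain_hat(T, a):
    # Assumed surrogate: strain grows linearly as the laminate cools;
    # a is a stand-in for the effect of the random material variables.
    return 0.009 + a * (77.0 - T) / 500.0

n = 100_000
a = rng.normal(0.004, 0.0008, n)               # assumed random-variable effect
eps = strain_hat(temps[None, :], a[:, None])   # n x 21 matrix of strains
allowable = 0.0154
p_fail = float(np.mean((eps > allowable).any(axis=1)))
print(p_fail)
```

Vectorizing over temperatures this way is what makes checking all 21 service conditions essentially free once the surrogate exists.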


Table 5-3. Range of design variables for analysis response surfaces
Design variables   θ1           θ2           t1                    t2
Range              20° to 30°   20° to 30°   0.0125 to 0.03 inch   0.0125 to 0.03 inch

The accuracy of the ARS is evaluated by statistical measures provided by the JMP software (Anon. 2000), which include the adjusted coefficient of multiple determination (R2adj) and the root mean square error (RMSE) predictor. To improve the accuracy of the response surface approximation, polynomial coefficients that were not well characterized were eliminated from the response surface model by mixed stepwise regression (Myers and Montgomery 1995). The statistical design of experiments for the ARS was Latin hypercube sampling, also called Latin hypercube design (LHS, e.g., Wyss and Jorgensen 1998), in which the design variables were treated as uniformly distributed variables in order to generate the design points (presented in chapter three). Since the laminate has two ply angles and each ply has three strains, six ARS were needed in the optimization. A quadratic polynomial of twelve variables has 91 coefficients. The number of sampling points generated by LHS was selected to be twice the number of coefficients. Table 5-4 shows that the quadratic response surfaces constructed from LHS with 182 points offer good accuracy.

Table 5-4. Quadratic analysis response surfaces of strains (millistrain)
                   Analysis response surfaces based on 182 LHS points
Error statistics   ε1 in ply 1   ε2 in ply 1   γ12 in ply 1   ε1 in ply 2   ε2 in ply 2   γ12 in ply 2
R2adj              0.9977        0.9956        0.9991         0.9978        0.9961        0.9990
RMSE predictor     0.017         0.06          0.055          0.017         0.055         0.06
Mean of response   1.114         8.322         -3.13          1.108         8.328         -3.14
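The stratified sampling scheme behind the LHS design of experiments can be sketched in a few lines. This is a generic LHS generator (not the Wyss and Jorgensen implementation), shown with the design-variable ranges of Table 5-3 and the 182-point sample size used above; each variable gets exactly one sample per equal-width stratum.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Minimal Latin hypercube design: one point per equal-probability
    stratum in each dimension, with strata randomly paired across dimensions."""
    rng = np.random.default_rng(rng)
    n_vars = len(bounds)
    # One uniform point inside each of the n_samples strata of [0, 1)
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    # Shuffle the strata independently in each dimension
    for j in range(n_vars):
        rng.shuffle(u[:, j])
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + u * (highs - lows)

# Ranges of the four design variables from Table 5-3 (angles in degrees, thicknesses in inches)
bounds = [(20, 30), (20, 30), (0.0125, 0.03), (0.0125, 0.03)]
points = latin_hypercube(182, bounds, rng=0)
```

Mapping any column back to [0, 1) and multiplying by 182 lands exactly one point in each integer bin, which is the defining property of the design.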


Design Response Surfaces

The six quadratic ARS were used to calculate the probabilities of failure by Monte Carlo simulation. Because the fitting errors in the design response surfaces (DRS) are generally larger than the random errors from finite sampling in the probability calculation, the Monte Carlo simulation needs to be performed only until relatively small errors, as estimated from confidence intervals, are achieved. A sample size of 1,000,000 was therefore employed. The design points of the DRS combine a face-centered central composite design (FCCCD) and LHS. Table 5-5 compares the three DRS.

Table 5-5. Design response surfaces for probability of failure (probability calculated by Monte Carlo simulation with a sample size of 1,000,000)
Error statistics   FCCCD, 25 points   LHS, 252 points   LHS 252 points + FCCCD 25 points
Polynomial order   quadratic          5th order         5th order
R2adj              0.6855             0.9926            0.9982
RMSE predictor     0.00053            0.000003          0.000012
Mean of response   0.00032            0.000016          0.000044

The accuracy of the quadratic response surface approximation is unacceptable. The accuracy of the fifth-order response surface (with 126 unknown coefficients before stepwise regression) was improved by using a reciprocal transformation on the thicknesses t1 and t2, since the probability of failure, like most structural responses, is inversely correlated with the stack thickness. We found that LHS might fail to sample points near some corners of the design space, leading to poor accuracy around those corners. We therefore combined LHS with an FCCCD, which includes all the vertices of the design space. The accuracy of the DRS based on LHS combined with FCCCD is slightly worse than that of the DRS based on LHS alone, because the probabilities at the corners of the design space are usually extremely low or high, presenting a greater fitting difficulty than without the FCCCD. But the


extrapolation problem was solved, and the side constraints are set to the range of the ARS shown in Table 5-3. The error of 0.000012 is much lower than the allowable failure probability of 0.0001. Table 5-6 compares the reliability-based optimum with the three deterministic optima from chapter 4 and their failure probabilities. The optimal thickness increased from 0.100 to 0.120 inch, while the failure probability decreased by about one order of magnitude.

Table 5-6. Comparison of reliability-based optimum with deterministic optima
                            Optimal design [θ1, θ2, t1, t2]   Laminate           Failure probability from MCS   Allowable
                            (degree and inch)                 thickness (inch)   of ARS, 1,000,000 samples      probability of failure
Reliability-based optimum   [24.89, 25.16, 0.015, 0.015]      0.120 (0.120)      0.000055                       0.0001
Deterministic optima        [0.00, 28.16, 0.005, 0.020]       0.100 (0.103)      0.019338(a)
                            [27.04, 27.04, 0.010, 0.015]      0.100 (0.095)      0.000479
                            [25.16, 27.31, 0.005, 0.020]      0.100 (0.094)      0.000592
(a) This deterministic optimum is outside the range of the analysis response surfaces; its probability of failure was calculated by Monte Carlo simulation based on another set of analysis response surfaces.

Refining the Reliability-Based Design

The reliability-based designs in Table 5-6 show that ply angles close to 25° offer designs with low failure probability. Furthermore, good designs require only a single ply angle, allowing simplification of the laminate configuration from [±θ1/±θ2]S to [±θ]S. Table 5-7 shows the failure probabilities of some chosen designs calculated with Monte Carlo simulation using the ARS. The laminates with ply angles of 24°, 25°, and 26° offer lower probabilities of failure than the rest. These three laminates will be further studied.
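The reciprocal thickness transformation used for the fifth-order DRS can be illustrated on a one-variable analogue. The response below is synthetic (not the laminate failure probability), chosen only to mimic a quantity that decays with thickness over the Table 5-3 range; the point is that a low-order polynomial in 1/t fits such a response far better than the same polynomial in t.

```python
import numpy as np

# Synthetic response that, like the failure probability, falls off with thickness
t = np.linspace(0.0125, 0.03, 30)
y = 1.0e-3 / t**2

def quadratic_fit_rmse(x, y):
    """Least-squares quadratic fit in x; returns the root-mean-square residual."""
    A = np.vander(x, 3)  # columns: x^2, x, 1
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((A @ coef - y) ** 2))

rmse_plain = quadratic_fit_rmse(t, y)        # polynomial in t
rmse_recip = quadratic_fit_rmse(1.0 / t, y)  # polynomial in 1/t
```

For this synthetic response the reciprocal-variable fit is essentially exact, while the fit in t leaves a substantial residual.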


Table 5-7. Refined reliability-based designs [±θ]S (Monte Carlo simulation with a sample size of 10,000,000)
θ (degree)   h (inch)   Probability of failure
21.00        0.120      0.0001832
22.00        0.120      0.0001083
23.00        0.120      0.0000718
24.00        0.120      0.0000605
25.00        0.120      0.0000565
26.00        0.120      0.0000607
27.00        0.120      0.0000792

Quantifying Errors in Reliability Analysis

The reliability analysis has errors due to MCS with a limited sample size and due to the approximation of the CLT analysis by the analysis response surfaces. To evaluate the magnitude of these errors, the probability of failure of the rounded design was also evaluated by MCS with the exact analysis (classical laminate theory, CLT); only one million analyses were performed because of the computational cost. Table 5-8 compares the results of MCS based on the ARS with those based on CLT. The difference is about 1.25x10^-5.

Table 5-8. Comparison of probability of failure from MCS based on ARS and on CLT
Optimal design [θ1, θ2, t1, t2]   Laminate           Failure probability from MCS   Failure probability from MCS
(degree and inch)                 thickness (inch)   of ARS, 1x10^7 samples         of CLT, 1x10^6 samples
[25, 25, 0.015, 0.015]            0.120 (0.120)      0.0000565                      0.000069

By treating each simulation as a Bernoulli trial, so that the number of failures in N trials follows a binomial distribution, the coefficient of variation (COV) of the probability (Pof) obtained by MCS can be estimated as

COV(Pof) = sqrt( Pof (1 - Pof) / N ) / Pof    (5-2)


where N is the sample size of the MCS. The accuracy of MCS can also be expressed as the percentage error corresponding to the 95% confidence interval,

error% = 196% x sqrt( (1 - PofT) / (N PofT) )    (5-3)

where PofT is the true probability of failure. Table 5-9 shows the accuracy and error bounds for MCS. Together with Table 5-8, the error calculation indicates that the probability of failure of the rounded design is still below the target probability of failure of 0.0001. The errors can be reduced by more accurate approximations and advanced Monte Carlo simulation techniques. Another reliability-based design cycle in a reduced design region could also be performed to obtain a more accurate result.

Table 5-9. Accuracy of MCS
                        Coefficient of variation (COV)   Percentage error (absolute error) for 95% CI
MCS of 1x10^7 samples   4.2%                             8.2% (4.66x10^-6)
MCS of 1x10^6 samples   12.05%                           23.6% (1.63x10^-5)

Effects of Quality Control on Laminate Design

Comparing the deterministic designs to the reliability-based design, there is an increase of 20% in the thickness. In addition, the design failure probability of 10^-4 is quite high. In order to improve the design, the possibility of limiting the variability in material properties through quality control (QC) is considered. Here, quality control means that materials will be tested by the manufacturer and/or fabricator, and that extremely poor batches will not be accepted. Normal distributions admit the possibility (though very small) of unbounded variation. In practice, quality control truncates the low end of the distribution; that is, specimens with extremely poor properties are rejected. It is also


assumed that specimens with exceptional properties are scarcer than those with poor properties. The normal distribution is therefore truncated on the high side at +3σ (excluding 14 out of 10,000 specimens) and on the low side at different values corresponding to different levels of QC. The tradeoff between QC, failure probability, and laminate thickness (weight) will be explored.

Effects of Quality Control on Probability of Failure

Since the primary failure mode of the laminate is micro-cracking, the tensile strain limit ε2u is the first quantity to be improved by quality control. The normal distribution of ε2u is truncated at +3σ to exclude unrealistically strong specimens, and on the low side QC at -4σ, -3σ, and -2σ was checked, corresponding to rejecting 3 specimens out of 100,000, 14 specimens out of 10,000, and 23 specimens out of 1,000, respectively. Table 5-10 shows the change in the failure probability for selected reliability-based designs. Quality control on ε2u is a very effective way to reduce the probability of failure: a relatively low-cost QC of ε2u at -3σ reduces the failure probability by more than two orders of magnitude.

Table 5-10. Effects of quality control of ε2u on probability of failure for 0.12 inch-thick (±θ)S laminates
             Probability of failure from MCS, 10,000,000 samples
θ (degree)   Untruncated normal   Truncated at -4σ (3/100,000)   Truncated at -3σ (14/10,000)   Truncated at -2σ (23/1,000)
24.0         60.5e-6              30.5e-6                        0.0                            0.0
25.0         56.5e-6              29.9e-6                        0.1e-6                         0.0
26.0         60.7e-6              31.0e-6                        0.5e-6                         0.0

Table 5-11 shows that truncating the other strain limits even at -2σ does not change the laminate failure probability substantially. This confirms that the primary failure mode of the laminate is micro-cracking. Therefore, ε2u is the critical parameter to study


further. Table 5-12 shows that this conclusion applies also to the elastic moduli, the coefficients of thermal expansion, and the stress-free temperature. Comparing with Table 5-10, we see that truncating any of the other parameters at -2σ does not change the failure probability as significantly as truncating ε2u even at -4σ. Note that some probabilities from truncated distributions are slightly larger than those from untruncated distributions, which is due to sampling errors.

Table 5-11. Effects of quality control of ε1u, ε1l, ε2l, and γ12u on probability of failure of 0.12 inch-thick (±θ)S laminates
             Probability of failure from MCS, 10,000,000 samples
θ (degree)   Untruncated normal   Truncated ε1u at -2σ   Truncated ε1l at -2σ   Truncated ε2l at -2σ   Truncated γ12u at -2σ
24.0         60.5e-6              58.6e-6                61.5e-6                54.4e-6                55.4e-6
25.0         56.5e-6              53.0e-6                52.3e-6                54.0e-6                53.2e-6
26.0         60.7e-6              63.4e-6                62.0e-6                60.1e-6                61.0e-6

Table 5-12. Effects of quality control of E1, E2, G12, ν12, Tzero, α1, and α2 on probability of failure of 0.12 inch-thick (±θ)S laminates
             Probability of failure from MCS, 10,000,000 samples, truncating at -2σ
θ (degree)   E1        E2        G12       ν12       Tzero     α1        α2
24.0         62.2e-6   52.1e-6   57.8e-6   51.8e-6   54.6e-6   52.7e-6   58.2e-6
25.0         52.5e-6   48.1e-6   55.1e-6   49.7e-6   55.1e-6   56.8e-6   54.4e-6
26.0         54.5e-6   59.1e-6   60.4e-6   59.4e-6   59.8e-6   63.0e-6   60.4e-6

Effects of Quality Control on Optimal Laminate Thickness

Quality control (QC) can also be used to reduce the laminate thickness instead of the probability of failure. Table 5-13 shows that QC of ε2u at -3σ will allow 0.1 inch-thick laminates with a failure probability below the required 0.0001.


Table 5-13. Effects of quality control of ε2u on probability of failure for 0.1 inch-thick (±θ)S laminates
             Probability of failure from MCS, 1,000,000 samples
θ (degree)   Untruncated normal   Truncated at -4σ (3/100,000)   Truncated at -3σ (14/10,000)   Truncated at -2.5σ (6/1,000)
24.0         0.002224             0.002163                       0.001054                       0.000071
25.0         0.001030             0.000992                       0.000229                       0.000007
26.0         0.000615             0.000629                       0.000092                       0.000003

Table 5-14 shows that QC of ε2u at -1.6σ, which corresponds to rejecting 55 specimens out of 1,000, will reduce the thickness to 0.08 inch with a failure probability below 0.0001. Therefore, the laminate thickness can be reduced to 0.08 inch if QC is able to find and reject 55 specimens out of 1,000.

Table 5-14. Effects of quality control of ε2u on probability of failure for 0.08 inch-thick (±θ)S laminates
             Probability of failure from MCS, 1,000,000 samples
θ (degree)   Untruncated normal   Truncated at -3σ (14/10,000)   Truncated at -2σ (23/1,000)   Truncated at -1.6σ (55/1,000)
24.0         0.061204             0.060264                       0.039804                      0.015017
25.0         0.028289             0.027103                       0.008820                      0.001019
26.0         0.013595             0.012154                       0.001243                      0.000071

Effects of Other Improvements in Material Properties

Instead of quality control, it is possible to improve the design by using a better material. Table 5-15 shows the effects of changing the mean value of ε2u by 10 percent of the nominal value of 0.0154. Comparison with Table 5-10 shows that a 10% improvement has a large influence on the failure probability, but is not as powerful as quality control at the -3σ level.
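The truncation used in these quality-control studies is easy to emulate by rejection sampling; the sketch below (illustrative, not the code used in the study) truncates the ε2u distribution at -2σ below and +3σ above, redrawing any sample that falls outside the band.

```python
import numpy as np

def truncated_normal(mean, std, lo_sigma, hi_sigma, size, rng=None):
    """Sample a normal distribution truncated to [mean + lo_sigma*std,
    mean + hi_sigma*std] by rejection: redraw any sample outside the band."""
    rng = np.random.default_rng(rng)
    out = rng.normal(mean, std, size)
    lo, hi = mean + lo_sigma * std, mean + hi_sigma * std
    bad = (out < lo) | (out > hi)
    while bad.any():
        out[bad] = rng.normal(mean, std, bad.sum())
        bad = (out < lo) | (out > hi)
    return out

# e2u: mean 0.0154 and CV 0.09 (Tables 5-2 and 5-15); QC rejects below -2σ, above +3σ
mean, cv = 0.0154, 0.09
samples = truncated_normal(mean, cv * mean, -2.0, 3.0, 1_000_000, rng=0)
```

Because more mass is cut from the low tail (about 2.3%) than from the high tail (about 0.13%), the truncated sample mean shifts slightly above the nominal mean, which is part of why QC reduces the failure probability so strongly.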


Table 5-15. Sensitivity of failure probability to the mean value of ε2u (CV = 0.09) for 0.12 inch-thick (±θ)S laminates
             Probability of failure from MCS, 10,000,000 samples
θ (degree)   E(ε2u) = 0.0154   E(ε2u) = 0.01694   E(ε2u) = 0.01386
24.0         60.5e-6           2.5e-6             1082.3e-6
25.0         56.5e-6           3.4e-6             996.7e-6
26.0         60.7e-6           3.4e-6             1115.7e-6

The failure probability also depends on the coefficient of variation (CV) of ε2u. The CV could be reduced if the manufacturing were more consistent. Table 5-16 shows that the failure probabilities are not as sensitive to changes in the coefficient of variation as to changes in the mean value of ε2u, but a 10 percent reduction in the coefficient of variation can still reduce the failure probability by about a factor of five.

Table 5-16. Sensitivity of failure probability to the CV of ε2u ( E(ε2u) = 0.0154 ) for 0.12 inch-thick (±θ)S laminates
             Probability of failure from MCS, 10,000,000 samples
θ (degree)   CV = 0.09   CV = 0.099   CV = 0.081
24.0         60.5e-6     209.5e-6     9.8e-6
25.0         56.5e-6     208.2e-6     10.8e-6
26.0         60.7e-6     224.2e-6     11.1e-6

Figure 5-1 combines several of the effects discussed earlier in a tradeoff plot of probability of failure, cost (truncating and changing the distribution of ε2u), and weight (thickness) for a [±25]S laminate. For probabilities of failure less than 1e-3, quality control at the -2σ level is more effective for reducing the probability of failure than increasing the mean value by 10 percent or decreasing the coefficient of variation by 10 percent. The reason is that small failure probabilities are heavily affected by the tails of the


distributions. For large failure probabilities, increasing the mean value of ε2u is more effective. Increasing the mean value of ε2u by 10 percent or truncating ε2u at -2σ can reduce the laminate thickness to 0.10 inch for a safety level of 1e-4. Combining all three measures together, the laminate thickness can be reduced to 0.08 inch with a safety level of 1e-7.

Table 5-17 shows the changes in the maximum ε2 calculated by the laminate analyses. Ten percent changes in the mean values of E2, Tzero, and α2 (same CVs) lead to about a 5% change in the maximum ε2, which indicates that further study should focus on these three quantities. Table 5-18 shows that the probabilities of failure are reduced by about a factor of five by a 10 percent change in the mean values of E2, Tzero, and α2 (same CVs). This reduction shows the potential of further improvements via improvements in all three material properties.

Figure 5-1. Tradeoff plot of probability of failure, cost, and weight (laminate thickness) for [±25]S. (Axes: thickness in inches versus failure probability; curves: nominal, quality control to -2σ, 10% increase in allowable, 10% reduction in variability, and all combined.)
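All of the probabilities traded off here come from finite Monte Carlo samples, so they carry the sampling errors quantified by Equations (5-2) and (5-3). The sketch below reproduces the entries of Table 5-9 from the two probability estimates of Table 5-8.

```python
import math

def mcs_cov(pof, n):
    """Coefficient of variation of an MCS probability estimate, Eq. (5-2)."""
    return math.sqrt(pof * (1.0 - pof) / n) / pof

def mcs_err95(pof, n):
    """Relative half-width of the 95% confidence interval, Eq. (5-3)."""
    return 1.96 * math.sqrt((1.0 - pof) / (n * pof))

# ARS-based estimate (Table 5-8): Pof = 5.65e-5 from 1e7 samples
cov_ars = mcs_cov(5.65e-5, 1e7)     # ~4.2%
err_ars = mcs_err95(5.65e-5, 1e7)   # ~8.2%, i.e. ~4.66e-6 absolute
# CLT-based estimate (Table 5-8): Pof = 6.9e-5 from 1e6 samples
cov_clt = mcs_cov(6.9e-5, 1e6)      # ~12.05%
err_clt = mcs_err95(6.9e-5, 1e6)    # ~23.6%, i.e. ~1.63e-5 absolute
```

The computed values match Table 5-9, confirming that the N·Pof product (the expected number of observed failures) is what controls the accuracy of an MCS probability estimate.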


Table 5-17. Maximum ε2 (millistrain) induced by changes in the material properties E1, E2, G12, ν12, Tzero, α1, and α2 for the 0.12 inch-thick [±25]S laminate
              Maximum ε2 from deterministic analyses over 21 temperatures; nominal maximum ε2 = 9.859
              E1      E2              G12     ν12     Tzero           α1      α2
0.9xNominal   9.901   10.469          9.763   9.909   9.320 (5.47%)   9.857   9.399 (4.67%)
1.1xNominal   9.824   9.313 (5.54%)   9.960   9.981   10.584          9.861   10.333

Table 5-18. Probability of failure for the 0.12 inch-thick [±25]S laminate with improved average material properties (Monte Carlo simulation with a sample size of 10,000,000)
                         Nominal     1.1xE(E2)   0.9xE(Tzero)   0.9xE(α2)   All three measures
Probability of failure   0.0000605   0.0000117   0.0000116      0.0000110   0.0000003

Summary

The design of hydrogen tanks for cryogenic environments poses a challenge because of the large thermal strains, which can cause matrix cracking that may lead to hydrogen leakage. The laminate design must use ply angles that are not too far apart in order to reduce the thermal residual strains, compromising the ability of the laminate to carry loads in two directions. In the application studied here, these small ply angles can cause the laminate thickness to more than double compared with what is needed to carry the mechanical loads alone. Satisfying the reliability constraint increased the thickness further, and improving the probability of failure required additional increases in thickness. The most influential uncertainty was the variability in the tensile strain allowable in the direction transverse to the fibers, ε2u. Limiting this variability can reduce the required thickness. Of the different options studied in this chapter, quality control on the transverse tensile allowable ε2u proved to be the most effective. Quality control at the -1.6σ level of ε2u, corresponding to rejection of about 5.5% of the specimens, can reduce the required


thickness by a third. Reductions in the coefficient of variation of ε2u, or an increase in its mean value, also reduce the failure probability substantially. Increasing the transverse modulus E2, decreasing the coefficient of thermal expansion α2, and reducing the stress-free temperature Tzero can also help considerably.


CHAPTER 6
PROBABILISTIC SUFFICIENCY FACTOR APPROACH FOR RELIABILITY-BASED DESIGN OPTIMIZATION

A probabilistic sufficiency factor approach is proposed that combines the safety factor and the probability of failure for use in reliability-based design optimization. The probabilistic sufficiency factor represents a factor of safety relative to a target probability of failure. It provides a measure of safety that designers can use more readily than the probability of failure or the safety index to estimate the weight increase required to reach a target safety level. The probabilistic sufficiency factor can be calculated from the results of Monte Carlo simulation with little extra computation. This chapter presents the use of the probabilistic sufficiency factor with a design response surface approximation, which fits it as a function of the design variables. It is shown that the design response surface approximation for the probabilistic sufficiency factor is more accurate than that for the probability of failure or for the safety index. The probabilistic sufficiency factor does not suffer, as the probability of failure and the safety index do, from accuracy problems in regions of low probability of failure when calculated by Monte Carlo simulation. The use of the probabilistic sufficiency factor also accelerates the convergence of reliability-based design optimization.

Introduction

Recently, there has been interest in using alternative measures of safety in reliability-based design optimization. These measures are based on the margins of safety or safety factors that are commonly used as measures of safety in deterministic design.


The safety factor is generally expressed as the quotient of the allowable over the response; for example, the commonly used central safety factor is defined as the ratio of the mean value of the allowable to the mean value of the response. The selection of a safety factor for a given problem involves both objective knowledge, such as data on the scatter of material properties, and subjective knowledge, such as expert opinion. Given a safety factor, the reliability of the design is generally unknown, which may lead to unsafe or inefficient designs. The use of safety factors in reliability-based design optimization therefore seems counterproductive. Freudenthal (1962), however, showed that reliability can be expressed in terms of the probability distribution function of the safety factor. Elishakoff (2001) surveyed the relationship between safety factor and reliability and showed that in some cases the safety factor can be expressed explicitly in terms of reliability.

The standard safety factor is defined with respect to the response obtained with the mean values of the random variables. Thus a safety factor of 1.5 implies that, with the mean values of the random variables, we have a 50% margin between the response (e.g., stress) and the capacity (e.g., failure stress). However, the value of the safety factor does not tell us what the reliability is. Therefore, Birger (1970), as reported by Elishakoff (2001), introduced a factor, which we call here the probabilistic sufficiency factor, that is more closely related to the target reliability. A probabilistic sufficiency factor of 1.0 implies that the reliability is equal to the target reliability; a probabilistic sufficiency factor larger than one means that the reliability exceeds the target reliability; and a probabilistic sufficiency factor less than one means that the system is not as safe as we wish. Specifically, a probabilistic sufficiency


factor value of 0.9 means that we need to multiply the response by 0.9, or increase the capacity by a factor of 1/0.9, to achieve the target reliability.

Tu et al. (2000) used a probabilistic performance measure, which is closely related to Birger's safety factor, for RBDO with most probable point (MPP) methods (e.g., the first-order reliability method). They showed that the search for the optimum design converged faster by driving the safety margin to zero than by driving the probability of failure to its target value. Wu et al. (1998, 2001) used probabilistic sufficiency factors to replace the RBDO with a series of deterministic optimizations by converting the reliability constraints to equivalent deterministic constraints.

The probabilistic sufficiency factor gives the designer a more quantitative measure of the resources needed to satisfy the safety requirements. For example, if the requirement is that the probability of failure be below 10^-6 and the designer finds that the actual probability is 10^-4, he or she cannot tell how much change is required to satisfy the requirement. If instead the designer finds that the target probability of 10^-6 is achieved with a probabilistic sufficiency factor of 0.9, it is easier to estimate the required resources. For a stress-dominated linear problem, raising the probabilistic sufficiency factor from 0.9 to 1 typically requires a weight increase of about 10 percent of the weight of the overstressed components.

Reliability analysis of systems with multiple failure modes often employs Monte Carlo simulation, which generates numerical noise due to the limited sample size. Noise in the probability of failure or safety index may cause reliability-based design optimization (RBDO) to converge to a spurious optimum. The accuracy of MCS with a given number of samples deteriorates with decreasing probability of failure. For RBDO problems with


small target probability of failure, the accuracy of MCS around the optimum is therefore not as good as in regions with high probability of failure. Furthermore, the probability of failure in some regions may be so low that it is calculated to be zero by MCS. This flat zero probability of failure provides no gradient information to guide the optimization procedure. The probabilistic sufficiency factor, by contrast, is readily available from the results of MCS at little extra computational cost.

The noise problems of MCS motivate the use of response surface approximations (RSA, e.g., Khuri and Cornell 1996). Response surface approximations typically employ low-order polynomials to approximate the probability of failure or the safety index in terms of the design variables, in order to filter out noise and facilitate design optimization. These approximations are called design response surface approximations (DRS) and are widely used in RBDO (e.g., Sues et al. 1996). The probability of failure often changes by several orders of magnitude over narrow bands in the design space, especially when the random variables have small coefficients of variation. This steep variation requires the DRS to use high-order polynomials, such as the quintic polynomials of chapter 5, increasing the required number of probability calculations (Qu et al. 2000). An additional problem arises when Monte Carlo simulation is used to calculate the probabilities: for a given number of simulations, the accuracy of the probability estimates deteriorates as the probability of failure decreases.

The numerical problems associated with the steep variation of the probability of failure led to consideration of alternative measures of safety. The most common one is to use the


safety index, which replaces the probability of failure by a distance: the number of standard deviations from the mean of a normal distribution that gives the same probability. The safety index does not suffer from steep changes in magnitude, but it has the same accuracy problems as the probability of failure when computed from Monte Carlo simulations. The accuracy of the probabilistic sufficiency factor, however, is maintained in regions of low probability, and the probabilistic sufficiency factor also exhibits less variation than the probability of failure or the safety index. Thus the probabilistic sufficiency factor can be used to improve design response surface approximations for RBDO.

The next section introduces the probabilistic sufficiency factor, followed by its computation from Monte Carlo simulation. The methodology is then demonstrated on the reliability-based beam design problem.

Probabilistic Sufficiency Factor

The deterministic equivalent of a reliability constraint in RBDO can be formulated as

gr(x̄, d) ≤ gc(x̄, d)    (6-1)

where gr denotes a response quantity, gc represents a capacity (e.g., a strength allowable), x̄ is usually the mean value vector of the random variables x, and d is the design vector. The traditional safety factor is defined as

s(x̄, d) = gc(x̄, d) / gr(x̄, d)    (6-2)

and the deterministic design problem requires

s(x̄, d) ≥ sr    (6-3)

where sr is the required safety factor, which is usually 1.4 or 1.5 in aerospace applications. The reliability constraint can be formulated as a requirement on the safety factor


Prob( s ≤ 1 ) ≤ Pr    (6-4)

where Pr is the required probability of failure. Birger's probabilistic sufficiency factor Psf is the solution to

Prob( s ≤ Psf ) = Pr    (6-5)

It is the safety factor that is violated with the required probability Pr. Figure 6-1 shows the probability density of the safety factor for a given design. The area under the curve to the left of s = 1 represents the probability that s < 1; hence it is equal to the actual probability of failure. The shaded area in the figure represents the target probability of failure, Pt. For this example, since the shaded area is the area to the left of the line s = 0.8, Psf = 0.8. The value of 0.8 indicates that the target probability will be achieved if we reduce the response by 20% or increase the capacity by 25% (1/0.8 - 1). For many problems this provides sufficient information for a designer to estimate the additional structural weight. For example, raising the safety factor from 0.8 to 1 for a stress-dominated linear problem typically requires a weight increase of about 20% of the weight of the overstressed components.

Figure 6-1. Probability density of the safety factor. The area under the curve to the left of s = 1 measures the actual probability of failure, while the shaded area is equal to the target probability of failure, indicating that the probabilistic sufficiency factor = 0.8


Using Probabilistic Sufficiency Factor to Estimate Additional Structural Weight to Satisfy the Reliability Constraint

The following cantilever beam example (Figure 6-2) is taken from Wu et al. (2001) to demonstrate the use of the probabilistic sufficiency factor.

Figure 6-2. Cantilever beam subject to vertical and lateral bending

There are two failure modes in the beam design problem. One failure mode is yielding, which is most critical at the corner of the rectangular cross section at the fixed end of the beam:

gS(R, X, Y, w, t) = R - ( 600 Y / (w t^2) + 600 X / (w^2 t) )    (6-6)

where R is the yield strength and X and Y are the independent horizontal and vertical loads. The other failure mode is the tip deflection exceeding the allowable displacement, D0:

gD(E, X, Y, w, t) = D0 - D = D0 - (4 L^3 / (E w t)) sqrt( (Y / t^2)^2 + (X / w^2)^2 )    (6-7)

where E is the elastic modulus and L is the length of the beam. The random variables are defined in Table 6-1.

Table 6-1. Random variables in the beam design problem
Random variable   Distribution
X                 Normal (500, 100) lb
Y                 Normal (1000, 100) lb
R                 Normal (40000, 2000) psi
E                 Normal (29E6, 1.45E6) psi

The cross-sectional area is minimized subject to two reliability constraints, which require the safety indices for the strength and deflection constraints to be larger than three


(probability of failure less than 0.00135). The reliability-based design optimization problem, with the width w and thickness t of the beam as deterministic design variables, can be formulated as

minimize A = w t   such that   0.00135 - p ≥ 0    (6-8)

based on the probability of failure p, or

minimize A = w t   such that   β - 3 ≥ 0    (6-9)

based on the safety index, where β is the safety index, or

minimize A = w t   such that   Psf - 1 ≥ 0    (6-10)

based on the probabilistic sufficiency factor. The reliability constraints are formulated in the above three forms, which are equivalent in terms of safety. The details of the beam design are given later in the chapter.

In order to demonstrate the utility of the Psf for estimating the weight required to correct a safety deficiency, it is useful to see how the stresses and the displacements depend on the weight (or cross-sectional area) for this problem. If we have a given design with dimensions w0 and t0 and a Psf of Psf0, which is smaller than one, we can make the structure safer by scaling both w and t uniformly by a constant c:

w = c w0,   t = c t0    (6-11)

It is easy to check from (6-6) and (6-7) that the stress and the displacement will then change by a factor of 1/c^3, and the area by a factor of c^2. Since the Psf is inversely


proportional to the most critical stress or displacement, it is easy to obtain the relationship

Psf = Psf0 ( A / A0 )^1.5    (6-12)

where A0 = w0 t0. This indicates that a one percent increase in area (corresponding to a 0.5 percent increase in w and t) will improve the Psf by about 1.5 percent. Since non-uniform increases in the width and thickness may be more efficient than uniform scaling, we may be able to do better than 1.5 percent. Thus, if we have Psf = 0.97, we can expect that we can make the structure safe with a weight increase under two percent. The probabilistic sufficiency factor gives a designer a measure of safety that can be used more readily than the probability of failure or the safety index to estimate the weight increase required to reach a target safety level.

The Psf of a beam design, presented in detail in section 4, is 0.9733 for a target probability of failure of 0.00135. Equation (6-12) indicates that this deficiency in the Psf can be corrected by scaling up the area by a factor of 1.0182. Since the area A is equal to c^2 w0 t0, the dimensions should be scaled by a factor c of 1.0091 (= 1.0182^0.5), to w = 2.7123 and t = 3.5315. Thus the objective function of the scaled design is 9.5785. The probability of failure of the scaled design is 0.001302 (safety index of 3.0110 and probabilistic sufficiency factor of 1.0011), evaluated by MCS with 1,000,000 samples. Such an estimate is not readily available from the probability of failure (0.00314) or the safety index (2.7328) of the design.

Reliability Analysis Using Monte Carlo Simulation

Let g(x) denote the limit state function of a performance criterion (such as the strength allowable being larger than the stress), so that the failure event is defined as g(x) < 0, where x is the random variable vector. The probability of failure of a system can be calculated as


    P_f = ∫_{g(x)≤0} f_X(x) dx    (6-13)

where f_X(x) is the joint probability density function (JPDF). This integral is hard to evaluate, because the integration domain defined by g(x) < 0 is usually unknown, and integration in high dimensions is difficult. Commonly used probabilistic analysis methods are either moment-based methods, such as the first-order reliability method (FORM) and the second-order reliability method (SORM), or simulation techniques such as Monte Carlo simulation (MCS) (e.g., Melchers 1999). Monte Carlo simulation is a good method for system reliability analysis with multiple failure modes. The present chapter focuses on the use of MCS with response surface approximation in RBDO.

Monte Carlo simulation uses samples randomly generated according to the statistical distributions of the random variables, and the probability of failure is obtained by calculating the statistics of the samples. Figure 6-3 illustrates Monte Carlo simulation for a problem with two random variables. The probability of failure is calculated as the ratio of the number of samples in the unsafe region to the total number of samples.

A small probability of failure requires a large number of samples for MCS to achieve a low relative error. Therefore, for a fixed number of simulations, the accuracy of MCS deteriorates as the probability of failure decreases. For example, with 10^6 simulations, a probability estimate of 10^-3 has a relative error of a few percent, while a probability estimate of 10^-6 has a relative error of the order of 100 percent. In RBDO, the required probability of failure is often very low, so the probability (or safety index) calculated by MCS is inaccurate near the optimum. Furthermore, the probabilities of failure in some design regions may be so low that they are calculated as zero by MCS.
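This accuracy degradation can be illustrated with a small sketch. The limit state below is a hypothetical one-variable example (g(x) = x + β with x ~ N(0,1), so the exact failure probability is Φ(−β)); it is not from the dissertation, but it reproduces the relative-error behavior described above:

```python
import math
import random
from statistics import NormalDist

random.seed(1)
M = 1_000_000
phi = NormalDist()

def mcs_pf(beta):
    """MCS estimate of P[g < 0] for the toy limit state g(x) = x + beta."""
    fails = sum(1 for _ in range(M) if random.gauss(0.0, 1.0) < -beta)
    return fails / M

results = {}
for beta in (3.0, 4.75):                 # exact Pf ~ 1.35e-3 and ~ 1.0e-6
    exact = phi.cdf(-beta)
    results[beta] = mcs_pf(beta)
    rel_se = math.sqrt((1.0 - exact) / (exact * M))   # ~ 1/sqrt(Pf * M)
    print(f"exact Pf = {exact:.2e}, MCS estimate = {results[beta]:.2e}, "
          f"relative standard error = {rel_se:.0%}")
```

With the same 10^6 samples, the ~10^-3 probability is estimated within a few percent, while the ~10^-6 probability rests on only one or two failed samples and is dominated by sampling noise.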


This flat zero probability of failure, or infinite safety index, cannot provide useful gradient information to the optimization.

Figure 6-3. Monte Carlo simulation of a problem with two random variables

Calculation of Probabilistic Sufficiency Factor by Monte Carlo Simulation

Here we propose the use of the probabilistic sufficiency factor to solve the problems associated with probability calculation by MCS. P_sf can be estimated by MCS as follows. Define the nth safety factor of the MCS as

    s_(n) = nth min_{1≤i≤M} s(x_i)    (6-14)

where M is the sample size of the MCS, and "nth min" denotes the nth smallest safety factor among the M safety factors from the MCS. Thus s_(n) is the nth order statistic of the M safety factors from the MCS, which corresponds to a probability of n/M of s(x) ≤ s_(n). That is, we seek the safety factor that is violated with the required probability P_r. The probabilistic sufficiency factor is then given as

    P_sf = s_(n)   for n = P_r M    (6-15)


For example, if the required probability P_r is 10^-4 and the sample size M of the Monte Carlo simulation is 10^6, P_sf is equal to the highest safety factor among the n = P_r M = 100 samples with the lowest safety factors. The calculation of P_sf thus requires only sorting the lowest safety factors among the Monte Carlo samples. While the probability of failure changes by several orders of magnitude over a given design space, the probabilistic sufficiency factor usually varies by less than one order of magnitude. For problems with k reliability constraints, the most critical safety factor is calculated first for each Monte Carlo sample,

    s(x_i) = min_{1≤c≤k} s_c(x_i)    (6-16)

where s_c(x_i) is the safety factor of the cth constraint. The sorting for the nth minimum safety factor then proceeds as in (6-14). When n is small, it may be more accurate to calculate P_sf as the average of the nth and (n+1)th lowest safety factors in the Monte Carlo samples.

The probabilistic sufficiency factor provides more information than the probability of failure or the safety index. Even in regions where the probability of failure is so small that it cannot be estimated accurately by MCS with a given sample size M, the accuracy of P_sf is maintained. Using the probabilistic sufficiency factor also gives designers useful insight into how to change the design to satisfy safety requirements, as shown in Section 2.1; this estimate is not readily available from the probability of failure or the safety index. Moreover, the probabilistic sufficiency factor is based on the ratio of allowable to response, which exhibits much less variation than the probability of failure or safety index. Therefore, approximating the probabilistic sufficiency factor in design optimization is easier than approximating the probability of failure or safety index, as discussed in the next section.
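As a concrete sketch (my own illustration, not code from the dissertation), the order-statistic recipe of Eqs. (6-14) and (6-15) can be applied to the single strength mode of the beam, the exact stress of Eq. (6-6), at the safety-index-based optimum of Table 6-5 (w = 2.6645, t = 3.5000), for which the text quotes P_sf = 0.9663:

```python
import random

random.seed(2)

# Psf by MCS, Eqs. (6-14)/(6-15), for the strength mode of the beam:
# safety factor s = R / sigma, with the random variables of Table 7-1.
w, t = 2.6645, 3.5000        # safety-index-based optimum of Table 6-5
M, Pr = 100_000, 0.00135     # sample size and required probability

def safety_factor():
    X = random.gauss(500.0, 100.0)      # horizontal load, lb
    Y = random.gauss(1000.0, 100.0)     # vertical load, lb
    R = random.gauss(40000.0, 2000.0)   # yield strength, psi
    sigma = 600.0 * Y / (w * t * t) + 600.0 * X / (w * w * t)
    return R / sigma                    # allowable / response

s = sorted(safety_factor() for _ in range(M))
n = int(Pr * M)              # number of "violating" samples, here 135
psf = s[n - 1]               # Eq. (6-15): the n-th smallest safety factor
print(round(psf, 4))         # close to the 0.9663 quoted in the text
```

Only the n smallest safety factors are actually needed, so for large M a partial sort (e.g., heapq.nsmallest) suffices instead of sorting the full sample.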


Monte Carlo Simulation Using Response Surface Approximation

Monte Carlo simulation is easy to implement, robust, and accurate given sufficiently large samples, but it requires a large number of analyses to obtain a good estimate of small failure probabilities. Monte Carlo simulation also produces a noisy response and hence is difficult to use in optimization. Response surface approximations solve both problems, namely the simulation cost and the noise from random sampling. Response surface approximations fit a closed-form approximation to the limit state function to facilitate reliability analysis, and are therefore particularly attractive for computationally expensive problems such as those requiring complex finite element analyses.

Response surface approximations usually fit low-order polynomials to the structural response in terms of the random variables,

    ĝ(x) = Z^T(x) b    (6-17)

where ĝ(x) denotes the approximation to the limit state function g(x), Z(x) is the basis function vector, which usually consists of monomials, and b is the coefficient vector estimated by least squares regression. The probability of failure can then be calculated inexpensively by Monte Carlo simulation or moment-based methods using the fitted polynomials.

Response surface approximations (RSA) can be used in different ways. One approach is to construct a local RSA around the Most Probable Point (MPP), which contributes most to the probability of failure of the structure. The statistical design of experiments (DOE) of this approach is performed iteratively to approach the MPP on the failure boundary. For example, Bucher and Bourgund (1990) and Sues (1996, 2000) constructed progressively refined local RSA around the MPP by an iterative method.
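The least-squares fit of Eq. (6-17) can be sketched in one dimension with the monomial basis Z(x) = [1, x, x^2] (an illustration under simplifying assumptions; the chapter's actual response surfaces are multivariate and up to cubic):

```python
# Minimal least-squares response-surface fit, Eq. (6-17): g_hat(x) = Z(x)^T b,
# with Z(x) = [1, x, x^2]. The coefficients b solve the normal equations
# (Z^T Z) b = Z^T g, here by Gaussian elimination on the 3x3 system.
def fit_quadratic(xs, gs):
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    r = [sum(g * x ** i for x, g in zip(xs, gs)) for i in range(3)]
    # Forward elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        r[col], r[piv] = r[piv], r[col]
        for k in range(col + 1, 3):
            f = A[k][col] / A[col][col]
            A[k] = [a - f * v for a, v in zip(A[k], A[col])]
            r[k] -= f * r[col]
    # Back substitution.
    b = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        b[i] = (r[i] - sum(A[i][j] * b[j] for j in range(i + 1, 3))) / A[i][i]
    return b

# Recover a known limit state g(x) = 4 - 0.5 x + 0.25 x^2 from samples.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
b = fit_quadratic(xs, [4.0 - 0.5 * x + 0.25 * x * x for x in xs])
print([round(v, 6) for v in b])   # [4.0, -0.5, 0.25]
```

With noisy data (as when fitting MCS outputs) the same normal equations yield the regression coefficients rather than an exact interpolation.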


The local RSA approach can produce satisfactory results given enough iterations. Another approach is to construct a global RSA over the entire range of the random variables, i.e., with a design of experiments around the mean values of the random variables. Fox (1993, 1994, 1996) used Box-Behnken designs to construct global response surfaces and summarized 12 criteria to evaluate the accuracy of RSA. Romero and Bankston (1998) employed progressive lattice sampling as the design of experiments to construct global RSA. With this approach, the accuracy of the response surface approximation around the MPP is unknown, and caution must be taken to avoid extrapolation near the MPP.

Both approaches can be used to perform reliability analysis for computationally expensive problems. The selection of the RSA approach depends on the limit state function of the problem. The global RSA is simpler and more efficient to use than the local response surface approximation for problems whose limit state function can be well approximated globally. However, the reliability analysis must be performed, and hence the RSA constructed, at every design point visited by the optimizer, which requires a fairly large number of response surface constructions and thus of limit state evaluations. The local RSA approach is even more computationally expensive than the global approach in the design environment. Qu et al. (2000) developed a global analysis response surface (ARS) approach in the unified space of design and random variables, which reduces the number of RSA constructions substantially and achieves higher efficiency than the previous approaches. This analysis response surface can be written as

    ĝ(x, d) = Z^T(x, d) b    (6-18)


where x and d are the random variable and design variable vectors, respectively. They recommended Latin Hypercube sampling as the statistical design of experiments. The number of response surface approximations constructed in the optimization process is reduced substantially by introducing the design variables into the response surface approximation formulation.

The selection of the RSA approach depends on the limit state function of the problem and on the target probability of failure. The global RSA approach is more efficient than the local RSA, but it is limited to problems with a relatively high probability of failure or with a limit state function that can be well approximated by regression analysis based on simple basis functions. To avoid extrapolation problems, the RSA generally needs to be constructed around the important region or the MPP, so that fitting errors in the RSA do not induce large errors in the results of the MCS. Therefore, an iterative RSA is desirable for general reliability analysis problems.

Design response surface approximations (DRS) are fitted to the probability of failure to filter out the noise in MCS and to facilitate optimization. Based on past experience, high-order DRS (such as quintic polynomials) are needed in order to obtain a reasonably accurate approximation of the probability of failure. Constructing a highly accurate DRS is difficult because the probability of failure changes by several orders of magnitude over a small distance in design space. Fitting to the safety index β = −Φ^{-1}(p), where p is the probability of failure and Φ is the cumulative distribution function of the standard normal distribution, improves the accuracy of the DRS to a limited extent. The probabilistic sufficiency factor can be used to improve the accuracy of the DRS approximation further.
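The motivation for fitting β (or P_sf) rather than p can be seen directly from the transform β = −Φ^{-1}(p): where p spans five orders of magnitude, β varies only by a factor of about four (illustrative values, computed with the standard normal inverse CDF):

```python
from statistics import NormalDist

inv_cdf = NormalDist().inv_cdf   # standard normal quantile function

# p spans five orders of magnitude, but beta = -inv_cdf(p) stays within
# roughly 1.3 to 4.8 -- far easier for a low-order polynomial DRS to track.
betas = {p: -inv_cdf(p) for p in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6)}
for p, beta in betas.items():
    print(f"p = {p:.0e}  ->  beta = {beta:.3f}")
```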


Beam Design Example

The details of the beam design problem mentioned in Section 2 are presented here. Since the limit state of the problem is available in closed form, as shown by (6-6) and (6-7), direct Monte Carlo simulation with a sufficiently large number of samples is used here (without an analysis response surface) in order to better demonstrate the advantage of the probabilistic sufficiency factor over the probability of failure or the safety index. By using the exact limit state function, the errors in the results of the Monte Carlo simulation are purely convergence errors, which can easily be controlled by changing the sample size. In applications where an analysis response surface approximation must be used, the errors introduced by the approximation can be reduced by sequentially improving the approximation as the optimization progresses.

The reliability constraints, shown by (6-8) to (6-10), are approximated by design response surface approximations fitted to the probability of failure, the safety index, and the probabilistic sufficiency factor, and the accuracy of these design response surface approximations is then compared. The design response surface approximations are in the two design variables w and t. A quadratic polynomial in two variables has six coefficients to be estimated. Since a Face Centered Central Composite Design (FCCCD, Khuri and Cornell 1996) is often used to construct quadratic response surface approximations, an FCCCD with 9 points was employed here first, with poor results. Based on our previous experience, higher-order design response surface approximations are needed to fit the probability of failure or the safety index, and the number of points of a typical design of experiments should be about twice the number of coefficients. A cubic polynomial in two variables has 10 coefficients, which requires about 20 design points. Latin Hypercube sampling can be used to construct higher-order response surfaces (Qu et al. 2000).


We found that Latin Hypercube sampling might fail to sample points near some corners of the design space, leading to poor accuracy around those corners. To deal with this extrapolation problem, all four vertices of the design space were added to the 16 Latin Hypercube sampling points, for a total of 20 points. Mixed stepwise regression (Myers and Montgomery 1995) was employed to eliminate poorly characterized terms in the response surface models.

Design with Strength Constraint

The range for the design response surface, shown in Table 6-2, was selected based on the mean-based deterministic design, w = 1.9574" and t = 3.9149". The probability of failure was calculated by direct Monte Carlo simulation with 100,000 samples based on the exact stress in (6-6).

Table 6-2. Range of design variables for design response surface

System variables | w | t
Range | 1.5" to 3.0" | 3.5" to 5.0"

Cubic design response surfaces with 10 coefficients were constructed, and their statistics are shown in Table 6-3. An R²_adj close to one and an average percentage error (defined as the ratio of the root mean square error (RMSE) predictor to the mean of the response) close to zero indicate good accuracy of the response surfaces. It is seen that the design response surface for the probabilistic sufficiency factor has the highest R²_adj and the smallest average percentage error.

The standard error in the probability calculated by Monte Carlo simulation can be estimated as

    σ_p = sqrt( p(1−p)/M )    (6-19)

where p is the probability of failure and M is the sample size of the Monte Carlo simulation.
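Equation (6-19) can be checked directly against the two sampling-error figures quoted in this section (a simple arithmetic check, not dissertation code):

```python
import math

M = 100_000
# Standard error at the mean probability of failure of Table 6-3:
p = 0.2844
se = math.sqrt(p * (1.0 - p) / M)
print(round(se, 5))       # ~0.00143, far below the probability-RS RMSE of 0.1103

# Relative standard error at the target reliability p = 0.00135:
p_t = 0.00135
rel_se = math.sqrt(p_t * (1.0 - p_t) / M) / p_t
print(f"{rel_se:.1%}")    # ~8.6%
```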


If a probability of failure of 0.2844 (the mean probability of failure in Table 6-3) is calculated by Monte Carlo simulation with 100,000 samples, the standard error due to the limited sampling is 0.00143. The RMSE of the probability design response surface, in contrast, is 0.1103. Thus the error induced by the limited sampling (100,000 samples) is much smaller than the error of the response surface approximation to the probability of failure.

Table 6-3. Comparison of cubic design response surface approximations of probability of failure, safety index, and probabilistic sufficiency factor for a single strength failure mode (based on Monte Carlo simulation of 100,000 samples; 16 Latin Hypercube sampling points + 4 vertices)

Error statistics | Probability RS | Safety index RS | Probabilistic sufficiency factor RS
R²_adj | 0.9228 | 0.9891 | 0.9999
RMSE predictor | 0.1103 | 0.3027 | 0.002409
Mean of response | 0.2844 | 1.9377 | 1.0331
APE (average percentage error = RMSE predictor / mean of response) | 38.78% | 15.62% | 0.23%
APE in P_of (= RMSE predictor of P_of / mean of P_of) | 38.78% | 12.04% | N/A

The probabilistic sufficiency factor design response surface has an average error of less than one percent, while the safety index design response surface has an average error of about 15.6 percent. It must be noted, however, that the average percentage errors of the three design response surfaces cannot be compared directly, because a one percent error in the probabilistic sufficiency factor does not correspond to a one percent error in the probability of failure or the safety index. The errors in the safety index design response surface were therefore transformed to errors in terms of probability, as shown in Table 6-3.


It is seen that the safety index design response surface approximation is more accurate than the probability design response surface approximation.

Besides the average errors over the design space, it is instructive to compare errors measured in probability of failure in the important region of the design space. For optimization problems, the important region is the region containing the optimum. Here it is the curve of target reliability according to each design response surface, on which the reliability constraint is critically satisfied and on which the probability of failure would be 0.00135 if the design response surface approximation had no errors. For each design response surface approximation, 11 test points were selected along a curve of target reliability; they are given in the Appendix. The average percentage errors at these test points, shown in Table 6-4, demonstrate the accuracy advantage of the probabilistic sufficiency factor approach. At the target reliability, the standard error due to Monte Carlo simulation with 100,000 samples is 8.6%, which is comparable to the response surface error for the P_sf. For the other two response surfaces, the errors are apparently dominated by the modeling errors of the cubic polynomial approximation.

Table 6-4. Average errors in cubic design response surface approximations of probabilistic sufficiency factor, safety index, and probability of failure at 11 points on the curves of target reliability

Design response surface of | Probability of failure (P_of) | Safety index | Probabilistic sufficiency factor
Average percentage error in probability of failure | 213.86% | 92.38% | 10.32%

The optima found by using the design response surface approximations of Table 6-3 are compared in Table 6-5. The probabilistic sufficiency factor design response surface clearly led to a better design, with a safety index of 3.02 according to Monte Carlo simulation.
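The weight-increment estimates that the P_sf provides through Eq. (6-12) can be reproduced with a few lines of arithmetic, here for the P_sf = 0.9733 design (w = 2.6881, t = 3.5000) discussed earlier in the chapter; the last digits differ slightly from the text because of rounding:

```python
w0, t0, psf0 = 2.6881, 3.5000, 0.9733   # design and its Psf

# Eq. (6-12): Psf scales as (A/A0)^1.5, so restoring Psf = 1 requires
area_factor = (1.0 / psf0) ** (2.0 / 3.0)   # ~1.0182
c = area_factor ** 0.5                      # uniform scaling of w and t

w, t = c * w0, c * t0
print(round(area_factor, 4), round(w, 4), round(t, 4), round(w * t, 4))
# the text quotes w = 2.7123, t = 3.5315, scaled area = 9.5785
```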


It is seen that the design from the probabilistic sufficiency factor design response surface approximation is very close to the exact optimum. Note that the values of P_sf for the probability-based optimum and the safety-index-based optimum provide a good estimate of the required weight increments. For example, with P_sf = 0.9663, the safety-index-based design has a safety factor shortfall of 3.37 percent, indicating that it should not require more than about a 2.25 percent weight increment to remedy the problem; indeed, the optimum design is 2.08 percent heavier. This would have been difficult to infer from its probability of failure of 0.00408, which is three times larger than the target probability of failure.

Table 6-5. Comparison of optimum designs based on cubic design response surface approximations of probabilistic sufficiency factor, safety index, and probability of failure (minimize F = wt such that β ≥ 3 or P_of ≤ 0.00135)

Design response surface of | Optimum | Objective function F = wt | P_of / safety index / safety factor from MCS of 100,000 samples
Probability | w = 2.6350, t = 3.5000 | 9.2225 | 0.00690 / 2.4624 / 0.9481
Safety index | w = 2.6645, t = 3.5000 | 9.3258 | 0.00408 / 2.6454 / 0.9663
Probabilistic sufficiency factor | w = 2.4526, t = 3.8884 | 9.5367 | 0.00128 / 3.0162 / 1.0021
Exact optimum (Wu et al. 2001) | w = 2.4484, t = 3.8884 | 9.5204 | 0.00135 / 3.00 / 1.00

Design with Strength and Displacement Constraints

For the system reliability problem with strength and displacement constraints, the probability of failure is calculated by direct Monte Carlo simulation with 100,000 samples based on the exact stress and the exact displacement in (6-6) and (6-7). The allowable tip displacement D_0 is chosen to be 2.25" in order to have two competing constraints (Wu et al. 2001). The three cubic design response surface approximations over the range of design variables shown in Table 6-2 were constructed, and their statistics are shown in Table 6-6.


Table 6-6. Comparison of cubic design response surface approximations of the first design iteration for probability of failure, safety index, and probabilistic sufficiency factor for system reliability (strength and displacement); 16 Latin Hypercube sampling points + 4 vertices

Error statistics | Probability RS | Safety index RS | Probabilistic sufficiency factor RS
R²_adj | 0.9231 | 0.9887 | 0.9996
RMSE predictor | 0.1234 | 0.3519 | 0.01055
Mean of response | 0.3839 | 1.3221 | 0.9221
APE (= RMSE predictor / mean of response) | 32.14% | 26.62% | 1.14%
APE in P_of | 32.14% | 10.51% | N/A

It is seen that the R²_adj of the probabilistic sufficiency factor response surface approximation is the highest of the three, which implies that the probabilistic sufficiency factor design response surface approximation is the most accurate in terms of average errors over the entire design space of Table 6-2. The critical errors of the three design response surfaces were also compared: for each design response surface approximation, 51 test points were selected along a curve of target reliability (probability of failure = 0.00135). The average percentage errors at these test points, shown in Table 6-7, demonstrate that the probabilistic sufficiency factor design response surface approximation is more accurate than the probability of failure and safety index response surface approximations.

Table 6-7. Average errors in cubic design response surface approximations of probabilistic sufficiency factor, safety index, and probability of failure at 51 points on the curves of target reliability

Design response surface of | Probability of failure (P_of) | Safety index | Probabilistic sufficiency factor
Average percentage error in probability of failure | 334.78% | 96.49% | 39.11%


The optima found by using the design response surface approximations of Table 6-6 are compared in Table 6-8. The probabilistic sufficiency factor design response surface led to a better design than the probability or safety index design response surfaces in terms of reliability. The probability of failure of the P_sf design, evaluated by Monte Carlo simulation, is 0.00314, which is higher than the target probability of failure of 0.00135. This deficiency in reliability is induced by the errors in the probabilistic sufficiency factor design response surface approximation. The probabilistic sufficiency factor can be used to estimate the additional weight needed to satisfy the reliability constraint: a scaled design of w = 2.7123 and t = 3.5315 was obtained in Section 2.1. The objective function of the scaled design is 9.5785, and its probability of failure is 0.001302 (safety index of 3.0110 and probabilistic sufficiency factor of 1.0011), evaluated by MCS with 1,000,000 samples.

Table 6-8. Comparison of optimum designs based on cubic design response surface approximations of the first design iteration for probabilistic sufficiency factor, safety index, and probability of failure (minimize F = wt such that β ≥ 3 or P_of ≤ 0.00135)

Design response surface of | Optimum | Objective function F = wt | P_of / safety index / safety factor from MCS of 100,000 samples
Probability | w = 2.6591, t = 3.5000 | 9.3069 | 0.00522 / 2.5609 / 0.9589
Safety index | w = 2.6473, t = 3.5000 | 9.2654 | 0.00630 / 2.4949 / 0.9519
Probabilistic sufficiency factor | w = 2.6881, t = 3.5000 | 9.4084 | 0.00314 / 2.7328 / 0.9733

The design can be improved by performing another design iteration, which reduces the errors in the design response surfaces by shrinking the design space around the current design. The reduced range of the design response surface approximations for the next design iteration is shown in Table 6-9.


The design response surface approximations constructed are compared in Table 6-10. It is observed again that the probabilistic sufficiency factor response surface approximation is the most accurate.

Table 6-9. Range of design variables for design response surface approximations of the second design iteration

System variables | w | t
Range | 2.2" to 3.0" | 3.2" to 4.0"

Table 6-10. Comparison of cubic design response surface approximations of the second design iteration for probability of failure, safety index, and probabilistic sufficiency factor for system reliability (strength and displacement); 16 Latin Hypercube sampling points + 4 vertices

Error statistics | Probability RS | Safety index RS | Probabilistic sufficiency factor RS
R²_adj | 0.9569 | 0.9958 | 0.9998
RMSE predictor | 0.06378 | 0.1329 | 0.003183
Mean of response | 0.1752 | 2.2119 | 0.9548
APE (= RMSE predictor / mean of response) | 36.40% | 6.01% | 0.33%

The optima based on the design response surface approximations of the second design iteration, shown in Table 6-10, are compared in Table 6-11. It is seen that the design converges in two iterations with the probabilistic sufficiency factor design response surface, due to its superior accuracy over the probability of failure and safety index design response surfaces.

Table 6-11. Comparison of optimum designs based on cubic design response surface approximations of the second design iteration for probabilistic sufficiency factor, safety index, and probability of failure (minimize F = wt such that β ≥ 3 or P_of ≤ 0.00135)

Design response surface of | Optimum | Objective function F = wt | P_of / safety index / safety factor from MCS of 100,000 samples
Probability | w = 2.7923, t = 3.3438 | 9.3368 | 0.00511 / 2.5683 / 0.9658
Safety index | w = 2.6878, t = 3.5278 | 9.4821 | 0.00177 / 2.9165 / 0.992
Probabilistic sufficiency factor | w = 2.6041, t = 3.6746 | 9.5691 | 0.00130 / 3.0115 / 1.0009


Summary

This chapter presented the probabilistic sufficiency factor as a measure of the safety level relative to a target safety level, which can be obtained from the results of Monte Carlo simulation with little extra computation. It was shown that a design response surface approximation can be fitted more accurately to the probabilistic sufficiency factor than to the probability of failure or the safety index. Using the beam design example with single and system reliability constraints, it was demonstrated that the design response surface approximation based on the probabilistic sufficiency factor has superior accuracy and accelerates the convergence of reliability-based design optimization. The probabilistic sufficiency factor also provides information in regions of probability so low that the probability of failure or safety index cannot be estimated by Monte Carlo simulation with a given sample size, which is helpful in guiding the optimizer. Finally, it was shown that the probabilistic sufficiency factor can be employed by the designer to estimate the additional weight required to achieve a target safety level, which might be difficult with the probability of failure or safety index.


CHAPTER 7
RELIABILITY-BASED DESIGN OPTIMIZATION USING DETERMINISTIC OPTIMIZATION AND MULTI-FIDELITY TECHNIQUE

Introduction

The probabilistic sufficiency factor (PSF) developed in Chapter 6 is integrated into a reliability-based design optimization (RBDO) framework in this chapter. Classical RBDO is performed in a coupled, double-loop fashion, where the inner loop performs reliability analysis and the outer loop performs design optimization. RBDO using the double-loop framework requires many reliability analyses and is computationally expensive. Wu et al. (1998, 2001) developed a safety-factor-based approach for performing RBDO in a decoupled, single-loop fashion, where the reliability constraints are converted to equivalent deterministic constraints by using the concept of a safety factor. The similarity between Wu's approach and the probabilistic sufficiency factor approach indicates that it may be worthwhile to study the use of the probabilistic sufficiency factor for converting RBDO to sequential deterministic optimization.

For many problems the required probability of failure is very low, so that good estimates require a very large MCS sample. In addition, the design response surface (DRS) must be extremely accurate in order to estimate a very low probability of failure well. Thus an expensive MCS may be required at a large number of design points in order to construct the DRS. A multi-fidelity technique using the probabilistic sufficiency factor is therefore investigated to alleviate the computational cost of RBDO. The two approaches for reducing the computational cost of RBDO for low probabilities of failure are compared.


Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor

Wu et al. (1998, 2001) proposed a decoupled approach using partial safety factors to replace the reliability constraints by equivalent deterministic constraints. After performing a reliability analysis, the random variables x are replaced by safety-factor-based values x*, the most probable point (MPP) of the previous reliability analysis. The required shift s of the limit state function g needed to satisfy the reliability constraint is defined by P(g(x) + s < 0) = P_t. Both x* and s can be obtained as byproducts of the reliability analysis. The target reliability is then achieved by adjusting the limit state function via design optimization. The required shift s is similar to the probabilistic sufficiency factor (Qu and Haftka 2003) presented in Chapter 6. The significant difference between Wu's partial safety factor approach and coupled RBDO is that the reliability analysis is decoupled from, and driven by, the design optimization in order to improve the efficiency of RBDO. Thus RBDO is performed in a deterministic fashion and corrected by reliability analysis after the optimization.

The PSF is employed in this chapter to convert RBDO to equivalent deterministic optimization. This conversion enables further exploration of the design space for problems where the design space is characterized by multiple local optima and only a limited number of analyses is affordable due to high computational cost, such as the design of stiffened panels addressed in Chapter 8.

Starting from a mean-value-based design, where the deterministic safety factor is one, an initial design is found by deterministic optimization. Reliability analysis using Monte Carlo simulation then shows the deficiency in the probability of failure and the probabilistic sufficiency factor. In the next design iteration, the target safety factor of the deterministic optimization is chosen as

    s^(k+1)(x, d) = s^(k)(x, d) / P_sf^(k)    (7-1)

which is used to reduce the yield strength R of the material. The optimization problem is formulated as

    minimize A = wt
    such that g(x̄, d; R / s^(k+1)) ≥ 0    (7-2)

where the limit state is evaluated at the mean values x̄ of the random variables with the yield strength reduced to R / s^(k+1). The process is repeated until the optimum converges and the reliability constraint is satisfied.

Reliability-Based Design Optimization Using Multi-Fidelity Technique with Probabilistic Sufficiency Factor

For problems with a very low probability of failure, a good estimate of the probability requires a very large MCS sample. In addition, the DRS must be extremely accurate in order to estimate a very low probability of failure well. Thus an expensive MCS may be required at a large number of design points in order to construct the DRS. Deterministic optimization may be used to reduce the computational cost associated with RBDO for very low probabilities of failure. However, since it does not use any derivative information for the probabilities, it is not likely to converge to the optimum design when competing failure modes are disparate in terms of the cost of improving their safety.

A compromise between the deterministic optimization and the full probabilistic optimization is afforded by the P_sf through the use of an intermediate target probability P_I, which is higher than the required probability P_r and can be estimated via a less expensive MCS


83 and less accurate DRS. Then the Psf can re-calibrated by a single expensive MCS. This is a variable-fidelity techniqu e, with a large number of inexpensive MCS combined with a small number of expensive MCS. A compromise between the deterministic optimization and the full probabilistic rget probability PI, which is higher than the required probability Pr and ss accurate DRS. Then the Psf can re-calibrated by a single expensive MCS. This is a vnumber of inexpensive MCS combined with a small number of expensive MCS. of 0.0000135, and using as intermediate probability 0.00135, the value used as reum desiglude the h optimization is afforded by the probabilistic sufficiency factor Psf by using an intermediate ta can be estimated via a less expensive MCS and le ariable-fidelity technique, with a large For the beam example we illustrate the process by setting a low required probability quired probability for the previous examples. We start by finding an initial optim n with the intermediate probability as the required probability. This involves the generation of a response surface approximation of PSfI for the intermidiate probability as well as finding the optimum based on this response surface. We then perform an expensive MCS which is adequate for estimating the required probability. Here we use MCS with 107 samples. We now calculate the PSf from this accurate MCS, and denote it PSfA. At that design the PSfI predicted by the response surface approximation is about 1, because the initial optimization was performed with a lower limit of 1 on the PSf. In contrast, the accurate PSfA will in general be different for several reasons. These inc igher accuracy of the MCS, the response surface errors, and most important the lower probability requirements. For example, with 107 samples, at this initial design we may get PSfI=1.01 for the intermediate probability (based on the 13500 lowest safety


factors) and PsfA = 0.89 for the required probability (based on the 135 lowest safety factors). With values of PsfI and PsfA at the same point, we can define a scale factor f as the ratio of these two numbers

f = PsfA / PsfI    (7-3)

This ratio can be used to correct the response surface approximation during the optimization process. Once an optimum design is found with a given f, a new accurate MCS can be calculated at the optimum, a new value of f can be calculated from Equation (7-3) at the new point, and the process repeated until convergence. As a further refinement, we have also updated the response surface for the intermediate probability, centering it about the new optimum.

Beam Design Example

The following cantilever beam example (Figure 7-1) is taken from Wu et al. (2001) to demonstrate the use of the probabilistic sufficiency factor.

Figure 7-1. Cantilever beam subject to vertical and lateral bending

There are two failure modes in the beam design problem. One failure mode is yielding, which is most critical at the corner of the rectangular cross section at the fixed end of the beam


g_S(R, X, Y, w, t) = R - (600 Y / (w t^2) + 600 X / (w^2 t))    (7-4)

where R is the yield strength, and X and Y are the independent horizontal and vertical loads. Another failure mode is the tip deflection exceeding the allowable displacement, D0:

g_D(E, X, Y, w, t) = D0 - D = D0 - (4 L^3 / (E w t)) sqrt((Y / t^2)^2 + (X / w^2)^2)    (7-5)

where E is the elastic modulus and L = 100 in. is the beam length. The random variables are defined in Table 7-1.

Table 7-1. Random variables in the beam design problem
Random variables   X                       Y                        R                          E
Distribution       Normal (500, 100) lb    Normal (1000, 100) lb    Normal (40000, 2000) psi   Normal (29E6, 1.45E6) psi

The cross-sectional area is minimized subject to two reliability constraints, which require the safety indices for the strength and deflection constraints to be larger than three (probability of failure less than 0.00135). The reliability-based design optimization problem, with the width w and thickness t of the beam as deterministic design variables, can be formulated as

minimize A = w t
such that p <= 0.00135    (7-6)

based on the probability of failure, or

minimize A = w t
such that 1 - Psf <= 0    (7-7)

based on the probabilistic sufficiency factor. The reliability constraints formulated in the above two forms are equivalent in terms of safety. The details of the beam design are presented in Chapter 6.
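To make the strength limit state concrete, Equation (7-4) can be checked by direct Monte Carlo simulation. The sketch below is an illustrative reconstruction (not the dissertation's code), evaluated at Wu et al.'s optimum design from Table 7-2; the Psf is computed as the k-th lowest sampled safety factor, with k equal to the target probability times the sample size:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p_target = 100_000, 0.00135
w, t = 2.4484, 3.8884                  # Wu et al. (2001) optimum

# Random variables of Table 7-1
X = rng.normal(500, 100, N)            # horizontal load (lb)
Y = rng.normal(1000, 100, N)           # vertical load (lb)
R = rng.normal(40000, 2000, N)         # yield strength (psi)

# Maximum bending stress at the fixed end, from Equation (7-4)
stress = 600*Y/(w*t**2) + 600*X/(w**2*t)

pof = np.mean(stress > R)              # Monte Carlo probability of failure
sf = np.sort(R/stress)                 # safety factors, ascending
psf = sf[int(p_target*N) - 1]          # Psf: the 135th lowest safety factor
```

At this design the estimated probability of failure is close to the 0.00135 target and the Psf is close to 1, consistent with Table 7-2.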


The optimum design for the case with the strength failure mode, obtained in Chapter 6 using the probabilistic sufficiency factor, and Wu's exact optimum are shown in Table 7-2. The probability of failure is calculated by direct MCS with 100,000 samples based on the exact stress in Equation (7-4). It is seen that the design from the probabilistic sufficiency factor DRS is very close to the exact optimum.

Table 7-2. Optimum designs for strength failure mode obtained from double loop RBDO (minimize objective function F while pof <= 0.00135)
                                          Optima                    Objective function F = wt   Pof/Safety factor from MCS of exact stress
DRS of probabilistic sufficiency factor   w = 2.4526, t = 3.8884    9.5365                      0.00128/1.0021
Exact optimum (Wu et al., 2001)           w = 2.4484, t = 3.8884    9.5204                      0.00135/1.00

Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor

To verify the validity of the proposed method, RBDO converted to sequential deterministic optimization using the probabilistic sufficiency factor is performed for a target probability of failure of 0.00135 (Table 7-3).

Table 7-3. Design history of RBDO based on sequential deterministic optimization with probabilistic sufficiency factor under strength constraint for target probability of failure of 0.00135 (minimize objective function F while safety index >= 3 or pof <= 0.00135)
Probabilistic sufficiency factor   Optima                    Objective function F = wt   Pof/Safety index/Safety factor from MCS of 10^5 samples
Initial design (s1 = 1.0)          w = 1.9574, t = 3.9149    7.6630                      0.49883/0.00293/0.7178
s2 = s1/0.7177 = 1.3932            w = 2.1862, t = 4.3724    9.5589                      0.00140/2.98889/0.9986
s3 = s2/0.9986 = 1.3951            w = 2.1872, t = 4.3744    9.5676                      0.00130/3.01145/1.0006

It is seen that the final design has a slightly higher weight than the optimum in Table 7-2. The reason is that this method employs a safety factor based on the mean


values of the random variables. Better prediction can be achieved by using an MPP-based safety factor.

When the target probability of failure is very low, double loop RBDO is computationally prohibitive. RBDO using coarse MCS and the multi-fidelity technique can also be computationally very expensive due to multiple DRS constructions and corrections. Converting RBDO to deterministic optimization then becomes computationally attractive. The beam design under the strength constraint with a target probability of failure of 0.0000135 is shown in Table 7-4. It is seen that the method provides an acceptable design within one design iteration. The total computational cost is two deterministic optimizations, each followed by a reliability analysis.

Table 7-4. Design history of RBDO based on sequential deterministic optimization with probabilistic sufficiency factor under strength constraint for target probability of failure of 0.0000135 (minimize objective function F while safety index >= 4.1974 or pof <= 0.0000135)
Probabilistic sufficiency factor   Optima                    Objective function F = wt   Pof/Safety factor from MCS of 10^7 samples
Initial design (s1 = 1.0)          w = 1.9574, t = 3.9149    7.6630                      0.5001155/0.6352549
s2 = s1/0.6352549 = 1.5741712      w = 2.2770, t = 4.5541    10.3698                     0.0000130/1.0006877

Reliability-Based Design Optimization Using Coarse MCS with Probabilistic Sufficiency Factor

The beam design with a target probability of failure of 0.0000135 was repeated here, so that the previous response surface and the optimum in Table 7-2 are used as the initial design. The design process and the ranges of the DRS are shown in Tables 7-5 and 7-6. It is seen that the target reliability is achieved in two design iterations of low fidelity DRS


updated by two high fidelity reliability analyses. The total computational cost is three low fidelity design response surface constructions and three high fidelity reliability evaluations. Converting RBDO to sequential deterministic optimization is computationally more efficient than the multi-fidelity technique for problems with low probability of failure. However, it is seen in Table 7-4 and Table 7-5 that the design obtained by the latter approach is more reliable and lighter than the previous one.

Table 7-5. RBDO using variable fidelity technique with probabilistic sufficiency factor under strength constraint (minimize objective function F while safety index >= 4.1974 or pof <= 0.0000135)
Probabilistic sufficiency factor   Optima                    Objective function F = wt   Pof/Safety factor from MCS of 10^7 samples (PsfA)
Initial design (f1 = 1.0)          w = 2.4526, t = 3.8884    9.5367                      0.0012350/0.891938
f2 = 0.891938                      w = 2.2522, t = 4.6000    10.3600                     0.0000160/0.996375
f3 = f2 x 0.996375 = 0.888704      w = 2.4071, t = 4.3000    10.3510                     0.0000108/1.003632

Table 7-6. Range of design variables for design response surfaces
System variables   w              t
DRS range for f2   2.0" to 3.0"   3.6" to 4.6"
DRS range for f3   1.7" to 2.7"   4.3" to 5.3"

Summary

Since the probabilistic sufficiency factor represents a factor of safety relative to a target probability of failure, it provides a measure of safety that can be used more readily than the probability of failure or the safety index by optimizers or designers to estimate the required weight increase to reach a target safety level. The RBDO is converted to an equivalent deterministic optimization using the probabilistic sufficiency factor. The probabilistic sufficiency factor also provides more information in the region of such low probability that the probability of failure or safety index cannot be estimated by


MCS with a given sample size, which is helpful in guiding the optimizer. In order to reduce the computational cost of RBDO, especially for problems with very low probability of failure, a multi-fidelity RBDO approach using the probabilistic sufficiency factor was proposed, which can find a satisfactory design with low probability of failure by applying a correction factor to a response surface created by low-accuracy Monte Carlo simulation. For problems with low probability of failure, the sequential approach is computationally more efficient than the multi-fidelity approach, but the multi-fidelity approach produces a design that is more efficient in terms of the objective function.
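As a concrete illustration of the sequential conversion summarized above, the update loop of Equation (7-1) can be sketched for the beam strength mode. This is an illustrative reconstruction, not the dissertation's code: the deterministic step uses scipy's SLSQP with the mean-value loads as the deterministic response, and each step is followed by a Monte Carlo evaluation of the Psf:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, p_target = 100_000, 0.00135
X = rng.normal(500, 100, N)
Y = rng.normal(1000, 100, N)
R = rng.normal(40000, 2000, N)

def stress(w, t, x, y):                      # bending stress of Equation (7-4)
    return 600*y/(w*t**2) + 600*x/(w**2*t)

def psf(w, t):                               # Psf: k-th lowest safety factor
    sf = np.sort(R/stress(w, t, X, Y))
    return sf[int(p_target*N) - 1]

s = 1.0
w, t = 2.0, 4.0
for _ in range(5):
    # Deterministic optimization: strength reduced by the current safety factor s
    res = minimize(lambda v: v[0]*v[1], [w, t], method='SLSQP',
                   bounds=[(1.0, 4.0), (2.0, 6.0)],
                   constraints={'type': 'ineq',
                                'fun': lambda v: 40000/s - stress(v[0], v[1], 500, 1000)})
    w, t = res.x
    p = psf(w, t)                            # reliability analysis at the optimum
    if abs(p - 1.0) < 5e-3:
        break
    s /= p                                   # Equation (7-1) safety-factor update
```

Under these assumptions the loop converges in two to three iterations to a design close to Table 7-3, with an area near 9.56 and a Psf near 1.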


CHAPTER 8
RELIABILITY-BASED DESIGN OPTIMIZATION OF STIFFENED PANELS USING PROBABILISTIC SUFFICIENCY FACTOR

This chapter studies reliability-based designs of isogrid stiffened panels with random uncertainties in material properties and structural dimensions. The PANDA2 program used for the analysis provides information only on a critical subset of the multiple failure modes in the stiffened panels. Therefore, we cannot follow each individual failure mode, and polynomial response surface approximations (RSA) are fitted to the most critical safety margins. The probability of failure is calculated by Monte Carlo simulation (MCS) using these response surface approximations. A probabilistic sufficiency factor approach (Qu and Haftka, 2003) is employed to facilitate the design optimization.

Introduction

Stiffened panels are often used in aircraft and launch vehicle design to obtain lightweight structures with high bending stiffness. The design optimization of stiffened panels under buckling and strength constraints is characterized by a large number of local optima (Lamberti et al., 2003). Some of these designs are more sensitive to uncertainties than others (Singer et al., 2003). Therefore, it is important to provide designers with reliability-based optimum designs. Here, we consider uncertainties due to variability in material properties and geometric variations induced by the manufacturing process. Noor et al. (2001) investigated the variability in the nonlinear response of stiffened panels associated with


variations in the material and geometric parameters. Fuzzy set analysis was employed to model the uncertainties in the response. Buckling loads of thin-walled stiffened panels are highly sensitive to geometric imperfections. Elseifi et al. (1999) applied a convex model to represent the worst-case geometric imperfection. The results obtained with this convex model were compared to the traditional probabilistic models used to account for uncertainties in imperfections.

This chapter presents reliability-based designs of isogrid stiffened panels. The problem is to minimize the weight of the stiffened panel subject to a reliability constraint. The reliability constraint is evaluated by Monte Carlo simulation, which requires a large number of panel analyses. The PANDA2 software (Bushnell, 1987) is employed to analyze the stiffened panels. PANDA2 uses a combination of approximate physical models, exact closed form (finite-strip analysis) models, and 1-D discrete branched shell analysis models to calculate pre-buckling, buckling, and post-buckling responses with highly efficient analysis. PANDA2 also provides limited deterministic global optimization based on a multiple starting points strategy.

Under compressive loads, the load carrying capacity of stiffened panels is greatly affected by geometric imperfections due to fabrication. The effects of geometric imperfections are taken into account in the PANDA2 software directly, by modifying the effective radius of a cylindrical panel, and indirectly, by redistributing the pre-buckling stress resultants over the various segments of the panel. Various geometric imperfections, such as global, local, inter-ring, and general ovalization of cylindrical panels, are considered, with a sophisticated methodology to identify the most detrimental imperfections (Bushnell, 1996).


Even with the low computational cost of PANDA2 analyses, they cannot be used directly in the MCS, which requires a large number of analyses (thousands to millions). Instead, response surface approximations are employed. For the reliability-based design optimization, an analysis response surface approximation (ARS) is fitted to the most critical margin in the isogrid panel in terms of both the geometric design variables and the material properties, both of which are modeled by random variables. Using the ARS, the probability of failure at every design point can be calculated inexpensively by MCS based on the fitted polynomials. A design response surface approximation (DRS) is then fitted to the probability of failure in order to filter out the noise generated by MCS. The details of reliability analyses and design optimization using Monte Carlo simulation combined with response surface approximation (Qu et al., 2000) were introduced in Chapter 3. Due to the high nonlinearity of the probability of failure and the safety index in the design space, a probabilistic sufficiency factor approach (Qu and Haftka, 2003) is used instead of the probability of failure in the design optimization, as shown in detail in Chapter 6. Example problems of the reliability-based design of isogrid stiffened panels are presented.

Aluminum Isogrid Panel Design Example

An isogrid panel design problem is taken from Lamberti et al. (2003) to demonstrate the reliability-based design methodology.

Reliability-Based Design Problem Formulation

Isogrid stiffened panels are cylindrical shells that have rectangular blade stiffeners positioned along the circumferential direction and along directions angled to the circumferential direction, as shown in Figure 8-1. The tank barrel to be optimized is stiffened externally with J-shaped ring stiffeners, and internally with a blade-shaped isogrid oriented


circumferentially. The length, L1, of the tank barrel is 300 in, and the radius, r, is 160 in. A half cylindrical tank is considered due to symmetry. The design variables are the isogrid spacing, b, the isogrid blade height, h, the thickness of the skin, t1, and the thickness of the isogrid blades, t2. The geometry of the ring is fixed. Following Lamberti et al. (2003), the panel is designed subject to two load cases:

1. internal proof pressure of 35 psi, critical for strength
2. axial compressive load Nx = 1000 lb/in, with an internal (stabilizing) pressure of 5 psi, critical for buckling.

Figure 8-1. Isogrid-stiffened cylindrical shell with internal isogrid and external rings, with isogrid pattern oriented along the circumferential direction for increased bending stiffness in the hoop direction

The weight W of the panel is minimized subject to the constraint that the probability of failure P of the panel must be lower than a given requirement. The reliability-based optimization of stiffened panels is formulated as

minimize W = W(b, h, t1, t2)
such that P <= Pu    (8-1)


where b, h, t1, and t2 are the design variables, W is the weight of the panel, and Pu = 10^-4 is the limit on the probability of failure, P. Geometric imperfections, such as global, local, inter-ring, and general ovalization of cylindrical panels, are taken into account in the PANDA2 software directly by specifying the geometric imperfection amplitudes shown in Table 8-1.

Table 8-1. Amplitudes of geometric imperfection handled by PANDA2 software
Out of roundness (in)   Buckling modal general amplitude (in)   Initial local amplitude (in)
0.8                     0.8                                     0.008

Uncertainties

The material used in the study is Al 2219-T87, which has a density of 0.1 lb/in^3. The two elastic properties (Young's modulus E and Poisson's ratio) and the strength allowable are assumed to be normally distributed and uncorrelated random variables with the mean values and coefficients of variation shown in Table 8-2.

Table 8-2. Uncertainties in material properties (Al 2219-T87) modeled as normal random variables
                           Young's modulus (E)   Poisson's ratio   Stress allowable
Mean value                 0.107E8 psi           0.34              0.58E5 psi
Coefficient of variation   0.03                  0.03              0.05

The four design variables (b, h, t1, and t2) are also random around the design (mean) value due to manufacturing uncertainties. They are assumed to be uniformly distributed with the percentage variation shown in Table 8-3. These data are intended only for illustration.

Table 8-3. Uncertainties in manufacturing process modeled as uniformly distributed random design variables around design (mean) value
                       b    h    t1   t2
Percentage variation   1%   2%   4%   4%
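For a given design (b, h, t1, t2), the uncertainty model of Tables 8-2 and 8-3 can be sampled for Monte Carlo simulation as sketched below. This is an illustration, not the dissertation's code, and it reads the percentages of Table 8-3 as plus/minus bounds about the design value, which is one possible interpretation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                        # Monte Carlo sample size

def mcs_inputs(b, h, t1, t2):
    """Sample the random inputs of Tables 8-2 and 8-3 at a design point."""
    pct = np.array([0.01, 0.02, 0.04, 0.04])       # Table 8-3 percentage variations
    nom = np.array([b, h, t1, t2])
    geom = rng.uniform(nom*(1 - pct), nom*(1 + pct), size=(n, 4))
    E = rng.normal(0.107e8, 0.03*0.107e8, n)       # Young's modulus (psi)
    nu = rng.normal(0.34, 0.03*0.34, n)            # Poisson's ratio
    sa = rng.normal(0.58e5, 0.05*0.58e5, n)        # stress allowable (psi)
    return geom, E, nu, sa

geom, E, nu, sa = mcs_inputs(10.17, 2.049, 0.0986, 0.1261)
```

Each sampled row would then be fed to the response surface approximations described next, rather than to PANDA2 directly.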


Analysis Response Surface Approximation

Reliability analysis of systems with multiple failure modes often employs Monte Carlo simulation combined with approximations to the failure evaluations. PANDA2 provides the margins of those failure modes that are critical for the stiffened panel, and these may change during the design optimization. Consequently, it is impossible to follow each individual failure mode; instead, the most critical safety margins for each of the two load cases are extracted from the PANDA2 analysis report. Analysis response surface approximations are fitted to the worst safety margins in terms of the random variables. Probabilities of failure for the panel system and for each load case are calculated by performing Monte Carlo simulation using the approximations. The deterministic optimum obtained by PANDA2 global optimization in Lamberti et al. (2003) is shown in Table 8-4.

Table 8-4. Deterministic optimum
b (in)   h (in)   t1 (in)   t2 (in)   Weight (lb)
10.17    2.049    0.0986    0.1261    2598

Two analysis response surface approximations (ARS) were fitted to the most critical margins of the two load cases in terms of seven variables, which included the four design random variables and the three material random variables. The statistical design of experiments is Latin Hypercube sampling (LHS, e.g., Wyss and Jorgensen, 1998), where the design random variables were treated as uniformly distributed variables over the ranges shown in Table 8-5. Ranges for normal random variables are automatically handled by Latin Hypercube sampling. Using the ARS, probabilities of failure are calculated by Monte Carlo simulations.
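The LHS design of experiments described above can be sketched as follows. This is a hedged illustration using scipy's `qmc` module, not the original tooling: the uniform design variables are scaled to the Table 8-5 ranges, and the normal material variables of Table 8-2 are mapped through the inverse normal CDF:

```python
import numpy as np
from scipy.stats import norm, qmc

n_pts = 72                                         # twice the 36 quadratic coefficients
u = qmc.LatinHypercube(d=7, seed=0).random(n_pts)  # one stratum per point per variable

# Design random variables: uniform over the ARS ranges of Table 8-5
lo = np.array([9.5, 1.9465, 0.09367, 0.1198])
hi = np.array([10.8, 2.1515, 0.1035, 0.1324])
design = lo + u[:, :4]*(hi - lo)

# Material random variables: normal per Table 8-2 (E, Poisson's ratio, allowable)
mean = np.array([0.107e8, 0.34, 0.58e5])
sd = 0.03*mean
sd[2] = 0.05*mean[2]                               # allowable has c.o.v. 0.05
material = norm.ppf(u[:, 4:], loc=mean, scale=sd)

samples = np.hstack([design, material])            # 72 x 7 training matrix for the ARS
```

The most critical PANDA2 margins evaluated at these 72 points would form the training responses for the quadratic ARS.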


Table 8-5. Range of analysis response surface approximations (inch)
b          h               t1               t2
9.5-10.8   1.9465-2.1515   0.09367-0.1035   0.1198-0.1324

The accuracy of the ARS is evaluated by statistical measures provided by the JMP software (Anon., 2000), which include the adjusted coefficient of multiple determination (R^2 adj.) and the root mean square error (RMSE) predictor. To improve the accuracy of the response surface approximation, polynomial coefficients that were not well characterized were eliminated from the response surface model by using a mixed stepwise regression (Myers and Montgomery, 1995). A quadratic polynomial in seven variables has 36 coefficients. The number of sampling points generated by LHS was selected to be twice the number of coefficients. Table 8-6 shows that the quadratic response surface approximations constructed from LHS with 72 points offer good accuracy.

Table 8-6. Quadratic analysis response surface approximation to the most critical margins using Latin Hypercube sampling of 72 points
                   Critical margins of load case 1   Critical margins of load case 2
R^2 adj.           0.9413                            0.9986
RMSE predictor     0.0203                            0.00273
Mean of response   0.4770                            0.1986

The deterministic design is then evaluated under material and manufacturing uncertainties. The probability of failure of the deterministic optimum under uncertainties in material properties is shown in Table 8-7. The dominant failure mode is local skin triangular buckling.


Table 8-7. Probabilities of failure calculated by Monte Carlo simulation with 1E6 samples
Probability of failure of load case 1   Probability of failure of load case 2   System probability of failure
0                                       693E-6                                  693E-6

Design Response Surfaces

In order to filter the noise in the results of MCS, design response surface approximations (DRS) are constructed to approximate the reliability constraints. Using the ARS constructed in the previous section, the probability of failure at each design point of the design of experiments of the DRS can be evaluated inexpensively by MCS. Since the probability of failure is highly nonlinear, the reliability constraint of Equation (8-1) is replaced by an equivalent constraint in terms of the probabilistic sufficiency factor introduced in Chapter 6. The range of the cubic DRS is shown in Table 8-8, which is a subset of the range of the ARS shown in Table 8-5 in order to avoid extrapolation in the probability calculation. The error statistics are summarized in Table 8-9. It is seen that the accuracy of the DRS to the probability of failure is poor, while the DRS to the PSF is accurate enough for RBDO.

Table 8-8. Range of design response surface approximations (inch)
b          h               t1              t2
9.6-10.7   1.9854-2.1084   0.0974-0.0986   0.1246-0.1271

Table 8-9. Cubic design response surface approximation to the probability of failure and probabilistic sufficiency factor (calculated by Monte Carlo sampling of 1E6 samples)
                   Probability of failure   Probabilistic sufficiency factor
R^2 adj.           0.6452                   0.9990
RMSE predictor     0.000211                 0.00176
Mean of response   0.0000580                1.079
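The DRS construction amounts to an ordinary least-squares fit of a cubic polynomial to noisy PSF values at the DOE points. The self-contained sketch below uses synthetic data, not the dissertation's surfaces, to show how the polynomial basis is built and how the fit filters MCS noise:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """Full polynomial basis up to the given degree, constant term included."""
    cols = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(70, 4))               # 4 coded design variables
psf_true = 1.0 + 0.1*X[:, 0] - 0.05*X[:, 1]**2     # smooth underlying PSF (synthetic)
y = psf_true + rng.normal(0, 0.002, 70)            # MCS noise to be filtered

A = poly_features(X, 3)                            # cubic in 4 variables: 35 terms
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
psf_hat = A @ coef                                 # smoothed DRS predictions
```

Because the PSF varies smoothly while the raw probability of failure does not, the same fit applied to the probability would show the poorer accuracy reported in Table 8-9.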


Optimum Panel Design

Using the DRS of the PSF in Table 8-9, reliability-based design optimization is performed. The optimum design is shown in Table 8-10. Its reliability, evaluated with 1E6 samples by MCS using the ARS whose error statistics are shown in Table 8-6, is shown in Table 8-11. It is seen that the reliability-based design is slightly lighter and safer than the deterministic design. The probability of failure of the reliability-based design is slightly higher than the target, which is due to the errors in the ARS approximation. The design can be refined by another design iteration that uses a smaller design domain centered around it.

Table 8-10. Optimum panel design
b       h        t1       t2       Weight (lb)
10.70   1.9854   0.1004   0.1246   2581

Table 8-11. Probabilities of failure calculated by Monte Carlo simulation of 1E6 samples
Probability of failure of load case 1   Probability of failure of load case 2   System probability of failure
0                                       123E-6                                  123E-6

Composite Isogrid Panel Design Example

Next we consider a composite isogrid panel. The panel and the zero degree direction for the composite angle-ply laminates in the isogrid and skin are shown in Figure 8-2. It is shown in Chapter 4 that an angle-ply laminate with plies oriented close to each other is a good design for cryogenic environments due to reduced residual strains. Two additional design variables are the ply angles of the angle-ply laminates in the isogrid and skins, θ1 and θ2, respectively. Reliability-based optimization seeks the lightest structure


satisfying the reliability constraint, which is expressed as the probability of failure of the panel, P, being less than Pu = 10^-4.

Figure 8-2. Isogrid-stiffened cylindrical shell with internal isogrid and external rings, with isogrid pattern oriented along the circumferential direction for increased bending stiffness in the hoop direction; the zero degree direction for the composite laminates in the isogrid and skin panel is shown

Thermal loading from the stress-free temperature (352°F) to the service temperature (-423°F) is considered together with the following two load cases:

1. internal proof pressure of 35 psi, critical for strength;
2. axial compressive load Nx = 1000 lb/in, with an internal (stabilizing) pressure of 5 psi, critical for buckling.

The amount of geometric imperfection is the same as that shown in Table 8-1. The material used in the study is AS4, which has a density of 0.057 lb/in^3. For the composite panel, the residual stress induced by cooling from the cure temperature to the service temperature must be considered. The residual stress calculation needs the coefficients of thermal expansion (CTE) along and transverse to the fiber direction, α1 and α2, respectively. Since it is shown in Chapter 3 that α1 is two orders of magnitude smaller than α2, α1 is ignored in the analysis to reduce the number of variables in the response surface


approximation. The uncertainties in the material properties are represented by 11 random variables: six for the elastic properties (E1, E2, G12, ν21, G23, and α2) and five for the strength allowables (σ1t, σ1c, σ2t, σ2c, and τmax), which are assumed to be normally distributed and uncorrelated. The mean values and coefficients of variation of the uncertainties in the material elastic properties and strength allowables are shown in Table 8-12 and Table 8-13, respectively. The six design variables (b, h, t1, θ1, t2, and θ2) are assumed to be uniformly distributed with the variation given in Table 8-14. These data are intended only for illustration.

Table 8-12. Uncertainties in material elastic properties (AS4) modeled as normal distribution with coefficient of variation of 0.03
                                                     Mean value
Young's modulus along fiber direction (E1)           20.6E6 psi
Young's modulus transverse to fiber direction (E2)   1.49E6 psi
In-plane shear modulus (G12)                         1.04E6 psi
Minor Poisson's ratio (ν21)                          1.95E-2
Transverse shear modulus (G23)                       0.75E6 psi
CTE across fiber direction (α2)                      0.15E-4

Table 8-13. Uncertainties in material strength properties (AS4) modeled as normal distribution with coefficient of variation of 0.05
             σ1t          σ1c          σ2t        σ2c         τmax
Mean value   331000 psi   208900 psi   8300 psi   33100 psi   10300 psi

Table 8-14. Variation of the random design variables around nominal design value
            b    h    t1   θ1   t2   θ2
Variation   1%   2%   4%   1°   4%   1°

Deterministic Design

The design problem is formulated as


minimize W = W(b, h, t1, θ1, t2, θ2)
such that σai / (si σci) - 1 >= 0,  i = 1, ..., N    (8-2)

where N is the total number of failure modes, σai and σci are the allowable and the response for the ith failure mode, and si is the safety factor for the ith failure mode. Deterministic optimization uses safety factors to account for uncertainties. A set of safety factors typically used in the design of stiffened panels for aerospace applications (Lamberti et al., 2003) is shown in Table 8-15.

Table 8-15. Safety factors used in deterministic design
General instability   Local buckling   Stiffener buckling   Strength failure
1.4                   1.2              1.2                  1.2

The deterministic optimum found by using the global optimization capability in the PANDA2 software is shown in Table 8-16. The composite design is heavier than the aluminum design of the previous section, because the composite design is substantially penalized by the thermal loading, as shown in Chapter 4.

Table 8-16. Deterministic optimum (inch, degree, lb)
b       h       t1        θ1      t2        θ2      Weight
9.999   1.583   0.02229   87.57   0.03950   14.84   3386

Analysis Response Surface Approximation

The most critical safety margins of the two load cases are extracted from the PANDA2 analysis report. Two analysis response surfaces (ARS) were fitted to the critical margins of the two load cases in terms of seventeen variables, which included the six design random variables and the 11 material random variables. The statistical design of experiments is Latin Hypercube sampling (LHS). The range of the design random variables for the ARS was chosen as 5% based


on the values of the optimal deterministic design. Using the ARS, probabilities of failure are calculated by Monte Carlo simulations. A quadratic polynomial of 17 variables has 171 coefficients (= 18 x 19 / 2). The number of sampling points generated by LHS was selected to be twice the number of coefficients. Table 8-17 shows that the quadratic response surfaces constructed from LHS with 342 points offer good accuracy. The quadratic ARS over the 5% design perturbation is employed to perform the reliability analysis and design optimization.

Table 8-17. Quadratic analysis response surface approximation to the worst margins using Latin Hypercube sampling of 342 points
                   Critical margins of load case 1   Critical margins of load case 2
R^2 adj.           0.9686                            0.9772
RMSE predictor     0.01615                           0.01080
Mean of response   0.41817                           0.266

The deterministic design is then evaluated under material and manufacturing uncertainties. The probability of failure of the deterministic optimum under these uncertainties is shown in Table 8-18. The dominant failure mode is in-plane shear and transverse tensile strength failure for load case 1 (internal proof pressure of 35 psi). The high failure probability can be reduced by using increased safety factors or reliability-based design optimization.

Table 8-18. Probabilities of failure calculated by Monte Carlo simulation of 1E6 samples (material and manufacturing uncertainties)
Probability of failure of load case 1 (strength)   Probability of failure of load case 2 (buckling)   System probability of failure   System probabilistic sufficiency factor for 1E-4 probability
89.78E-4                                           0                                                  89.78E-4                        0.837178

As shown by the aluminum panel design example, RBDO using a design response surface approximation around the deterministic optimum yields a reliability-based design near


the deterministic optimum. The approximation domain of the design response surface approximation, and thus the search domain of the optimization, is rather limited compared to the whole design space. In order to fully explore the potential of reliability-based design optimization, RBDO and design response surface construction must be performed repeatedly to cover the entire design space, which is computationally prohibitive. To address this problem, RBDO is converted to an equivalent deterministic optimization in order to reduce the computational cost and explore the entire design space, as shown in Chapter 7.

Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor

For the deterministic design, where the deterministic safety factors are given in Table 8-15, reliability analysis using Monte Carlo simulation shows that the critical failure mode is the strength failure mode, with the probabilistic sufficiency factor given in Table 8-18. The deficient safety factor of the strength failure mode can be corrected by using the probabilistic sufficiency factor, which is the factor to be applied to the response to meet the target probability of failure requirement. In the next design iteration, the safety factors of those failure modes that were not active in the last design iteration remain the same. The safety factors of the critical failure modes are chosen to be

si^(k+1) = si^(k) / Psf^(k)    (8-3)

which are used to reduce the allowables σai of the corresponding failure modes. Since PANDA2 approximates the safety margins during optimization, and it usually errs on the conservative side in the approximation, the actual safety margins from the


PANDA2 analysis, which does not employ the approximations used in optimization, might be slightly higher than zero due to the approximation errors. The result is that the PANDA2 optimum design may correspond to a slightly higher safety factor than the one used in the optimization. Another correction formula, based on the actual safety margin m^(k) from the PANDA2 analysis, is proposed as follows

si^(k+1) = si^(k) (1 + m^(k)) / Psf^(k)    (8-4)

The correction is again only applied to those safety factors that correspond to active failure modes. The optimization problem is formulated as

minimize W = W(b, h, t1, θ1, t2, θ2)
such that σai / (si^(k+1) σci) - 1 >= 0,  i = 1, ..., N    (8-5)

The process is repeated until the optimum converges and the reliability constraint is satisfied. Two design iterations are carried out with PANDA2 optimization using Equations (8-3) and (8-4), respectively. The results are shown in Table 8-19 and Table 8-20. It is seen that the second design (s2 = 1.437514) in Table 8-19 has a much lower probability of failure than the desired probability of failure predicted by using the safety factor calibrated by the PSF. It must be noted that the critical failure mode switches from in-plane shear failure of the isogrid at the first design (s1 = 1.2) to transverse tensile failure of the tank skin at the second design, which causes the oscillation in the design convergence.
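A quick numeric check of the two updates, using the second-iteration values reported in Tables 8-19 and 8-20 and the actual margin of 0.0532 stated in the discussion; the form of Equation (8-4) is used as reconstructed here:

```python
# Second design iteration values (Tables 8-19/8-20 and the accompanying text)
s2 = 1.437514        # safety factor of the second design
psf2 = 1.179381      # PSF from MCS at that design
m2 = 0.0532          # actual PANDA2 safety margin (would be zero without approximation)

s3_direct = s2/psf2             # Equation (8-3): direct safety-factor correction
s3_margin = s2*(1 + m2)/psf2    # Equation (8-4): correction via the actual margin
```

The direct update reproduces the tabulated s3 = 1.218872, while the margin-based update yields a larger safety factor, which is what drives the different third designs in the two tables.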


Table 8-19. Design history of RBDO based on sequential deterministic optimization using probabilistic sufficiency factor to correct the safety factor directly by Equation (8-3) (minimize objective function W while pof <= 0.0001)
Safety factor for strength failure mode   [b, h, t1, θ1, t2, θ2]                              Objective function: weight (lb)   Pof/PSF from MCS of 1E6 samples*   Critical strength failure mode (load case 1)
Initial design (s1 = 1.2)                 [9.999, 1.583, 0.02229, 87.57, 0.0395, 14.84]       3386                              0.008978/0.837178                  Isogrid in-plane shear
s2 = s1/0.837178 = 1.437514               [9.997, 0.9701, 0.01828, 89.54, 0.07011, 0.1484]    3192                              0.00000/1.179381                   Skin transverse tensile
s3 = s2/1.179381 = 1.218872               [9.997, 0.8254, 0.005205, 90.00, 0.07087, 0.9826]   1878                              0.001875/0.944237                  Skin transverse tensile
* pof = probability of failure, PSF = probabilistic sufficiency factor

Table 8-20. Design history of RBDO based on sequential deterministic optimization using probabilistic sufficiency factor to correct the safety factor by the actual safety margin using Equation (8-4) (minimize objective function W while pof <= 0.0001)
Safety factor for strength failure mode   [b, h, t1, θ1, t2, θ2]                              Objective function: weight (lb)   Pof/PSF from MCS of 1E6 samples*   Critical strength failure mode (load case 1)
Initial design (s1 = 1.2)                 [9.999, 1.583, 0.02229, 87.57, 0.0395, 14.84]       3386                              0.008978/0.837178                  Isogrid in-plane shear
s2 = s1/0.837178 = 1.437514               [9.997, 0.9701, 0.01828, 89.54, 0.07011, 0.1484]    3192                              0.00000/1.179381                   Skin transverse tensile
s3 = s2/1.179381 = 1.218872               [9.997, 1.137, 0.006349, 90.00, 0.05445, 0.001495]  2058                              0.000080/1.002482                  Skin transverse tensile
* pof = probability of failure, PSF = probabilistic sufficiency factor

It is seen that the third design in Table 8-19 is quite different from that in Table 8-20, because of the different correction formulas used.
The designs in Table 8-19 use a direct correction of the safety factors employed in the previous design iteration, as shown by Equation (8-3), while the designs in Table 8-20 employ a correction based on the actual safety margins of the previous design iteration, as shown by Equation (8-4). Comparing the third

PAGE 122

designs in Tables 8-19 and 8-20, it is seen that the correction with actual safety margins is more accurate than the one using the safety factor directly, because PANDA2 Superopt employs approximations in the optimization process, which might result in a slightly positive actual safety margin at the optimum design. For the second design (s2 = 1.437514), the actual safety margin of the PANDA2 design is 0.0532, which should be zero if PANDA2 optimization did not employ approximations. This residual margin results in the difference between the third designs in Tables 8-19 and 8-20. Therefore, the second correction formula, Equation (8-4), is more accurate than the first one, Equation (8-3).

Reliability-Based Design Optimization Using DIRECT Optimization

The discrepancy between the safety factor and the actual safety margin can also be eliminated by using an optimizer such as DIRECT (Jones et al. 1993) that does not use approximations and relies only on PANDA2 analyses. However, DIRECT is a sampling-based global optimization algorithm that is computationally more expensive than the multiple-starting-point global optimization in PANDA2. Therefore, DIRECT is chosen to be performed after several PANDA2 design iterations.

DIRECT Global Optimization Algorithm

DIRECT is a modified Lipschitzian optimization algorithm (Jones et al. 1993). The name is an acronym for DIvide RECTangles, which is a simple description of how the algorithm works. It works in a space normalized to 0 <= xi <= 1 for i = 1, 2, ..., N, where N is the dimensionality of the design space. DIRECT begins by evaluating the function to be minimized, f, at the center of the design space, c1 = (0.5, ..., 0.5) (the 0th iteration). It then evaluates the function at the points c1 ± (1/3)ei (the 1st iteration). Each dimension then has two function evaluations, and DIRECT defines

PAGE 123

    w_i = min( f(c1 + (1/3)e_i), f(c1 - (1/3)e_i) ),  for i = 1, 2, ..., N    (8-6)

Figure 8-3. (a) First iteration of DIRECT for a two-dimensional example, the Goldstein-Price (GP) function (Finkel, 2003), with potentially optimal boxes shaded and subsequently divided along the longest dimension(s), and (b) further iterations on the GP function

The design space is divided into thirds along the dimension with the minimum wi. The center box is then divided again into thirds along the dimension with the next smallest wi, and so on. Thus, the minimum function value found in the iteration lies in the largest subdivision of the original box (Figure 8-3(a)). DIRECT then determines potentially optimal boxes, using both the function value at the center of a box and the size of the box. DIRECT divides the potentially optimal boxes along their longest dimension(s), using the same procedure as in the initial division. Thus, if, as in the initial division, a box is a hypercube in n <= N dimensions with side lengths greater than its other side lengths, it is divided along those n dimensions. Multiple

PAGE 124

iterations of DIRECT for a two-dimensional space might look like Figure 8-3(b); given infinite iterations, DIRECT will evaluate every point in the design space. A more detailed explanation of this procedure is given in Jones et al. (1993) and Finkel (2003).

Reliability-Based Design Optimization Using DIRECT Optimization with Safety Factor Corrected by Probabilistic Sufficiency Factor

The DIRECT optimizer is employed to avoid the problem that PANDA2 Superopt may not drive the actual safety margin to zero. DIRECT optimization starts from the last design in Table 8-20. The optimum is shown in Table 8-21. It is seen that the design indeed satisfies the reliability constraint.

Table 8-21. Design history of RBDO based on DIRECT deterministic optimization with the probabilistic sufficiency factor correcting the safety factor by the actual safety margin using Equation (8-4). Minimize objective function W while pof <= 0.0001.

Safety factor for strength failure mode | [b, h, t1, θ1, t2, θ2] | Objective function: weight (lb) | pof/PSF from MCS of 10^6 samples | Critical strength failure mode (load case 1)
s3 = 1.514/1.179381 = 1.283725 | [5.895, 0.9167, 0.0005, 89.81, 0.0400, 1.67] | 1952.4 | 0.000100/0.999641 | Skin transverse-tensile

pof = probability of failure; PSF = probabilistic sufficiency factor

Summary

Reliability-based design optimization of aluminum and composite isogrid stiffened panels is investigated. The uncertainties in the panels, including both material properties and manufacturing process variations, are modeled by random variables. The reliability-based design optimization is carried out using Monte Carlo simulation and response surface approximation. Due to the high nonlinearity of the probability of failure, the probabilistic sufficiency factor approach is employed to construct the design response surface approximation.
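The center-sampling and dimension-ranking step of Equation (8-6) can be sketched as follows. This is a fragment only, assuming the normalized unit hypercube: a full DIRECT implementation also tracks box sizes and the potentially optimal set, and the helper name and test function below are illustrative, not part of the DIRECT code used here.

```python
def direct_first_iteration(f, n):
    """Evaluate f at the center of the unit hypercube and at c +/- e_i/3,
    and rank dimensions by w_i = min(f(c + e_i/3), f(c - e_i/3)).
    Dimensions are then trisected in order of increasing w_i."""
    center = [0.5] * n
    f_center = f(center)
    w = []
    for i in range(n):
        plus = center.copy()
        minus = center.copy()
        plus[i] += 1.0 / 3.0
        minus[i] -= 1.0 / 3.0
        w.append(min(f(plus), f(minus)))
    order = sorted(range(n), key=lambda i: w[i])  # division order
    return f_center, w, order

# A hypothetical test function: steeper in x2 than in x1, so the space
# is trisected along x1 first (its w_i is smaller).
f = lambda x: (x[0] - 0.5) ** 2 + 10.0 * (x[1] - 0.5) ** 2
fc, w, order = direct_first_iteration(f, 2)
```

Keeping the minimum along the flatter dimension inside the largest remaining box is what lets DIRECT balance global exploration against local refinement.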

PAGE 125

RBDO using design response surface approximations over the entire design space is computationally prohibitive, since it requires a large number of design response surface approximations. In order to fully explore the potential of reliability-based design optimization, the probabilistic sufficiency factor is employed to convert the reliability-based design optimization into an equivalent sequential deterministic optimization, which enables the use of the global optimizer DIRECT. DIRECT optimization found an optimum design that is both lighter and safer than the deterministic optimum obtained by PANDA2 global optimization.
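The probabilistic sufficiency factor used throughout this chapter can be estimated from the same Monte Carlo samples used for the probability of failure: it is the target-probability quantile of the sampled safety-factor distribution. A minimal sketch follows, assuming a hypothetical safety-factor sampler in place of the PANDA2-based limit state; the function name is mine.

```python
import random

def psf_by_mcs(safety_factor_sampler, n_samples, p_target):
    """Estimate the probabilistic sufficiency factor (PSF) by Monte Carlo
    simulation: the PSF is the p_target-quantile of the safety-factor
    distribution, i.e. the k-th smallest sampled safety factor with
    k = n_samples * p_target.  PSF < 1 means the reliability constraint
    is violated; PSF > 1 indicates margin for weight reduction."""
    factors = sorted(safety_factor_sampler() for _ in range(n_samples))
    k = max(1, int(round(n_samples * p_target)))
    return factors[k - 1]

# Illustrative only: a made-up limit state with normally distributed
# capacity (mean 1, c.o.v. 0.1) over a unit response.
rng = random.Random(0)
psf_median = psf_by_mcs(lambda: rng.gauss(1.0, 0.1), 2000, 0.5)
```

Because the PSF is a quantile rather than a tail probability, a coarse sample gives a usable estimate even at designs far from the target reliability, which is what makes the design response surface fitted to it much smoother than one fitted to the probability of failure.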

PAGE 126

APPENDIX A
MATERIAL PROPERTIES OF IM600/133

The material properties of IM600/133 measured by Aoki et al. (2000) and their polynomial curve fits are shown as functions of temperature (°F) in the following figures.

Figure A-1. Quadratic fit to α1 (1.0E-6/°F)

Figure A-2. Sixth-order fit to α2 (1.0E-4/°F)

PAGE 127

Figure A-3. Quadratic fit to E1 (Mpsi)

Figure A-4. Quartic fit to E2 (Mpsi)

PAGE 128

Figure A-5. Cubic fit to G12 (Mpsi)

Figure A-6. Quartic fit to ν12

PAGE 129

APPENDIX B
CONTOUR PLOTS OF THREE DESIGN RESPONSE SURFACE APPROXIMATIONS AND TEST POINTS ALONG THE CURVE OF TARGET RELIABILITY

To compare critical errors of the design response surface approximations, 11 test points were selected along a curve of target reliability (probability of failure = 0.00135) for each design response surface approximation. The contour plots and test points employed in the error calculation for the probabilistic sufficiency factor, probability of failure, and safety index response surface approximations are shown in the following figures, plotted over w (inch) and t (inch). The average percentage errors at these test points are given in Table 6-4.

Figure B-1. Contour plot of the probabilistic sufficiency factor design response surface approximation and test points along the curve of target reliability

PAGE 130

Figure B-2. Contour plot of the probability of failure design response surface approximation and test points along the curve of target reliability. The negative values of probability of failure are due to the interpolation errors of the design response surface approximation

Figure B-3. Contour plot of the safety index design response surface approximation and test points along the curve of target reliability

PAGE 131

LIST OF REFERENCES

Aoki, T., T. Ishikawa, H. Kumazawa, and Y. Morino (2000), "Mechanical Performance of CF/Polymer Composite Laminates Under Cryogenic Conditions." Proceedings of the 41st AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Atlanta, Georgia, AIAA Paper 2000-1605.

Aoki, T., T. Ishikawa, H. Kumazawa, and Y. Morino (1999), "Mechanical Behavior of CF/Polymer Composite Laminates under Cryogenic Environment." 12th International Conference on Composite Materials (ICCM-12), Paris, France, Paper No. 172.

Balabanov, V.O. (1997), "Development of Approximations for HSCT Wing Bending Material Weight using Response Surface Methodology." Ph.D. dissertation, Virginia Polytechnic Institute and State University, Blacksburg, VA. (Available at http://scholar.lib.vt.edu/theses/available/etd-82597-124631/, last checked March 2004)

Birger, I.A. (1970), "Safety Factors and Diagnostics." Problems of Mechanics of Solid Bodies, pp. 71-82, Sudostroenie, Leningrad (in Russian).

Breitung, K. (1984), "Asymptotic Approximation for Multinormal Integrals." Journal of Engineering Mechanics, ASCE, Vol. 110, No. 3, pp. 357-366.

Bucher, C. G. and Bourgund, U. (1990), "A Fast and Efficient Response Surface Approach for Structural Reliability Problems." Structural Safety, Vol. 7, pp. 57-66.

Elishakoff, I. (2001), "Interrelation between Safety Factors and Reliability." NASA/CR-2001-211309.

Fox, E. P. (1996), "Issues in Utilizing Response Surface Methodologies for Accurate Probabilistic Design." Proceedings of the 37th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Paper No. AIAA-1996-1496.

Fox, E. P. (1994), "The Pratt & Whitney Probabilistic Design System." Proceedings of the 35th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Paper No. AIAA-1994-1442.

Fox, E. P. (1993), "Methods of Integrating Probabilistic Design within an Organization's Design System Using Box-Behnken Matrices." Proceedings of the 34th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Paper No. AIAA-1993-1380.

PAGE 132

Freudenthal, A.M. (1962), "Safety, Reliability and Structural Design." ASCE Transactions, Vol. 127, pp. 304-323.

Gürdal, Z., Haftka, R. T. and Hajela, P. (1999), Design and Optimization of Laminated Composite Materials, Wiley, New York.

Haftka, R. T. (1989), "Integrated Nonlinear Structural Analysis and Design." AIAA Journal, Vol. 27, No. 11, pp. 1622-1627.

Haldar, A. and Mahadevan, S. (2000), Reliability Assessment Using Stochastic Finite Element Analysis, John Wiley & Sons, New York.

Harbitz, A. (1986), "An Efficient Sampling Method for Probability of Failure Calculation." Structural Safety, Vol. 3, pp. 109-115.

Isukapalli, S. S. (1999), Uncertainty Analysis of Transport-Transformation Models. Ph.D. dissertation submitted to the Graduate School, New Brunswick, January 1999. (Available at http://www.ccl.rutgers.edu/~ssi/thesis/thesis.html, last checked March 2004)

Kamal, H.A. and Ayyub, B.M. (2000), "Variance Reduction Techniques for Simulation-based Structural Reliability Assessment of Systems." Proceedings of the 8th ASCE Joint Specialty Conference on Probabilistic Mechanics and Structural Reliability, University of Notre Dame, Paper No. PMC 2000-155.

Khuri, A. I. and Cornell, J. A. (1996), Response Surfaces: Designs and Analyses, 2nd ed., Marcel Dekker, New York.

Kwak, B. M. and Lee, T. W. (1987), "Sensitivity Analysis for Reliability-based Optimization using an AFOSM." Computers and Structures, Vol. 27, No. 3, pp. 399-406.

Kwon, Y.W. and Berner, J. M. (1997), "Matrix Damage of Fibrous Composites: Effects of Thermal Residual Stresses and Layer Sequences." Computers and Structures, Vol. 64, No. 1-4, pp. 375-382.

Maglaras, G. and Nikolaidis, E. (1990), "Integrated Analysis and Design in Stochastic Optimization." International Journal of Structural Optimization, Vol. 2, pp. 163-172.

Melchers, R. E. (1999), Structural Reliability Analysis and Prediction, Wiley, New York.

Myers, R. H. and Montgomery, D. C. (1995), Response Surface Methodology, Wiley, New York.

Padmanabhan, D., Rodriguez, J.F., Perez, V.M. and Renaud, J.E. (2000), "Sequential Approximate Optimization Using Variable Fidelity Response Surface Approximations." Proceedings of the 41st AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Atlanta, GA, AIAA Paper 2000-1391.

PAGE 133

Park, C. H., and McManus, H. L. (1996), "Thermally Induced Damage in Composite Laminates: Predictive Methodology and Experimental Investigation." Composites Science and Technology, Vol. 56, pp. 1209-1219.

Ponslet, E., Maglaras, G., Haftka, R. T., Nikolaidis, E., and Cudney, H. H. (1995), "Comparison of Probabilistic and Deterministic Optimizations Using Genetic Algorithms." International Journal of Structural Optimization, Vol. 10, pp. 247-257.

Qu, X. and Haftka, R.T. (2003a), "Design under Uncertainty Using Monte Carlo Simulation and Probabilistic Sufficiency Factor." Proceedings of the ASME DETC Conference, Chicago, IL.

Qu, X. and Haftka, R. T. (2003b), "Reliability-based Design Optimization of Stiffened Panels." Proceedings of the 4th International Symposium on Uncertainty Modeling and Analysis, College Park, MD.

Qu, X. and Haftka, R.T. (2004), "Reliability-based Design Optimization Using Probabilistic Sufficiency Factor." Journal of Structural and Multidisciplinary Optimization, accepted, in print.

Qu, X., Venkataraman, S., Haftka, R. T., and Johnson, T. (2003), "Deterministic and Reliability-based Optimization of Composite Laminates for Cryogenic Environments." AIAA Journal, Vol. 41, No. 10, pp. 2029-2036.

Qu, X., Venkataraman, S., Haftka, R.T., and Johnson, T. F. (2001), "Reliability, Weight, and Cost Tradeoffs in the Design of Composite Laminates for Cryogenic Environments." Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Seattle, WA, AIAA Paper 2001-1327.

Qu, X., Venkataraman, S., Haftka, R.T., and Johnson, T. F. (2000), "Response Surface Options for Reliability-based Optimization of Composite Laminate." Proceedings of the 8th ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability, Notre Dame, Indiana, Paper No. PMC 2000-131.

Rackwitz, R. (2000), "Reliability Analysis: Past, Present and Future." Proceedings of the 8th ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability, Notre Dame, Indiana, Paper No. PMC 2000-RRR.

Rackwitz, R. and Fiessler, B. (1978), "Structural Reliability Under Combined Random Load Sequences." Computers and Structures, Vol. 9, No. 5, pp. 484-494.

Rajashekhar, M. R. and Ellingwood, B. R. (1993), "A New Look at the Response Surface Approach for Reliability Analysis." Structural Safety, Vol. 12, pp. 205-220.

PAGE 134

Romero, V. J. and Bankston, S. D. (1998), "Efficient Monte Carlo Probability Estimation with Finite Element Response Surfaces Built from Progressive Lattice Sampling." Proceedings of the 39th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Long Beach, CA, pp. 1103-1119.

Romero, V.J., Qu, X. and Haftka, R. T. (2002), "Initiation of Method Costing Software for Uncertainty Propagation and Decision Analysis." Proceedings of the 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Denver, CO, Paper No. AIAA-2002-1463.

Rubinstein, R.Y. (1981), Simulation and the Monte Carlo Method, Wiley, New York.

SAS Institute (2000), JMP Statistics and Graphics Guide, Version 4.0, SAS Institute Inc., Cary, NC.

Singer, T., Qu, X. and Haftka, R. (2003), "Global Optimization of a Composite Tank Structure Using the DIRECT Algorithm." Proceedings of the American Society for Composites 18th Annual Technical Conference, Gainesville, FL.

Sorensen, J. D. (1987), "Reliability Based Optimization of Structural Systems." Proceedings of the IFIP 13th Conference on System Modeling and Simulation.

Sues, R., Cesare, M., Pageau, S. and Wu, J. (2000), "Reliability Based MDO for Aerospace Systems." Proceedings of the 8th Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA, AIAA Paper 2000-4804.

Sues, R. H., Oakley, D. R. and Rhodes, G. S. (1996), "Portable Parallel Computing for Multidisciplinary Stochastic Optimization of Aeropropulsion Components." Final Report, NASA Contract NAS3-27288, June 1996.

Thacker, B.H., Riha, D.S., Millwater, H.R., and Enright, M.P. (2001), "Errors and Uncertainties in Probabilistic Engineering Analysis." Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Seattle, WA, AIAA Paper No. 2001-1327.

Tu, J., Choi, K. K., and Park, Y. H. (2000), "Design Potential Method for Robust System Parameter Design." AIAA Journal, Vol. 39, No. 4, pp. 667-677.

Wang, L. and Grandhi, R. V. (1994), "Structural Reliability Optimization Using an Efficient Safety Index Calculation Procedure." Proceedings of the 35th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Hilton Head, SC, AIAA Paper 94-1416.

Wu, Y-T., Shin, Y., Sues, R. and Cesare, M. (2001), "Safety-Factor Based Approach for Probability-based Design Optimization." Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Seattle, WA, AIAA Paper 2001-1522.

PAGE 135

Wu, Y-T. and Wang, W. (1998), "Efficient Probabilistic Design by Converting Reliability Constraints to Approximately Equivalent Deterministic Constraints." Journal of Integrated Design and Process Sciences (JIDPS), Vol. 2, No. 4, pp. 13-21.

Wyss, G. D. and Jorgensen, K. H. (1998), "A User's Guide to LHS: Sandia's Latin Hypercube Sampling Software." Albuquerque, NM, SAND98-0210.

Yang, J.-S. and Nikolaidis, E. (1991), "Design of Aircraft Wings Subjected to Gust Loads: A Safety Index Based Approach." AIAA Journal, Vol. 29, No. 5, pp. 804-812.

PAGE 136

BIOGRAPHICAL SKETCH

Xueyong Qu was born in Changchun, China, in 1974. Mr. Qu received his Bachelor of Science in Aircraft Design from Nanjing University of Aeronautics and Astronautics in July 1996, and a Master of Science in Aircraft Design from the same university in April 1999. His interest in conducting research motivated him to join the Structural and Multidisciplinary Optimization Group of Professor Haftka at the University of Florida in June 1999, to pursue his Ph.D. degree in Aerospace Engineering.


Permanent Link: http://ufdc.ufl.edu/UFE0004395/00001

Material Information

Title: Reliability-Based Structural Optimization Using Response Surface Approximations and Probabilistic Sufficiency Factor
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0004395:00001


Lastly, I would like to thank my beautiful and loving wife, Guiqin Wang. Without

her love, patience and support I would not have completed this dissertation.


















TABLE OF CONTENTS

                                                                          page

ACKNOWLEDGMENTS .......... iv

LIST OF TABLES .......... ix

LIST OF FIGURES .......... xiii

ABSTRACT

CHAPTER

1 INTRODUCTION .......... 1

    Focus .......... 2
    Objectives and Scope .......... 4

2 LITERATURE SURVEY: METHODS FOR RELIABILITY ANALYSIS AND RELIABILITY-BASED DESIGN OPTIMIZATION .......... 6

    Review of Methods for Reliability Analysis .......... 7
        Problem Definition .......... 7
        Monte Carlo Simulation
        Monte Carlo Simulation Using Variance Reduction Techniques .......... 8
        Moment-Based Methods .......... 9
        Response Surface Approximations
    Reliability-Based Design Optimization Frameworks .......... 12
        Double Loop Approach .......... 13
        Inverse Reliability Approach .......... 14
            Design potential approach .......... 15
            Partial safety factor approach (Partial SF) .......... 16
    Summary .......... 17

3 RESPONSE SURFACE APPROXIMATIONS FOR RELIABILITY-BASED DESIGN OPTIMIZATION

    Stochastic Response Surface (SRS) Approximation for Reliability Analysis .......... 20
    Analysis Response Surface (ARS) Approximation for Reliability-Based Design Optimization
    Design Response Surface (DRS) Approximation
    Analysis and Design Response Surface Approach .......... 24
    Statistical Design of Experiments for Stochastic and Analysis Response Surfaces .......... 25

4 DETERMINISTIC DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC ENVIRONMENTS .......... 28

    Introduction .......... 28
    Composite Laminates Analysis under Thermal and Mechanical Loading .......... 29
    Properties of IM600/133 Composite Materials .......... 30
    Deterministic Design of Angle-Ply Laminates .......... 34
        Optimization Formulation .......... 35
        Optimizations without Matrix Cracking .......... 36
        Optimizations Allowing Partial Matrix Cracking
        Optimizations with Reduced Axial Load Ny .......... 39

5 RELIABILITY-BASED DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC ENVIRONMENTS .......... 41

    Reliability-Based Design Optimization .......... 41
        Problem Formulation .......... 42
        Response Surface Approximation for Reliability-Based Optimization .......... 43
            Analysis Response Surfaces .......... 43
            Design Response Surfaces .......... 45
        Refining the Reliability-Based Design .......... 46
        Quantifying Errors in Reliability Analysis .......... 47
    Effects of Quality Control on Laminate Design .......... 48
        Effects of Quality Control on Probability of Failure .......... 49
        Effects of Quality Control on Optimal Laminate Thickness .......... 50
    Effects of Other Improvements in Material Properties .......... 51
    Summary .......... 54

6 PROBABILISTIC SUFFICIENCY FACTOR APPROACH FOR RELIABILITY-BASED DESIGN OPTIMIZATION .......... 56

    Introduction .......... 56
    Probabilistic Sufficiency Factor
        Using Probabilistic Sufficiency Factor to Estimate Additional Structural Weight to Satisfy the Reliability Constraint .......... 62
    Reliability Analysis Using Monte Carlo Simulation .......... 64
        Calculation of Probabilistic Sufficiency Factor by Monte Carlo Simulation .......... 66
        Monte Carlo Simulation Using Response Surface Approximation .......... 68
    Beam Design Example .......... 71
        Design with Strength Constraint
        Design with Strength and Displacement Constraints
    Summary .......... 79

7 RELIABILITY-BASED DESIGN OPTIMIZATION USING DETERMINISTIC OPTIMIZATION AND MULTI-FIDELITY TECHNIQUE .......... 80

    Introduction .......... 80
    Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor .......... 81
    Reliability-Based Design Optimization Using Multi-Fidelity Technique with Probabilistic Sufficiency Factor .......... 82
    Beam Design Example
        Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor
        Reliability-Based Design Optimization Using Coarse MCS with Probabilistic Sufficiency Factor .......... 87
    Summary .......... 88

8 RELIABILITY-BASED DESIGN OPTIMIZATION OF STIFFENED PANELS USING PROBABILISTIC SUFFICIENCY FACTOR .......... 90

    Introduction .......... 90
    Aluminum Isogrid Panel Design Example .......... 92
        Reliability-Based Design Problem Formulation .......... 93
        Uncertainties .......... 94
        Analysis Response Surface Approximation .......... 95
        Design Response Surfaces .......... 97
        Optimum Panel Design .......... 98
    Composite Isogrid Panel Design Example .......... 98
        Deterministic Design .......... 100
        Analysis Response Surface Approximation .......... 101
        Reliability-Based Design Optimization Using Sequential Deterministic Optimization with Probabilistic Sufficiency Factor .......... 103
    Reliability-Based Design Optimization Using DIRECT Optimization .......... 106
        DIRECT Global Optimization Algorithm .......... 106
        Reliability-Based Design Optimization Using DIRECT Optimization with Safety Factor Corrected by Probabilistic Sufficiency Factor .......... 108
    Summary .......... 108

APPENDIX

A MATERIAL PROPERTIES OF IM600/133 .......... 110

B CONTOUR PLOTS OF THREE DESIGN RESPONSE SURFACE APPROXIMATIONS AND TEST POINTS ALONG THE CURVE OF TARGET RELIABILITY .......... 113

LIST OF REFERENCES .......... 115

BIOGRAPHICAL SKETCH .......... 120

















LIST OF TABLES


Table pg

4-1 Transverse strains calculated for conditions corresponding to the onset of matrix-
cracking in the 900 plies of a quasi-isotropic (45/0/-45/90)2s in Aoki et al. (2000).33

4-2 Transverse strains of an angle-ply laminate (f 25)4S under the same loading
condition as Table Al .............. ...............34....

4-3 Strain allowables for IM600/133 at -4230F .............. ...............34....

4-4 Optimal laminates for different operational temperatures: E2u Of 0.01 10................. 37

4-5 Optimal laminates for temperature dependent material properties with EZu Of 0.0110
(optimized for 21 temperatures) .....__.._ ... ......_.._......_ ...........3

4-6 Optimal laminate for temperature dependent material properties allowing partial
matrix cracking: E2u Of 0.011 for uncracked plies and 0.0154 for cracked plies......39

4-7 Optimal laminates for reduced axial load ofl, 200 lb./inch by using load shunting
cables (equivalent laminate thickness of 0.005 inch) ....._____ ... ... ...._ ..........40

5-1 Strain allowablesa for IM600/133 at -4230F ........._. ...... .... ......._.........42

5-2 Coefficients of variation (CV) of the random variables ................. .........._ .....42

5-3 Range of design variables for analysis response surfaces .............. ............._..44

5-4 Quadratic analysis response surfaces of strains (millistrain) ................. ................44

5-5 Design response surfaces for probability of failure (Probability calculated by Monte
Carlo simulation with a sample size of 1,000,000) ................. .................4

5-6 Comparison of reliability-based optimum with deterministic optima. .............._......46

5-7 Refined reliability-based design [f6]s (Monte Carlo simulation with a sample size
of 10,000,000) .............. ...............47....

5-8 Comparison of probability of failure from MCS based ARS and CLT. ........._........47

5-9 Accuracy of MC S ................. ...............48.......... ....










5-10 Effects of quality control of ε2u on probability of failure for 0.12 inch-thick (±θ)s
laminates .......... 49

5-11 Effects of quality control of ε1u, ε1l, ε2l, and γ12 on probability of failure of 0.12
inch-thick (±θ)s laminates .......... 50

5-12 Effects of quality control of E1, E2, G12, ν12, Tzero, α1, and α2 on probability of
failure of 0.12 inch-thick (±θ)s laminates .......... 50

5-13 Effects of quality control of ε2u on probability of failure for 0.1 inch-thick (±θ)s
laminates .......... 51

5-14 Effects of quality control of ε2u on probability of failure for 0.08 inch-thick (±θ)s
laminates .......... 51

5-15 Sensitivity of failure probability to mean value of ε2u (CV = 0.09) for 0.12 inch-thick
(±θ)s laminates .......... 52

5-16 Sensitivity of failure probability to CV of ε2u (E(ε2u) = 0.0154) for 0.12 inch-thick
(±θ)s laminates .......... 52

5-17 Maximum ε2 (millistrain) induced by the change of material properties E1, E2, G12,
ν12, Tzero, α1, and α2 for 0.12 inch-thick [±25°]s laminate .......... 54

5-18 Probability of failure for 0.12 inch-thick [±25°]s laminate with improved average
material properties (Monte Carlo simulation with a sample size of 10,000,000) .......... 54

6-1 Random variables in the beam design problem .......... 62

6-2 Range of design variables for design response surface .......... 72

6-3 Comparison of cubic design response surface approximations of probability of
failure, safety index, and probabilistic sufficiency factor for single strength failure
mode (based on Monte Carlo simulation of 100,000 samples) .......... 73

6-4 Averaged errors in cubic design response surface approximations of probabilistic
sufficiency factor, safety index, and probability of failure at 11 points on the curves
of target reliability .......... 74

6-5 Comparisons of optimum designs based on cubic design response surface
approximations of probabilistic sufficiency factor, safety index, and probability of
failure .......... 75

6-6 Comparison of cubic design response surface approximations of the first design
iteration for probability of failure, safety index, and probabilistic sufficiency factor
for system reliability (strength and displacement) .......... 75

6-7 Averaged errors in cubic design response surface approximations of probabilistic
sufficiency factor, safety index, and probability of failure at 51 points on the curves
of target reliability .......... 76

6-8 Comparisons of optimum designs based on cubic design response surface
approximations of the first design iteration for probabilistic sufficiency factor,
safety index, and probability of failure .......... 77

6-9 Range of design variables for design response surface approximations of the second
design iteration .......... 78

6-10 Comparison of cubic design response surface approximations of the second design
iteration for probability of failure, safety index, and probabilistic sufficiency factor
for system reliability (strength and displacement) .......... 78

6-11 Comparisons of optimum designs based on cubic design response surfaces of the
second design iteration for probabilistic sufficiency factor, safety index, and
probability of failure .......... 78

7-1 Random variables in the beam design problem .......... 85

7-2 Optimum designs for strength failure mode obtained from double loop RBDO .......... 86

7-3 Design history of RBDO based on sequential deterministic optimization with
probabilistic sufficiency factor under strength constraint for target probability of
failure of 0.00135 .......... 86

7-4 Design history of RBDO based on sequential deterministic optimization with
probabilistic sufficiency factor under strength constraint for target probability of
failure of 0.0000135 .......... 87

7-5 RBDO using variable fidelity technique with probabilistic sufficiency factor under
strength constraint .......... 88

7-6 Range of design variables for design response surface .......... 88

8-1 Amplitudes of geometric imperfection handled by PANDA2 software .......... 94

8-2 Uncertainties in material properties (Al 2219-T87) modeled as normal random
variables .......... 94

8-3 Uncertainties in manufacturing process modeled as uniformly distributed random
design variables around design (mean) value .......... 94

8-4 Deterministic optimum .......... 95

8-5 Range of analysis response surface approximations (inch) .......... 96

8-6 Quadratic analysis response surface approximation to the most critical margins
using Latin Hypercube sampling of 72 points .......... 96

8-7 Probabilities of failure calculated by Monte Carlo simulation with 1×10⁶ samples .......... 97

8-8 Range of design response surface approximations (inch) .......... 97

8-9 Cubic design response surface approximation to the probability of failure and
probabilistic sufficiency factor (calculated by Monte Carlo sampling of 1×10⁶
samples) .......... 97

8-10 Optimum panel design .......... 98

8-11 Probabilities of failure calculated by Monte Carlo simulation of 1×10⁶ samples .......... 98

8-12 Uncertainties in material elastic properties (AS4) modeled as normal distribution
with coefficient of variation of 0.03 .......... 100

8-13 Uncertainties in material strength properties (AS4) modeled as normal distribution
with coefficient of variation of 0.05 .......... 100

8-14 Variation of the random design variables around nominal design value .......... 100

8-15 Safety factors used in deterministic design .......... 101

8-16 Deterministic optimum (inch, degree, lb) .......... 101

8-17 Quadratic analysis response surface approximation to the worst margins using Latin
Hypercube sampling of 342 points .......... 102

8-18 Probabilities of failure calculated by Monte Carlo simulation of 10⁶ samples
(material and manufacturing uncertainties) .......... 102

8-19 Design history of RBDO based on sequential deterministic optimization using
probabilistic sufficiency factor to correct safety factor directly by Equation (8-3) .......... 105

8-20 Design history of RBDO based on sequential deterministic optimization using
probabilistic sufficiency factor to correct safety factor by actual safety margin using
Equation (8-4) .......... 105

8-21 Design history of RBDO based on DIRECT deterministic optimization with
probabilistic sufficiency factor correcting safety factor by actual safety margin using
Equation (8-4) .......... 108

















LIST OF FIGURES


Figure                                                                                page

2-1 Double loop approach: reliability analysis coupled inside design optimization...... 12

2-2 Design potential approach: reliability constraints approximated at design potential
point dpk; reliability analyses still coupled inside design optimization...................15

2-3 Partial safety factor approach: decouple reliability analysis and design
optimization ................. ...............17................

3-1 Analysis response surface and design response surface approach: decouple
reliability analysis and design optimization ................ ..............................25

3-2 Latin Hypercube sampling to generate 5 samples from two random variables........27

4-1 Polynomials fit to elastic properties: E1, E2, G12, and ν12 .......... 31

4-2 Polynomials fit to coefficients of thermal expansion: α1 and α2 .......... 32

4-3 Geometry and loads for laminates .......... 35

4-4 The change of optimal thickness (inch) with temperature for variable and constant
material properties (ε2u of 0.0110) .......... 37

4-5 Strains in optimal laminate for temperature-dependent material properties with ε2u
of 0.0110 (second design in Table 4-3) .......... 38

5-1 Tradeoff plot of probability of failure, cost, and weight (laminate thickness) for
[±25]s .......... 53

6-1 Probability density of safety factor. The area under the curve to the left of s = 1
measures the actual probability of failure, while the shaded area is equal to the target
probability of failure, indicating that probabilistic sufficiency factor = 0.8 .......... 61

6-2 Cantilever beam subject to vertical and lateral bending .......... 62

6-3 Monte Carlo simulation of problem with two random variables .......... 66

7-1 Cantilever beam subject to vertical and lateral bending .......... 84










8-1 Isogrid-stiffened cylindrical shell with internal isogrid and external rings, with
isogrid pattern oriented along circumferential direction for increased bending
stiffness in hoop direction .......... 93

8-2 Isogrid-stiffened cylindrical shell with internal isogrid and isogrid pattern oriented
along circumferential direction for increased bending stiffness in hoop direction;
the zero degree directions for the composite laminates in isogrid and skin panel are
shown .......... 99

8-3 (a) First iteration of DIRECT windowing for two-dimensional example, Goldstein-
Price (GP) function (Finkel, 2003), and (b) further iterations on GP function with
potentially optimal boxes shaded and subsequently divided along longest
dimension(s) .......... 107

A-1 Quadratic fit to α1 (1.0E-6/°F) .......... 110

A-2 Sixth-order fit to α2 (1.0E-4/°F) .......... 110

A-3 Quadratic fit to E1 (Mpsi) .......... 111

A-4 Quartic fit to E2 (Mpsi) .......... 111

A-5 Cubic fit to G12 (Mpsi) .......... 112

A-6 Quartic fit to ν12 .......... 112

B-1 Contour plot of probabilistic sufficiency factor design response surface
approximation and test points along the curve of target reliability .......... 113

B-2 Contour plot of probability of failure design response surface approximation and
test points along the curve of target reliability. The negative values of probability of
failure are due to the interpolation errors of the design response surface
approximation .......... 113

B-3 Contour plot of safety index design response surface approximation and test points
along the curve of target reliability .......... 114
















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

RELIABILITY-BASED STRUCTURAL OPTIMIZATION USING RESPONSE
SURFACE APPROXIMATIONS AND PROBABILISTIC SUFFICIENCY FACTOR

By

Xueyong Qu

August 2004

Chair: Raphael T. Haftka
Major Department: Mechanical and Aerospace Engineering

Uncertainties exist practically everywhere from structural design to manufacturing,

product life-time service, and maintenance. Uncertainties can be introduced by errors in

modeling and simulation; by manufacturing imperfections (such as variability in material

properties and structural geometric dimensions); and by variability in loading. Structural

design by safety factors using nominal values without considering uncertainties may lead

to designs that are either unsafe, or too conservative and thus not efficient.

The focus of this dissertation is reliability-based design optimization (RBDO) of

composite structures. Uncertainties are modeled by the probabilistic distributions of

random variables. Structural reliability is evaluated in terms of the probability of failure.

RBDO minimizes cost such as structural weight subject to reliability constraints.

Since engineering structures usually have multiple failure modes, Monte Carlo

simulation (MCS) was used to calculate the system probability of failure. Response

surface (RS) approximation techniques were used to solve the difficulties associated with









MCS. The high computational cost of a large number of MCS samples was alleviated by

analysis RS, and numerical noise in the results of MCS was filtered out by design RS.

RBDO of composite laminates is investigated for use in hydrogen tanks in

cryogenic environments. The major challenge is to reduce the large residual strains

developed due to thermal mismatch between matrix and fibers while maintaining the load

carrying capacity. RBDO is performed to provide laminate designs, quantify the effects

of uncertainties on the optimum weight, and identify those parameters that have the

largest influence on optimum design. Studies of weight and reliability tradeoffs indicate

that the most cost-effective measure for reducing weight and increasing reliability is

quality control.

A probabilistic sufficiency factor (PSF) approach was developed to improve the

computational efficiency of RBDO, to design for low probability of failure, and to

estimate the additional resources required to satisfy the reliability requirement. The PSF

is a safety factor needed to meet the reliability target. The methodology is applied to the

RBDO of composite stiffened panels for the fuel tank design of reusable launch vehicles.

Examples are used to demonstrate the advantages of the PSF over other RBDO

techniques.















CHAPTER 1
INTRODUCTION

Aerospace structures are designed under stringent weight requirements. Structural

optimization is usually employed to minimize the structural weight subject to

performance constraints such as strength and deflection. Deterministically optimized

structures can be sensitive to uncertainties such as variability in material properties. Uncertainties

exist practically everywhere from engineering design to product manufacturing, product

life-time service condition and maintenance. Uncertainties can be introduced by

manufacturing process such as variability in material properties and structural geometric

dimensions; by errors in modeling and simulation; and by service conditions such as

loading changes. Deterministic optimization can use large safety factors to accommodate

uncertainties, but the safety and performance of the optimized structure under

uncertainties (such as reliability) are not known, and the resulting structural design may

be too conservative, and thus not efficient.

To address this problem, reliability-based design optimization (RBDO) became

popular in the last decade (Rackwitz 2000). The safety of the design is evaluated in terms

of the probability of failure, with uncertainties modeled by probabilistic distribution of

random variables. RBDO minimizes costs such as structural weight subject to reliability

constraints, which are usually expressed as limits on the probability of failure of

performance measures.









Focus

The focus of this dissertation is reliability-based structural optimization for use in

reusable launch vehicles (RLV). The RLV is being developed as a safer and cheaper

replacement for the space shuttle, which suffers from a high probability of failure and

high operating costs. For example, the probability of failure per mission launch is about

0.01 based on the shuttle launch history. The catastrophic failures of both space shuttles

Challenger and Columbia were initiated by structural failure. The limited reuse of rocket

boosters and fuel tanks also increases the operating cost of the space shuttle. In order to

reduce the operating cost, the cryogenic fuel tank must be structurally integrated into the

RLV, which motivates the use of composite materials.

Composite materials are widely used in aerospace structures because of their high

stiffness to weight ratio and the flexibility to tailor the design to the application. This

extra flexibility can render deterministically optimized composite laminates very

sensitive to uncertainties in material properties and load conditions (e.g., Gürdal et al.

1999). For example, the ply angles of a composite laminate deterministically optimized

under unidirectional loading are all aligned with the loading direction, which leads to

poor design for even small loading transverse to the fiber direction.

Design of composite structures for RLV poses a major challenge because the

feasibility of these vehicles depends critically on structural weight. With traditional

deterministic design based on safety factors, it is possible to achieve a safe design, but it

may be too heavy for the RLV to take off. Therefore, reliability-based design

optimization is required for the design of RLV structures in order to

satisfy both safety and weight constraints. The advantages of reliability-based design over

deterministic design have been demonstrated (e.g., Ponslet et al., 1995). For designs with









stringent weight requirements, it is also important to provide guidelines for controlling

the magnitudes of uncertainties for the purpose of reducing structural weight.

Deterministic structural optimization is computationally expensive due to the

need to perform multiple structural analyses. However, reliability-based optimization

adds an order of magnitude to the computational expense, because a single reliability

analysis requires many structural analyses. Commonly used reliability analysis methods

are based on either simulation techniques such as Monte Carlo simulation, or moment-

based methods such as the first-order reliability method (e.g., Melchers, 1999). Monte

Carlo simulation is easy to implement, robust, and accurate with sufficiently large

samples, but it requires a large number of analyses to obtain a good estimate of low

failure probability. Monte Carlo simulation also produces a noisy estimate of probability

and hence is difficult to use with gradient-based optimization. Moment-based methods do

not have these problems, but they are not well suited for problems with many competing

critical failure modes.

Response surface approximations solve the two problems of Monte Carlo

simulation, namely simulation cost and noise from random sampling. Response surface

approximations (Khuri and Cornell 1996) typically fit low order polynomials to a number

of response simulations to approximate response. Response surface approximations

usually fit the structural response such as stresses in terms of random variables for

reliability analyses. The probability of failure can then be calculated inexpensively by

Monte Carlo simulation using the fitted response surfaces. Response surface

approximations can also be fitted to probability of failure in terms of design variables,

which replace the reliability constraints in RBDO to filter out numerical noise in the










probability of failure induced by Monte Carlo simulation and reduce the computational

cost. Different ways of using response surface approximations for reliability analysis and

reliability-based design optimization will be presented in subsequent chapters.

Objectives and Scope

The main purpose of this dissertation is to address the challenges associated with

the reliability-based design of composite panels for reusable launch vehicles. The

problems encountered include the high computational cost for calculating probabilities of

failure and for performing reliability-based design optimization, and the control of

structural weight penalty due to uncertainties. Therefore the main objectives are:

1. Investigate response surface approximation for use in reliability analysis and

design optimization. Analysis and design response surface approximations are developed.

2. Develop methods that allow more efficient reliability-based design optimization

when the probability of failure must be low. This motivates the development of a

probabilistic sufficiency factor approach.

3. Explore the potential of uncertainty control for reducing structural weight for

unstiffened and stiffened panels.

4. Provide reliability-based designs of selected composite panels.

A literature survey of methods for reliability analysis and reliability-based design

optimization is presented in Chapter 2. Chapter 3 introduces the response surface

approximation techniques developed for efficient RBDO (objective 2). Chapter 4

presents deterministic design optimization for composite laminates in cryogenic

environments. Chapter 5 demonstrates the reliability-based design of composite laminates

for use in cryogenic environments, and tradeoffs of weight and reliability via the control

of uncertainty (objective 1). Chapter 6 proposes a probabilistic sufficiency factor










approach for more efficient reliability-based design optimization. Chapter 7 demonstrates

the use of probabilistic sufficiency factor for RBDO. Chapter 8 provides reliability-based

designs of selected composite stiffened panels for the fuel tank design of reusable launch

vehicles.














CHAPTER 2
LITERATURE SURVEY: METHODS FOR RELIABILITY ANALYSIS AND
RELIABILITY-BASED DESIGN OPTIMIZATION

The basic conceptual structure of the reliability-based design optimization (RBDO)

problem, called RBDO framework, can be formulated as

minimize F = F(d)

such that Pj(x) ≤ Pja,   j = 1, ..., n        (2-1)

where F is the obj ective function, d is a vector of design variables, Pj is the probability of

failure of the jth failure mode, Pja is the allowable probability of failure of the jth failure

mode, n is the total number of failure modes, and x is a vector of random variables.

To perform RBDO, reliability analyses must be performed to evaluate the

probability of failure, which requires multiple evaluations of system performance (such

as stresses in a structure). Depending on the specific reliability analysis method, the

computational cost of a single reliability analysis is usually comparable to or higher than

the cost of performing a deterministic local optimization. Furthermore, RBDO requires

multiple reliability analyses, thus the computational cost of performing RBDO by directly

coupling design optimization with reliability analysis is at least an order of magnitude

higher than deterministic optimization. Efficient frameworks must be developed to

overcome this computational burden.

This chapter presents a literature review of state-of-the-art reliability analysis

methods and RBDO frameworks, and concludes with the motivation to develop the

methodologies in Chapters 3, 6, and 7 for solving the problems.









Review of Methods for Reliability Analysis

The most common techniques for reliability analysis are Monte Carlo simulation,

approaches based on the most probable point (MPP), and "decoupled" Monte Carlo sampling

of a response surface approximation fit to samples from some experimental design.

Different techniques are preferable under different circumstances.

Problem Definition

The limit state function of the reliability analysis problem is defined as G(x), where

G(x) represents a performance criterion and x is a random variable vector. Failure occurs

when G(x)<0. Thus the failure surface or limit state of interest can be described as

G(x)=0. The probability of failure can be calculated as

Pf = ∫G(x)≤0 fX(x) dx        (2-2)

where fX(x) is the joint probability density function (JPDF). This integral is hard to

evaluate because the integration domain defined by G(x)<0 is usually unknown and

integration in high dimension is very difficult.

Commonly used probabilistic analysis methods are based on either simulation

techniques such as Monte Carlo simulation or moment-based methods such as the first-

order reliability method (FORM) or second-order reliability method (SORM) (Melchers

1999).

Monte Carlo Simulation

Monte Carlo simulation (MCS) (e.g., Rubinstein 1981) generates a number of

samples of the random variables x by using a random number generator. The number of

samples required is usually determined by confidence interval analysis. Simulations (e.g.,

structural analyses) are then performed for each of these samples. Statistics such as mean,









variance, and probability of failure can then be calculated from the results of simulations.

This method is also called direct MCS or MCS with simple random sampling (SRS).
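To make the procedure concrete, here is a minimal direct MCS sketch (not from the dissertation) for a hypothetical limit state G = R − S with normally distributed capacity R and load S; the distributions and sample size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_state(r, s):
    # Hypothetical limit state: failure occurs when capacity R minus load S is negative.
    return r - s

n = 1_000_000
r = rng.normal(5.0, 0.5, n)   # capacity R ~ N(5, 0.5^2), assumed for illustration
s = rng.normal(3.0, 0.6, n)   # load S ~ N(3, 0.6^2), assumed for illustration

failures = limit_state(r, s) < 0
pf_hat = failures.mean()                      # MCS estimate of the probability of failure
std_err = np.sqrt(pf_hat * (1 - pf_hat) / n)  # standard error: the "noise" of the estimate

print(pf_hat, std_err)
```

The standard error shows why low failure probabilities are expensive: halving the noise requires four times as many samples.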

Direct MCS is simple to implement; is robust; and is accurate with sufficiently large

samples. But the usefulness of direct MCS in reliability analysis is quite limited because

of its relatively low efficiency. For example, the probability of failure in engineering

applications is usually very small, thus the number of limit state function evaluations

required to obtain acceptable accuracy using direct MCS is very large (Chapter 5), which

makes direct MCS very time-consuming. Direct MCS is usually used as a benchmark to

verify the accuracy and compare the efficiency of other methods using approximation

concepts.

To improve the accuracy and efficiency of simple random sampling, various

simulation methods using Variance Reduction Techniques (VRT) have been developed to

reduce the variance of the output random variables.

Monte Carlo Simulation Using Variance Reduction Techniques

Rubinstein (1981) and Melchers (1999) gave good overviews of VRT for general

Monte Carlo sampling. The VRT can be classified into different categories, such as

sampling methods, correlation methods, conditional expectation methods, and specific

methods. Sampling methods reduce the variance of the output by constraining samples to

be representative of (or distorting the samples to emphasize the important region of) the

performance function. Commonly used sampling methods include importance sampling

(Harbitz 1986), adaptive sampling, stratified sampling, Latin Hypercube sampling, and

spherical sampling. Correlation methods use techniques to achieve correlation among

random observations, functions, or different simulations to improve the accuracy of the

estimators. Some commonly used techniques are antithetic variate, common random









numbers, control variates, and rotation sampling. Conditional expectation methods utilize

the independence of random variables to reduce the order of probabilistic integration to

achieve higher efficiency. Some common techniques are conditional expectation,

generalized conditional expectation, and adaptive conditional expectation. Specific

methods include response surface method and internal control variables techniques. The

VRTs can be combined further to increase the efficiency of simulation. A comparison of

the accuracy and efficiency of several common VRT methods can be found in Kamal and

Ayyub (2000). Latin hypercube sampling and response surface methods are studied in

this dissertation.
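The idea behind Latin Hypercube sampling — one point per equal-probability stratum in each dimension, with strata paired randomly across dimensions — can be sketched as follows; the hand-rolled `latin_hypercube` helper and the standard-normal mapping are illustrative, not taken from the dissertation:

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n_samples, n_vars, rng):
    """Draw one uniform point inside each of n_samples equal-probability
    strata per variable, then shuffle the pairing across variables."""
    u = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        # One point inside each stratum [i/n, (i+1)/n) of the unit interval.
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        u[:, j] = rng.permutation(strata)  # random pairing across dimensions
    return u

rng = np.random.default_rng(1)
u = latin_hypercube(5, 2, rng)       # 5 samples of 2 variables on (0, 1)
x = norm.ppf(u, loc=0.0, scale=1.0)  # map to standard normal random variables
print(x)
```

Every marginal stratum is sampled exactly once, which is what reduces the variance relative to simple random sampling.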

VRTs require fewer limit state function evaluations to achieve the desired level

of accuracy, but the simplicity of simulation is lost, and the computational complexity of

each simulation cycle is increased.

Moment-Based Methods

Besides VRT, moment-based methods also reduce the computational cost

drastically compared to MCS. The first-order reliability method (FORM) and second-

order reliability method (SORM) are well-established methods that can solve many

practical applications (Rackwitz 2000). FORM and SORM methods first transform the

random variables from the original space (X-space) to the uncorrelated standard normal

space (U-space). An optimization problem is then solved to find the minimum distance

point (most probable point, MPP) on the limit state surface (G=0) to the origin of the U-

space. The minimum distance, β, is called the safety index. The probability of failure is

then calculated by using the normal cumulative distribution function Pf = Φ(−β) in









FORM (Rackwitz and Fiessler 1978), or by using second order correction in SORM

(Breitung 1984). Thus the safety index can be used directly as a measure of reliability.
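For the special case of a limit state that is linear in independent normal variables, the safety index and the FORM failure probability are available in closed form and no MPP search is needed. A hypothetical sketch (the G = R − S limit state and its moments are assumed for illustration):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical linear limit state G = R - S with independent normal variables.
mu_r, sd_r = 5.0, 0.5   # capacity mean and standard deviation (assumed)
mu_s, sd_s = 3.0, 0.6   # load mean and standard deviation (assumed)

# For a linear G in independent normals, the safety index is simply the mean
# of G divided by its standard deviation.
beta = (mu_r - mu_s) / np.hypot(sd_r, sd_s)
pf = norm.cdf(-beta)    # FORM estimate; exact here because G is linear

print(beta, pf)
```

For nonlinear limit states, beta must instead be found by the MPP optimization described above, and Pf = Φ(−β) becomes an approximation.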

One disadvantage of FORM and SORM methods is that there is no readily

available error estimate. The accuracy of FORM and SORM must be verified by other

methods, such as MCS. The errors of FORM and SORM may come from the errors

associated with MPP search and the nonlinearity of the limit state. The search of MPP

requires solving a nonlinear optimization problem, which is difficult to solve for some

problems. A wrong MPP usually leads to poor probability estimates, which is a common

problem for MPP-based reliability analysis methods. FORM and SORM methods are also

not well suited for problems with many competing critical failure modes (i.e., multiple

MPPs). Due to the limitations of first-order and second-order approximations, FORM and

SORM methods do not perform well when the limit state surface is highly nonlinear

around MPP. This nonlinearity may come from the inherent nonlinearity of the problem

or may be induced by the transformation from X-space to U-space (Thacker et al. 2001).

For example, transforming a uniform random variable to a standard normal variable

usually increases the nonlinearity of the problem. When FORM and SORM methods

encounter difficulties, sampling methods with VRT such as Importance Sampling can be

employed to obtain/improve results with a reasonable amount of computational cost

compared to direct MCS.

Response Surface Approximations

Response surface approximations (RSA) (Khuri and Cornell 1996) can be used to

obtain a closed-form approximation to the limit state function to facilitate reliability

analysis. Response surface approximations usually fit low order polynomials to the

structural response in terms of random variables. The probability of failure can then be









calculated inexpensively by Monte Carlo simulation or FORM and SORM using the

fitted polynomials. Therefore, RSA is particularly attractive for computationally

expensive problems (such as those requiring complex finite element analyses). The

design points where the response is evaluated are chosen by statistical design of

experiments (DOE) so as to maximize the information that can be extracted from the

resulting simulations.
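A minimal sketch of the approach (illustrative only: the "expensive" response, the 3×3 factorial DOE, and the input distributions are all assumptions): fit a full quadratic surface to a handful of simulations by least squares, then run cheap Monte Carlo sampling on the fitted polynomial:

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_response(x1, x2):
    # Stand-in for an expensive simulation (e.g., a finite element stress).
    return 4.0 - x1**2 - 0.5 * x1 * x2 - x2

# Design of experiments: 3x3 full factorial over the range of interest.
grid = np.linspace(-1.0, 1.0, 3)
x1, x2 = np.meshgrid(grid, grid)
x1, x2 = x1.ravel(), x2.ravel()
y = expensive_response(x1, x2)

# Least-squares fit of a full quadratic response surface in two variables.
basis = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

# Cheap MCS on the fitted surface: failure when the response drops below zero.
n = 200_000
s1 = rng.normal(0.0, 0.8, n)
s2 = rng.normal(0.0, 0.8, n)
surrogate = np.column_stack(
    [np.ones(n), s1, s2, s1 * s2, s1**2, s2**2]) @ coef
pf_hat = np.mean(surrogate < 0.0)
print(pf_hat)
```

The million-sample cost falls on the polynomial, not the simulation; only the nine DOE points require the expensive analysis.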

Response surface approximations can be applied in different ways. One approach is

to construct local response surfaces around the MPP region that contributes most to the

probability of failure of the structure. The DOE of this approach is iteratively adjusted to

approach the MPP. Typical DOEs for this approach are Central Composite Design (CCD)

and saturated design. For example, Bucher and Bourgund (1990), and Rajashekhar and

Ellingwood (1993) constructed progressively refined local response surfaces around the

MPP. This local RSA approach can produce good probability estimates given enough

iterations.

Another approach is to construct global RSA over the entire range of random

variables (i.e., DOE around the mean values of the random variables). Fox (1993, 1994,

and 1996) used Box-Behnken DOE to construct global RSA and summarized 12 criteria

to evaluate the accuracy of response surfaces. Romero and Bankston (1998) used

progressive lattice sampling, where the initial DOE is progressively supplemented by

new design points, as the statistical design of experiments to construct global response

surfaces. With the global approach, the accuracy of the RSA around the MPP is usually

unknown, thus caution should be taken to avoid extrapolation around the MPP. Both the










global and local approaches provide substantial savings in the number of total function

evaluations.

Reliability-Based Design Optimization Frameworks

This section summarizes several popular RBDO frameworks. These frameworks rely on design sensitivity analysis, approximation of the limit state function, approximation of the reliability constraints, partial safety factors that convert reliability constraints into approximately equivalent deterministic constraints, and response surface approximations.

Double Loop Approach


[Flowchart: initial design (deterministic optimum) → inner loop of iterative probabilistic analyses (FORM/SORM) until the reliability converges → design sensitivities ∂β/∂d from the probabilistic analysis → approximate reliability constraints β at the design point d_k → update design using the optimizer → outer design-optimization loop repeats until the design converges → stop.]

Figure 2-1. Double loop approach: reliability analysis coupled inside design optimization

The traditional approach of RBDO is to perform a double loop optimization: outer

loop for the design optimization (DO) and inner sub-optimization that performs reliability

analyses using methods such as FORM or SORM. This nested approach is rigorous and









popular, but it is computationally expensive and sometimes troubled by convergence problems (Tu et al. 2000). The computational cost of RBDO with nested MPP searches may

be reduced by sensitivity analysis. The sensitivity of the safety index to design variables

can be obtained with little extra computation as by-products of reliability analysis (Kwak

and Lee 1987). A simplified formula that ignores the higher order terms in the estimation

equation was proposed by Sorensen (1987). Yang and Nikolaidis (1991) used this

sensitivity analysis and optimized an aircraft wing with FORM subject to a system reliability constraint.

Figure 2-1 shows the typical procedure for the double loop approach. With this

approach, the reliability constraints are approximated at the current design point (DP) dk.

For problems requiring expensive finite element analysis, this approach may still be

computationally prohibitive, and FORM (e.g., the classical Hasofer-Lind

method) may converge very slowly (Rackwitz 2000). Wang and Grandhi (1994)

developed an efficient safety index calculation procedure for RBDO that expands limit

state function in terms of intermediate design variables to obtain more accurate

approximation. Reliability constraints can also be approximated to reduce the

computational cost of RBDO. Wang and Grandhi (1994) approximated reliability

constraints with multi-point splines within a double loop RBDO. Another way of

improving the efficiency of multi-level optimization is to integrate the iterative

procedures of reliability analysis and design optimization into one where the iterative

reliability analysis stops before full convergence at each step of the optimization, as

suggested by Haftka (1989). Maglaras and Nikolaidis (1990) proposed an integrated

analysis and design approach for stochastic optimization, where reliability constraints are










approximated to different levels of accuracy during optimization. Even when combined with the above approaches, the nested MPP approach still suffers from high computational cost and convergence problems. Several RBDO approaches are being developed to solve these

problems.

Inverse Reliability Approach

Recently, there has been interest in using alternative measures of safety in RBDO.

These measures are based on margin of safety or safety factors that are commonly used

as measures of safety in deterministic design. The safety factor is generally expressed as

the quotient of the allowable over the response, such as the commonly used central safety factor, which is defined as the ratio of the mean value of the allowable to the mean value of the

response. The selection of safety factor for a given problem involves both objective

knowledge (such as data on the scatter of material properties) and subjective knowledge

(such as expert opinion). Given a safety factor, the reliability of the design is generally

unknown, which may lead to an unsafe or inefficient design. Therefore, using a safety factor in reliability-based design optimization seems to be counterproductive.

Freudenthal (1962) showed that reliability can be expressed in terms of the

probability distribution function of the safety factor. Elishakoff (2001) surveyed the

relationship between safety factor and reliability, and showed that in some cases the

safety factor can be expressed explicitly in terms of reliability. The standard safety factor

is defined with respect to the response obtained with the mean values of the random

variables. Thus a safety factor of 1.5 implies that with the mean values of the random

variables, we have a 50% margin between the response (e.g., stress) and the capacity

(e.g., failure stress). However, the value of the safety factor does not tell us what the

reliability is. Birger (1970), as reported by Elishakoff (2001), introduced a factor, which we call here Birger's safety factor, that is more closely related to the target reliability.

A Birger's safety factor of 1.0 implies that the reliability is equal to the target reliability;

a Birger's safety factor larger than 1.0 means that the reliability exceeds the target

reliability; and a Birger's safety factor less than 1.0 means that the system is not as safe as

we wish.
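A hedged sketch of this idea: with Monte Carlo samples of an assumed capacity and response (the distributions below are illustrative, not from this dissertation), a Birger-type factor can be read off as the P_t-quantile of the capacity/response ratio:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed distributions for illustration only.
n = 200_000
capacity = rng.normal(150.0, 10.0, n)   # e.g., failure stress
response = rng.normal(100.0, 12.0, n)   # e.g., applied stress

p_target = 1e-3  # target probability of failure

# A Birger-type safety factor s* satisfies P(capacity/response <= s*) = P_t;
# by definition s* is the P_t-quantile of the capacity/response ratio.
s_star = np.quantile(capacity / response, p_target)
print(f"Birger-type safety factor: {s_star:.3f}")
# s* > 1: safer than the target reliability requires; s* < 1: not safe enough.
```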

Design potential approach

Tu et al. (2000) used the probabilistic performance measure, which is closely

related to Birger's safety factor, for RBDO using FORM. Figure 2-2 summarizes the

design potential approach.


[Flowchart: same structure as Figure 2-1: initial design (deterministic optimum) → inner loop of iterative probabilistic analyses (FORM/SORM) → design sensitivities ∂β/∂d → approximate reliability constraints β, here at the design potential point dpk → update design using the optimizer → outer design-optimization loop until the design converges → stop.]

Figure 2-2. Design potential approach: reliability constraints approximated at design potential point dpk; reliability analyses still coupled inside design optimization

They showed that the search for the optimum design converged faster by driving

the probabilistic performance measure to zero than by driving the probability of failure to









its target value. Another major difference between the design potential approach and the double loop approach is that the reliability constraints are approximated at the design potential point (DPP), which is defined as the design that renders the probabilistic constraint active, instead of at the current design point. Since the DPP is located on the limit-state surface of the probabilistic constraint, the constraint approximation based on the design potential method (DPM) becomes exact at the DPP. Thus the DPM provides a better constraint approximation without additional costly limit state function evaluations, so a faster rate of convergence can be achieved.

Partial safety factor approach (Partial SF)

Wu et al. (1998 and 2001) developed a partial safety factor similar to Birger's

safety factor in order to replace the RBDO with a series of deterministic optimizations by

converting reliability constraints to equivalent deterministic constraints.

After performing a reliability analysis, the random variables x are replaced by safety-factor-based values x*, the MPP of the previous reliability analysis. The shift of the limit state function G needed to satisfy the reliability constraints is s, which satisfies P(G(x) + s < 0) = P_t. Both x* and s can be obtained as byproducts of the reliability analysis.

Since in design optimization, the random variables x are replaced by x* (just as in the

case of traditional deterministic design, where random variables are replaced by

deterministic values after applying some safety factor), the method is called partial safety

factor approach (Figure 2-3). The target reliability is achieved by adjusting the limit state

function via design optimization. It is seen that the required shift s is similar to the target

probabilistic performance measure g*.
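The shift can be illustrated with raw Monte Carlo samples (a simplified stand-in: Wu et al. obtain x* and s from MPP-based FORM analysis, and G below is an assumed limit state):

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed limit state G (failure when G < 0) sampled by Monte Carlo.
n = 500_000
G = 1.5 + rng.normal(0.0, 1.0, n)   # current Pf = P(G < 0), about 6.7e-2

p_target = 1e-3

# The shift s satisfies P(G(x) + s < 0) = P_t, i.e. -s is the
# P_t-quantile of G.
s = -np.quantile(G, p_target)
pf_shifted = np.mean(G + s < 0.0)
print(f"required shift s = {s:.3f}, shifted Pf = {pf_shifted:.1e}")
```

A positive shift means the current design violates the reliability target and the deterministic constraint must be tightened accordingly.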

The significant difference between the partial safety factor approach and the DPM or nested MPP approaches is that the reliability analysis (FORM in the paper, but any MPP-based method can be used) is decoupled










from and driven by the design optimization to improve the efficiency of RBDO. If n

iterations are needed for convergence, the approach needs n deterministic optimizations

and n probabilistic analyses. However, the convergence rate of subsequent probabilistic

analyses is expected to increase after obtaining a reasonable MPP. Wu et al. (2001)

demonstrated the efficiency of this approach by optimizing a beam subject to multiple

reliability constraints.


[Flowchart: deterministic design optimization → probabilistic analyses (FORM/SORM) to replace the random variables with deterministic, safety-factor-based values x* → deterministic design optimization with x* → new design d → repeat until both the design and the reliability converge → stop.]

Figure 2-3. Partial safety factor approach: decouple reliability analysis and design optimization

Summary

Since the reliability analyses involved in our study are for system probability of

failure, MCS was used to perform the reliability analyses. We developed an analysis response surface approach to reduce the high computational cost of MCS and a design response surface approach to filter noise in RBDO (Chapter 3).









The current RBDO frameworks mostly deal with the probability of failure of individual failure modes; an efficient framework must be developed to address RBDO for the system probability of failure. Chapter 6 develops an inverse reliability measure, the probabilistic sufficiency factor, to improve the computational efficiency of RBDO, to design for low probability of failure, and to estimate the additional resources needed to satisfy the reliability requirement. Chapter 7 demonstrates the use of the probabilistic sufficiency factor with multi-fidelity techniques for RBDO and for converting RBDO to sequential deterministic optimization. The methodology is applied to the RBDO of stiffened panels in Chapter 8.














CHAPTER 3
RESPONSE SURFACE APPROXIMATIONS FOR RELIABILITY-BASED DESIGN
OPTIMIZATION

Response surface approximation (RSA) methods are used to construct an approximate relationship between a dependent variable y (the response) and a vector x of n independent variables (the predictor variables). The response is generally evaluated experimentally (these experiments may be numerical in nature), in which case Y(x) denotes the mean or expected response value. It is assumed that the true model of the response may be written as a linear combination of basis functions Z(x) with some unknown coefficients β, in the form Z(x)^T β. The response surface model can be expressed as

Y(x) = Z(x)^T b     (3-1)

where Z(x) is the assumed basis function vector that usually consists of monomial functions, and b is the least squares estimate of β. For example, if a linear response surface model is employed to approximate the response in terms of two independent variables, x1 and x2, the response surface approximation is

Y(x) = b0 + b1 x1 + b2 x2     (3-2)
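As a brief illustration of how the least-squares estimate b is obtained (the data below are synthetic; in practice the responses would come from structural analyses at DOE points):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic responses y at 20 design points (x1, x2) in [-1, 1]^2.
x = rng.uniform(-1.0, 1.0, size=(20, 2))
y = 1.0 + 2.0 * x[:, 0] - 0.5 * x[:, 1] + rng.normal(0.0, 0.05, 20)

# Basis function vector Z(x) = [1, x1, x2] for the linear model of Eq. 3-2.
Z = np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1]])

# b is the least-squares estimate of the unknown coefficients beta.
b, *_ = np.linalg.lstsq(Z, y, rcond=None)
print("fitted coefficients b0, b1, b2:", b)
```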

The three major steps of response surface approximation as summarized by Khuri and

Cornell (1996) are

*Selecting design points where responses must be evaluated. The points are chosen
by statistical design of experiment (DOE), which is performed in such a way that
the input parameters are varied in a structured pattern so as to maximize the
information that can be extracted from the resulting simulations. Typical DOE for
quadratic RSA is central composite design (CCD, Khuri and Cornell 1996).









* Determining a mathematical model that best fits the data generated from the design points of the DOE by performing statistical tests of hypotheses on the model parameters (Khuri and Cornell 1996; Myers et al. 2002).

* Predicting the response for given sets of experimental factors or variables with the constructed response surface approximation. Due to the closed-form nature of the approximation, RSA is particularly attractive for engineering problems that require a large number of computationally expensive analyses, such as structural optimization and reliability analysis.

The accuracy of an RSA is measured by error statistics such as the adjusted coefficient of multiple determination (R²_adj), the root mean square error of the predictor (RMSE), and the coefficient of variation (COV = RMSE/mean of response). An R²_adj close to one and a COV close to zero usually indicate good accuracy. The RSAs in this dissertation were all constructed with the JMP software (SAS Institute, 2000). The above error statistics

are readily available from JMP after RSA construction. Khuri and Cornell (1996)

presented a detailed discussion on response surface approximation.

This chapter presents the response surface approach developed for reliability-

based design optimization.

Stochastic Response Surface (SRS) Approximation for Reliability Analysis

Among the available methods to perform reliability analysis, moment-based

methods (e.g., FORM/SORM) are not well suited for the composite structures in

cryogenic environments because of the existence of multiple failure modes. Direct Monte

Carlo simulation requires a relatively large number of analyses to calculate probability of

failure, which is computationally expensive. Stochastic response surface approximation is

employed here to solve the above problems.

To apply RSA to a reliability analysis problem, the limit state function g(x) (usually a stress or displacement in the structure) is approximated by










G(x) = Z(x)^T b     (3-3)

where x is the vector of input random variables. With the polynomial approximation

G(x), the probability of failure can then be calculated inexpensively by Monte Carlo

simulation or FORM/SORM. Since the RSA is constructed in random variable space, this

approach is called the stochastic response surface approach.

Stochastic RSA can be applied in different ways. One approach is to construct

local RSA around the Most Probable Point (MPP), the region that contributes most to the

probability of failure of the structure. The statistical design of experiment (DOE) of this

approach is iteratively performed to approach the MPP. Another approach is to construct

global response surfaces over the entire range of the random variables, where the mean

value of the random variables is usually chosen as the center of DOE. The selection of

RSA approach depends on the limit state function of the problem. Global RSA is simpler and more efficient to use than local RSA for problems whose limit state function can be well approximated globally.

Analysis Response Surface (ARS) Approximation for Reliability-Based Design
Optimization

In reliability-based design optimization (RBDO), the SRS approach needs to construct response surfaces for the limit state functions at each design point encountered in the optimization process, which requires a fairly large number of limit state function evaluations and RS constructions. The local SRS approach is more computationally expensive than the global SRS approach because of the multiple iterations involved in the RSA construction.

This dissertation (see also Qu et al., 2000) developed an analysis response surface

(ARS) approach in the unified system space (x, d) to reduce the cost of RBDO, where x









is the vector of random variables and d is the vector of design variables. By including

design variables in the response surface formulation, the efficiency of the RBDO is

improved drastically for certain problems. The ARS is fitted to the response (limit state

function) in terms of both design variables and random variables

G(x, d) = Z(x, d)^T b     (3-4)

The ARS approach combines probabilistic analyses with design optimization. Using the

ARS, the probability of failure at every design point in the design optimization process

can be calculated inexpensively by Monte Carlo simulation based on the fitted

polynomials.

The number of analyses required for ARS depends on the total number of random

variables and design variables. Because the ARS fits an approximation in terms of both random variables and design variables, it requires more analyses than an SRS. For our applications, where the number of random variables is large (around 10) and the number of design variables is small (around four), the ARS is typically less than three times as expensive to construct as an SRS, thanks to the use of Latin Hypercube sampling, which can generate an arbitrary number of design points for RSA construction (explained in the last section of this chapter and demonstrated in Chapter 5).

This compares with the large number (on the order of 10 to 100) of SRS approximations required

in the course of optimization. For a large number of variables (more than 20 to 30), the

construction of ARS is hindered by the curse of dimensionality. SRS might be used to

reduce the dimensionality of the problem. Besides the computational cost issue, the

inclusion of design variables may increase the nonlinearity of the response surface

approximation. It might be necessary to use RSA of order higher than quadratic, for









which proper DOE must be employed. The DOE issues are discussed in the last section

of this chapter.

Design Response Surface (DRS) Approximation

Direct Monte Carlo simulation introduces noise into the computed probability of failure due to the limited number of samples. The noise can be reduced by using a relatively large number of

samples, which is computationally made possible by using response surface

approximation. The noise can also be filtered out by using another response surface

approximation, the design response surface (DRS). A DRS fitted to the probability of failure P as a function of the design variables d can be written as

P(d) = Z(d)^T b     (3-5)

The use of the DRS also reduces the computational cost of RBDO by approximating the reliability constraint with a closed-form function.

The probability of failure is found to change by several orders of magnitude over

narrow bands in design space, especially when the random variables have small

coefficients of variation (Chapter 5). The steep variation of probability of failure requires

DRS to use high-order polynomials for the approximation, such as quintic polynomials,

increasing the required number of probability calculations (Qu et al. 2000). An additional

problem arises when Monte Carlo simulations (MCS) are used for calculating

probabilities. For a given number of simulations, the accuracy of the probability estimates

deteriorates as the probability of failure decreases.

The numerical problems associated with steep variation of probability of failure led

to consideration of alternative measures of safety. The most common one is to use the









safety index β, which replaces the probability by using the inverse standard normal transformation,

β = −Φ⁻¹(P)     (3-6)

where Φ is the cumulative distribution function of the standard normal distribution. The safety index is the distance, measured in standard deviations from the mean of a standard normal distribution, that gives the same probability. Fitting a DRS to the safety index

showed limited improvement of accuracy (Chapter 6), and it has the same problems of

accuracy as the probability of failure when based on Monte Carlo simulations.
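The transformation in Equation 3-6 and its inverse can be computed with the standard normal distribution from the Python standard library (a small utility sketch):

```python
from statistics import NormalDist

_phi = NormalDist()  # standard normal distribution

def safety_index(p_failure: float) -> float:
    """Eq. 3-6: beta = -Phi^{-1}(P)."""
    return -_phi.inv_cdf(p_failure)

def failure_probability(beta: float) -> float:
    """Inverse map: P = Phi(-beta)."""
    return _phi.cdf(-beta)

print(safety_index(1e-3))        # about 3.09
print(failure_probability(3.0))  # about 1.35e-3
```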

Box-Cox transformation (Myers and Montgomery 1995) on the probability of

failure was also tested, but showed very limited improvement. A probabilistic sufficiency

factor approach is developed as an inverse reliability measure to improve the accuracy of

DRS, estimate additional resources required to satisfy the reliability constraint, and

convert RBDO to sequential deterministic optimization (Chapters 6 and 7).

Analysis and Design Response Surface Approach

Figure 3-1 summarizes the ARS/DRS-based RBDO approach. First, the DOE for the ARS is performed and the ARS is constructed. Then the DOE for the DRS is performed, which should stay within the range of design variables of the DOE for the ARS, and the DRS is

constructed. Design optimization is then performed on the DRS. If the design does not

converge, the DOE of the DRS can be moved toward the intermediate optimum and its

range can be shrunk to improve the accuracy of DRS. If the intermediate optimum is near

the boundary of the ARS, the DOE of the ARS needs to be moved to cover the potential

optimum region better. The entire process is repeated until the optimization converges

and the reliability of the optimum stabilizes.
































Figure 3-1. Analysis response surface and design response surface approach: decouple
reliability analysis and design optimization

Statistical Design of Experiments for Stochastic and Analysis Response Surfaces

Statistical design of experiments selects design points for response surface

approximation in such a manner that the required accuracy is achieved with a minimum

number of design points. However, the exact functional form of the structural response to

be approximated is rarely known, the errors in SRS and ARS usually include both

variance and bias errors. Structural responses are usually computationally expensive to

evaluate. Therefore, the selection of the DOE for ARS are primarily based on the

following two considerations

* The number of design points in the DOE is flexible, since we want to reduce the
number of analyses.

* The points in the design space have a good space-filling distribution. The DOE is often used to provide a sampling of the problem space, and higher-than-quadratic polynomials may be needed to provide a good approximation of the response. Both considerations favor a space-filling DOE.









Since the ARS needs to include both the design and the random variables, the number of variables is relatively large, often exceeding 15. This excludes the use of many DOEs, such as the Central Composite Design (CCD). The CCD has 2^n vertices, 2n axial points, and one center point, so the required number of design points is 2^n + 2n + 1, where n is the number of variables involved. A polynomial of mth order in terms of n variables has L coefficients, where

L = (n+1)(n+2)···(n+m) / m!     (3-7)

For n = 15, the CCD requires 32,799 analyses. On the other hand, a quadratic polynomial in 15 variables has 136 coefficients. From our experience, in order to estimate these coefficients, the number of analyses only needs to be about twice as large as the number of coefficients, which is less than one percent of the number of vertices in a 15-variable

space. Therefore, other DOEs, such as a CCD based on a fractional factorial design (Myers and Montgomery 1995), need to be used. The fractional factorial CCD is intended for the construction of quadratic RSAs. Orthogonal arrays (Myers and Montgomery 1995) are used for the construction of higher-order RS (Balabanov 1997; Padmanabhan et al. 2000). Isukapalli (1999) employed orthogonal arrays to construct SRS. For problems where only a very limited number of analyses is computationally affordable, Box-Behnken designs or saturated designs can be used (Khuri and Cornell 1996).
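The counts quoted above follow directly from Equation 3-7, since L = C(n+m, m); a quick check:

```python
from math import comb

def ccd_points(n: int) -> int:
    """Full central composite design: 2^n vertices + 2n axial points + 1 center."""
    return 2**n + 2 * n + 1

def n_coefficients(n: int, m: int) -> int:
    """Eq. 3-7: L = (n+1)(n+2)...(n+m)/m!, i.e. C(n+m, m) coefficients
    of an m-th order polynomial in n variables."""
    return comb(n + m, m)

print(ccd_points(15))           # 32799 analyses for a full CCD
print(n_coefficients(15, 2))    # 136 coefficients of a quadratic
# About twice the number of coefficients (~272 analyses) often suffices.
```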

Qu et al. (2000) showed that Latin Hypercube sampling is more efficient and flexible than orthogonal arrays. The idea of Latin Hypercube sampling can be explained as follows: assume that we want n samples of k random variables. First,

the range of each random variable is divided into n nonoverlapping intervals on the basis

of equal probability. Then one value is selected randomly from each interval. Finally, by


























randomly pairing values of different random variables, the n input vectors each of k

dimension for Monte Carlo simulation are generated. Figure 3-2 illustrates a two-

dimensional Latin Hypercube sampling.


[Figure: for each of two random variables, one uniform and one normal, the range is divided into five equal-probability intervals and one sample is drawn from each interval.]

Figure 3-2. Latin Hypercube sampling to generate 5 samples from two random variables
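A minimal implementation of this sampling scheme (probabilities on [0, 1); mapping to actual distributions would go through each variable's inverse CDF):

```python
import numpy as np

def latin_hypercube(n_samples: int, n_vars: int, seed=None):
    """Latin Hypercube sample on [0, 1)^n_vars: each variable's range is
    split into n_samples equal-probability intervals, one value is drawn
    per interval, and columns are shuffled to pair values at random."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n_samples, n_vars))
    # One point per interval: (i + u) / n for interval i = 0 .. n-1.
    samples = (np.arange(n_samples)[:, None] + u) / n_samples
    for j in range(n_vars):                 # pair intervals randomly
        rng.shuffle(samples[:, j])
    return samples

lhs = latin_hypercube(5, 2, seed=0)
print(lhs)
```

Each column contains exactly one value per interval, which is what distinguishes this stratified scheme from plain random sampling.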















CHAPTER 4
DETERMINISTIC DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC
ENVIRONMENTS

This chapter presents deterministic designs of composite laminates for hydrogen

tanks in cryogenic environments. The traditional way of deterministically designing the

laminate with safety factors is employed in this chapter in order to investigate the design

issues. Reliability-based design, explicitly taking account of uncertainties in material

properties, is presented in Chapter 5.

Introduction

The use of composite materials for the design of liquid hydrogen tanks at cryogenic

temperatures has many challenges. The coefficient of thermal expansion (CTE) along the

fiber direction is usually two orders of magnitude smaller than that transverse to fiber

direction. In typical composite laminates, the ply angles are different in order to carry

load efficiently, which results in a mismatch of the coefficients of thermal expansion.

When the laminates are cooled down during manufacturing from the stress-free temperature, which is near the curing temperature, the mismatch of the coefficients of thermal expansion

induces large thermal strains. Cooling to cryogenic temperatures substantially increases

the thermal stresses. The residual thermal strains may result in matrix cracking leading to

reduction in stiffness and strength of the laminate and possible initiation of delamination.

A more detrimental effect of matrix cracking in hydrogen tanks is hydrogen leakage

through the wall of the tank. Park and McManus (1996) proposed a micro-mechanical

model based on fracture mechanics and verified the model by experiments. Kwon and









Berner (1997) studied matrix damage of cross-ply laminate by combining a simplified

micro-mechanics model with finite element analysis and showed that the prediction of

damage is improved substantially with the incorporation of residual stresses. Aoki et al.

(2000) modeled and successfully predicted the leakage through the matrix cracks.

The present objective is to investigate options available to minimize the increase in

thickness due to thermal residual strains for laminates designed subject to thermal and

mechanical loads. Deterministic designs were performed to investigate the following

effects: (i) temperature-dependent material properties for strain analysis, (ii) laminates

designed to allow partial ply failure (matrix cracking), and (iii) auxiliary stiffening

solutions that reduce the axial mechanical load on the tank wall laminates.

Composite Laminates Analysis under Thermal and Mechanical Loading

Since the properties of composite materials, such as coefficients of thermal

expansion and elastic moduli, change substantially with temperature, classical lamination

theory (CLT) (e.g., Gürdal et al. 1999) is modified to take account of temperature-dependent material properties. The stress-free strain of a lamina is defined as ε^F = αΔT, where α is the coefficient of thermal expansion (CTE). When α is a function of the temperature T, the stress-free strain is given by the expression

ε^F = ∫_{T_zero}^{T_service} α(T) dT     (4-1)

where T_zero is the stress-free temperature of the material and T_service is the service temperature. From the equilibrium equation and the vanishing of the residual stress resultant, the equilibrium of a symmetric laminate subjected to a pure thermal load with a uniform temperature profile through the thickness can be expressed by











A(T) ε₀^N = Σ_k [Q̄(T)]_k {ε^F(T)}_k t_k = N^T(T)     (4-2)

where ε₀^N is the non-mechanical strain induced by the thermal load. The right-hand side of Equation 4-2 is defined as the thermal load N^T. From Equation 4-2, the non-mechanical strain induced by the thermal load can be expressed as

ε₀^N(T) = A⁻¹(T) N^T(T)     (4-3)

The residual thermal stress is given by the constitutive equation

σ^R(T) = Q̄(T) (ε₀^N(T) − ε^F(T))     (4-4)

The mechanical strain is expressed by

ε^M(T) = A⁻¹(T) N^M(T)     (4-5)

Therefore, the mechanical stress is given by

σ^M(T) = Q̄(T) ε^M(T)     (4-6)

By the principle of superposition, the residual strain and the total stress in the laminate are expressed by

ε^Residual(T) = ε^M(T) + ε₀^N(T) − ε^F(T)     (4-7)

σ^total(T) = σ^R(T) + σ^M(T)     (4-8)
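Equation 4-1 amounts to integrating the fitted CTE polynomial between the two temperatures. A sketch (the α(T) coefficients below are illustrative placeholders, not the actual IM600/133 fit from Appendix A):

```python
import numpy as np

# Hypothetical transverse CTE alpha2(T) in 1/degF, fitted as a polynomial in
# temperature; these coefficients are illustrative only.
alpha2 = np.polynomial.Polynomial([1.5e-5, 8.0e-9])  # alpha2(T) = a0 + a1*T

T_zero = 300.0      # assumed stress-free temperature, degF
T_service = -452.0  # liquid-helium service temperature, degF

# Eq. 4-1: eps_F = integral of alpha(T) dT from T_zero to T_service;
# the polynomial antiderivative makes the integral exact.
antideriv = alpha2.integ()
eps_F = antideriv(T_service) - antideriv(T_zero)
print(f"stress-free transverse strain: {eps_F:.5f}")
```

The negative result represents thermal contraction on cooling from the stress-free temperature.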

Properties of IM600/133 Composite Materials

The composite material used in the present study is the IM600/133 graphite-epoxy material system, which has a glass-transition temperature of 356°F. Aoki et al. (2000) tested the IM600/133 composite material system (material Aa in their paper) at various temperatures, ranging from 356°F to −452.2°F (180°C to −269°C), with mechanical

tensile loads. The material properties of IM600/133 were taken from Aoki et al. (2000)










and fitted with smooth polynomials as functions of temperature in order to be used in the calculations (Figures 4-1 and 4-2). The data points used in the fitting and the individual polynomials are shown in Appendix A.

[Plot of the fitted polynomials versus temperature (°F), from −425°F to 375°F; legend: E1 (Mpsi), E2 (Mpsi), G12 (Mpsi), and ν12.]

Figure 4-1. Polynomials fit to elastic properties: E1, E2, G12, and ν12

Aoki et al. (2000) showed that the fracture toughness of the material increased at

lower temperatures; however, the increased strain energy due to the mismatch in the

thermal expansion coefficients also increased the critical energy release rate. They also

applied a micro-mechanics model proposed by Park and McManus (1996) for predicting

micro-cracking and showed good correlation with experiments. Aoki et al. (2000) found

that at cryogenic temperatures, quasi-isotropic laminates exhibited a large reduction in

the transverse mechanical strain ε2 that initiates micro-cracking (from 0.702% at room

temperature to 0.325% at cryogenic temperatures).

Experimental data from Aoki et al. (2000) were used to determine the strain

allowables. They tested a 16-ply quasi-isotropic (45/0/−45/90)2s symmetric laminate in










tension in the 0° direction at cryogenic temperatures. The nominal specimen thickness and width were 2.2 mm and 15 mm. The mechanical loads corresponding to matrix cracks (Table 4-1) were extracted from Figure 5 in Aoki et al. (2000). The strain transverse to the fiber direction, ε2, is assumed to be the strain that induces matrix cracking in the laminate. Based on the load condition and the configuration of the laminate, the transverse strain ε2 in the 90° plies is the most critical strain in the laminate.



[Plot of the fitted polynomials versus temperature (°F); legend: ε1, ε2, γ12.]

Figure 4-2. Polynomials fit to coefficients of thermal expansion: α1 and α2

Normally, strain allowables are calculated by loading laminates at room

temperatures. However, for micro-cracking, the residual stresses are of primary

importance, so all strains are calculated from the stress-free temperature, assumed to be

300°F. The calculations are made by integrating the thermal strains from the stress-free

temperature to the operational temperature as described in the next section.









Table 4-1 shows the transverse strains ε2 in the 90° plies corresponding to the loading at the onset of matrix cracking at selected temperatures. Aoki et al. (2000) found that the maximum mechanical strain before matrix cracking is reduced from 0.7% at room temperature to 0.325% at −452°F. Older results (Aoki et al. 1999) (Table 4-1) indicated that the maximum mechanical strain at cryogenic temperature may be as low as 0.082%. However, the calculation indicates that the total strain (including the residual thermal strain) may vary anywhere from 1.5 to 1.9%, depending on the temperature and the measurement. These values appear high, but this is because they include the residual strains that are usually not counted. For the quasi-isotropic laminate, these residual strains at room temperature are very high, at 0.86%, and are higher at lower temperatures.

Table 4-1. Transverse strains calculated for conditions corresponding to the onset of matrix cracking in the 90° plies of a quasi-isotropic (45/0/−45/90)2s laminate in Aoki et al. (2000)

                        Room temp.        LN2 temp.          LHe temp.          LHe temp.a
                        (77°F / 25°C)     (−320°F / −196°C)  (−452°F / −269°C)  (−452°F / −269°C)
Mechanical load (MPa)   390               330                200                50a
Total ε2                0.01564           0.01909            0.01760            0.01517a
Thermal ε2              0.00864           0.01365            0.01435            0.01435a
Mechanical ε2           0.00700           0.00544            0.00325            0.00082a

a Older data obtained from Aoki et al. (1999)

The importance of working with strains measured from the stress-free temperature is demonstrated in Table 4-2, which shows the ε2 in the angle-ply laminate (±25)4s under the same loading condition as Table 4-1. At room temperature, the residual (thermal) strains are only about 0.4%, compared to 0.86% for the quasi-isotropic laminate. An analysis based on strains measured from room temperature will not show the additional 0.46% strain that the (±25)4s laminate can carry compared to a quasi-isotropic laminate.









Based on the data from Table 4-1, we selected the allowable strain to be 1.54% for the

probabilistic design and 1.1% (1.4 safety factor) for the deterministic design.

Table 4-2. Transverse strains of an angle-ply laminate (±25)4s under the same loading condition as Table 4-1

                        Room temperature    LN2 temperature       LHe temperature       LHe temperature
                        (77 °F or 25 °C)    (-320 °F or -196 °C)  (-452 °F or -269 °C)  (-452 °F or -269 °C)a
Mechanical load (MPa)   390                 330                   200                   50a
Total ε2                -0.00261            0.00360               0.00527               0.00656a
Thermal ε2              0.00393             0.00669               0.00699               0.00699a
Mechanical ε2           -0.00654            -0.00309              -0.00172              -0.00043a
a Older data obtained from Aoki et al. (1999)

Table 4-3 shows the strain allowables for the lamina; all strain allowables except ε2u were provided to us by NASA. The strain allowables may appear to be high, but this is because they are applied to strains that include the residual strains developing due to cooling from the stress-free temperature of 300 °F. A quasi-isotropic laminate will use up its entire transverse strain allowable of 0.011 when cooled to -452 °F. Thus, this value is conservative in view of the experiments by Aoki et al. (2000), which indicated that the laminate can carry 0.325% mechanical strain at cryogenic temperature.

Table 4-3. Strain allowables for IM600/133 at -423 °F

Strain        ε1u      ε1l       ε2u                 ε2l       γ12u
Allowablesa   0.0103   -0.0109   0.0110 or 0.0154b   -0.0130   0.0138
a Strains include residual strains calculated from the stress-free temperature of 300 °F
b The value 0.0110 is obtained from the extreme value 0.0154 divided by a safety factor of 1.4

Deterministic Design of Angle-Ply Laminates

It is estimated that the minimum thickness needed to prevent hydrogen leakage is 0.04 inch, so it may be acceptable to permit matrix cracking if the undamaged part of the laminate has a minimum thickness of 0.04 inch. For the cracked part of the laminate, the elastic modulus transverse to the fiber direction, E2, and the shear modulus, G12, are reduced by 20 percent and the transverse strain allowable, ε2u, is increased. The rest of the laminate must not have matrix cracking and must provide at least 8 contiguous intact plies (0.04 inch) in order to prevent hydrogen leakage.

Optimization Formulation

Laminates with two ply angles, [±θ1/±θ2]s (see Figure 4-3), were optimized. The x direction here corresponds to the hoop direction on a cryogenic propellant tank, while the y direction corresponds to the axial direction. The laminates are made of IM600/133 graphite-epoxy material with a ply thickness of 0.005 inch, and they are subjected to mechanical load and an operating temperature of -423 °F. Nx is 4,800 lb./inch and Ny is 2,400 lb./inch.

Figure 4-3. Geometry and loads for laminates

The design problem was formulated as (thicknesses are in inches)

    minimize     h = 4(t1 + t2)
    such that    ε1l ≤ ε1 ≤ ε1u
                 ε2l ≤ ε2 ≤ ε2u
                 |γ12| ≤ γ12u                                  (4-9)
                 0.005 ≤ t1, t2
                 0.040 ≤ h

where h is the laminate thickness, superscripts u and l denote upper and lower limits of the associated quantities, and ε1, ε2, and γ12 are the ply strains along the fiber direction, transverse to the fiber direction, and in shear, respectively. The stack thickness of the plies with ply-angle θ1, which are allowed to have matrix cracking, is t1. The stack thickness of the plies with ply-angle θ2, which are not allowed to crack and must provide in total a minimum intact thickness of 0.04 inch to prevent hydrogen leakage, is t2. The four design variables are the ply angles θ1 and θ2 and their stack thicknesses t1 and t2. The individual stack thicknesses from a continuous optimizer (SQP in MATLAB) are rounded up to the nearest multiple of 0.005 inch.
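The rounding step can be sketched in a few lines; the helper names below are illustrative, but the rule (round each continuous stack thickness up to the nearest 0.005-inch ply multiple, with h = 4(t1 + t2) for the symmetric layup) follows the formulation above:

```python
import math

PLY = 0.005  # ply thickness, inch

def round_up_to_ply(t):
    """Round a continuous stack thickness up to the nearest ply multiple."""
    # round(..., 9) guards against floating-point noise at exact multiples
    return math.ceil(round(t / PLY, 9)) * PLY

def laminate_thickness(t1, t2):
    """h = 4(t1 + t2) for the symmetric [+-theta1/+-theta2]s laminate."""
    return 4.0 * (t1 + t2)

print(round_up_to_ply(0.0063))           # 0.01
print(laminate_thickness(0.005, 0.020))
```

Because the rounding is always upward, the discrete design is at least as thick as the continuous optimum, which is why the rounded thicknesses in the tables below can exceed the values in parentheses.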

Optimizations without Matrix Cracking

In order to see the effect of the mechanical and thermal loads, it is instructive to compare designs for different operational temperatures. Table 4-4 shows the optimum laminates at these temperatures. In the last row of Table 4-4, the numbers in parentheses are the continuous thicknesses before rounding. Without thermal strains, a cross-ply laminate with a thickness of 0.04 inch can easily (with 0.1% transverse strain as the margin of safety) carry the mechanical loads. When thermal strains are taken into account, the angle between the ±θ plies must decrease in order to reduce the thermal strains. The ply angles do not vary monotonically because both the residual strains and the stiffness of the laminate increase as the temperature decreases. At cryogenic temperatures the angle decreases to 25.5°, and at that angle the axial loads cannot be carried efficiently, so the thickness increases to 0.1 inch. Figure 4-4 shows that the thickness of the optimum laminates, for temperature-dependent material properties and for properties held constant at their 77 °F values, changes substantially with the working temperature for a strain limit ε2u of 0.0110. Using temperature-dependent material properties avoided the very conservative design obtained with constant material properties.

Table 4-4. Optimal laminates for different operational temperatures: ε2u of 0.0110

                    Mechanical only   Mechanical and thermal load
Temperature (°F)    77.0              77.0      -61.5     -242.0    -423.0
θ1 (degree)         90.00             34.82     38.13     33.57     25.50
θ2 (degree)         0.00              33.93     38.13     33.57     25.50
t1 (inch)           0.005             0.005     0.010     0.010     0.015
t2 (inch)           0.005             0.005     0.005     0.010     0.010
ha (inch)           0.040             0.040     0.060     0.080     0.100
                    (0.040)           (0.040)   (0.048)   (0.079)   (0.093)
a Numbers in parentheses indicate unrounded thickness



Figure 4-4. The change of optimal laminate thickness (inch) with temperature (°F) for variable and constant material properties (ε2u of 0.0110)

Designs must be feasible over the entire range of temperatures, so for all designs discussed in the rest of the dissertation, strain constraints were applied at 21 temperatures uniformly distributed from 77 °F to -423 °F. Table 4-5 shows that the design problem has multiple optima. Figure 4-5 shows that the tensile strain limit ε2u is the active constraint at -423 °F for the second optimal design of Table 4-5.










Table 4-5. Optimal laminates for temperature-dependent material properties with ε2u of 0.0110 (optimized for 21 temperatures)

θ1 (degree)   θ2 (degree)   t1 (inch)   t2 (inch)   ha (inch)       Probability of failureb
0.00          28.16         0.005       0.020       0.100 (0.103)   0.019338 (0.014541)
27.04         27.04         0.010       0.015       0.100 (0.095)   0.000479 (0.001683)
25.16         27.31         0.005       0.020       0.100 (0.094)   0.000592 (0.001879)
a Numbers in parentheses indicate unrounded thickness
b The probabilities were calculated by the methodology described in Chapter 5


Figure 4-5. Strains (ε1, ε2, γ12) in the optimal laminate for temperature-dependent material properties with ε2u of 0.0110, plotted against temperature (°F) (second design in Table 4-5)

These optimal laminates have similar thicknesses but different ply angles. The failure probabilities of the continuous designs are shown in parentheses. The high failure probabilities of the first design (continuous and discrete) clearly show a smaller safety margin than for the other two. The second and third designs show that a slight rounding can change the failure probability significantly. Designs with two similar ply angles have much lower failure probabilities than designs with two substantially different ply angles. The failure probabilities of these laminates are too high (compared with the desired range of 10^-4 to 10^-6), and this provides an incentive to conduct reliability-based design.









Optimizations Allowing Partial Matrix Cracking

The plies with angle θ1 are the plies allowed to develop matrix cracking in the optimizations allowing partial matrix cracking. The ε2u of the θ1 plies was increased to 0.0154, while the rest of the laminate still used an ε2u of 0.011. The lower limit of t2 was increased to 0.010 inch (a total ±θ2 thickness of 0.04 inch) to prevent hydrogen leakage. Table 4-6 shows the optimal design allowing partial matrix cracking. Its thickness is the same as that of the design without partial matrix cracking (Table 4-5), and the ply angle of the cracked plies increased due to the increased strain limit, ε2u. However, the failure probability is higher than that of the design that does not allow matrix cracking, which indicates that this option does not help. The active constraint is still the tensile strain limit ε2u of 0.011 at cryogenic temperatures for the uncracked plies.

Table 4-6. Optimal laminate for temperature-dependent material properties allowing partial matrix cracking: ε2u of 0.011 for uncracked plies and 0.0154 for cracked plies

θ1 (degree)   θ2 (degree)   t1 (inch)   t2 (inch)   ha (inch)       Probability of failure
36.07         25.24         0.015       0.010       0.100 (0.097)   0.003716 (0.004582)
a Numbers in parentheses indicate unrounded thickness.

Optimizations with Reduced Axial Load Ny

With small ply angles, the critical component of the load is the axial load Ny, induced by the pressure on the caps of the propellant tank. A smaller axial load may be obtained by using an auxiliary structure to carry part of this load, such as axial stiffeners or a cable connecting the caps. If the auxiliary structure does not directly connect to the wall of the hydrogen tank (e.g., if it is attached to the caps of the tank), it will not be affected by the mismatch of the thermal expansion coefficients, i.e., by the residual thermal strains. Here the possibility of reducing the axial load by half, by carrying 1,200 lb./inch of the axial load with a cable made of unidirectional material, was explored. The required cross-sectional area of the composite cable is 5.05 inch², which is equivalent to a laminate thickness of 0.005 inch for a tank with a 160-inch radius. Table 4-7 lists designs optimized with half of the axial load. The results indicate that reducing the axial load is an effective way to reduce the laminate thickness. The higher probabilities of failure reflect rounding down of the thickness.
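A quick arithmetic check of the quoted equivalence: the smeared cross-sectional area of a uniform wall of thickness t is the circumference times t (variable names below are illustrative):

```python
from math import pi

R = 160.0      # tank radius, inch
t_eq = 0.005   # equivalent smeared laminate thickness, inch

# Cable cross-sectional area equivalent to a uniform wall of thickness t_eq
area = 2 * pi * R * t_eq
print(round(area, 2))  # 5.03, close to the quoted 5.05 inch^2
```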

Table 4-7. Optimal laminates for a reduced axial load of 1,200 lb./inch obtained by using load-shunting cables (equivalent laminate thickness of 0.005 inch)

θ1 (degree)   θ2 (degree)   t1 (inch)   t2 (inch)   ha (inch)       Probability of failure
0.00          29.48         0.005       0.005       0.040 (0.043)   0.010311 (0.001156)
27.98         26.20         0.005       0.005       0.040 (0.043)   0.585732 (0.473536)
30.62         11.31         0.005       0.005       0.040 (0.042)   0.008501 (0.008363)
a Numbers in parentheses indicate unrounded thickness.


It is seen that the traditional approach of designing the laminate deterministically with safety factors did not work well for this problem, due to the various uncertainties and the laminate cracking failure mode. Uncertainties in the material properties are introduced by the fabrication process, the temperature dependence of the material properties, the cure reference temperature, and the acceptable crack density for design. These uncertainties indicate a need to use reliability-based optimization to design laminates for use at cryogenic temperatures.















CHAPTER 5
RELIABILITY-BASED DESIGN OF COMPOSITE LAMINATES FOR CRYOGENIC
ENVIRONMENTS

This chapter presents reliability-based designs of composite laminates for hydrogen tanks in cryogenic environments, a comparison between deterministic and reliability-based designs, identification of the uncertain parameters that have the largest influence on the optimum design, and quantification of the weight penalty associated with the level of uncertainty in those parameters. The results indicate that the most effective measure for reducing thickness is quality control (refer also to Qu et al., 2001). The reliability-based optimization is carried out using response surface approximations combined with the Monte Carlo simulation described in Chapter 3.

Reliability-Based Design Optimization

Problem Formulation

The reliability-based optimization is formulated as

    minimize     h = 4(t1 + t2)
    such that    P ≤ Pu                                        (5-1)
                 0.005 ≤ t1
                 0.005 ≤ t2

where h is the laminate thickness, t1 is the stack thickness of the lamina with ply-angle θ1 and has a lower limit of 0.005 inch, and t2 is the stack thickness of the lamina with ply-angle θ2 and likewise has a lower limit of 0.005 inch. The limits on t1 and t2 also ensure that the laminate has a minimum thickness of 0.04 inch to prevent hydrogen leakage. The reliability constraint is expressed as a limit Pu (here Pu = 10^-4) on the probability of failure, P. The probability of failure is based on first-ply failure according to the maximum strain failure criterion. The four design variables are the ply angles θ1 and θ2 and their stack thicknesses t1 and t2. The reliability-based optimization seeks the lightest structure satisfying the reliability constraint.
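The failure check and the Monte Carlo probability estimate can be sketched as follows. The maximum strain criterion itself follows the text and uses the Table 5-1 mean allowables; the random strain sample is a hypothetical stand-in, since in the dissertation the strains come from analysis response surfaces evaluated at random material properties and stress-free temperature:

```python
import random

# Table 5-1 mean allowables (with the 0.0154 extreme value for eps2u)
ALLOW = {"e1": (-0.0109, 0.0103), "e2": (-0.0130, 0.0154), "g12": 0.0138}

def first_ply_fails(e1, e2, g12, allow=ALLOW):
    """Maximum strain criterion: fail if any ply strain exceeds a limit."""
    return not (allow["e1"][0] <= e1 <= allow["e1"][1] and
                allow["e2"][0] <= e2 <= allow["e2"][1] and
                abs(g12) <= allow["g12"])

def pof_mcs(n, seed=1):
    """Monte Carlo estimate of the probability of first-ply failure."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        # Stand-in sample: near-critical transverse strain with CV = 0.09;
        # the real samples come from the ARS with random inputs
        e2 = rng.gauss(0.0105, 0.09 * 0.0154)
        failures += first_ply_fails(0.001, e2, -0.003)
    return failures / n

print(pof_mcs(200_000))
```

The estimate is simply the fraction of sampled laminates that violate the criterion, which is why very small probabilities require the large sample sizes discussed later in this chapter.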

The twelve random variables are four elastic properties (E1, E2, G12, and ν12), two coefficients of thermal expansion (α1 and α2), five ply strain allowables (ε1u, ε1l, ε2u, ε2l, and γ12u), and the stress-free temperature of the material (Tzero). The mean values of the strain limits are shown in Table 5-1, except that for ε2u the mean value 0.0154 is used. Table 5-2 shows the coefficients of variation (CV) of the random variables, which are assumed to be normally distributed and uncorrelated. Those CVs are based on limited test data provided to us by the manufacturers and are intended only for illustration. The mean value of the stress-free temperature is 300 °F. The mean values of the other random variables change as functions of temperature and are given in Chapter 4.

Table 5-1. Strain allowablesa for IM600/133 at -423 °F

Strain       ε1u      ε1l       ε2u                 ε2l       γ12u
Allowables   0.0103   -0.0109   0.0110 or 0.0154b   -0.0130   0.0138
a Strains include residual strains calculated from the stress-free temperature of 300 °F
b The value 0.0110 is obtained from the extreme value 0.0154 divided by a safety factor of 1.4

Table 5-2. Coefficients of variation (CV) of the random variables

Random variables   E1, E2, G12, ν12   α1, α2   Tzero   ε1u, ε1l, ε2l, γ12u   ε2u
CV                 0.035              0.035    0.030   0.06                  0.09










Response Surface Approximation for Reliability-Based Optimization

For the present work, response surface approximations of two types were created. The first type is the analysis response surface (ARS), which is fitted to the strains in the laminate in terms of both the design variables and the random variables. Using the ARS, the probability of failure at every design point can be calculated inexpensively by Monte Carlo simulation based on the fitted polynomials. The second type is the design response surface (DRS), which is fitted to the probability of failure as a function of the design variables. The DRS is created in order to filter out the noise induced by the Monte Carlo simulation and is used to evaluate the reliability constraint in the design optimization. The details of the ARS/DRS approach are given in Chapter 3.

Analysis Response Surfaces

Besides the design and random variables described in the problem formulation, the service temperature was treated as a variable ranging from 77 °F to -423 °F in order to avoid constructing analysis response surfaces at each selected temperature. Therefore, the total number of variables was seventeen. However, the strains in the laminate do not depend on the five strain allowables, so the ARS were fitted to the strains in terms of twelve variables: the four design variables, the four elastic properties, the two coefficients of thermal expansion, the stress-free temperature, and the service temperature. The range of the design variables for the ARS (Table 5-3) was chosen based on the values of the optimal deterministic design. The ranges for the random variables are handled automatically, as explained below. Using the ARS and the five strain allowables, probabilities of failure were calculated by Monte Carlo simulation, with the strain constraints evaluated at 21 uniformly distributed service temperatures between 77 °F and -423 °F.










Table 5-3. Range of design variables for analysis response surfaces

Design variables   θ1           θ2           t1                    t2
Range              20° to 30°   20° to 30°   0.0125 to 0.03 inch   0.0125 to 0.03 inch


The accuracy of the ARS is evaluated by statistical measures provided by the JMP software (Anon. 2000), which include the adjusted coefficient of multiple determination (R²adj) and the root mean square error (RMSE) predictor. To improve the accuracy of the response surface approximation, polynomial coefficients that were not well characterized were eliminated from the response surface model by using mixed stepwise regression (Myers and Montgomery 1995).

The statistical design of experiments for the ARS was Latin hypercube sampling, also called Latin hypercube design (LHS; e.g., Wyss and Jorgensen 1998), in which the design variables were treated as uniformly distributed variables in order to generate design points (presented in Chapter 3).

Since the laminate has two ply angles and each ply has three strains, six ARS were needed in the optimization. A quadratic polynomial in twelve variables has 91 coefficients. The number of sampling points generated by LHS was selected to be twice the number of coefficients. Table 5-4 shows that the quadratic response surfaces constructed from the LHS with 182 points offer good accuracy.
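The coefficient count and the LHS sizing can be checked directly. The small Latin hypercube generator below is a minimal stdlib sketch (one point per equal-probability stratum in each dimension), not the Wyss and Jorgensen implementation:

```python
import random
from math import comb

def quad_coeffs(n_vars):
    """Terms of a full quadratic polynomial in n variables: C(n+2, 2)."""
    return comb(n_vars + 2, 2)

def latin_hypercube(n_pts, n_dim, seed=0):
    """Minimal LHS on the unit cube: stratify each dimension into n_pts bins."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dim):
        bins = list(range(n_pts))
        rng.shuffle(bins)  # one point per bin, in random order
        cols.append([(b + rng.random()) / n_pts for b in bins])
    return list(zip(*cols))  # n_pts points, each with n_dim coordinates

print(quad_coeffs(12))       # 91 coefficients
print(2 * quad_coeffs(12))   # 182 LHS points
```

Stratifying each dimension guarantees coverage of the whole range of every variable even with a modest number of points, which is why 182 points suffice for a 12-variable quadratic fit.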

Table 5-4. Quadratic analysis response surfaces of strains (millistrain)

                   Analysis response surfaces based on 182 LHS points
Error statistics   ε1 in θ1   ε2 in θ1   γ12 in θ1   ε1 in θ2   ε2 in θ2   γ12 in θ2
R²adj              0.9977     0.9956     0.9991      0.9978     0.9961     0.9990
RMSE predictor     0.017      0.06       0.055       0.017      0.055      0.06
Mean of response   1.114      8.322      -3.13       1.108      8.328      -3.14










Design Response Surfaces

The six quadratic ARS were used to calculate the probabilities of failure by Monte Carlo simulation. Because the fitting errors in the design response surfaces (DRS) are generally larger than the random errors from finite sampling in the probability calculation, the Monte Carlo simulation only needs to be performed until relatively small estimated sampling errors (confidence intervals) are achieved. Therefore, a sample size of 1,000,000 was employed. The design points of the DRS combine a face-centered central composite design (FCCCD) and LHS. Table 5-5 compares three DRS.

Table 5-5. Design response surfaces for probability of failure (probability calculated by Monte Carlo simulation with a sample size of 1,000,000)

Error statistics   FCCCD 25 points   LHS 252 points   LHS 252 points + FCCCD 25 points
                   quadratic         5th order        5th order
R²adj              0.6855            0.9926           0.9982
RMSE predictor     0.00053           0.000003         0.000012
Mean of response   0.00032           0.000016         0.000044

The accuracy of the quadratic response surface approximation is unacceptable. The accuracy of the fifth-order response surface (with 126 unknown coefficients before stepwise regression) was improved by using a reciprocal transformation on the thicknesses t1 and t2, since the probability of failure, like most structural responses, is inversely correlated with the stack thickness. We found that LHS might fail to sample points near some corners of the design space, leading to poor accuracy around these corners. We therefore combined LHS with an FCCCD, which includes all the vertices of the design space. The accuracy of the DRS based on LHS combined with FCCCD is slightly worse than that of the DRS based on LHS alone, because the probabilities at the corners of the design space are usually extremely low or high, presenting a greater fitting difficulty than without the FCCCD. But the extrapolation problem was solved, and the side constraints are set to the range of the ARS shown in Table 5-3. The RMSE of 0.000012 is much lower than the allowable failure probability of 0.0001.
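The benefit of the reciprocal transformation can be illustrated with a toy one-variable fit: a response that decays roughly like 1/t is captured far better by a line in 1/t than by a line in t (made-up data, stdlib least squares):

```python
def fit_line(xs, ys):
    """Ordinary least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def sse(xs, ys):
    """Sum of squared residuals of the best-fit line."""
    a, b = fit_line(xs, ys)
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

ts = [0.0125 + 0.0025 * i for i in range(8)]   # stack thicknesses, inch
pf = [1e-4 / t for t in ts]                    # made-up 1/t failure trend

# Fitting against 1/t captures the trend far better than fitting against t:
print(sse(ts, pf) > 100 * sse([1 / t for t in ts], pf))  # True
```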

Table 5-6 compares the reliability-based optimum with the three deterministic optima from Chapter 4 and their failure probabilities. The optimal thickness increased from 0.100 to 0.120 inch, while the failure probability decreased by about one order of magnitude.

Table 5-6. Comparison of the reliability-based optimum with the deterministic optima

                            Optimal design                Laminate        Failure probability   Allowable
                            [θ1, θ2, t1, t2]              thickness       from MCS of ARS,      probability
                            (degree and inch)             (inch)          1,000,000 samples     of failure
Reliability-based optimum   [24.89, 25.16, 0.015, 0.015]  0.120 (0.120)   0.000055              0.0001
Deterministic optima        [0.00, 28.16, 0.005, 0.020]   0.100 (0.103)   0.019338a
                            [27.04, 27.04, 0.010, 0.015]  0.100 (0.095)   0.000479
                            [25.16, 27.31, 0.005, 0.020]  0.100 (0.094)   0.000592
a This deterministic optimum is out of the range of the analysis response surfaces; its probability of failure was calculated by Monte Carlo simulation based on another set of analysis response surfaces.

Refining the Reliability-Based Design

The reliability-based designs in Table 5-6 show that ply angles close to 25° offer designs with low failure probability. Furthermore, good designs require only a single ply angle, allowing simplification of the configuration of the laminate from [±θ1/±θ2]s to [±θ]s. Table 5-7 shows the failure probabilities of some chosen designs calculated with Monte Carlo simulation using the ARS. The laminates with ply angles of 24°, 25°, and 26° offer lower probabilities of failure than the rest. These three laminates will be studied further.









Table 5-7. Refined reliability-based designs [±θ]s (Monte Carlo simulation with a sample size of 10,000,000)

θ (degree)   h (inch)   Probability of failure
21.000       0.120      0.0001832
22.000       0.120      0.0001083
23.000       0.120      0.0000718
24.000       0.120      0.0000605
25.000       0.120      0.0000565
26.000       0.120      0.0000607
27.000       0.120      0.0000792

Quantifying Errors in Reliability Analysis

The reliability analysis has errors due to the MCS with its limited sample size and due to the approximation of the CLT analysis by the analysis response surfaces. To evaluate the magnitude of the errors in the reliability analysis, the probability of failure of the rounded design was evaluated by using MCS with the exact analysis (classical laminate theory, CLT), but only one million analyses were performed because of the computational cost. Table 5-8 compares the results of the MCS based on the ARS with those based on CLT. The difference is about 1.25×10^-5.

Table 5-8. Comparison of probability of failure from MCS based on ARS and on CLT

Optimal design           Laminate        Failure probability   Failure probability
[θ1, θ2, t1, t2]         thickness       from MCS of ARS,      from MCS of CLT,
(degree and inch)        (inch)          1×10^7 samples        10^6 samples
[25, 25, 0.015, 0.015]   0.120 (0.120)   0.0000565             0.000069


By treating each simulation as a Bernoulli trial, so that the number of failures in N trials follows a binomial distribution, the coefficient of variation (COV) of the probability Pof obtained by MCS can be estimated as

    COV(Pof) = sqrt( (1 - Pof) Pof / N ) / Pof                 (5-2)









where N is the sample size of the MCS. The accuracy of the MCS can also be expressed as the percentage error corresponding to the 95% confidence interval,

    ε% = sqrt( (1 - Pof_t) / (N × Pof_t) ) × 196%              (5-3)

where Pof_t is the true probability of failure. Table 5-9 shows the accuracy and error bounds for the MCS. Together with Table 5-8, the error calculation indicates that the probability of failure of the rounded design is still below the target probability of failure of 0.0001. The errors can be reduced by more accurate approximations and by advanced Monte Carlo simulation techniques. Another reliability-based design cycle in a design region of reduced size can be performed to obtain a more accurate result.
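Equations (5-2) and (5-3) can be evaluated directly to reproduce the Table 5-9 entries for the 1×10^7-sample run of the rounded design:

```python
from math import sqrt

def cov_pof(p, n):
    """Eq. (5-2): coefficient of variation of the MCS estimate of p."""
    return sqrt((1 - p) * p / n) / p

def err95_pct(p, n):
    """Eq. (5-3): 95% confidence half-width as a percentage of the true p."""
    return 1.96 * sqrt((1 - p) / (n * p)) * 100

# Rounded design, Pof ~ 5.65e-5 estimated from 1e7 ARS samples:
print(round(100 * cov_pof(5.65e-5, 1e7), 1))  # 4.2 (% COV)
print(round(err95_pct(5.65e-5, 1e7), 1))      # 8.2 (% error)
```

The 196% factor in Eq. (5-3) is simply 1.96 (the 95% normal quantile) expressed as a percentage, so the two functions agree up to that constant.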

Table 5-9. Accuracy of MCS

                        Coefficient of variation   Percentage error (absolute
                        (COV)                      error) for 95% CI
MCS of 1×10^7 samples   4.2%                       8.2% (±4.66×10^-6)
MCS of 1×10^6 samples   12.05%                     23.6% (±1.63×10^-5)


Effects of Quality Control on Laminate Design

Comparing the deterministic designs to the reliability-based design, there is an increase of 20% in the thickness. In addition, the design failure probability of 10^-4 is quite high. In order to improve the design, the possibility of limiting the variability in the material properties through quality control (QC) is considered. Here, quality control means that materials will be tested by the manufacturer and/or fabricator, and that extremely poor batches will not be accepted. Normal distributions admit the possibility (though a very small one) of unbounded variation. In practice, quality control truncates the low end of the distribution; that is, specimens with extremely poor properties are rejected. It is also assumed that specimens with exceptional properties are scarcer than those with poor properties. The normal distribution will therefore be truncated on the high side at 3σ (excluding 14 out of 10,000 specimens) and on the low side at different values corresponding to different levels of QC. The tradeoff between QC, failure probability, and laminate thickness (weight) will be explored.
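The truncation scheme can be sketched by rejection sampling: draw from the normal distribution and discard samples outside [-kσ, +3σ]. This is adequate here because the truncations are mild (helper names are hypothetical):

```python
import random

def truncated_gauss(mu, sigma, low_k, high_k=3.0, rng=None):
    """Sample N(mu, sigma) truncated to [mu - low_k*sigma, mu + high_k*sigma]
    by simple rejection sampling (fine for these mild truncation levels)."""
    rng = rng or random
    while True:
        x = rng.gauss(mu, sigma)
        if mu - low_k * sigma <= x <= mu + high_k * sigma:
            return x

# QC of eps2u at -2 sigma (mean 0.0154, CV 0.09):
rng = random.Random(0)
mu, sigma = 0.0154, 0.09 * 0.0154
samples = [truncated_gauss(mu, sigma, low_k=2.0, rng=rng) for _ in range(5000)]
print(min(samples) >= mu - 2 * sigma)  # True: weak specimens are rejected
```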

Effects of Quality Control on Probability of Failure

Since the primary failure mode of the laminate is micro-cracking, the tensile strain limit ε2u is the first quantity to be improved by quality control. The normal distribution of ε2u is truncated at 3σ to exclude unrealistically strong specimens, and on the low side QC at -4σ, -3σ, and -2σ was checked, which corresponds to rejecting 3 specimens out of 100,000, 14 specimens out of 10,000, and 23 specimens out of 1,000, respectively. Table 5-10 shows the change in the failure probability for selected reliability-based designs. Quality control on ε2u is a very effective way to reduce the probability of failure: a relatively low-cost QC of ε2u at -3σ will reduce the failure probability by more than two orders of magnitude.
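The quoted rejection fractions follow from the standard normal cumulative distribution function, which can be computed from the error function:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Fraction of the low tail removed by truncating at -k sigma:
for k, quoted in [(4.0, "3/100,000"), (3.0, "14/10,000"), (2.0, "23/1,000")]:
    print(f"-{k:.0f} sigma rejects {phi(-k):.5f} of specimens (~{quoted})")
```

Evaluating Φ(-4), Φ(-3), and Φ(-2) gives roughly 3.2×10^-5, 1.35×10^-3, and 2.28×10^-2, matching the 3/100,000, 14/10,000, and 23/1,000 rejection rates stated above.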

Table 5-10. Effects of quality control of ε2u on probability of failure for 0.12 inch-thick (±θ)s laminates

             Probability of failure from MCS, 10,000,000 samples
θ (degree)   Un-truncated   Truncate at -4σ   Truncate at -3σ   Truncate at -2σ
             normal         (3/100,000)       (14/10,000)       (23/1,000)
24.00        60.5e-6        30.5e-6           0.0               0.0
25.00        56.5e-6        29.9e-6           0.1e-6            0.0
26.00        60.7e-6        31.0e-6           0.5e-6            0.0

Table 5-11 shows that truncating the other strain limits, even at -2σ, does not change the laminate failure probability substantially. This reflects the fact that the primary failure mode of the laminate is micro-cracking. Therefore, ε2u is the critical parameter to study further. Table 5-12 shows that this conclusion also applies to the elastic moduli, the coefficients of thermal expansion, and the stress-free temperature. By comparing with Table 5-10, we see that truncating any of the other parameters at -2σ does not change the failure probability as significantly as truncating ε2u at -4σ. Note that some probabilities from truncated distributions are slightly larger than those from untruncated distributions; this is due to sampling errors.

Table 5-11. Effects of quality control of ε1u, ε1l, ε2l, and γ12u on probability of failure of 0.12 inch-thick (±θ)s laminates

             Probability of failure from MCS, 10,000,000 samples
θ (degree)   Un-truncated   Truncated     Truncated     Truncated     Truncated
             normal         ε1u at -2σ    ε1l at -2σ    ε2l at -2σ    γ12u at -2σ
24.00        60.5e-6        58.6e-6       61.5e-6       54.4e-6       55.4e-6
25.00        56.5e-6        53.0e-6       52.3e-6       54.0e-6       53.2e-6
26.00        60.7e-6        63.4e-6       62.0e-6       60.1e-6       61.0e-6


Table 5-12. Effects of quality control of E1, E2, G12, ν12, Tzero, α1, and α2 on probability of failure of 0.12 inch-thick (±θ)s laminates

             Probability of failure from MCS, 10,000,000 samples, truncating at -2σ
θ (degree)   E1        E2        G12       ν12       Tzero     α1        α2
24.00        62.2e-6   52.1e-6   57.8e-6   51.8e-6   54.6e-6   52.7e-6   58.2e-6
25.00        52.5e-6   48.1e-6   55.1e-6   49.7e-6   55.1e-6   56.8e-6   54.4e-6
26.00        54.5e-6   59.1e-6   60.4e-6   59.4e-6   59.8e-6   63.0e-6   60.4e-6



Effects of Quality Control on Optimal Laminate Thickness

Quality control (QC) can be used to reduce the laminate thickness instead of the probability of failure. Table 5-13 shows that QC of ε2u at -3σ will allow 0.1 inch-thick laminates with a failure probability below the required 0.0001.









Table 5-13. Effects of quality control of ε2u on probability of failure for 0.1 inch-thick (±θ)s laminates

             Probability of failure from MCS, 1,000,000 samples
θ (degree)   Un-truncated   Truncate at -4σ   Truncate at -3σ   Truncate at -2.5σ
             normal         (3/100,000)       (14/10,000)       (6/1,000)
24.00        0.002224       0.002163          0.001054          0.000071
25.00        0.001030       0.000992          0.000229          0.000007
26.00        0.000615       0.000629          0.000092          0.000003


Table 5-14 shows that QC of ε2u at -1.6σ, which corresponds to rejecting 55 specimens out of 1,000, will reduce the thickness to 0.08 inch with a failure probability below 0.0001. Therefore, the laminate thickness can be reduced to 0.08 inch if QC is able to find and reject 55 specimens out of 1,000.

Table 5-14. Effects of quality control of ε2u on probability of failure for 0.08 inch-thick (±θ)s laminates

             Probability of failure from MCS, 1,000,000 samples
θ (degree)   Un-truncated   Truncate at -3σ   Truncate at -2σ   Truncate at -1.6σ
             normal         (14/10,000)       (23/1,000)        (55/1,000)
24.00        0.061204       0.060264          0.039804          0.015017
25.00        0.028289       0.027103          0.008820          0.001019
26.00        0.013595       0.012154          0.001243          0.000071


Effects of Other Improvements in Material Properties

Instead of quality control, it is possible to improve the design by using a better material. Table 5-15 shows the effects of changing the mean value of ε2u by ±10 percent of the nominal value of 0.0154. Comparison with Table 5-10 shows that a 10% improvement has a large influence on the failure probability, but it is not as powerful as quality control at the -3σ level.










Table 5-15. Sensitivity of failure probability to the mean value of ε2u (CV = 0.09) for 0.12 inch-thick (±θ)s laminates

             Probability of failure from MCS, 10,000,000 samples
θ (degree)   E(ε2u) = 0.0154   E(ε2u) = 0.01694   E(ε2u) = 0.01386
24.00        60.5e-6           2.5e-6             1082.3e-6
25.00        56.5e-6           3.4e-6             996.7e-6
26.00        60.7e-6           3.4e-6             1115.7e-6


The failure probability also depends on the coefficient of variation (CV) of ε2u. The CV can be improved if the manufacturing can be made more consistent. Table 5-16 shows that the failure probabilities are not as sensitive to changes in the coefficient of variation as to changes in the mean value of ε2u, but a 10 percent reduction in the coefficient of variation still reduces the failure probability by about a factor of five.

Table 5-16. Sensitivity of failure probability to the CV of ε2u (E(ε2u) = 0.0154) for 0.12 inch-thick (±θ)s laminates

             Probability of failure from MCS, 10,000,000 samples
θ (degree)   CV = 0.09   CV = 0.099   CV = 0.081
24.00        60.5e-6     209.5e-6     9.8e-6
25.00        56.5e-6     208.2e-6     10.8e-6
26.00        60.7e-6     224.2e-6     11.1e-6


Figure 5-1 combines several effects discussed earlier to show a tradeoff plot of probability of failure, cost (truncating and changing the distribution of ε2u), and weight (thickness) for a [±25]s laminate. For probabilities of failure less than 1e-3, quality control at the -2σ level is more effective for reducing the probability of failure than increasing the mean value by 10 percent or decreasing the coefficient of variation by 10 percent. The reason is that small failure probabilities are heavily affected by the tails of the distributions. For large failure probabilities, increasing the mean value of ε2u is more effective. Increasing the mean value of ε2u by 10 percent or truncating ε2u at -2σ can reduce the laminate thickness to 0.10 inch for a safety level of 1e-4. Combining all three measures together, the laminate thickness can be reduced to 0.08 inch with a safety level of 1e-7.
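Why truncation dominates at small failure probabilities can be seen directly from normal tail probabilities. The short sketch below uses illustrative numbers (a mean allowable of 0.0154, CV of 0.09, and a demand level three standard deviations below the mean are assumptions, not the dissertation's full laminate analysis):

```python
# Tail probabilities behind the quality-control comparison (illustrative).
import math

def norm_cdf(z):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mean, cv = 0.0154, 0.09          # assumed mean allowable and CV
sigma = cv * mean
demand = mean - 3.0 * sigma      # hypothetical strain demand level

# nominal: P(allowable < demand) for a normal allowable
p_nominal = norm_cdf((demand - mean) / sigma)

# quality control at -2 sigma: specimens below mean - 2*sigma are rejected,
# so the truncated allowable has zero probability mass below the cutoff
cut = mean - 2.0 * sigma
if demand <= cut:
    p_truncated = 0.0
else:
    p_truncated = ((norm_cdf((demand - mean) / sigma) - norm_cdf(-2.0))
                   / (1.0 - norm_cdf(-2.0)))

# 10 percent increase in the mean allowable (standard deviation kept fixed
# here for simplicity; the text keeps the CV fixed instead)
p_better_mean = norm_cdf((demand - 1.1 * mean) / sigma)

print(p_nominal, p_truncated, p_better_mean)
```

Truncation drives the tail probability exactly to zero below the cutoff, while shifting the mean only translates the tail; this is the sense in which quality control dominates when the required failure probability is very small.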

Table 5-17 shows the changes of the maximum ε2 calculated by the laminate analyses. Ten percent changes in the mean values of E2, Tzero, and α2 (same CVs) lead to about a 5% change in the maximum ε2, which indicates that further study needs to focus on these three quantities. Table 5-18 shows that the probabilities of failure are reduced by a factor of five by a 10 percent change in the mean values of E2, Tzero, and α2 (same CVs). This reduction of probability shows the potential for further improvements via improvements in all three material properties.



[Figure: probability of failure (log scale, 1.0E-08 to 1.0E+00) versus laminate thickness (0.06 to 0.16 inch), with curves for the nominal design, quality control to -2 sigma, 10% increase in allowable, and 10% reduction in variability.]


Figure 5-1. Tradeoff plot of probability of failure, cost, and weight (laminate thickness) for [±25]s









Table 5-17. Maximum ε2 (millistrain) induced by changes of the material properties E1, E2, G12, ν12, Tzero, α1, and α2 for the 0.12 inch-thick [±25]s laminate. The nominal maximum ε2 from deterministic analyses over 21 temperatures is 9.859; percentages in parentheses are reductions from the nominal maximum.

              E1      E2              G12     ν12     Tzero           α1      α2
0.9*Nominal   9.901   10.469          9.763   9.909   9.320 (5.47%)   9.857   9.399 (4.67%)
1.1*Nominal   9.824   9.313 (5.54%)   9.960   9.981   10.584          9.861   10.333

Table 5-18. Probability of failure for the 0.12 inch-thick [±25]s laminate with improved average material properties (Monte Carlo simulation with a sample size of 10,000,000)

                         Nominal     1.1*E(E2)   0.9*E(Tzero)   0.9*E(α2)   All three measures
Probability of failure   0.0000605   0.0000117   0.0000116      0.0000110   0.0000003

Summary

The design of hydrogen tanks for cryogenic environments poses a challenge because of large thermal strains that can cause matrix cracking, which may lead to hydrogen leakage. The laminate design must use ply angles that are not too far apart in order to reduce the thermal residual strains, compromising the ability of the laminate to carry loads in two directions. These small ply angles can cause the laminate thickness to more than double compared to what is needed to carry only the mechanical loads in the application studied here. Satisfying reliability constraints increased the thickness further.

Reducing the probability of failure required an increase in thickness. The most influential uncertainty was variability in the tensile strain allowable in the direction transverse to the fibers, ε2u. Limiting this variability can reduce the required thickness. Of the different options studied in the chapter, quality control on the transverse tensile allowable, ε2u, proved to be the most effective option. Quality control at the -1.6σ level of ε2u, corresponding to rejection of about 5.5% of the specimens, can reduce the required thickness by a third. Reductions in the coefficient of variation of ε2u, or an increase in its mean value, also reduce the failure probability substantially. Increasing the transverse modulus E2, decreasing the coefficient of thermal expansion α2, and reducing the stress-free temperature Tzero can also help considerably.















CHAPTER 6
PROBABILISTIC SUFFICIENCY FACTOR APPROACH FOR RELIABILITY-
BASED DESIGN OPTIMIZATION

A probabilistic sufficiency factor approach is proposed that combines safety factor

and probability of failure for use in reliability-based design optimization. The

probabilistic sufficiency factor approach represents a factor of safety relative to a target

probability of failure. It provides a measure of safety that can be used more readily than

probability of failure or safety index by designers to estimate the required weight increase

to reach a target safety level. The probabilistic sufficiency factor can be calculated from

the results of Monte Carlo simulation with little extra computation. The chapter presents

the use of probabilistic sufficiency factor with a design response surface approximation,

which fits it as a function of the design variables. It is shown that the design response surface

approximation for the probabilistic sufficiency factor is more accurate than that for the

probability of failure or for the safety index. The probabilistic sufficiency factor does not

suffer like probability of failure or safety index from accuracy problems in regions of low

probability of failure when calculated by Monte Carlo simulation. The use of

probabilistic sufficiency factor accelerates the convergence of reliability-based design

optimization.

Introduction

Recently, there has been interest in using alternative measures of safety in

reliability-based design optimization. These measures are based on margin of safety or

safety factors that are commonly used as measures of safety in deterministic design.









Safety factor is generally expressed as the quotient of allowable over response, such as

the commonly used central safety factor that is defined as the ratio of the mean value of

allowable over the mean value of the response. The selection of safety factor for a given

problem involves both objective knowledge such as data on the scatter of material

properties and subjective knowledge such as expert opinion. Given a safety factor, the

reliability of the design is generally unknown, which may lead to unsafe or inefficient

design. Therefore, the use of safety factor in reliability-based design optimization seems

to be counterproductive.

Freudenthal (1962) showed that reliability can be expressed in terms of the

probability distribution function of the safety factor. Elishakoff (2001) surveyed the

relationship between safety factor and reliability, and showed that in some cases the

safety factor can be expressed explicitly in terms of reliability. The standard safety factor

is defined with respect to the response obtained with the mean values of the random

variables. Thus a safety factor of 1.5 implies that with the mean values of the random

variables we have a 50% margin between the response (e.g., stress) and the capacity (e.g.,

failure stress). However, the value of the safety factor does not tell us what the reliability

is. Therefore, Birger (1970), as reported by Elishakoff (2001), introduced a factor, which

we call here the probabilistic sufficiency factor, that is more closely related to the target

reliability. A probabilistic sufficiency factor of 1.0 implies that the reliability is equal to

the target reliability, a probabilistic sufficiency factor larger than one means that the

reliability exceeds the target reliability, and a probabilistic sufficiency factor less than one

means that the system is not as safe as we wish. Specifically, a probabilistic sufficiency









factor value of 0.9 means that we need to multiply the response by 0.9 or increase the

capacity by 1/0.9 to achieve the target reliability.

Tu et al. (2000) used a probabilistic performance measure, which is closely related to

Birger's safety factor, for RBDO using most probable point (MPP) methods (e.g., first

order reliability method). They showed that the search for the optimum design converged

faster by driving the safety margin to zero than by driving the probability of failure to its

target value. Wu et al. (1998, 2001) used probabilistic sufficiency factors in order to

replace RBDO with a series of deterministic optimizations by converting reliability

constraints to equivalent deterministic constraints.

The use of the probabilistic sufficiency factor gives a designer a more quantitative

measure of the resources needed to satisfy the safety requirements. For example, if the

requirement is that the probability of failure be below 10^-6 and the designer finds that the actual probability is 10^-4, he or she cannot tell how much change is required to satisfy the requirement. If instead the designer finds that a probability of 10^-6 is achieved with a probabilistic sufficiency factor of 0.9, it is easier to estimate the required resources. For a

stress-dominated linear problem, raising the probabilistic sufficiency factor from 0.9 to 1

typically requires a weight increase of about 10 percent of the weight of the overstressed components.

Reliability analysis of systems with multiple failure modes often employs Monte

Carlo simulation, which generates numerical noise due to limited sample size. Noise in

the probability of failure or safety index may cause reliability-based design optimization

(RBDO) to converge to a spurious optimum. The accuracy of MCS with a given number

of samples deteriorates with decreasing probability of failure. For RBDO problems with









small target probability of failure, the accuracy of MCS around the optimum is not as

good as in regions with high probability of failure. Furthermore, the probability of failure

in some regions may be so low that it is calculated to be zero by MCS. This flat zero

probability of failure does not provide gradient information to guide the optimization

procedure.

The probabilistic sufficiency factor is readily available from the results of MCS

with little extra computational cost. The noise problems of MCS motivate the use of

response surface approximation (RSA, e.g., Khuri and Cornell 1996). Response surface

approximations typically employ low-order polynomials to approximate the probability

of failure or safety index in terms of design variables in order to filter out noise and

facilitate design optimization. These response surface approximations are called design

response surface approximations (DRS) and are widely used in RBDO (e.g., Sues et al.

1996).

The probability of failure often changes by several orders of magnitude over

narrow bands in design space, especially when the random variables have small

coefficients of variation. The steep variation of probability of failure requires DRS to use

high-order polynomials for the approximation, such as quintic polynomials (chapter 5),

increasing the required number of probability calculations (Qu et al. 2000). An additional

problem arises when Monte Carlo simulations (MCS) are used for calculating

probabilities. For a given number of simulations, the accuracy of the probability estimates

deteriorates as the probability of failure decreases.

The numerical problems associated with steep variation of probability of failure

led to consideration of alternative measures of safety. The most common one is to use the









safety index, which replaces the probability by the number of standard deviations from the mean of a normal distribution that gives the same probability. The safety index does not suffer from steep changes in magnitude, but it has

the same problems of accuracy as the probability of failure when based on Monte Carlo

simulations. However, the accuracy of probabilistic sufficiency factor is maintained in

the region of low probability. The probabilistic sufficiency factor also exhibits less

variation than probability of failure or safety index. Thus the probabilistic sufficiency

factor can be used to improve design response surface approximations for RBDO.

The next section introduces the probabilistic sufficiency factor, followed by the

computation of the probabilistic sufficiency factor by Monte Carlo simulation. The

methodology is demonstrated by the reliability-based beam design problem.

Probabilistic Sufficiency Factor

The deterministic equivalent of a reliability constraint in RBDO can be formulated as

gr(x̄, d) ≤ gc(x̄, d)    (6-1)

where gr denotes a response quantity, gc represents a capacity (e.g., a strength allowable), x̄ is usually the mean value vector of the random variables, and d is the design vector. The traditional safety factor is defined as

s(x̄, d) = gc(x̄, d) / gr(x̄, d)    (6-2)

and the deterministic design problem requires

s(x̄, d) ≥ sr    (6-3)

where sr is the required safety factor, which is usually 1.4 or 1.5 in aerospace applications. The reliability constraint can be formulated as a requirement on the safety factor










Prob(s < 1) ≤ Pr    (6-4)

where Pr is the required probability of failure. Birger's probabilistic sufficiency factor Psf is the solution to

Prob(s < Psf) = Pr    (6-5)

That is, Psf is the safety factor that is violated with the required probability Pr.

Figure 6-1 shows the probability density of the safety factor for a given design. The area under the curve to the left of s = 1 represents the probability that s < 1; hence it is equal to the actual probability of failure. The shaded area in the figure represents the target probability of failure, Pt. For this example, since it is the area to the left of the line s = 0.8, Psf = 0.8. The value of 0.8 indicates that the target probability will be achieved if we reduce the response by 20% or increase the capacity by 25% (1/0.8 - 1). For many problems this provides sufficient information for a designer to estimate the additional structural weight. For example, raising the safety factor from 0.8 to 1 in a stress-dominated linear problem typically requires a weight increase of about 20% of the weight of the overstressed components.
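Birger's definition in (6-5) can be sketched numerically: the probabilistic sufficiency factor is simply the target-probability quantile of the safety factor s. The lognormal distribution of s below is an assumed illustration, not the beam problem:

```python
# PSF as the target-probability quantile of the safety factor s,
# with an assumed lognormal distribution of s (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
s = rng.lognormal(mean=np.log(1.1), sigma=0.1, size=1_000_000)

p_target = 1e-3
psf = np.quantile(s, p_target)     # Prob(s < psf) = p_target, as in (6-5)
pf_actual = np.mean(s < 1.0)       # area to the left of s = 1

print(psf, pf_actual)
```

Here psf comes out below one, so this hypothetical design misses the target; per the discussion above, the response would have to be reduced by roughly the factor psf (or the capacity raised by 1/psf).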













Figure 6-1. Probability density of the safety factor. The area under the curve to the left of s = 1 measures the actual probability of failure, while the shaded area is equal to the target probability of failure, indicating that the probabilistic sufficiency factor = 0.8










Using Probabilistic Sufficiency Factor to Estimate Additional Structural Weight to
Satisfy the Reliability Constraint

The following cantilever beam example (Figure 6-2) is taken from Wu et al. (2001)

to demonstrate the use of probabilistic sufficiency factor.


[Figure: cantilever beam of length L = 100 in, with rectangular cross section of width w and thickness t, loaded at the tip by a vertical load Y and a lateral load X.]

Figure 6-2. Cantilever beam subject to vertical and lateral bending

There are two failure modes in the beam design problem. One failure mode is

yielding, which is most critical at the corner of the rectangular cross section at the fixed

end of the beam

gs(R, X, Y, w, t) = R - σ = R - [ 600 Y / (w t^2) + 600 X / (w^2 t) ]    (6-6)

where R is the yield strength, X and Y are the independent horizontal and vertical loads.

Another failure mode is the tip deflection exceeding the allowable displacement, D0:

gD(E, X, Y, w, t) = D0 - D = D0 - (4 L^3 / (E w t)) sqrt[ (Y/t^2)^2 + (X/w^2)^2 ]    (6-7)

where E is the elastic modulus. The random variables are defined in Table 6-1.

Table 6-1. Random variables in the beam design problem

Random variable   X                      Y                       R                            E
Distribution      Normal (500, 100) lb   Normal (1000, 100) lb   Normal (40,000, 2,000) psi   Normal (29e6, 1.45e6) psi
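The two limit states and the random variables above can be sampled directly as a sketch. The allowable tip deflection D0 = 2.25 in is an assumption (it is not stated in this excerpt), and the design w = 2.7123, t = 3.5315 is the scaled design quoted later in the chapter:

```python
# Direct Monte Carlo on the two beam limit states (6-6) and (6-7).
# Assumptions: D0 = 2.25 in; design point (w, t) = (2.7123, 3.5315).
import numpy as np

rng = np.random.default_rng(1)
M = 100_000
X = rng.normal(500.0, 100.0, M)       # lateral load, lb
Y = rng.normal(1000.0, 100.0, M)      # vertical load, lb
R = rng.normal(40_000.0, 2_000.0, M)  # yield strength, psi
E = rng.normal(29.0e6, 1.45e6, M)     # elastic modulus, psi
L, D0, w, t = 100.0, 2.25, 2.7123, 3.5315

stress = 600.0 * Y / (w * t**2) + 600.0 * X / (w**2 * t)
defl = 4.0 * L**3 / (E * w * t) * np.sqrt((Y / t**2) ** 2 + (X / w**2) ** 2)

failed = (stress > R) | (defl > D0)   # system fails if either mode fails
pf = failed.mean()
print(pf)
```

With these assumptions the estimate lands in the neighborhood of the 0.0013 quoted later for the scaled design, with a sampling standard error of roughly 1e-4.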

The cross sectional area is minimized subject to two reliability constraints, which

require the safety indices for strength and deflection constraints to be larger than three








(probability of failure less than 0.00135). The reliability-based design optimization

problem, with the width w and thickness t of the beam as design variables that are

deterministic, can be formulated as

minimize A = wt
such that p - 0.00135 ≤ 0    (6-8)

based on the probability of failure, or

minimize A = wt
such that 3 - β ≤ 0    (6-9)

based on the safety index, where β is the safety index, or

minimize A = wt
such that 1 - Psf ≤ 0    (6-10)

based on the probabilistic sufficiency factor. The reliability constraints are formulated in the above three forms, which are equivalent in terms of safety. The details of the beam design are given later in this chapter.

In order to demonstrate the utility of Psf for estimating the weight required to correct a safety deficiency, it is useful to see how the stresses and the displacements depend on the weight (or cross-sectional area) for this problem. If we have a given design with dimensions w0 and t0 and a Psf of Psf0, which is smaller than one, we can make the structure safer by scaling both w and t uniformly by a constant c:

w = c w0,  t = c t0    (6-11)

It is easy to check from (6-6) and (6-7) that the stress will then change by a factor of c^-3 and the displacement by a factor of c^-4, while the area changes by a factor of c^2. Since Psf is inversely









proportional to the most critical stress or displacement, it is easy to obtain the relationship


Psf = Psf0 (A/A0)^(3/2)    (6-12)

where A0 = w0 t0. This indicates that a one percent increase in area (corresponding to a 0.5 percent increase in w and t) will improve Psf by about 1.5 percent. Since non-uniform increases in the width and thickness may be more efficient than uniform scaling, we may be able to do better than 1.5 percent. Thus, if we have Psf = 0.97, we can expect that we can make the structure safe with a weight increase under two percent.

The probabilistic sufficiency factor gives a designer a measure of safety that can be used more readily than the probability of failure or the safety index to estimate the required weight increase to reach a target safety level. The Psf of a beam design, presented in detail later in this chapter, is 0.9733 for a target probability of failure of 0.00135; (6-12) indicates that the deficiency in Psf can be corrected by scaling up the area by a factor of 1.0182. Since the scaled area is equal to c^2 w t, the dimensions should be scaled by a factor c of 1.0091 (= 1.0182^0.5) to w = 2.7123 and t = 3.5315. Thus the objective function of the scaled design is 9.5785. The probability of failure of the scaled design is 0.001302 (safety index of 3.0110 and probabilistic sufficiency factor of 1.0011) evaluated by MCS with 1,000,000 samples. Such an estimate is not readily available from the probability of failure (0.00314) or the safety index (2.7328) of the design.
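The arithmetic of this scaling argument can be checked in a few lines, using only the quoted Psf of 0.9733 and the scaled dimensions from the text:

```python
# Numeric check of the scaling relation (6-12) for the quoted beam design.
psf0 = 0.9733
area_factor = (1.0 / psf0) ** (2.0 / 3.0)   # (6-12) solved for A/A0
c = area_factor ** 0.5                      # uniform scale applied to w and t
w, t = 2.7123, 3.5315                       # scaled dimensions from the text
print(area_factor, c, w * t)                # ~1.0182, ~1.0091, ~9.5785
```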

Reliability Analysis Using Monte Carlo Simulation

Let g(x) denote the limit state function of a performance criterion (such as strength

allowable larger than stress), so that the failure event is defined as g(x) <0, where x is a

random variable vector. The probability of failure of a system can be calculated as










Pf = ∫_{g(x) ≤ 0} fX(x) dx    (6-13)

where fX(x) is the joint probability density function (JPDF). This integral is hard to evaluate, because the integration domain defined by g(x) ≤ 0 is usually unknown, and integration in high dimensions is difficult. Commonly used probabilistic analysis methods

are either moment-based methods such as the first-order-reliability method (FORM) and

the second-order-reliability method (SORM), or simulation techniques such as Monte

Carlo simulation (MCS) (e.g., Melchers 1999). Monte Carlo simulation is a good method

to use for system reliability analysis with multiple failure modes. The present chapter

focuses on the use of MCS with response surface approximation in RBDO.

Monte Carlo simulation utilizes randomly generated samples according to the

statistical distribution of the random variables, and the probability of failure is obtained

by calculating the statistics of the samples. Figure 6-3 illustrates the Monte Carlo

simulation of a problem with two random variables. The probability of failure of the

problem is calculated as the ratio of the number of samples in the unsafe region over the

total number of samples.

A small probability requires a large number of samples for MCS to achieve a low relative error. Therefore, for a fixed number of simulations, the accuracy of MCS deteriorates with decreasing probability of failure. For example, with 10^6 simulations, a probability estimate of 10^-3 has a relative error of a few percent, while a probability estimate of 10^-6 has a relative error of the order of 100 percent. In RBDO, the

probability estimate of 10 has a relative error of the order of 100 percent. In RBDO, the

required probability of failure is often very low, thus the probability (or safety index)

calculated by MCS is inaccurate near the optimum. Furthermore, the probabilities of

failure in some design regions may be so low that they are calculated as zero by MCS.








This flat zero probability of failure or infinite safety index cannot provide useful gradient

information to the optimization.
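This deterioration can be quantified with the binomial standard error of the estimate: the relative error is roughly 1/sqrt(pM). A quick sketch:

```python
# Relative sampling error of a Monte Carlo probability estimate:
# standard error divided by p is approximately 1/sqrt(p*M).
import math

M = 1_000_000
for p in (1e-3, 1e-6):
    rel_err = math.sqrt((1.0 - p) / (p * M))
    print(p, rel_err)    # roughly 3.2% at p = 1e-3, about 100% at p = 1e-6
```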


[Figure: scatter of Monte Carlo samples in the (x1, x2) plane; the limit state g(x) = 0 separates the safe region g(x) > 0 from the unsafe region g(x) < 0.]
Figure 6-3. Monte Carlo simulation of a problem with two random variables
Calculation of Probabilistic Sufficiency Factor by Monte Carlo Simulation

Here we propose the use of probabilistic sufficiency factor to solve the problems

associated with probability calculation by MCS. Psf can be estimated by MCS as follows. Define the nth safety factor of MCS as

sn = nth min { s(xi), i = 1, ..., M }    (6-14)

where M is the sample size of MCS, and the nth min denotes the nth smallest safety factor among the M safety factors from MCS. Thus sn is the nth order statistic of the M safety factors from MCS, which corresponds to a probability of n/M of s(x) ≤ sn. That is, we seek to find the safety factor that is violated with the required probability Pr. The probabilistic sufficiency factor is then given as

Psf = sn,  for n = Pr M    (6-15)









For example, if the required probability Pr is 10^-4 and the sample size of the Monte Carlo simulation M is 10^6, Psf is equal to the highest safety factor among the n = Pr M = 100 samples with the lowest safety factors. The calculation of Psf requires only sorting out the lowest safety factors in the Monte Carlo samples. While the probability of failure changes by several orders of magnitude, the probabilistic sufficiency factor usually varies by less than one order of magnitude in a given design space.

For problems with k reliability constraints, the most critical safety factor is calculated first for each Monte Carlo sample:

s(xi) = min_{j=1,...,k} sj(xi)    (6-16)

Then the sorting for the nth minimum safety factor proceeds as in (6-14). When n is small, it may be more accurate to calculate Psf as the average of the nth and (n+1)th lowest safety factors in the Monte Carlo samples.
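A minimal sketch of (6-14) to (6-16), with two assumed lognormal failure modes standing in for real limit-state evaluations:

```python
# Psf from Monte Carlo order statistics, eqs. (6-14) to (6-16).
# The two lognormal modes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
M, p_target = 100_000, 1e-3
s1 = rng.lognormal(np.log(1.2), 0.10, M)   # mode 1 safety factors (assumed)
s2 = rng.lognormal(np.log(1.3), 0.15, M)   # mode 2 safety factors (assumed)

s = np.minimum(s1, s2)                     # (6-16): most critical mode
n = int(p_target * M)                      # n = Pr * M = 100 here
s_sorted = np.sort(s)
psf = 0.5 * (s_sorted[n - 1] + s_sorted[n])  # average of nth and (n+1)th
print(psf)
```

Only the smallest n safety factors matter, so the result is insensitive to the shape of the rest of the distribution, which is why Psf stays accurate where the probability of failure itself is too small to estimate.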

The probabilistic sufficiency factor provides more information than probability of

failure or safety index. Even in the regions where the probability of failure is so small

that it cannot be estimated accurately by MCS with a given sample size M, the accuracy of Psf is maintained. Using the probabilistic sufficiency factor also gives designers useful insights on how to change the design to satisfy safety requirements, as shown in the previous section. Such an estimate is not readily available from the probability of failure or the safety

index. The probabilistic sufficiency factor is based on the ratio of allowable to response,

which exhibits much less variation than the probability of failure or safety index.

Therefore, approximating probabilistic sufficiency factor in design optimization is easier

than approximating probability of failure or safety index as discussed in the next section.









Monte Carlo Simulation Using Response Surface Approximation

Monte Carlo simulation is easy to implement, robust, and accurate with sufficiently

large samples, but it requires a large number of analyses to obtain a good estimate of

small failure probabilities. Monte Carlo simulation also produces a noisy response and

hence is difficult to use in optimization. Response surface approximations solve the two

problems, namely simulation cost and noise from random sampling.

Response surface approximations fit a closed-form approximation to the limit state

function to facilitate reliability analysis. Therefore, response surface approximation is

particularly attractive for computationally expensive problems such as those requiring

complex finite element analyses. Response surface approximations usually fit low-order

polynomials to the structural response in terms of random variables

ĝ(x) = Z(x)^T b    (6-17)

where ĝ(x) denotes the approximation to the limit state function g(x), Z(x) is the basis function vector that usually consists of monomials, and b is the coefficient vector estimated by least-squares regression. The probability of failure can then be calculated

inexpensively by Monte Carlo simulation or moment-based methods using the fitted

polynomials.
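A sketch of this surrogate workflow for the beam strength limit state (6-6), at an assumed design point; because that limit state is linear in (R, X, Y), a linear basis reproduces it exactly:

```python
# Fit g_hat(x) = Z(x)^T b, eq. (6-17), to a handful of limit-state
# evaluations, then run cheap Monte Carlo on the surrogate.
# Assumed design point (w, t); beam data as in Table 6-1.
import numpy as np

rng = np.random.default_rng(3)
w, t = 2.7123, 3.5315

def g(R, X, Y):
    # strength limit state (6-6)
    return R - (600.0 * Y / (w * t**2) + 600.0 * X / (w**2 * t))

# design of experiments: a small random sample of the random variables
n = 50
R = rng.normal(40_000.0, 2_000.0, n)
X = rng.normal(500.0, 100.0, n)
Y = rng.normal(1000.0, 100.0, n)
Z = np.column_stack([np.ones(n), R, X, Y])          # linear basis Z(x)
b, *_ = np.linalg.lstsq(Z, g(R, X, Y), rcond=None)  # coefficient vector b

# Monte Carlo on the inexpensive surrogate
M = 200_000
Rm = rng.normal(40_000.0, 2_000.0, M)
Xm = rng.normal(500.0, 100.0, M)
Ym = rng.normal(1000.0, 100.0, M)
g_hat = b[0] + b[1] * Rm + b[2] * Xm + b[3] * Ym
pf = np.mean(g_hat < 0.0)
print(b[1], pf)
```

For a genuinely nonlinear limit state the basis, the design of experiments, and the fitting region all matter, which is the subject of the approaches surveyed next.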

Response surface approximations (RSA) can be used in different ways. One

approach is to construct local RSA around the Most Probable Point (MPP) that

contributes most to the probability of failure of the structure. The statistical design of

experiment (DOE) of this approach is iteratively performed to approach the MPP on the

failure boundary. For example, Bucher and Bourgund (1990), and Sues (1996, 2000)

constructed progressively refined local RSA around the MPP by an iterative method. This









local RSA approach can produce satisfactory results given enough iterations. Another

approach is to construct global RSA over the entire range of random variables, i.e.,

design of experiment around the mean values of the random variables. Fox (1993, 1994,

1996) used Box-Behnken design to construct global response surfaces and summarized

12 criteria to evaluate the accuracy of RSA. Romero and Bankston (1998) employed

progressive lattice sampling as the design of experiments to construct global RSA. With

this approach, the accuracy of response surface approximation around the MPP is

unknown, and caution must be taken to avoid extrapolation near the MPP. Both

approaches can be used to perform reliability analysis for computationally expensive

problems. The selection of the RSA approach depends on the limit state function of the problem. The global RSA is simpler and more efficient to use than the local response surface approximation for problems whose limit state function can be well approximated globally.

However, the reliability analysis needs to be performed and hence the RSA needs

to be constructed at every design point visited by the optimizer, which requires a fairly

large number of response surface constructions and thus limit state evaluations. The local

RSA approach is even more computationally expensive than the global approach in the

design environment. Qu et al. (2000) developed a global analysis response surface (ARS)

approach in unified space of design and random variables to reduce the number of RSA

substantially and achieve higher efficiency than the previous approach. This analysis

response surface can be written as

ĝ(x, d) = Z(x, d)^T b    (6-18)









where x and d are the random variable and design variable vectors, respectively. They

recommended Latin Hypercube sampling as the statistical design of experiments. The

number of response surface approximations constructed in optimization process is

reduced substantially by introducing design variables into the response surface

approximation formulation.

The selection of the RSA approach depends on the limit state function of the problem and the target probability of failure. The global RSA approach is more efficient than the local RSA, but it is limited to problems with a relatively high probability of failure or a limit state function that can be well approximated by regression analysis based on simple basis functions. To avoid extrapolation problems, the RSA generally needs to be constructed around the important region or the MPP, so that fitting errors in the RSA do not induce large errors in the results of MCS. Therefore, an iterative RSA is desirable for general reliability analysis problems.

Design response surface approximations (DRS) are fitted to probability of failure to

filter out noise in MCS and facilitate optimization. Based on past experience, high-order

DRS (such as quintic polynomials) are needed in order to obtain a reasonably accurate

approximation of the probability of failure. Constructing highly accurate DRS is difficult

because the probability of failure changes by several orders of magnitude over small

distance in design space. Fitting to the safety index β = -Φ^-1(p), where p is the probability of failure and Φ is the cumulative distribution function of the standard normal distribution, improves the

accuracy of the DRS to a limited extent. The probabilistic sufficiency factor can be used

to improve the accuracy of DRS approximation.









Beam Design Example

The details of the beam design problem mentioned in section 2 are presented here.

Since the limit state of the problem is available in closed form as shown by (6-6) and (6-

7), direct Monte Carlo simulation with a sufficiently large number of samples is used here (without an analysis response surface) in order to better demonstrate the advantage of the probabilistic sufficiency factor over the probability of failure or the safety index. By using the exact limit state function, the errors in the results of Monte Carlo

simulation are purely due to the convergence errors, which can be easily controlled by

changing the sample size. In applications where analysis response surface approximation

must be used, the errors introduced by approximation can be reduced by sequentially

improving the approximation as the optimization progresses.

The reliability constraints, shown by (6-8) to (6-10), are approximated by design response surface approximations fitted to the probability of failure, the safety index, and the probabilistic sufficiency factor. The accuracy of the design response surface

approximations is then compared. The design response surface approximations are in two

design variables w and t. A quadratic polynomial in two variables has six coefficients to

be estimated. Since Face Center Central Composite Design (FCCCD, Khuri and Cornell

1996) is often used to construct quadratic response surface approximations, a FCCCD with 9 points was employed here first, with poor results. Based on our previous

experience, higher-order design response surface approximations are needed to fit the

probability of failure or the safety index, and the number of points of a typical design of

experiments should be about twice the number of coefficients. A cubic polynomial in two

variables has 10 coefficients that require about 20 design points. Latin Hypercube

sampling can be used to construct higher order response surface (Qu et al. 2000). We









found that Latin Hypercube sampling might fail to sample points near some corners of

the design space, leading to poor accuracy around these corners. To deal with this

extrapolation problem, all four vertices of the design space were added to 16 Latin

Hypercube sampling points for a total of 20 points. Mixed stepwise regression (Myers

and Montgomery 1995) was employed to eliminate poorly characterized terms in the

response surface models.
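The DRS construction above can be sketched as a least-squares fit of the 10 cubic monomials in (w, t) to responses at 20 design points; the 16 interior points below are plain random samples standing in for the Latin Hypercube design, and the response values are a synthetic smooth function, not real PSF results:

```python
# Least-squares fit of a 10-term cubic DRS in (w, t) over the design box,
# with the 4 vertices added to 16 interior points (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
wi = rng.uniform(1.5, 3.0, 16)
ti = rng.uniform(3.5, 5.0, 16)
w = np.concatenate([wi, [1.5, 1.5, 3.0, 3.0]])   # add the 4 corners
t = np.concatenate([ti, [3.5, 5.0, 3.5, 5.0]])

def basis(w, t):
    # the 10 monomials of a cubic polynomial in two variables
    return np.column_stack([np.ones_like(w), w, t, w * w, w * t, t * t,
                            w**3, w * w * t, w * t * t, t**3])

# synthetic "PSF" response: an assumed smooth function of the design variables
y = 0.4 * w + 0.15 * t - 0.05 * w * t
b, *_ = np.linalg.lstsq(basis(w, t), y, rcond=None)

# evaluate the fitted DRS at a test design
wq, tq = 2.25, 4.25
psf_hat = basis(np.array([wq]), np.array([tq]))[0] @ b
print(psf_hat)
```

Adding the corner points guards against the extrapolation problem noted in the text, since a random interior design can leave the vertices of the box outside the fitted region.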

Design with Strength Constraint

The range for the design response surface, shown in Table 6-2, was selected based

on the mean-based deterministic design, w = 1.9574" and t = 3.9149". The probability of

failure was calculated by direct Monte Carlo simulation with 100,000 samples based on

the exact stress in (6-6).

Table 6-2. Range of design variables for design response surface

System variables w t
Range 1.5" to 3.0" 3.5" to 5.0"

Cubic design response surfaces with 10 coefficients were constructed, and their statistics are shown in Table 6-3. An R2adj close to one and an average percentage error (defined as the ratio of the root mean square error (RMSE) predictor to the mean of the response) close to zero indicate good accuracy of the response surfaces. It is seen that the design response surface for the probabilistic sufficiency factor has the highest R2adj and the smallest average percentage error. The standard error in a probability calculated by Monte Carlo simulation can be estimated as

σp = sqrt( p (1 - p) / M )    (6-19)

where p is the probability of failure and M is the sample size of the Monte Carlo





simulation. If a probability of failure of 0.2844 is to be calculated by Monte Carlo simulation with 100,000 samples (the mean probability of failure in Table 6-3), the standard error due to the limited sampling is 0.00143. The RMSE of the probability design response surface is 0.1103. Thus the error induced by the limited sampling (100,000 samples) is much smaller than the error of the response surface approximation to the probability of failure.
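The quoted standard error follows directly from (6-19):

```python
# Standard error of the Monte Carlo probability estimate, eq. (6-19),
# for p = 0.2844 with M = 100,000 samples.
import math

p, M = 0.2844, 100_000
sigma_p = math.sqrt(p * (1.0 - p) / M)
print(sigma_p)    # ~0.00143, far below the DRS RMSE of 0.1103
```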

Table 6-3. Comparison of cubic design response surface approximations of probability of failure, safety index, and probabilistic sufficiency factor for a single strength failure mode (based on Monte Carlo simulation with 100,000 samples); 16 Latin Hypercube sampling points + 4 vertices

Error statistics                                    Probability RS   Safety index RS   Probabilistic sufficiency factor RS
R2adj                                               0.9228           0.9891            0.9999
RMSE predictor                                      0.1103           0.3027            0.002409
Mean of response                                    0.2844           1.9377            1.0331
APE (average percentage error
  = RMSE predictor / mean of response)              38.78%           15.62%            0.23%
APE in Pf (= RMSE predictor of Pf / mean of Pf)     38.78%           12.04%            N/A

The probabilistic sufficiency factor design response surface has an average error

of less than one percent, while the safety index design response surface has an average error

of about 15.6 percent. It must be noted, however, that the average percentage errors of the

three design response surfaces cannot be directly compared, because a one percent error in the

probabilistic sufficiency factor does not correspond to a one percent error in the probability of

failure or the safety index. Errors in the safety index design response surface were therefore

transformed to errors in terms of probability, as shown in Table 6-3. It is seen that the safety index design

response surface approximation is more accurate than the probability design response

surface approximation.

Besides the average errors over the design space, it is instructive to compare errors

measured in probability of failure in the important region of the design space. For

optimization problems, the important region is the region containing the

optimum. Here it is the curve of target reliability according to each design response

surface, on which the reliability constraint is critically satisfied and the probability of

failure should be 0.00135 if the design response surface approximation has no errors.

For each design response surface approximation, 11 test points were selected along a

curve of target reliability; they are given in the Appendix. The average percentage errors at

these test points, shown in Table 6-4, demonstrate the accuracy advantage of the

probabilistic sufficiency factor approach. For the target reliability, the standard error due

to Monte Carlo simulation with 100,000 samples is 8.6%, which is comparable to the

response surface error for the Psf. For the other two response surfaces, the errors are

apparently dominated by the modeling errors of the cubic polynomial approximation.

Table 6-4. Averaged errors in cubic design response surface approximations of
probabilistic sufficiency factor, safety index and probability of failure at 11
points on the curves of target reliability

Design response surface of | Probability of failure | Safety index | Probabilistic sufficiency factor (Psf)
Average Percentage Error in Probability of Failure | 213.86% | 92.38% | 10.32%

The optima found by using the design response surface approximations of Table 6-3

are compared in Table 6-5. The probabilistic sufficiency factor design response surface

clearly led to a better design, which has a safety index of 3.02 according to Monte Carlo

simulation. It is seen that the design from the probabilistic sufficiency factor design response

surface approximation is very close to the exact optimum. Note that the values of Psf for

the probability-based optimum and the safety-index-based optimum provide a good estimate

of the required weight increments. For example, with Psf = 0.9663 the safety-index-based

design has a safety factor shortfall of 3.37 percent. Since the objective wt grows as the

square of a uniform scaling of the cross section while the safety factor grows as its cube,

this indicates that no more than about a 2.25 percent weight increment should be needed to

remedy the problem. Indeed the optimum design is 2.08 percent heavier. This would have

been difficult to infer from a probability of failure of 0.00408, which is three times larger

than the target probability of failure.

Table 6-5. Comparisons of optimum designs based on cubic design response surface
approximations of probabilistic sufficiency factor, safety index and
probability of failure

Minimize objective function F = wt while the safety index beta >= 3 or 0.00135 >= Pof
Design response surface | Optima | Objective function F = wt | Pof/Safety index/Safety factor from MCS of 100,000 samples
Probability | w=2.6350, t=3.5000 | 9.2225 | 0.00690/2.4624/0.9481
Safety index | w=2.6645, t=3.5000 | 9.3258 | 0.00408/2.6454/0.9663
Probabilistic sufficiency factor | w=2.4526, t=3.8884 | 9.5367 | 0.00128/3.0162/1.0021
Exact optimum (Wu et al. 2001) | w=2.4484, t=3.8884 | 9.5204 | 0.00135/3.00/1.00


Design with Strength and Displacement Constraints

For the system reliability problem with strength and displacement constraints, the

probability of failure is calculated by direct Monte Carlo simulation with 100,000

samples based on the exact stress and exact displacement in (6-6) and (6-7). The

allowable tip displacement D0 is chosen to be 2.25" in order to have two competing

constraints (Wu et al. 2001). The three cubic design response surface approximations over

the range of design variables shown in Table 6-2 were constructed, and their statistics are

shown in Table 6-6.









Table 6-6. Comparison of cubic design response surface approximations of the first
design iteration for probability of failure, safety index and probabilistic
sufficiency factor for system reliability (strength and displacement)

16 Latin Hypercube sampling points + 4 vertices
Error statistics | Probability response surface | Safety index response surface | Probabilistic sufficiency factor response surface
R2adj | 0.9231 | 0.9887 | 0.9996
RMSE Predictor | 0.1234 | 0.3519 | 0.01055
Mean of Response | 0.3839 | 1.3221 | 0.9221
APE (Average Percentage Error = RMSE Predictor/Mean of Response) | 32.14% | 26.62% | 1.14%
APE in Pof (= RMSE Predictor of Pof/Mean of Pof) | 32.14% | 10.51% | N/A


It is seen that the R2adj of the probabilistic sufficiency factor response surface

approximation is the highest among the three response surface approximations, which

implies that the probabilistic sufficiency factor design response surface approximation is the most

accurate in terms of averaged errors over the entire design space, as shown in Table 6-6.

The critical errors of the three design response surfaces are also compared. For each

design response surface approximation, 51 test points were selected along a curve of

target reliability (probability of failure = 0.00135). The average percentage errors at these

test points, shown in Table 6-7, demonstrate that the probabilistic sufficiency factor

design response surface approximation is more accurate than the probability of failure

and safety index response surface approximations.

Table 6-7. Averaged errors in cubic design response surface approximations of
probabilistic sufficiency factor, safety index and probability of failure at 51
points on the curves of target reliability

Design response surface of | Probability of failure | Safety index | Probabilistic sufficiency factor (Psf)
Average Percentage Error in Probability of Failure | 334.78% | 96.49% | 39.11%









The optima found by using the design response surface approximations of Table 6-6

are compared in Table 6-8. The probabilistic sufficiency factor design response surface

led to a better design than the probability or safety index design response surface in terms

of reliability. The probability of failure of the Psf design is 0.00314 as evaluated by Monte

Carlo simulation, which is higher than the target probability of failure of 0.00135. This

deficiency in reliability is induced by the errors in the probabilistic sufficiency

factor design response surface approximation. The probabilistic sufficiency factor can be

used to estimate the additional weight needed to satisfy the reliability constraint. A

scaled design of w = 2.7123 and t = 3.5315 was obtained in section 2.1. The objective

function of the scaled design is 9.5785. The probability of failure of the scaled design is

0.001302 (safety index of 3.0110 and probabilistic sufficiency factor of 1.0011) as evaluated

by MCS with 1,000,000 samples.
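The scaling step mentioned above can be reproduced under one plausible assumption, consistent with the numbers quoted in the text: if both w and t are scaled by a common factor c, the safety factor of the stress limit state grows as c cubed, so c = Psf^(-1/3) restores a sufficiency factor of one. A minimal sketch:

```python
# Scale the Psf optimum of Table 6-8 so that the probabilistic sufficiency
# factor is restored to one. Assumption (not stated explicitly in the text):
# uniform scaling of w and t multiplies the limit-state safety factor by c**3,
# hence c = Psf**(-1/3).
psf = 0.9733                  # Psf of the first-iteration optimum (Table 6-8)
w, t = 2.6881, 3.5000
c = psf ** (-1.0 / 3.0)
w_scaled, t_scaled = c * w, c * t
print(round(w_scaled, 3), round(t_scaled, 3))  # close to (2.7123, 3.5315)
```

Within rounding of the quoted Psf, this recovers the scaled design w = 2.7123, t = 3.5315 reported above.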

Table 6-8. Comparisons of optimum designs based on cubic design response surface
approximations of the first design iteration for probabilistic sufficiency factor,
safety index and probability of failure

Minimize objective function F = wt while the safety index beta >= 3 or 0.00135 >= Pof
Design response surface | Optima | Objective function F = wt | Pof/Safety index/Safety factor from MCS of 100,000 samples
Probability | w=2.6591, t=3.5000 | 9.3069 | 0.00522/2.5609/0.9589
Safety index | w=2.6473, t=3.5000 | 9.2654 | 0.00630/2.4949/0.9519
Probabilistic sufficiency factor | w=2.6881, t=3.5000 | 9.4084 | 0.00314/2.7328/0.9733

The design can be improved by performing another design iteration, which would

reduce the errors in design response surface by shrinking the design space around the

current design. The reduced range of design response surface approximations is shown in

Table 6-9 for the next design iteration. The design response surface approximations

constructed are compared in Table 6-10. It is observed again that the probabilistic

sufficiency factor response surface approximation is the most accurate.

Table 6-9. Range of design variables for design response surface approximations of the
second design iteration

System variables | w | t
Range | 2.2" to 3.0" | 3.2" to 4.0"

Table 6-10. Comparison of cubic design response surface approximations of the second
design iteration for probability of failure, safety index and probabilistic
sufficiency factor for system reliability (strength and displacement)

16 Latin Hypercube sampling points + 4 vertices
Error statistics | Probability response surface | Safety index response surface | Probabilistic sufficiency factor response surface
R2adj | 0.9569 | 0.9958 | 0.9998
RMSE Predictor | 0.06378 | 0.1329 | 0.003183
Mean of Response | 0.1752 | 2.2119 | 0.9548
APE (Average Percentage Error = RMSE Predictor/Mean of Response) | 36.40% | 6.01% | 0.33%

The optima based on the design response surface approximations for the second design

iteration shown in Table 6-10 are compared in Table 6-11. It is seen that the design

converges in two iterations with the probabilistic sufficiency factor design response surface

due to its superior accuracy over the probability of failure and safety index design

response surfaces.

Table 6-11. Comparisons of optimum designs based on cubic design response surfaces of
the second design iteration for probabilistic sufficiency factor, safety index
and probability of failure

Minimize objective function F = wt while the safety index beta >= 3 or 0.00135 >= Pof
Design response surface | Optima | Objective function F = wt | Pof/Safety index/Safety factor from MCS of 100,000 samples
Probability | w=2.7923, t=3.3438 | 9.3368 | 0.00511/2.5683/0.9658
Safety index | w=2.6878, t=3.5278 | 9.4821 | 0.00177/2.9165/0.9920
Probabilistic sufficiency factor | w=2.6041, t=3.6746 | 9.5691 | 0.00130/3.0115/1.0009









Summary

This chapter presented the probabilistic sufficiency factor, a measure of the safety

level relative to a target safety level that can be obtained from the results of Monte

Carlo simulation with little extra computation. It was shown that a design response

surface approximation can be fitted more accurately to the probabilistic sufficiency factor

than to the probability of failure or the safety index. Using the beam design example with

single or system reliability constraints, it was demonstrated that the design response

surface approximation based on the probabilistic sufficiency factor has superior accuracy and

accelerates the convergence of reliability-based design optimization. The probabilistic

sufficiency factor also provides information in regions of such low probability that

the probability of failure or safety index cannot be estimated by Monte Carlo simulation

with a given sample size, which is helpful in guiding the optimizer. Finally, it was shown

that the probabilistic sufficiency factor can be employed by the designer to estimate the

additional weight required to achieve a target safety level, which is difficult with the

probability of failure or safety index.
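The claim that the Psf comes almost for free from a Monte Carlo simulation can be made concrete: with target probability Pt and M samples, the Psf is the n-th smallest sample safety factor, where n = Pt * M. A minimal sketch (the `capacity`/`response` names are illustrative):

```python
import numpy as np

def probabilistic_sufficiency_factor(capacity, response, p_target):
    """Psf from raw MCS samples: the n-th smallest safety factor
    s_i = capacity_i / response_i, with n = p_target * M. Psf < 1 means the
    design misses the target reliability; Psf > 1 means it exceeds it."""
    s = np.sort(capacity / response)
    n = int(round(p_target * len(s)))
    return s[n - 1]

# Synthetic illustration: safety factors 0.01, 0.02, ..., 1.00, so the
# 5th-smallest value (Pt = 0.05 with M = 100) is 0.05
capacity = np.arange(1, 101) / 100.0
response = np.ones(100)
print(probabilistic_sufficiency_factor(capacity, response, 0.05))
```

This requires only a partial sort of quantities the Monte Carlo simulation already computes, which is why the extra cost over estimating the probability of failure is negligible.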















CHAPTER 7
RELIABILITY-BASED DESIGN OPTIMIZATION USING DETERMINISTIC
OPTIMIZATION AND MULTI-FIDELITY TECHNIQUE

Introduction

The probabilistic sufficiency factor (PSF) developed in Chapter 6 is integrated into a

reliability-based design optimization (RBDO) framework in this chapter. Classical

RBDO is performed in a coupled, double-loop fashion, where the inner loop performs

reliability analysis and the outer loop performs design optimization. RBDO using the double-

loop framework requires many reliability analyses and is computationally expensive.

Wu et al. (1998, 2001) developed a safety-factor based approach for performing

RBDO in a decoupled, single-loop fashion, where the reliability constraints are converted

to equivalent deterministic constraints by using the concept of a safety factor. The

similarity between Wu's approach and the probabilistic sufficiency factor approach

indicates that it may be worthwhile to study the use of the probabilistic sufficiency factor

for converting RBDO to sequential deterministic optimization.

For many problems the required probability of failure is very low, so that good

estimates require a very large MCS sample. In addition, the design response surface

(DRS) must be extremely accurate in order to estimate a very low probability of

failure well. Thus an expensive MCS may be required at a large number of design points in

order to construct the DRS. A multi-fidelity technique using the probabilistic sufficiency

factor for RBDO is investigated to alleviate this computational cost. The two approaches

for reducing the computational cost of RBDO for low probabilities of failure are compared.









Reliability-Based Design Optimization Using Sequential Deterministic Optimization
with Probabilistic Sufficiency Factor

Wu et al. (1998, 2001) proposed a decoupled approach using a partial safety factor to

replace reliability constraints by equivalent deterministic constraints. After performing

reliability analysis, the random variables x are replaced by safety-factor based values x*,

which is the most probable point (MPP) of the previous reliability analysis. The required

shift s of the limit state function g needed to satisfy the reliability constraints

satisfies P(g(x) + s < 0) = Pt. Both x* and s can be obtained as byproducts of the reliability

analysis. The target reliability is achieved by adjusting the limit state function via design

optimization. It is seen that the required shift s is similar to the probabilistic sufficiency

factor (Qu and Haftka 2003) presented in Chapter 6. The significant difference between

RBDO with Wu's partial safety factor and coupled RBDO is that the reliability analysis is decoupled

from, and driven by, the design optimization to improve the efficiency of RBDO. Thus

RBDO is performed in a deterministic fashion and corrected by reliability analysis after

each optimization. The PSF is employed in this chapter to convert RBDO to equivalent

deterministic optimization. Converting RBDO to equivalent deterministic optimization

enables further exploration of the design space for those problems where the design space

has multiple local optima and only a limited number of analyses is affordable

due to high computational cost, such as the design of stiffened panels addressed

in Chapter 8.

Starting from a mean-value based design, where the deterministic safety factor is

one, an initial design is found by deterministic optimization. Reliability analysis using

Monte Carlo simulation reveals the deficiency of this design in terms of probability of failure and

probabilistic sufficiency factor. In the next design iteration, the safety factor of the next deterministic

optimization is chosen to be


s(x, d)^(k+1) = s(x, d)^(k) / Psf(x, d)^(k)    (7-1)



which is used to reduce the yield strength of the material, R. The optimization problem is

formulated as

minimize F = wt
such that
sigma - R / s(x, d)^(k+1) <= 0    (7-2)

The process is repeated until the optimum converges and the reliability constraint is

satisfied.
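The loop of Equations (7-1) and (7-2) can be sketched for the beam strength problem of Chapter 6. The stress expression and the random-variable distributions below are assumptions carried over from the Wu et al. (2001) beam example (X ~ N(500, 100), Y ~ N(1000, 100), yield strength R ~ N(40000, 2000)); the optimizer, starting point and sample size are illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

R_MEAN = 40000.0  # mean yield strength (assumed, Wu et al. 2001)

def stress(w, t, x=500.0, y=1000.0):
    # Bending stress at the fixed end of the cantilever (assumed form)
    return 600.0 * y / (w * t**2) + 600.0 * x / (w**2 * t)

def det_optimum(s):
    """Deterministic optimization of Eq. (7-2): minimize wt subject to
    stress at the mean loads <= R_MEAN / s."""
    res = minimize(lambda d: d[0] * d[1], x0=[2.5, 3.8],
                   constraints=[{"type": "ineq",
                                 "fun": lambda d: R_MEAN / s - stress(*d)}],
                   bounds=[(1.0, 4.0), (1.0, 6.0)], method="SLSQP")
    return res.x

def psf_mcs(w, t, p_target=0.00135, m=100_000, seed=0):
    """Probabilistic sufficiency factor from direct MCS: the n-th smallest
    safety factor R_i / stress_i, with n = p_target * m."""
    rng = np.random.default_rng(seed)
    x = rng.normal(500.0, 100.0, m)
    y = rng.normal(1000.0, 100.0, m)
    r = rng.normal(R_MEAN, 2000.0, m)
    sf = np.sort(r / stress(w, t, x, y))
    return sf[int(round(p_target * m)) - 1]

# Sequential deterministic optimization: Eq. (7-1) updates the safety factor
s, history = 1.0, []
for _ in range(4):
    w, t = det_optimum(s)
    psf = psf_mcs(w, t)
    history.append(psf)
    s = s / psf            # Eq. (7-1): s_(k+1) = s_(k) / Psf_(k)
print(history[0], history[-1])  # Psf climbs toward 1
```

The first iterate is the mean-based deterministic design (safety factor one), which is far from the target reliability; each update inflates the deterministic safety factor by the current Psf shortfall, so the sufficiency factor is driven toward one in a few iterations.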

Reliability-Based Design Optimization Using Multi-Fidelity Technique with
Probabilistic Sufficiency Factor

For problems with a very low probability of failure, a good estimate of the probability

requires a very large MCS sample. In addition, the DRS must be extremely accurate in

order to estimate a very low probability of failure well. Thus an expensive MCS may be

required at a large number of design points in order to construct the DRS. Sequential

deterministic optimization may be used to reduce the computational cost associated with

RBDO for very low probabilities of failure. However, since it does not use any derivative

information for the probabilities, it is not likely to converge to an optimum design when

competing failure modes are disparate in terms of the cost of improving their safety.

A compromise between the deterministic optimization and the full probabilistic

optimization is afforded by the Psf through the use of an intermediate target probability Pi, which is

higher than the required probability Pr and can be estimated via a less expensive MCS

and a less accurate DRS. The Psf can then be re-calibrated by a single expensive MCS. This

is a variable-fidelity technique, with a large number of inexpensive MCS combined with

a small number of expensive MCS.


For the beam example we illustrate the process by setting a low required

probability of 0.0000135, and using as the intermediate probability 0.00135, the value used

as the required probability in the previous examples. We start by finding an initial optimum

design with the intermediate probability as the required probability. This involves the

generation of a response surface approximation of Psf for the intermediate probability as

well as finding the optimum based on this response surface. We then perform an

expensive MCS which is adequate for estimating the required probability. Here we use

MCS with 10^7 samples. We then calculate the Psf from this accurate MCS, and denote it

Psf*. At that design the Psf predicted by the response surface approximation is about 1,

because the initial optimization was performed with a lower limit of 1 on the Psf. In

contrast, the accurate Psf* will in general be different for several reasons. These include

the higher accuracy of the MCS, the response surface errors, and, most important, the

lower probability requirement. For example, with 10^7 samples, at this initial design we

may get Psf = 1.01 for the intermediate probability (based on the 13,500 lowest safety

factors) and Psf* = 0.89 for the required probability (based on the 135 lowest safety

factors).

With values of Psf and Psf* at the same point, we can define a scale factor f as

the ratio of these two numbers:

f = Psf* / Psf    (7-3)

This ratio can be used to correct the response surface approximation during the

optimization process. Once an optimum design is found with a given f, a new accurate

MCS can be performed at the optimum, a new value of f can be calculated from Equation

(7-3) at the new point, and the process repeated until convergence. As a further refinement,

we have also updated the response surface for the intermediate probability, centering it

about the new optimum.
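Both sufficiency factors in this scheme can come from the same expensive MCS, as in the 10^7-sample example above: the intermediate-probability Psf uses the 13,500th smallest safety factor and the required-probability Psf* uses the 135th. A sketch with synthetic safety factors standing in for the real simulation output:

```python
import numpy as np

def psf_at(sorted_sf, p, m):
    # n-th smallest safety factor, n = p * M
    return sorted_sf[int(round(p * m)) - 1]

# Synthetic stand-in for the 10^7 safety factors of an expensive MCS;
# the lognormal shape is purely illustrative
m = 10_000_000
rng = np.random.default_rng(0)
sf = np.sort(rng.lognormal(mean=0.5, sigma=0.15, size=m))

psf_intermediate = psf_at(sf, 0.00135, m)   # Pi = 1.35e-3 (13,500th value)
psf_required = psf_at(sf, 0.0000135, m)     # Pr = 1.35e-5 (135th value)

# Scale factor of Eq. (7-3), used to correct the cheap intermediate-probability
# response surface during optimization
f = psf_required / psf_intermediate
print(f < 1.0)  # the required-probability quantile is the smaller one
```

Because the required-probability quantile sits deeper in the lower tail, f is below one whenever the required probability is stricter than the intermediate one, and the corrected response surface f * Psf(x) approximates the sufficiency factor at the required reliability.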

Beam Design Example

The following cantilever beam example (Figure 7-1) is taken from Wu et al. (2001)

to demonstrate the use of the probabilistic sufficiency factor.

Figure 7-1. Cantilever beam of length L = 100" subject to vertical and lateral bending

There are two failure modes in the beam design problem. One failure mode is

yielding, which is most critical at the corner of the rectangular cross section at the fixed

end of the beam


tX