<%BANNER%>

Multiple Surrogates and Error Modeling in Optimization of Liquid Rocket Propulsion Components


PAGE 1

1 MULTIPLE SURROGATES AND ERROR MO DELING IN OPTIMIZATION OF LIQUID ROCKET PROPULSION COMPONENTS By TUSHAR GOEL A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLOR IDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2007

PAGE 2

2 2007 Tushar Goel

PAGE 3

3 To my parents Sushil and Ramesh, sister Manjari, and brother Arun.

PAGE 4

4 ACKNOWLEDGMENTS This dissertation could not be completed withou t enormous help from my teachers, family, and friends. While I feel that words would never be sufficient to adequately reflect their contributions, I give a try. I am incredibly grateful to my advisors Prof. Raphael Haftka and Prof. Wei Shyy for their continuous encouragement, ve ry generous support, patience, and peerless guidance. Both Prof. Shyy and Prof. Haftka provi ded me numerous opport unities to develop and to hone my research and personal skills. I am amazed by their never-ending enthusiasm and depth of knowledge, and feel extremely fo rtunate to have been taught by them. I would like to especi ally thank my advisory committee members, Prof. Nam-Ho Kim, Prof. Jacob N. Chung, and Prof. Andr I. Khuri, for their willingness to serve on my committee, for evaluating my dissertation, and for offering constructive criticism th at has helped improved this work. I particularly thank Dr Kim, for ma ny discussions during our weekly group meetings and afterwards. I feel deeply indebted to Prof Nestor V. Queipo for a very fruitful collaboration. Not only did he significantly contribute to my work but also he was very supportive and helpful during the entire course of my graduate studies. I expres s my sincere gratitude to Dr Layne T Watson and Dr Daniel J Dorney for their collaboration and he lp in my research. I thank Dr Siddharth Thakur, who provided a huge assistance with the STREAM code and suggestions related to numerical aspects in my research work. I also thank Pr of. Peretz P. Friedmann and Prof. Kwang-Yong Kim for the opportunities to test some of our ideas. I thank my collaborators Dr Raj Vaidyanathan, Ms Yolanda Mack, Dr Melih Papila, Dr Yogen U tturkar, Dr Jiongyang Wu, Mr Abdus Samad, Mr Bryan Glaz, and Dr Li Liu. I lear nt a lot from you and I feel si ncerely indebted for your help, both personally and academically.

PAGE 5

5 I thank the staff of Mechanical Engineering department, particularly Jan, Pam, and David, for their help with administrative and technical support. I also am thankful to the staff at International Center, library, CIRCA, and ETD fo r their help with this thesis and other administrative details. I sin cerely acknowledge the financ ial support provided by NASA Constellation University Institute Program (CUIP). I duly thank my colleagues in the Structur al and Multi-disciplin ary Optimization Group and Computational Thermo-fluids Laboratory for their assistance and many fruitful discussions about all the worldly issues related to acad emics and beyond. I am highly obliged to Prof. Kalyanmoy Deb and Prof. Prashant Kumar at II T Kanpur, who gave me very sage advice at different times in my life. They indeed have played a big role in shaping my career. I also thank my colleagues Emre, Eric, Pat, Nick, and Amor, who made my stay at the University of Michigan a memorable one. I am gr ateful to have true fri ends in Ashish, Jaco, Erdem, Murali, Siva, Ashwin, Girish, Saur abh, Ved, Priyank, Satish, Tandon, Sudhir, Kale, Dragos, Christian, Victor, Ben, a nd Palani for lending me a shoulder, when I had a bad day and for sharing with me the happy moments. Thes e memories will remain etched for life. Finally, but not at all th e least, I must say that I would never have completed this work, had it not been the unconditional love, appreciation, and understanding of my family. Despite the fact that we missed each other very much, they always motivated me to take one more step forward throughout my life and rejoiced a ll my achievements. To you, I dedicate this dissertation!

PAGE 6

6 TABLE OF CONTENTS page ACKNOWLEDGMENTS...............................................................................................................4 LIST OF TABLES................................................................................................................. ........12 LIST OF FIGURES................................................................................................................ .......16 LIST OF ABBREVIATIONS........................................................................................................20 ABSTRACT....................................................................................................................... ............29 CHAPTER 1 INTRODUCTION AND SCOPE...........................................................................................31 Space Propulsion Systems......................................................................................................31 Design Requirements of Propulsions Systems.......................................................................32 System Identification and Optimization: Case Studies..........................................................33 Sensitivity Evaluation and Model Valida tion for a Cryogenic Cavitation Model..........34 Shape Optimization of Diffuser Vanes............................................................................34 Surrogate Modeling............................................................................................................. ...35 Issues with Surrogate Modeling.............................................................................................37 Sampling Strategies.........................................................................................................37 Type of Surrogate Model.................................................................................................38 Estimation of Errors in Surrogate Predictions.................................................................38 Scope of Current Research.....................................................................................................39 2 ELEMENTS OF SURROGATE MODELING......................................................................43 Steps in Surrogate Modeling..................................................................................................44 Design of Experiments....................................................................................................45 Numerical Simulations at Selected Locations.................................................................45 Construction of Surrogate Model....................................................................................45 Model Validation.............................................................................................................45 Mathematical Formulation of Surrogate Modeling Problem..................................................46 Design of Experiments.......................................................................................................... .48 Factorial Designs.............................................................................................................49 Central Composite Designs.............................................................................................50 Variance Optimal DOEs for Polynomi al Response Surface Approximations................51 Latin Hypercube Sampling .............................................................................................51 Orthogonal Arrays ..........................................................................................................52 Optimal LHS, OA-based LHS, Optimal OA-based LHS................................................53 Construction of Surrogate Model...........................................................................................54 Polynomial Response Surface Approximation ...............................................................54 Kriging Modeling ...........................................................................................................56

PAGE 7

7 Radial Basis Functions ...................................................................................................58 Kernel-based Regression.................................................................................................61 Model Selection and Validation.............................................................................................62 Split Sample .................................................................................................................. ..62 Cross Validation .............................................................................................................63 Bootstrapping..................................................................................................................64 3 PITFALLS OF USING A SING LE CRITERION FOR SELECTING EXPERIMENTAL DESIGNS................................................................................................72 Introduction................................................................................................................... ..........72 Error Measures for Experimental Designs.............................................................................76 Test Problems and Results......................................................................................................80 Comparison of Different Experimental Designs.............................................................81 Space filling characteristics of D-optimal and LHS designs....................................81 Tradeoffs among various experimental designs.......................................................82 Extreme example of risks in single cr iterion based design: Min-max RMS bias CCD......................................................................................................................87 Strategies to Address Multiple Cr iteria for Experimental Designs.................................89 Combination of model-based D-optimality criterion with geometry based LHS criterion.................................................................................................................90 Multiple experimental designs combined with pointwise error-based filtering.......92 Concluding Remarks............................................................................................................. .94 4 ENSEMBLE OF SURROGATES........................................................................................105 Conceptual Framework.........................................................................................................107 Identification of Region of Large Uncertainty..............................................................107 Weighted Average Surrogate Model Concept...............................................................107 Non-parametric surrogate filter .............................................................................109 Best PRESS for exclusive assignments..................................................................109 Parametric surrogate filter .....................................................................................109 Test Problems, Numerical Proce dure, and Prediction Metrics.............................................112 Test Problems................................................................................................................112 Branin-Hoo function..............................................................................................112 Camelback function................................................................................................112 Goldstein-Price function........................................................................................112 Hartman functions..................................................................................................112 Radial turbine design for space launch..................................................................113 Numerical Procedure.....................................................................................................114 Prediction Metrics.........................................................................................................115 Correlation coefficient............................................................................................115 RMS error...............................................................................................................116 Maximum error......................................................................................................116 Results and Discussion.........................................................................................................117 Identification of Zones of High Uncertainty.................................................................117 Robust Approximation via Ensemble of Surrogates.....................................................119

PAGE 8

8 Correlations............................................................................................................120 RMS errors.............................................................................................................121 Maximum absolute errors.......................................................................................123 Studying the role of generali zed cross-validation errors........................................123 Effect of sampling density......................................................................................124 Sensitivity analysis of PWS parameters.................................................................126 Conclusions.................................................................................................................... .......126 5 ACCURACY OF ERROR ESTIMATES FOR SURROGATE APPROXIMATION OF NOISE-FREE FUNCTIONS................................................................................................144 Introduction................................................................................................................... ........144 Error Estimation Measures...................................................................................................145 Error Measures for Polynomial Response Surface Approximation..............................146 Estimated standard error.........................................................................................148 Root mean square bias error...................................................................................148 Error Measures for Kriging...........................................................................................149 Model Independent Error Estimation Models...............................................................151 Generalized cross-validation error ........................................................................151 Standard deviation of responses ............................................................................152 Ensemble of Error Estimation Measures.......................................................................154 Averaging of multiple error measures....................................................................154 Identification of best error measure.......................................................................154 Simultaneous application of multiple error measures............................................155 Global Prediction Metrics..............................................................................................155 Root mean square error..........................................................................................155 Correlation between predicted and actual errors....................................................156 Maximum absolute error........................................................................................157 Test Problems and Testing Procedure..................................................................................157 Test Problems................................................................................................................157 Branin-Hoo function..............................................................................................158 Camelback function................................................................................................158 Goldstein-Price function........................................................................................158 Hartman functions..................................................................................................158 Radial turbine design problem...............................................................................159 Cantilever beam design problem ...........................................................................159 Testing Procedure..........................................................................................................160 Design of experiments ...........................................................................................160 Test points..............................................................................................................161 Surrogate construction............................................................................................161 Error estimation......................................................................................................162 Results: Accuracy of Error Estimates...................................................................................162 Global Error Measures..................................................................................................163 Pointwise Error Measures..............................................................................................165 Root mean square errors.........................................................................................165 Correlations between actual and predicted errors..................................................168 Maximum absolute errors.......................................................................................169

PAGE 9

9 Ensemble of Multiple Error Estimators................................................................................170 Averaging of Errors.......................................................................................................170 Identification of Suitable Error Estimator for Kriging..................................................171 Detection of High Error Regions us ing Multiple Errors Estimators.............................173 Conclusions.................................................................................................................... .......175 Global Error Estimators.................................................................................................175 Pointwise Error Estimation Models...............................................................................176 Simultaneous Application of Multiple Error Measures.................................................177 6 CRYOGENIC CAVITATION MODEL VAL IDATION AND SENSITIVITY EVALUATION....................................................................................................................199 Introduction................................................................................................................... ........199 Cavitating Flows: Significance and Pr evious Computational Efforts...........................199 Influence of Thermal Environment on Cavitation Modeling........................................201 Experimental and Numerical Mode ling of Cryogenic Cavitation.................................203 Surrogate Modeling Framework....................................................................................204 Scope and Organization.................................................................................................205 Governing Equations a nd Numerical Approach...................................................................206 Transport-based Cavitation Model................................................................................207 Thermodynamic Effects................................................................................................208 Speed of Sound Model..................................................................................................210 Turbulence Model.........................................................................................................211 Numerical Approach.....................................................................................................212 Results and Discussion.........................................................................................................213 Test Geometry, Boundary Conditions, and Performance Indicators.............................213 Surrogates-based Global Sensitivity Assessment and Calibration................................214 Global Sensitivity Assessment...............................................................................215 Surrogate construction............................................................................................216 Main and interaction effect s of different variables................................................217 Validation of global sensitivity analysis................................................................218 Calibration of Cryogenic Cavitation Model..................................................................219 Surrogate modeling of responses...........................................................................220 Multi-objective optimization..................................................................................220 Optimization outcome for hydrogen......................................................................222 Validation of the calibra ted cavitation model........................................................222 Investigation of Thermal Effects and Boundary Conditions................................................223 Influence of Thermo-sensitive Material Properties.......................................................223 Impact of Boundary Conditions....................................................................................225 Conclusions.................................................................................................................... .......226 Influence of Turbulence Modeling on Predictions...............................................................243 7 IMPROVING HYDRODYNAMIC PERFOR MANCE OF DIFFUSER VIA SHAPE OPTIMIZATION..................................................................................................................245 Introduction................................................................................................................... ........245 Problem Description............................................................................................................ .247

PAGE 10

10 Vane Shape Definition..................................................................................................248 Mesh Generation, Boundary Conditi ons, and Numerical Simulation...........................250 Surrogate-Based Design and Optimization..........................................................................251 Surrogate Modeling.......................................................................................................251 Global Sensitivity Assessment......................................................................................255 Optimization of Diffuser Vane Performance................................................................257 Design Space Refinement Dimensionality Reduction...............................................259 Final Optimization.........................................................................................................260 Analysis of Optimal Diffuser Vane Shape...........................................................................260 Flow Structure...............................................................................................................260 Vane Loadings...............................................................................................................261 Empirical Considerations..............................................................................................261 Summary and Conclusions...................................................................................................262 8 SUMMARY AND FUTURE WORK..................................................................................277 Pitfalls of Using a Single Cr iterion for Experimental Designs.............................................277 Summary and Learnings................................................................................................277 Future Work...................................................................................................................278 Ensemble of Surrogates........................................................................................................278 Summary and Learnings................................................................................................278 Future Work...................................................................................................................279 Accuracy of Error Estimates for Noise-free Functions........................................................279 Summary and Learnings................................................................................................279 Future Work...................................................................................................................280 System Identification of Cryogenic Cavitation Model.........................................................280 Summary and Learnings................................................................................................280 Future Work...................................................................................................................281 Shape Optimization of Diffuser Vanes.................................................................................282 Summary and Learnings................................................................................................282 Future Work...................................................................................................................282 APPENDIX A THEORETICAL MODELS FOR ESTIMA TING POINTWISE BIAS ERRORS..............283 Data-Independent Error Measures........................................................................................285 Data-Independent Bias Error Bounds............................................................................286 Data-Independent RMS Bias Error...............................................................................286 Data-Dependent Error Measures..........................................................................................287 Bias Error Bound Formulation......................................................................................288 Root Mean Square Bias Error Formulation...................................................................289 Determining the Distribution of Coefficient Vector .................................................289 Analytical Expression for Pointwise Bias Error Bound................................................291 Analytical Estimate of Root Mean Square Bias Error...................................................292 B APPLICATIONS OF DATA-INDEPENDE NT RMS BIAS ERROR MEASURES..........294

PAGE 11

11 Construction of Experimental Designs.................................................................................294 Why Min-max RMS Bias Designs Place Points near Center for Four-dimensional Space?......................................................................................................................... ......295 Verification of Expe rimental Designs..................................................................................296 Comparison of Experimental Designs..................................................................................297 RMS Bias Error Estimates for Trigonometric Example.......................................................298 C GLOBAL SENSITIVITY ANALYSIS................................................................................304 D LACK-OF-FIT TEST WITH NON-RE PLICATE DATA FOR POLYNOMIAL RESPONSE SURFACE APPROXIMATION.....................................................................307 LIST OF REFERENCES.............................................................................................................310 BIOGRAPHICAL SKETCH.......................................................................................................329

PAGE 12

12 LIST OF TABLES Table page 2-1. Summary of main characte ristics of different DOEs..............................................................70 2-2. Examples of kernel functions and related estimation schemes..............................................70 2-3. Summary of main characteristic s of different surrogate models............................................71 3-1. D-optimal design (25 points, 4-dime nsional space) obtained using JMP. ..........................102 3-2. LHS designs (25 points, 4-dimensi onal space) obtained using MATLAB. ........................102 3-3. Comparison of RMS bias CCD, FCCD, D-optimal, and LHS designs for 4-dimensional space (all designs have 25 points). ..................................................................................102 3-4. Prediction performance of different 25-poin t experimental designs in approximation of example functions F1, F2, and F3 in four-dimensional spaces. .......................................103 3-5. Min-max RMS bias central composite designs for 2-5 dimensional spaces and corresponding design metrics. ........................................................................................103 3-6. Mean and coefficient of variation (based on 100 instances) of different error metrics for various experimental designs in four-dimensional space (30 points)..............................104 3-7. Reduction in errors by considering multip le experimental designs and picking one experimental design using appropr iate criterion (filtering).............................................104 4-1. Parameters used in Hartman function with three variables..................................................137 4-2. Parameters used in Hartma n function with six variables.....................................................137 4-3. Mean, coefficient of varia tion (COV), and median of diffe rent analytical functions..........137 4-4. Range of variables for ra dial turbine design problem..........................................................137 4-5. Numerical setup for the test problems..................................................................................138 4-6. Median, 1st, and 3rd quartile of the maximum standard deviation and act ual errors in predictions of different surrogates at the location corresponding to maximum standard deviation over 1000 DOEs for different test problems.....................................138 4-7. Median, 1st, and 3rd quartile of the minimum standard deviation and act ual errors in predictions of different surrogates at the location corresponding to minimum standard deviation over 1000 DOEs for different test problems.....................................139

PAGE 13

13 4-8. Median, 1st, and 3rd quartile of the maximum standa rd deviation and maximum actual errors in predictions of different su rrogates over 1000 DOEs for different test problems....................................................................................................................... ....139 4-9. Effect of design of experiment: Number of cases when an individual surrogate model yielded the least PRESS error. ........................................................................................140 4-10. Opportunities of improvement via PWS: Number of points when individual surrogates yield errors of opposite signs. .........................................................................................140 4-11. Mean and coefficient of variation of correlation coefficient between actual and predicted response for different surrogate models...........................................................140 4-12. Mean and coefficient of variation of RMS errors in design space for different surrogate models......................................................................................................................... .....141 4-13. Mean and coefficient of variation of maximum absolute error in design space.................141 4-14. Mean and coefficient of variation of the ratio of RMS error and PRESS over 1000 DOEs........................................................................................................................... .....142 4-15. The impact of sampling density in approximation of Branin-Hoo function......................142 4-16. The impact of sampling density in approximation of Camelback function.......................143 4-17. Effect of parameters in parametr ic surrogate filter used for PWS. ...................................143 5-1. Summary of different error measures used in this study. ....................................................187 5-2. Parameters used in Hartman function with three variables. ................................................187 5-3. Parameters used in Hartman function with six variables. ...................................................187 5-4. Range of variables for ra dial turbine design problem..........................................................188 5-5. Ranges of variables for can tilever beam design problem. ...................................................188 5-6. Numerical setup for di fferent test problems. .......................................................................188 5-7. Mean and coefficient of variation of nor malized actual RMS error in the entire design space. ........................................................................................................................ .......189 5-8. Mean and coefficient of variation of ra tio of global error meas ures and corresponding actual RMS error in design space. ..................................................................................190 5-9. Mean and COV of ratio of root mean s quared predicted and actual errors for different test problems. ................................................................................................................ ..191

PAGE 14

14 5-10. Mean and COV of correlations between actu al and predicted errors for different test problems. ..................................................................................................................... ....192 5-11. Mean and COV of ratio of maximum pred icted and actual errors for different test problems....................................................................................................................... ....193 5-12. Mean and COV of ratio of root mean squa re average error and actual RMS errors for different test problems. ...................................................................................................194 5-13. Comparison of performance of individual error measures and GCV chosen error measure for kriging. ........................................................................................................194 5-14. Number of cases out of 1000 for which e rror estimators failed to detect high error regions. ...................................................................................................................... ......195 5-15. Number of cases out of 1000 for which e rror estimators failed to detect maximum error regions. ................................................................................................................ ...196 5-16. Number of cases out of 1000 for which di fferent error estimators wrongly marked low error regions as high error. ..............................................................................................197 5-17. Number of cases out of 1000 for which di fferent error estimators wrongly marked low error regions as the maximum error region. ....................................................................198 5-18. High level summary of performance of different pointwise error estimators....................198 6-1. Summary of a few relevant numer ical studies on cryogenic cavitation...............................239 6-2. Ranges of variables for global sensitivity analyses. ............................................................240 6-3. Performance indicators and corresponding weights in surrogate approximations of prediction metrics diffP and diffT. ....................................................................................240 6-4. Performance indicators and corresponding weights in surrogate approximations of prediction metrics diffP and diffT in model-parameter space. ..........................................241 6-5. Predicted and actual diffP and diffT at best-compromise model parameter for liquid N2 (Case 290C). .................................................................................................................. .241 6-6. Description of flow cases chosen fo r the validation of th e calibrated cryogenic cavitation model...............................................................................................................242 6-7. Model parameters in Launder-Spalding and non-equilibrium k turbulence models.....243 7-1. Design variables an d corresponding ranges. ......................................................................273 7-2. Summary of pressure ratio on data poi nts and performance metrics for different surrogate models fitted to Set A. .....................................................................................273

PAGE 15

15 7-3. Range of data, quality indicators for diffe rent surrogate models, and weights associated with the components of PWS for Se t B and Set C data, respectively..............................274 7-4. Optimal design variables and pressure ratio obtained using different surrogates constructed using Set C data. ..........................................................................................274 7-5. Comparison of actual and predicted pressu re ratio of optimal designs obtained from multiple surrogate models (Set C). .................................................................................275 7-6. Modified ranges of design variables and fixed parameters in refined design space............275 7-7. Range of data, summary of performance indicators, and weight ed associated with different surrogate models in the refined design space. ..................................................276 7-8. Design variables and pressure ratio at the optimal designs predicted by different surrogates. ................................................................................................................... ....276 7-9. Actual and empirical ratios of ga ps between adjacent diffuser vanes..................................276 B-1. Design variables and ma ximum RMS bias errors for min-max RMS bias central composite designs in Nv=2-5 dimensional spaces. ..........................................................303 B-2. Comparison of differe nt experimental design s for two dimensions. ..................................303 B-3. Comparison of actual and predicted RMS bi as errors for min-max RMS bias central composite experimental designs in four-dimensional space. ..........................................303

PAGE 16

16 LIST OF FIGURES Figure page 1-1. Schematic of liquid fuel rocket propulsion system................................................................41 1-2. Classification of propulsion sy stems according to power cycles. .........................................42 2-1. Key stages of the surroga te-based modeling approach..........................................................66 2-2. Anatomy of surrogate modeling: m odel estimation + model appraisal. ................................66 2-3. A surrogate modeling scheme provides the expected value of th e prediction and the uncertainty associated with that prediction........................................................................67 2-4. Alternative loss functi ons for the construction of surrogate models. ....................................67 2-5. A two-level full factorial design of experiment for three variables. .....................................68 2-6. A central composite design for three-dimensional design space. ..........................................68 2-7. A representative Latin hype rcube sampling design with 6,2svNN for uniformly distributed variables in the unit square..............................................................................69 2-8. LHS designs with significant differences in terms of uniformity. ........................................69 3-1. Boxplots of radius of the largest unocc upied sphere inside the design space [-1, 1]Nv. ........97 3-2. Illustration of the largest spherical empty space inside the 3D design space [-1, 1]3 (20 points). ...................................................................................................................... ........97 3-3. Tradeoffs between different error metrics. ............................................................................98 3-4. Comparison of 100 D-optimal, LHS, and combination (D-optimality + LHS) experimental designs in four-dimensional sp ace (30 points) using different metrics. .....99 3-5. Simultaneous use of multiple experimental designs concept, where one out of three experimental designs is selected usin g appropriate criterion (filtering). ........................100 4-1. Boxplots of weights for 1000 DOE instances (Camelback function). ................................128 4-2. Contour plots of two variable test functions. .......................................................................129 4-3. Boxplots of function values of different analytical functions. ............................................130 4-4. Contour plots of errors and standard de viation of predictions considering PRS, KRG, and RBNN surrogate models for Branin-Hoo function. .................................................130

PAGE 17

17 4-5. Standard deviation of responses and actual errors in prediction of different surrogates at corresponding locations (boxplots of 1000 DOEs using Branin-Hoo function)..............131 4-6. Correlations between actua l and predicted response for different test problems. ...............132 4-7. Normal distribution approximation of th e sample mean correlation coefficient data obtained using 1000 bootstrap samples (kriging, Branin-Hoo function).........................133 4-8. RMS errors in design space for different surrogate models. ...............................................134 4-9. Maximum absolute error in design space for different surrogate models. ..........................135 4-10. Boxplots of ratio of RMS error and PRESS over 1000 DOEs for different problems. .....136 5-1. Contour plots of two vari able analytical functions. .............................................................178 5-2. Cantilever beam subjected to ho rizontal and vertical random loads....................................178 5-3. Ratio of global error measures and relevant actual RMS error. ..........................................180 5-4. Ratio of root mean square values of poi ntwise predicted and actual errors for different problems, as denoted by predicted error measure. ..........................................................182 5-5. Correlation between actual and predicted error measures for different problems. .............184 5-6. Ratio of maximum predicted and actual ab solute errors in desi gn space for different problems. ..................................................................................................................... ....186 6-1. Variation of physical properties fo r liquid nitrogen and liquid hydrogen with temperature. .................................................................................................................. ..228 6-2. Experimental setup and computational geometries. ............................................................229 6-3. Sensitivity indices of main effects using multiple surrogates of prediction metric (liquid N2, Case 290C). ..................................................................................................230 6-4. Influence of different variables on performance metric s quantified using sensitivity indices of main and total effects (liquid N2, Case 290C). ..............................................231 6-5. Validation of global sensitiv ity analysis results for main effects of different variables (liquid N2, Case 290C). ..................................................................................................232 6-6. Surface pressure and temperature predictio ns using the model parameters for liquid N2 that minimized diffP and diffT respectively (Case 290C). ...............................................232 6-7. Location of points (destC ) and corresponding responses us ed for calibration of the cryogenic cavitation model (liquid N2, Case 290C). ......................................................233

PAGE 18

18 6-8. Pareto optimal front and corresponding opt imal points for liquid N2 (Case 290C) using different surrogates. ........................................................................................................233 6-9. Surface pressure and temperature predic tions on benchmark test cases using the model parameters corresponding to original a nd best-compromise values for different fluids. ....................................................................................................................... .......234 6-10. Surface pressure and temperature predicti ons using the original parameters and bestcompromise parameters for a variety of geometries and operating conditions. .............236 6-11. Surface pressure and temperature profil e on 2D hydrofoil for Case 290C where the cavitation is controlled by, (1) temperatur e-dependent vapor pressure, and (2) zero latent heat, and hence isothermal flow field. ..................................................................237 6-12. Impact of different boundary conditions on surface pressure and temperature profile on 2D hydrofoil (Case 290C, liquid N2) and pr edictions on first computational point next to boundary. ............................................................................................................238 6-13. Influence of turbulence modeling on surf ace pressure and temperature predictions in cryogenic cavitating conditions. .....................................................................................244 7-1. A representative expander cycle used in the upper stage engine. .......................................264 7-2. Schematic of a pump....................................................................................................... .....264 7-3. Meanline pump flow path. ................................................................................................. ..264 7-4. Baseline diffuser vane shape and time-averaged flow. .......................................................265 7-5. Definition of the geometry of the diffuser vane. .................................................................265 7-6. Parametric Bezier curve................................................................................................... .....265 7-7. A combination of Hand Ogrids to analyze diffuser vane. ...............................................266 7-8. Surrogate based design and optimization procedure. ..........................................................266 7-9. Surrogate modeling........................................................................................................ .......267 7-10. Sensitivity indices of main effe ct using various surrogate models. ..................................267 7-11. Sensitivity indices of main and total effects of different va riables using PWS. ...............268 7-12. Actual partial variance of different design variables. ........................................................268 7-13. Baseline and optimal diffuser vane shap e obtained using different surrogate models. .....269 7-14. Comparison of instantaneous and time-aver aged flow fields of intermediate optimal (PRS) and baseline designs. ............................................................................................270

PAGE 19

19 7-15. Instantaneous and time-averaged pressure for the final optimal diffuser vane shape. ......271 7-16. Pressure loadings on different vanes. ................................................................................271 7-17. Gaps between adjacent vanes. ...........................................................................................272 B-1. Two-dimensional illustration of central composite experimental design constructed using two parameters 1 and 2. ......................................................................................301 B-2. Contours of scaled predicted RMS bias e rror and actual RMS error when assumed true model to compute bias error was quintic while the true model was trigonometric. .......301 Figure B-3. Contours of scal ed predicted RMS bias error and actual RMS error when different distributions of (2) were specified. ................................................................302

PAGE 20

20 LIST OF ABBREVIATIONS A Alias matrix ija Constants in Hartman functions ib Estimated coefficients associated with ith basis function b Vector of estimated coefficients of basis functions cov Covariance matrix in kriging c Bounds on coefficient vectors ,destprodCC Cavitation model parameters ic Constants in Hartman functions P C Specific heat at constant pressure 21,,,,,kCCCC Turbulence model parameters effD D-efficiency () Ex Expected value of random variable x avgE Average of surrogate models iE Error associated with ith surrogate model () e x Approximation error at design point x ()be x Bias error at design point x ()beb be x Bias error bound at design point x ()I be x Data-independent bias error bound at design point x ()rms be x Root mean square bias error at design point x

PAGE 21

21 ()esex Standard error at design point x (),,,iij f fffx Function of design variab les and decomposed functions ()fx Vector of basis functions in polynomial response surface model v f Vapor mass fraction h Sensible enthalpy () hx Radial basis function h Vector of radial basis functions I Identity matrix K Thermal conductivity k Turbulent kinetic energy L Loss function, Latent heat M Moment matrix mm Cavitation source terms D OEN Number of design of experiments eN Number of eigenvectors lhsN Number of Latin hypercube samples R BFN Number of radial basis functions qN Number of symbols for orthogonal arrays s N Number of sampled data points SM N Number of surrogate models testN Number of test points

PAGE 22

22 vN Number of variables 1N Number of basis func tions in approximation model 2N Number of basis function s missing from the approximation model diffP L2 norm of the difference between predicted and benchmark experimental pressure data tP Turbulence production term z yPP Location of mid-point on the lower side of diffuser vane p Pressure ij p Constants in Hartman functions R Correlation matrix 2 adj R Adjusted coefficient of multiple determination r Strength of orthogonal array maxr Radius of largest unoccupied sphere ()rx Vector of correlation between prediction and data points (kriging) ,,totaliijiSSS Sensitivity indices, main, interaction, and total effects resps Standard deviation of responses T Temperature diffT L2 norm of the difference between predicted and benchmark experimental temperature data t Time, magnitude of tangents of Bezier curves ,,,,,,ij kUuvwuuu Velocity components

PAGE 23

23 V Null eigenvector ,,iijVVV Variance and its component s (partial variances), volume iw Weight associated with ith surrogate model X Gramian design matrix x Vector of design variables ,,ij k x xx Space variables (coordinates) y Vector of responses ,() y yx Function or response value at a design point x ,() y yx Predicted response at a design point x ()Zx Systematic departure term in kriging Z Subset of design variable s for global sensitivity analysis Volume fraction, Thermal diffusivity, Parameter in weighted average model 12, Parameters used to define ve rtex and axial point locations in a central composite design Vector of coefficients of true basis functions Estimated coefficient vector in kriging Coefficient associated with a basis function in polynomial response surface approximation, coefficient of thermal expansion, parameter in weighted average model Constant used to estimat e root mean square bias error ij Kronecker-delta ,() x Turbulent dissipation, error in surrogate model () x True function or response

PAGE 24

24 Probability density function, variable defining diffuser vane shape Vector of parameters in the Gaussian correlation function Regularization parameter Mean of responses at sampled points, dynamic viscosity i Weights used in numerical integration Density 2 Variance of noise, estimated process variance in kriging a Adjusted root mean square error Degree of correlation, flow variable 1 Vector of ones Subscripts c Cavity, candidate true polynomial dopt D-optimal design ,,ijk Indices krg Kriging l Liquid phase lhs Latin hypercube sampling design m Mixture of liquid and vapor max Maximum of the quantity min Minimum of the quantity p rs Polynomial response surface p ws PRESS-based weighted average surrogate

PAGE 25

25 rbnn Radial basis neural network R MS Root mean square value t Turbulent wta Weighted average surrogate Solution with least deviation from the data v Vapor phase Reference conditions Superscripts/Overhead Symbols I Data independent error measure R Reynolds stress ()i ith design point ()i All points except ith design point ()l Lower bound T Transpose ()u Upper bound (1) Terms in the response surface model (2) Terms missing from the response surface model ^ Predicted value Average of responses in surrogate models Non-dimensional Numbers CFL Courant, Freidricks and Levy number Pr Prandtl number Re Reynolds number

PAGE 26

26 Cavitation number Acronyms ADS All possible data sets AIAA American Institute of Aeronautics and Astronautics ASME American Society of Mechanical Engineers ASP Alkaline-surfactant-polymer CCD Central composite design CFD Computational fluid dynamics COV Coefficient of variation CV Cross validation DOE Design of experiment ED Experimental design EGO Efficient global optimization ESE Estimated standard error FCCD Face-centered central composite design GCV Generalized cross validation error GMSE Generalized mean squared error GSA Global sensitivity analysis KRG Kriging IGV Inlet guide vane LHS Latin hypercube sampling LH2 Liquid Hydrogen LOX Liquid Oxygen

PAGE 27

27 MSE Mean squared error NASA National Aeronautics and Space Administration N-S Navier-Stokes NIST National Institute of Standards and Technology NPSF Non-parametric surrogate filter OA Orthogonal array POF Pareto optimal front PRESS Predicted residual sum of squares PRS Polynomial response surface PSF Parametric surrogate filter PWS PRESS-based weighted average surrogate RBF Radial basis function RBNN Radial basis neural network RMS Root mean square RMSBE Root mean square bias error RMSE Root mean squared error RP1 Refined petroleum SoS Speed of sound SQP Sequential quadratic programming SS Split sample SVR Support vector regression Operators () E a Expected value of the quantity a

PAGE 28

28 max(,)ab Maximum of a and b min(,)ab Minimum of a and b (,)rab Correlation between vectors a and b ()Va Variance of the quantity a ()a Standard deviation of the quantity a ()avga Space-averaged value of the quantity a max()a Maximum of the quantity a a Absolute value of the quantity a a (L2) Norm of the vector a

PAGE 29

29 Abstract of Dissertation Pres ented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy MULTIPLE SURROGATES AND ERROR MODELING IN OPTIMIZATION OF LIQUID ROCKET PROPULSION COMPONENTS By Tushar Goel May 2007 Chair: Raphael T. Haftka Cochair: Wei Shyy Major: Mechanical Engineering Design of space propulsion components is extr emely complex, expensive, and involves harsh environments. Coupling of computational fluid dynamics (C FD) and surrogate modeling to optimize performance of space propulsion componen ts is becoming popular due to reduction in computational expense. However, there are uncerta inties in predictions using this approach, like empiricism in computational models and surr ogate model errors. We develop methods to estimate and to reduce such uncertainties. We demonstrate the need to obtain experi mental designs using multiple criteria by showing that using a single-crite rion may lead to hi gh errors. We propose using an ensemble of surrogates to reduce uncertainties in selecting th e best surrogate and samp ling strategy. We also develop an averaging technique for multiple surr ogates that protects against poor surrogates and performed at par with best surrogate for many problems. We assess the accuracy of different error es timation models, including an error estimation model based on multiple surrogates, used to quantify prediction errors. While no single error model performs well for all problems, we show possible advantage of combining multiple error models.

PAGE 30

30 We apply these techniques to two problems relevant to space propulsion systems. First, we employ surrogate-based strategy to understand the role of em pirical model parameters and uncertainties in material propert ies in a cryogenic cavitation model, and to calibrate the model. We also study the influence of thermal effects on predictions in cryogenic environment in detail. Second, we use surrogate models to improve the hydrodynamic performance of a diffuser by optimizing the shape of diffuser vanes. For both problems, we obser ved improvements using multiple surrogate models. While we have demonstrated the approach using space propulsion components, the proposed techniques can be applie d to any large-scale problem.

PAGE 31

31 CHAPTER 1 INTRODUCTION AND SCOPE Liquid rocket propulsion systems are the mo st popular form of space propulsion systems for high thrust and specific impul se applications as required fo r space applications (Humble et al., 1995, Chapter 5). Unlike propulsion systems us ed in aircraft, space propulsion systems carry both fuel and oxidizer with the vehicle. This poses additional requirements on the selection of suitable propellants and desi gn of propulsion systems. Apart from high energy density, the choice of propellants is also affected by the ease of storage and handling, mass or volume of propella nt, and nature of products of combustion. Typical propellants used for liquid space propuls ion are refined petroleum (RP1) with liquid oxygen (LOX), hypergolic propellants (mono-methyl hydrazine with nitr ogen tetroxide) and cryogens (liquid hydrogen LH2 a nd LOX). Cryogens (LH2 and L OX) are most popular due to higher power/gallon ratio and specific thrust, lo wer weight of LH2, and cleaner combustion products (water). Despite difficulti es in storage (due to the tendency of cr yogens to return to gaseous state unless super-cooled, the boiling point of LH2 and LOX at st andard conditions is 423o F and -298o F, respectively) and safety considerations, the rewards of using cryogens as space propellants are significan t (NASA online facts, 1991). Space Propulsion Systems A conceptual schematic of a typical bi-pr opellant space propulsion system is shown in Figure 1-1. There are five major components of the propulsion system: fuel and oxidizer storage tanks, fuel and oxidizer pumps, gas turbine, co mbustion chamber, and nozzle. Based on the type of power cycle, different propulsion systems are classified as follows: Gas-Generator Cycle (Figure 1-2(A)) A small amount of fuel and oxidizer is fed to gas generator where the fuel is burnt at less than optimal ratio to keep the temperature in turbine low. The hot gases produced in gas

PAGE 32

32 generator drive the turb ine to produce power required to run the fuel and oxidizer pumps. Thrust of the engine is regulated by c ontrolling the amount of propellants through gas generator. This is an open cycle since the hot gas from turbines is either dumped overboard or sent into the main nozzle downstream. This configuration is usef ul for moderate power requirements but not good for the app lications which require high power. Staged Combustion Cycle (Figure 1-2(B)) This is a closed cycle system in which an enriched mixture of fuel and oxidizer is generated in pre-burner, and after passing thr ough the turbine this va porized mixture is fed to the main combustion chamber. No fuel or oxi dizer is wasted in this cycle as complete combustion takes place in the main combustion ch amber. This cycle is used for high power applications but the engine development cost is high and component s of propulsion system are subjected to harsh conditions. Expander Cycle (Figure 1-2(C)) In this closed cycle, the main combustion ch amber is cooled by passing liquid fuel that in turn gets vaporized by exchange of heat. This fu el vapor runs the turbine and is fed back to the main combustion chamber, where comple te combustion takes place. The limitation on heat transfer to the fuel lim its the power available in the tu rbine. This configuration is more suitable to small/mid size engines. Pressure-Fed Cycle (Figure 1-2(D)) This is the simplest configuration which does not require any pump or turbine. Instead, the fuel and the oxidizer are fed to the combus tion chamber by the tank pressure. This cycle can be applied only to relatively low-ch amber-pressure applications because higher pressure requires bulky storage tanks. Design Requirements of Propulsions Systems The design of an optimal propulsion system requires efficient performance of all components. As can be seen from Figure 1-2, pumps, turbine, combustion chamber, and nozzle are integral part of almost all space propulsion systems. The system level goal of designing a space propulsion system is to obtain the highest po ssible thrust with the lowest weight (Griffin and French, 1991), but the requirements of individual components are also governed by corresponding operating conditions Storage tanks are required to withstand high pressure in cryogenic environments while keep ing the weight of tank low. No zzles impart high velocities to high temperature combustion produ cts to produce maximum possibl e thrust. The requirements

PAGE 33

33 for the design of the turbine and the combusti on chamber are high efficiency, compact design, and ability to withstand high pre ssures and temperatures. Similarly, pumps are required to supply the propellants to the co mbustion chamber at a desired flow rate and injection pressure. Major issues in the design of pumps include harsh environments, compact design requirements, and cavitation under cryogenic c onditions. Each subsystem has numerous design options for example, number of stages, number and geometry of blades in turbines and pumps, different configurations and geometry of injectors and combustion chambe rs etc., that relate to the subsystem level requirements. System Identification and Optimization: Case Studies The design of a propulsion system is extrem ely complex due to the often conflicting requirements posed on individual components an d interaction of different subsystems. Experimental design of propulsi on systems is very expensive, time consuming, and involves harsh environments. With improvements in numeric al algorithms and incr ease in computational power, the role of computational fluid dynamics (CFD) for design of complex systems with various levels of complexities has grown many-fold. Computer simulation of design problems has not only reduced the cost of developing the designs but also has reduced risks and design cycle time. The flexibility of trying alternative design options also has increased extensively compared to the experiments. With improveme nts in computer hardware and CFD algorithms, the complexity of the simulation model is increa sing in an effort to cap ture physical phenomenon more accurately. Our current efforts in the design of liquid rocket propulsion system s are focused on two distinct classes of problems. Fi rst, we try to understand the role s of thermal effects and model parameters used for numerical modeling of cryoge nic cavitation; and second, we aim to carry out shape optimization of diffuser vanes to maximize diffuser efficiency. While our interests in the

PAGE 34

34 design of diffuser are motivated by the ongoing in terests in the Moon and Mars exploration endeavors, the study of cryogenic cavitation mode l validation and sensitivity analysis is relatively more generic, but very relevant to the design of liqui d rocket propulsion systems. We discuss each problem in more detail as follows. Sensitivity Evaluation and Model Valid ation for a Cryogenic Cavitation Model Though a lot of improvements have been made over the years in the numerical modeling of fluid physics, areas like cavitation, both in normal as well as cryogenic environment, are still continuously developing. Complex physics, phase ch anges and resulting vari ations in material properties, interaction of convection, viscous effects and pressure, time dependence, multiple time scales, interaction between different pha ses of fluids and ma ny fluids, temperature dependent material properties and turbulence make these pr oblems difficult (Utturkar, 2005). Numerous algorithms (Singhal et al. 1997, Merkle et al. 1998, Kunz et al. 2000, Senocak and Shyy 2004a-b, Utturkar et al. 2005a-b ) have been developed to captu re the complex behavior of cavitating flows, which have serious impli cations on the performance of the propulsion components. We study one cryogenic cavitation model in deta il to assess the role of thermal boundary conditions, thermal environment, uncertainties in material properties and empirical model parameters on the prediction of pressure and temperature field. Finally, we calibrate the cryogenic cavitation model parameters and valida te the outcome using several benchmark data. Shape Optimization of Diffuser Vanes The diffuser is a critical component in liqui d rocket turbomachinery. The high velocity fluid from turbo-pump is passed through the diffuse r to partially convert kinetic energy of the fluid into pressure. The efficiency of a diffuser is determined by its ability to induce pressure recovery that is characterized by the ratio of out let to inlet pressure. While exploring different

PAGE 35

35 concepts for diffuser design, Dorn ey et al. (2006a) observed that diffusers with vanes are more effective than vaneless diffusers. Consequently we seek further improvements in diffuser efficiency through shape optimization of vanes. Since computational cost of simulations is high, we adopt a su rrogate model-based framework for optimization and sensitivity analysis that is briefly introduced in next section and discussed in detail in a following chapter. Together, the present effort demonstrates that the same technical framework can be equally adapted to treat hardware and computational modeling development, with information to allow one to in spect the overall design sp ace characteristics, to facilitate quantitative trade-off considerations between of multiple and competing objectives, and to rank order the importance of the various desi gn variables. Surrogate Modeling High computational cost of si mulations involved in perfor mance evaluation of propulsion components makes direct coupling of optimizatio n tools and simulations infeasible for most practical problems. To alleviate the problems asso ciated with the optimization of such complex and computationally expensive components, surr ogate models based on a limited amount of data have been frequently used. Surrogate models offe r a low-cost alternativ e to evaluate a large number of designs and are amenable to the optim ization process. Surrogate models also can be used to assess the trends in the design space as well as help identify the problems associated with numerical simulations. A sample of recent application of surrogate models for the design of space-propulsion system is given as follows. Lepsch et al. (1995) used polynomial response surface approximations to minimize the empty-weight of the dual fuel vehicles by considering propulsion systems and vehicle design parameters. Ro well et al. (1999) and th e references within discuss the application of regres sion techniques for the design of single-stage-to-orbit vehicles.

PAGE 36

36 Madsen et al. (2000) used polynomial response su rface models for the design of diffusers. Gupta et al. (2000) used response su rface methodology to improve the liftto-drag ratio of artificially blunted leading edge spherical c one that is a representative geom etry for reentry vehicle while constraining the heat transfer rates. Chung and Alonso (2000) estimated boom and drag for low boom supersonic business jet design problem via kriging. Shyy et al. (2001a-b) employed global design optimization te chniques for the design of rocket propulsion components; Pa pila et al. (2000, 2001, 2002), Papila 2001, approximated the design objectives for supersonic turbines using polynomial response surface approximations and neural networks; Vaidyanathan et al. ( 2000, 2004a-b), Vaidyanathan (2004) modeled performance indicators of liquid rocket injectors using polynomial response surface approximations; Simpson et al (2001b) used kriging and polynomial response surface approximations for the design of aerospike nozzl es; Steffen (2002a) developed an optimal design of a scramjet injector to simultaneously improve efficiency and pressure loss characteristics using polynomial response surfaces. In a followup work, Steffen et al. (2002b) used polynomial response surface approximations to analyze the pa rameter space for the design of a combined cycle propulsion components, namely, mixed-co mpression inlet, hydrogen fueled scramjet combustor, a ducted-rocket nozzle. Charania et al. (2002) used response surface methodology to reduce computational cost in system-level uncertainty assessment for reusab le launch vehicle desi gn. Huque and Jahingir (2002) used neural networks in optimization of integrated inlet/ejector system of an axisymmetric rocket-based combined cycle engi ne. Qu et al. (2003) a pplied different polynomial response surface approximations for the struct ural design of hydrogen tanks; Keane (2003) employed response surfaces for the optimal design of wing shapes; Umakant et al. (2004) used

PAGE 37

37 kriging surrogate models to compute probability density functions for robust configuration design of air-breathing hypers onic vehicle. Baker et al. ( 2004a-b) used response surface methodology for system level optim ization of booster and ramjet combustor while considering multiple constraints. Levy et al. (2005) used s upport vector machines and neural networks to approximate objectives temperatur e field and pressure loss while searching Pareto optimal solutions for the design of a combustor. Issues with Surrogate Modeling The accuracy of surrogate models is an importa nt factor for its eff ective application in design and optimization process. Some of the issu es that influence the accuracy of surrogate models are: (1) number and location of sampled data points, (2) numerical simulations, and (3) choice of the surrogate model. The charac terization of uncertainty in predictions via different error estimation measures is also useful for op timization algorithms such as, EGO (Jones et al., 1998) etc. These issues are widely discussed in literature (Li and Pa dula 2005, Queipo et al. 2005). While a detailed review of different issues and surrogate models is provided in next chapter, we briefly discuss the intent of current research in context of addressing the issues related to surrogate prediction accuracy. Sampling Strategies Typically the amount of data used for surrogate model c onstruction is limited by the availability of computationa l resources. Many design of e xperiments (DOE) techniques are developed to sample the locations for conducti ng simulations such that the errors in approximation are reduced. However, these strate gies mostly optimize a single criterion that caters to an assumed source of error in approxi mation. The dominant sources of errors in approximation are rarely known a priori for practical engineering problems. This renders the selection of appropriate DOE t echnique a very difficult task because an unsuitable DOE may

PAGE 38

38 lead to poor approximation. We de monstrate this issue of high errors in approximation due to a single-criterion based DOE with the help of simple examples and highlight the need to simultaneously consider multiple criteria. Type of Surrogate Model The influence of the choice of surrogate mode l on prediction accuracy is widely explored in literature. Many researchers have compared different surrogates for various problems and documented their recommendations. The general c onclusion is that no single surrogate model may prove adequate for all problems, and the se lection of a surrogate for a given problem is often influenced by the past experience. However, as we demonstrate in this work, the suitability of any surrogate model also depends on the sa mpling density and sampling scheme. Then, the selection of an appropriate su rrogate model becomes more co mplicated exercise. Here, we present methods to exploit available informa tion by simultaneous use of multiple surrogate models. Specifically, we use multiple surrogate m odels to assess the regions of high uncertainty in response predictions. Further we develop a weighted average surrogate model, which is demonstrated to be more robust than the indivi dual surrogate models for a wide variety of problems and sampling schemes. Estimation of Errors in Surrogate Predictions Since surrogate models only approximate the re sponse, there are errors in predictions. An estimate of prediction errors is beneficial in determining the sampling locations in many surrogate-based optimization methods such as EGO (Jones et al., 1998) and for adaptive sampling. The effectiveness of these methods depends on the accuracy of error estimation measures. Mostly, these error estimation measur es are based on statistical assumptions. For example, prediction variance for polynomial response surface approximation is developed assuming that errors are exclusively due to noise that follows a normal di stribution of zero mean

PAGE 39

39 and 2 variance independent of locations. When these assump tions are not satisfied, the accuracy of error estimation measures is questio nable. We compare different error estimation measures using a variety of test problems and give recommendations. We also explore the idea of simultaneously using multiple error estimation measures. While we use the presented surrogate-based framework for design and optimization of diffuser design and cryogenic cavitation model, we employ analytical examples, which are primarily used to test optimization algorithms, to exhibit the key concepts relating to different issues with surrogate modeling. Scope of Current Research In short, the goal of present work is to develop methodologies for designing optimal propulsion systems while addressing issues related to numerical uncertainties in surrogate modeling. The scope of the current work can be summarized as follows: 1) To illustrate risks in using a single criterion based experimental design for approximation and the need to consider multiple criteria. 2) To explore the use of an ensemble of surrogates to help identify regions of high uncertainties in predictions and to possibly pr ovide a robust prediction method. 3) To appraise different error es timation measures and to presen t methods to enhance the error detection capability by combining multiple error measures. 4) To demonstrate the proposed surrogate-model based approach to liquid rocket propulsion problems dealing with (a) cryogenic-cavitati on model validation and sensitivity study to appraise the influence of different parameters on performance, and (b) shape optimization of diffuser vanes to maximize diffuser efficiency.

PAGE 40

40 The organization of this work is as follows We review different surrogate models and relevant issues associated with surrogate modeli ng in Chapter 2. In Chapter 3, we evidence risks in using a single criterion for constructing de sign of experiments. Me thods to harness the potential of multiple surrogate models are presented in Chapter 4. The performance of different error estimation measures is compared in Chap ter 5 and we propose methods to simultaneously use multiple error measures. This surrogate-based analysis and optimization framework is applied to two problems related to liquid rock et propulsion, model validation and sensitivity analysis of a cryogenic cavitation model in Chap ter 6, and shape optimization of diffuser vanes in Chapter 7. We recapitulate major conclusions of the current work and delineate the scope of future work in Chapter 8.

PAGE 41

41 Liquid Fuel Gas Generator Liquid oxidizer Radial Turbine Nozzle Oxidizer Pump Fuel Pump Combustion Chamber Liquid Fuel Gas Generator Liquid oxidizer Radial Turbine Nozzle Oxidizer Pump Fuel Pump Combustion Chamber Liquid Fuel Gas Generator Liquid oxidizer Radial Turbine Nozzle Oxidizer Pump Fuel Pump Combustion Chamber Figure 1-1. Schematic of liquid fuel rocket propulsion system.

PAGE 42

42 A B C D Figure 1-2. Classification of propulsion systems according to pow er cycles. A) Gas-generator cycle. B) Staged combustion cycle. C) Expander cycle. D) Pressure-fed cycle.

PAGE 43

43 CHAPTER 2 ELEMENTS OF SURROGATE MODELING Surrogate models are widely accepted for th e design and optimization of components with high computational or experimental cost (typ ically encountered in CFD simulations based designs), as they offer a computationally less expensive way of evaluating designs. Surrogate models are constructed using the limited data gene rated from the analysis of carefully selected designs. Numerous successful applications of su rrogate models for design and optimization of aerospace systems, automotive components, el ectromagnetic applications and chemical processes etc. are available in literature. A few examples are given as follows. Kaufman et al. (1996), Balaba nov et al. (1996, 1999), Papila and Haftka (1999, 2000), and Hosder et al. (2001) constructed polynomia l response surface appr oximations (PRS) for structural weight based on structural optimizations of high sp eed civil transport. Hill and Olson (2004) applied PRS to approximate noise models in their effort to reduce the noise in the conceptual design of transport aircrafts. Madsen et al. (2000), Papila et al. (2000, 2001), Shyy et al. (2001a-b), Vaidyanathan et al. (2000, 2004a-b), Goel et al. (2004), an d Mack et al. (2005b, 2006) used polynomialand neural networks-based surrogate models as design evaluators for the optimization of propulsion components including tu rbulent flow diffuser, supersonic turbine, swirl coaxial injector element, liquid rocket injector, and radial turbine designs. Burman et al. (2002), Goel et al. (2005), and M ack et al (2005a) used differe nt surrogates to maximize the mixing efficiency facilitated by a trapezoidal-shaped bluff body in the time dependent NavierStokes flow while minimizing the resulting drag coefficient. Knill et al. (1999), Rai and Madavan (2000, 2001) and, Madavan et al. (2001) used surrogate models for airfoil shape optimization. Do rnberger et al. (2000) used neural networks and polynomial response surfaces for design of tu rbine blades. Kim et al. (2002), Keane (2003),

PAGE 44

44 Jun et al. (2004) applied PRS to optimize wing designs. Ong et al. (2003) used radial basis functions to approximate the objective function a nd constraints of an ai rcraft wing design. Bramantia et al. (2001) used neural network-based models to approximate the design objectives in electro-magnetic problems. Farina et al. (2001) used multiquadrics interpolationbased response surface approximations to optim ize the shape of electromagnetic components like, C-core and magnetizer. Wilson et al. ( 2001) used response surface approximations and kriging for approximating the objectives while designing piezomorph actuators. Redhe et al. (2002a-b), Craig et al. (2002), and Stander et al. (2004) used PRS, kriging and neural networks in design of vehicles for cr ashworthiness. Rais-Rohani and Singh (2002) used PRS to approximate limit state functions for estimating the reliability of composite structures. Rikards and Auzins (2002) deve loped PRS to model buckling a nd axial stiffness constraints while minimizing the weight of composite stiffe ned panels. Vittal and Hajela (2002) proposed using PRS to estimate the statistical confidence in tervals on the reliability estimates. Zerpa et al. (2005) used kriging, radial ba sis functions, and PRS to optimi ze the cumulative oil recovery from a heterogeneous, multi-phase reservoir s ubject to an ASP (alka line-surfactant-polymer) flooding. Steps in Surrogate Modeling Li and Padula (2005) and Queipo et al. (2005) have given a comprehensive review of the relevant issues in surrogate modeling. In this chapter, we discuss the key steps in the surrogate modeling as explained with the help of Figure 2-1. Discussion of major issues and the most prominent approaches followed in each step of the surrogate modeling pro cess, briefly described below, lays the outline of this chapter.

PAGE 45

45 Design of Experiments (DOEs) The design of experiment is the sampling plan in design variable space. Other common names for DOEs are experimental designs or sampli ng strategies. The key ques tion in this step is how we assess the goodness of such designs, cons idering the number of samples is severely limited by the computational expense of each sample. We discuss the most prominent approaches related to DOE in a subsequent sec tion. Later in Chapter 3, we demonstrate some practical issues with the construction of DOEs. Numerical Simulations at Selected Locations Here, the computationally expens ive model is executed for all the designs selected using the DOE specified in the previous step. In contex t of the present work, the details of relevant numerical simulation tools used to evaluate designs were brie fly discussed in appropriate chapters. Construction of Surrogate Model Two questions are of interest in this step: 1) what surrogate model(s) should we use (model selection) and, 2) how do we find the correspond ing parameters (model id entification)? A formal description of the problem of interest is discussed in next section. A framework for the discussion and mathematical formulation of alte rnative surrogate-based modeling approaches is outlined in next section and the section on construction of surrogate models. Model Validation The purpose of this step is to establish the predictive capabilities of the surrogate model away from the available data (g eneralization error). Different sc hemes to estimate generalization error for model validation are discussed in this chapter. We compare different error estimation measures specific to a few popular surrogates in a following chapter.

PAGE 46

46 Mathematical Formulation of Surrogate Modeling Problem With reference to Figure 2-2, surrogate modeling can be seen as a non-linear inverse problem for which one aims to determine a continuous function (() y x) of a set of design variables from a limited am ount of available data ( y ). The available data y while deterministic in nature can represent exact evaluations of the function () y x or noisy observations; and in general cannot carry sufficient information to uniquely identify () y x. Thus, surrogate modeling deals with the twin problems of: 1) constructing a model () y x from the available data y (model estimation), and 2) assessing the errors attached to it (model apprai sal). A general description of the anatomy of inverse problem s can be found in Snieder (1998). Using the surrogate modeling approach the prediction of the simulation-based model output is formulated as ()()()yyxxx The prediction expected value and its variance ()Vy are illustrated in Figure 2-3, with being a probability density function. Different model estimation and model appraisa l components of the prediction have been shown to be effective in the context of surr ogate based analysis and optimization (see for example, McDonald et al., 2000; Chung and Al onso, 2000; Simpson et al., 2001a; Jin et al., 2001), namely polynomial response surface approximation (PRS), Gaussian radial basis functions (GRF) (also referred as radial basis neural networks RBNN), and (ordinary) kriging (KRG) as described by Sacks et al. (1989). Mode l estimation and appraisal components of these methods are presented in a following section. A good paradigm to illustrate how particular solutions (y) to the model estimation problem can be obtained is pr ovided by regularization theory (see for example, Tikhonov and Arsenin (1977), and Morozov (1984)), which impos es additional constraints on the estimation.

PAGE 47

47 More precisely, y can be selected as the solution to the following Tikhonov regularization problem: () 11 min(),im i S yS i ssN y LyyDydx N x (2.1) where S is the family of surrogate models under consideration, () Lx is a loss or cost function used to quantify the so ca lled empirical error (e.g., 2() Lxx ), is a regularization parameter, and mDy represents the value of the m -derivative of the proposed model at location x Note that mDy represents a penalty term; for example, if m is equal to two, it penalizes high local curvature. Hence, the first term enforces closen ess to the data (goodness of fit), while the second term addresses the smoothness of the solution with (a real positive number) establishing the tradeoff between the two. Increasing values of provide smoother solutions. The purpose of the regularization parameter is, hence, to help implement O ccam's razor principle (Ariew, 1976), which favors parsimony or simplicity in model construction. A good discussion on statistical regularization of inverse problems can be found in Tenorio (2001). The quadratic loss function (i.e., L2 norm) is most commonly used in part because it typically allows easy estimation of the parameters associated with the surrogate model; however, it is very sensitive to outliers. The linear (also called Laplace) loss function takes the absolute value of its argument (i.e., L1 norm); on the other hand, the Huber loss function is defined as quadratic for small values of its argume nt and linear otherwise. The so called -loss function has received considerable attention in the context of the support vector regression surrogate (Vapnik, 1998; Girosi, 1998), and assigns an er ror equal to zero if the true a nd estimated values are within an distance. Figure 2-4 illustrates the cited loss functions.

PAGE 48

48 Design of Experiments As stated earlier, the design of experiment is the sampling plan in design variable space and the key question in this step is how we assess the goodness of su ch designs. In this context, of particular interest are sampling plans that provide a unique value (in contrast to random values) for the input variables at each point in the input space, and are model-independent; that is, they can be efficiently used for fitting a variety of models. Typically, the primary interest in surrogate modeling is minimizing the error and a DOE is selected accordingly. Two major component s of the empirical er ror and corresponding expressions for average error formulation cons idering quadratic loss f unctions are given as follows. Variance : measures the extent to which the surrogate model ()y x is sensitive to particular data set D Each data set D corresponds to a random sample of the function of interest. This characterizes the noise in the data, 2var ()[()].ADSADSEEyEy xx (2.2) Bias : quantifies the extent to which the surrogate model outputs (i.e., () y x ) differ from the true values (i.e., () y x ) calculated as an average over all possible data sets D (ADS), 22 ()()().ADS biasEEyy xxx (2.3) In both expressions, ADSE denotes the expected value consid ering all possible data sets. There is a natural tradeoff between bias and va riance. A surrogate model that closely fits a particular data set (lower bias) will tend to pr ovide a larger variance and vice-versa. We can decrease the variance by smoothing the surrogate model but if the idea is taken too far then the bias error becomes significantly higher. In princi ple, we can reduce both bias (can choose more complex models) and variance (each model more heavily constrained by the data) by increasing the number of points, provided the latter increas es more rapidly than the model complexity.

PAGE 49

49 In practice, the number of points in the data set is severely limited (e.g., due to computational expense) and often during the cons truction of surrogate model, a balance between bias and variance errors is sought. This balance can be achieved, for example, by reducing the bias error while imposing penalties on the model complexity (e.g., Tikhonov regularization). With reference to most applications, where the actual model is unknown (see previous and next sections), and data is collected from deterministic computer simulations, bias error is the dominant source of error because the numerical noise is small, and a DOE is selected accordingly. When response surface approximation is used, there is good theory for obtaining minimum bias designs (Myers and Montgome ry, 1995, Section 9.2) as well as some implementations in low dimensional spaces (Qu et al., 2004). For the more general case, the bias error can be reduced through a DOE that distribut es the sample points uniformly in the design space (Box and Draper, 1959; Sacks and Ylvisa ker, 1984; as referenced in Tang, 1993). However, fractional factorial designs replace de nse full factorial designs for computationally expensive problems. The uniformity property in designs is sought by, for example, maximizing the minimum distances among design points (Joh nson et al., 1990), or by minimizing correlation measures among the sample data (Iman an d Conover, 1982; Owen, 1994). Practical implementations of a few most comm only used DOEs are discussed next. Factorial Designs Factorial designs are one of the simplest DOEs to investigate the effect of main effects and interactions of variables on the resp onse for box-shaped design domain. 2v N -factorial design where each design variable is allowed to take tw o extreme levels is often used as a screening DOE to eliminate the unimportant variables. Qual itative and binary variables can also be used for this DOE. A typical two-level full factor ial DOE for three variables is shown in Figure 2-5.

PAGE 50

50 These designs can be used to create a linear polynomial response surface approximation. For higher order approximations, the number of levels of each variable is increased for example a quadratic polynomial response surface approximati on can be fitted by a three-level full factorial design (3v N -designs). Some times the number of experiments is reduced by using 2vNp (or 1/2p) fractional factorial designs, which require the selection of p independent design generators (least influential interactions). Typically, these designs are cl assified according to resolution number: Resolution III : Main effects are not aliased with ot her main effects, but are confounded with one or more two-way interactions. Resolution IV : Main effects are not aliased with other main effects or two-way interactions. Two factor inte ractions are confounded with other two-way interactions. Resolution V : Main effects and two-way interac tions are not confounded with one another. More details can be obtained from Myers and Montgomery (1995, Chapter 4, pp. 156179). Factorial designs produce orthogonal designs for polynomial response surface approximation. However, for higher dimensions (6vN ), factorial designs requires a large number of experiments making them particular ly unattractive for computationally expensive problems. Central Composite Designs These include designs on 2vN vertices, 2vN axial points, and cN repetitions of central point. The distance of axial point is varied to generate, facecentered, spherical design, or orthogonal design. A typi cal central composite design for th ree-variable problem is shown in Figure 2-6. These designs reduce variance componen t of the error. The re petitions at the center reduce the variance, improve stab ility (defined as the ratio of maximum variance to minimum variance in the entire design space), and give an idea of magnitude of the noise, but are not

PAGE 51

51 useful for the computer simulations. These design s are also not practical for higher dimension spaces (8vN ) as the number of simulations becomes very high. For 3vN when the designs on the vertices of the design spaces are not f easible, Box-Behnken designs can be used for quadratic polynomial response surface approxi mation. These are spherical designs (sampling designs at these locations enables us to exactly determine the function value on the points equidistant from the center) but these designs intr oduce higher uncertaint y near the vertices. Variance Optimal DOEs for Polynomial Response Surface Approximations Moment matrix T s XX M N 1 for PRS (see next section) is very important quantity as this affects the prediction variance, and the co nfidence on the coefficients, hence, used to develop different variance optimal DOEs. D-optimal design maximizes the determinant of the moment matrix M to maximize the confidence in the coeffi cients of polynomial response surface approximation. A-optimal design minimizes the trace of inverse of M to minimize the sum of variances of the coefficients. G-optimal design minimizes the ma ximum of prediction variance. I-optimal design minimizes the integral of the prediction variance over the design domain. All these DOEs require the solution of difficult optimi zation problem, which is solved heuristically in higher dimensional spaces. Latin Hypercube Sampling (LHS) Stratified sampling ensures that all portions of a given partition are sampled. LHS (McKay et al., 1979) is a stratif ied sampling approach with the restrict ion that each of the input variables (k x ) has all portions of its distribution represented by input values. A sample of size s N can be constructed by dividing the range of each input variable into s N strata of equal marginal 1 An example of matrix X is given in Equation (2.6).

PAGE 52

52 probability 1 s N and sampling once from each stratum. Let us denote this sample by (),1,2,...,;1,2,...,j s v k x jNkN The sample is made of components of each of the vN x s matched at random. Figure 2-7 illustrates a LHS design for two variables, when six designs are selected. While LHS represents an improvement over un restricted stratified sampling (Stein, 1987), it can provide sampling plans with very different performance in terms of uniformity measured by, for example, maximum minimum-distance am ong design points, or by correlation among the sample data. Figure 2-8 illustrates this s hortcoming; the LHS plan in Figure 2-8(B) is significantly better than that in Figure 2-8(A). Orthogonal Arrays (OA) These arrays were introduced by C. R. Rao in the late 40s (Rao, 1947), and can be defined as follows. An OA of strength r is a matrix of s N rows and vN columns with elements taken from a set of qN symbols, such that in any sNr submatrices each of the ()r qN possible rows occur the same number (index) of times. The number of rows ( s N) and columns (vN) in the OA definition represents the number of samples an d variables or factors under consideration, respectively. The qN symbols are related to the levels defined for the variables of interest, and the strength r is an indication of how many effects can be accounted for (to be discussed later in this section) with values typical ly between two and four for reallife applications. Such an array is denoted by OA(,,,svqNNNr). Note that, by definition, a LHS is an OA of strength one, OA (,,,1 s vqNNN). There are two limitations on the use of OA for DOE: Lack of flexibility : Given desired values for the number of rows, columns, levels, and strength, the OA may not exist. For a list of available orthogonal arrays, theory and applications, see, for example, Owen (1992), He dayat et al. (1999), and references therein.

PAGE 53

53 Point replicates : OA designs projected onto the subs pace spanned by the effective factors (most influential design variable s) can result in replication of points. This is undesirable for deterministic computer experiments where the bias of the proposed model is the main concern. Optimal LHS, OA-based LHS, Optimal OA-based LHS Different approaches have been proposed to overcome the potential l ack of uniformity of LHS. Among those, most of them adjust the or iginal LHS by optimizing a spread measure (e.g., minimum distance or correlation) of the sample points. The resulting designs have been shown to be relatively insensitive to the optimal design criterion (Ye et al., 2000). Examples of this strategy can be found in the works of Iman an d Conover (1982), Johnson et al. (1990), and Owen (1994). Tang (1993) and Ye (1998) pr esented the construction of strength r OA-based LHS which stratify each r-dimensional margin, and showed that they offer a substantial improvement over standard LHS. Another strategy optimizes a spread measure of the sample points, but restricts the search of LHS designs, which are orthogonal arrays, resultin g in so called optimal OA-based LHS (Leary et al., 2003). Adaptive DOE, in which model appraisal information is used to place additional samples, have also been proposed (Jones et al., 1998, Sasena et al., 2000, Williams et al., 2000). A summary of main characteristics and limitations of different DOE techniques is listed in Table 2-1. If feasible, two sets of DOE are generated, one (so cal led training data set) for the construction of the surrogate (next section), and second for assessing its quality (validation as discussed in a later section). Given the choice of surrogate, the DOE can be optimized to suit a particular surrogate. This has been done exte nsively for minimizing variance in polynomial RSA (e.g., Dand Aoptimal designs, Myers and Montgomery, 1995, Chapter 8) and to some extent for minimizing bias (e.g., Qu et al., 2004). Ho wever, for non-polynomial models, the cost of the optimization of a surrogate-specific DOE is usually prohibitive, and so is rarely attempted.

PAGE 54

54 Construction of Surrogate Model There are both parametric (e.g., polynomial re sponse surface approximation, kriging) and non-parametric (e.g., projection-pursuit regressi on, radial basis functions) alternatives for constructing surrogate models. The parametric approaches presume the global functional form of the relationship between the resp onse variable and the design variables is known, while the nonparametric ones use different types of simple, local models in different regions of the data to build up an overall model. This section discusse s the estimation and appraisal components of the prediction of a sample of both parametric and non-parametric approaches. Specifically, the model estimation and appraisal components corresponding to polynomial response surface approximation (PRS), krigi ng (KRG), and radial basis functions (RBF) surrogate models are discussed next, followed by a discussion of a more general non-parametric approach called kernel-based regression. Throughout this section a square loss function is assumed unless otherwise specified, and given th e stochastic nature of the surrogates, the available data is considered a sample of a population. Polynomial Response Surface Approximation (PRS) The regression analysis is a methodology to study the quantitative relation between a function of interest y and 1N basis functions j f using s N sample values of the response i y for a set of basis functions () i j f (Draper and Smith, 1998, Section 5.1). Monomials are the most preferred basis functions by pract itioners. For each observation i, a linear equation is formulated: 1()2 1();()0,(), fxiiN i ijji jyfEV (2.4)

PAGE 55

55 where the errors i are considered independents with exp ected value equal to zero and variance 2 and represents the quantitative relation between basis functions. The set of equations specified in Equation (2.4) can be expressed in matrix form as: 2;()0,(), y X EVI (2.5) where X is a 1sNN matrix of basis functions, also kno wn as Gramian design matrix, with the design variable values for sampled points. A Gramian design matrix for a quadratic polynomial in two variables (12;6vNN) is shown in Equation (2.6) (1)(1)(1)2(1)(1)(2)2 121122 (2)(2)(2)2(2)(2)(2)2 121122 ()()()2()()()2 121122 ()()()2()()()2 1211221 1 1 1ssssssiiiiii NNNNNNXxxxxxx xxxxxx xxxxxx xxxxxx (2.6) The polynomial response surface approximation of the observed response () y x is, ()(),jj jybfxx (2.7) where jb is the estimated value of the coefficient associated with the jth basis function ()jf x Then, the error in approximation at a design point x is given as ()()()eyy xxx. The coefficient vector b can be obtained by minimizing a loss function L defined as 1,sN p i iLe (2.8) where ie is the error at ith data point, p is the order of loss function, and s N is the number of sampled design points. A quadratic loss function ( p = 2), that minimizes the variance of the error

PAGE 56

56 in approximation, is mostly used beca use we can estimate coefficient vector b using an analytical expression, as follows, 1().TTXXX b y (2.9) The estimated parameters b (by least squares) are unbiased estimates of that minimize variance. At a new set of basis function vector f for design point P the predicted response and the variance of the estimation are given as 112 1 ()(),()(()). xxxffTT jN j jybfandVyXX (2.10) Kriging Modeling (KRG) Kriging is named after the pioneering work of D.G. Krige (a South African mining engineer), and was formally developed by Math eron (1963). More recently, Sacks et al. (1989, 1993), and Jones et al. (1998) made it well-known in the context of the modeling, and optimization of deterministic func tions, respectively. The kriging method in its basic formulation estimates the value of a functi on (response) at some unsampled location as the sum of two components: the linear model (e.g., polynomial tr end), and a systematic departure representing low (large scale) and high frequency (small s cale) variation components, respectively. The systematic departure component represents the fluctuations around the trend, with the basic assumption being that these are correlated, and the correlation is a function of distance between the locations under cons ideration. More precisely, it is represented by a zero mean, second-order, stationary proce ss (mean and variance constant w ith a correlation depending on a distance) as described by a correlation model. Hence, these models (ordinary kriging) s uggest estimating deterministic functions as: ()()()(),()0,cov((),())0,,ij y Eij xxxx (2.11)

PAGE 57

57 where is the mean of the response at sampled design points, and is the error with zero expected value, and with a correlation structur e that is a function of a generalized distance between the sample data points. A possible correlation structure (S acks et al., 1989) is given by: 2 ()()2()() 1cov(),()exp,ijij kkk kvNxx xx (2.12) where vN denotes the number of dimensions in the set of design variables x ; identifies the standard deviation of the respons e at sampled design points, and, k is a parameter, which is a measure of the degree of correlation among the data along the kth direction. Specifically, the parameters , and are estimated using a set of s N samples ( x y), such that a likelihood function is maximized (Sacks et al., 1989). Given a probability distribution and the corresponding parameters, the likel ihood function is a measure of th e probability of the sample data being drawn from it. The model estimates at unsampled points is: 1 (())(),TpEfR xrf1 (2.13) where ^ above the letters denotes estimates, r identifies the correlation vector between the set of prediction points and the points used to construct the model, R is the correlation matrix among the s N sample points, and 1 denotes an s N-vector of ones. On the other hand, the estimation variance at unsampled desi gn points is given by: 1 21 11 (())1.T T TR VyR R 1r xrr 11 (2.14) Gaussian processes (Williams and Rasmusse n, 1996, Gibbs, 1997), another well-known approach to surrogate modeling, can be shown to provide identical expressions for the prediction and prediction variance to those provided by kr iging, under the stronger assumption that the

PAGE 58

58 available data (model responses) is a sample of a multivariate normal distribution (Rao, 2002, Section 4a). Radial Basis Functions (RBF) Radial basis functions have been developed for the interpolation of scattered multivariate data. The method uses linear combinations of RBFN radially symmetric functions ()ihx based on Euclidean distance or other such metric, to approximate response functions as, 1()(),RBFN iii iywh xx (2.15) where w represents the coefficients of the linear combinations, ()ihx the radial basis functions, and i independent errors with variance 2 The flexibility of the model, that is its ability to fit many different functions, derives from the freedom to choose different values for the wei ghts. Radial basis functi ons are a special class of functions with their main feature being that their response decr eases (or increases) monotonically with distance from a central point. The center, the distance scale, and the precise shape of the radial function ar e parameters of the model. A typical radial function is th e Gaussian, which in the case of a scalar input, is expressed as, 2 2()exp.ih xc x (2.16) The parameters are its center c and its radius Note that the response of the Gaussian RBF decreases monotonically w ith the distance from the center, gi ving a significant response only in the center neighborhood.

PAGE 59

59 Given a set of s N input/output pairs (sample data), a radial basis function model can be expressed in matrix form as, ,H fw (2.17) where H is a matrix given by, (1)(1)(1) 12 (2)(2)(2) 12 ()()() 12.RBF RBF sss RBFN N NNN Nhhh hhh H hhh xxx xxx xxx (2.18) Similar to polynomial response surface a pproximation method, by solving Equation (2.17), the optimal weights (in the least squa res sense) can be found to be, -1 ,TAH w y (2.19) where A-1 is a matrix given by, -11().TAHH (2.20) The error variance estimate can be shown to be given by, 2 2 ()T r rP trP yy (2.21) where Pr is a projection matrix, -1-.T rPIHAH (2.22) The RBF model estimate for a new set input values is given by, (),Ty xhw (2.23) where h is a column vector with the ra dial basis functions evaluations,

PAGE 60

60 1 2() () ()RBFNh h h x x h x (2.24) On the other hand, the prediction variance is the variance of the estimated model () y x plus the error variance and is given by: 1(())()()(1.)TT TT r s R BFP VyVV NNHH hw yy xhh (2.25) Radial basis function is also known as radial basis neural ne tworks (RBNN) as described by Orr (1996, 1999a-b). The MATLAB implementa tion of radial basi s functions or RBNN (function newrb), used in this study, is described as follows. Ra dial basis neural networks are two-layer networks consisting of a radial-basis function and a lin ear output layer. The output of each neuron is given by 0.8326 fradbas spread wx (2.26) 2()exp(), radbasxx (2.27) where w is the weight vector associated with each neuron, x is the input design vector, spread is a user defined value that controls the radius of influence for each neuron where the radius is half the value of parameter spread. Specifically, th e radius of influence is the distance at which the influence reaches a certain small value. If s pread is too small, the prediction will be poor in regions that are not near the posi tion of a neuron; and if spread is too large, the sensitivity of the neurons will be small. Neurons are added to the network one by one until a specified mean square error goal goal is reached. If the error go al is set to zero, neurons will be added until the network exactly predicts the input data. However, this can lead to over-fitting of the data, which may result in poor prediction between data points. On the other hand, if er ror goal is large, the

PAGE 61

61 network will be under-trained and predictions even on data points will be poor. For this reason, an error goal is judiciously selected to preven t overfitting while keep ing the overall prediction accuracy high. Kernel-based Regression The basic idea of RBF can be ge neralized to consider altern ative loss functions and basis functions in a scheme known as kernel-bas ed regression. With reference to Equation (2.1), it can be shown that independent of the form of the loss function L the solution of the variational problem has the form (the Representer Theo rem: see Girosi, 1998; Poggio and Smale, 2003): () 1 ()(,),sN i i i y Gbxxx (2.28) where ()(,)iG xx is a (symmetric) kernel function that determines the smoothness properties of the estimation scheme. Table 2-2 shows the kernel functions of selected estimation schemes with the kernel parameters being estimated by model selection approaches (s ee next section for details). If the loss function L is quadratic, the unknown coefficients in Equation (2.28) can be obtained by solving the linear system, ,sNIG y (2.29) where I is the identity matrix, and G a square positive definite matrix with elements ()(),(,)ijijGG xx. Note that, the linear system is well posed, since ()sNIG is strictly positive and well conditioned for large values of s N If loss function L is non-quadratic, the solution of the variational proble m still has the form of Equation (2.28) but the coefficients i are found by solving a quadratic programming pr oblem in what is known as support vector regression (Vapnik, 1998).

PAGE 62

62 Major characteristics of different surrogate models are summarized in Table 2-3. Comparative studies have shown that depending on the problem unde r consideration, a particular modeling scheme (e.g., polynomial response surface approximation, kriging, radial basis functions) may outperform the others and in general, it is not known a priori which one should be selected. See for example, the works of Friedman and Stuetzle (1981), Yakowitz & Szidarovsky (1985), Laslett (1994) Giunta and Watson (1998), Simpso n et al. (2001a-b), Jin et al. (2001). Considering, plausible alternative surrogate models can reasonably fit the available data, and the cost of constructi ng surrogates is small compared to the cost of the simulations, using multiple surrogates may offer advantages compared to the use of a single surrogate. Recently, multiple surrogate-based analysis and op timization approaches have been suggested by Zerpa et al. (2005) and Goel et al. (2006b) based on the model averaging ideas of Perrone and Cooper (1993), and Bishop (1995). The multiple surr ogate-based analysis approach is based on the use of weighted average models, which can be shown to reduce the prediction variance with respect to that of the individual surrogates. Th e idea of multiple surrogate-based approximations is discussed in Chapter 4. Model Selection and Validation Generalization error estimates assess the quality of the surrogates for prediction and can be used for model selection among alternative mode ls; and establish the ad equacy of surrogate models for use in analysis and optimization studies (validation). This section discusses the most prominent approaches in the context of surrogate modeling. Split Sample (SS) In this scheme, the sample data is divided into training and test sets. The former is used for constructing the surrogate while the latter, if pr operly selected, allows computing an unbiased estimate of the generalization erro r. Its main disadvantages are, that the generalization error

PAGE 63

63 estimate can exhibit a high variance (it may de pend heavily on which points end up in the training and test sets), and that it limits th e amount of data availa ble for constructing the surrogates. Cross Validation (CV) It is an improvement on the split sample scheme that allows the use of the most, if not all, of the available data for constructing the surroga tes. In general, the data is divided into k subsets ( k -fold cross-validation) of approximately equal size. A surrogate model is constructed k times, each time leaving out one of the subsets from tr aining, and using the omitted subset to compute the error measure of interest. The generaliza tion error estimate is computed using the k error measures obtained (e.g., average). If k equals the sample size, this approach is called leave-oneout cross-validation (known also as PRESS in the polynomial response surface approximation terminology). Equation (2.30) represents a leave-one-out ca lculation when the generalization error is described by the m ean square error (GMSE). ()2 11 (),k i ii iGMSEyy k (2.30) where ()i iy represents the prediction at ()ix using the surrogate constructed from all sample points except (()ix ,i y ). Analytical expressions are available for leave-one-out case for the GMSE without actually performing the repeat ed construction of the surrogates for both polynomial response surface approximation (Myers and Montgomery, 1995, Section 2.7) and kriging (Martin and Simpson, 2005). The advantage of cross-validati on is that it provides nearly unbiased estimate of the generalization error, and the co rresponding variance is reduced (whe n compared to split-sample) considering that every point gets to be in a test set once, and in a training set k-1 times

PAGE 64

64 (regardless of how the data is divided); the variance of the estimation though may still be unacceptably high in particular for small data sets The disadvantage is that it requires the construction of k surrogate models; this is alleviated by the increasing availability of surrogate modeling tools. A modified version of the CV approach called GCV-generalized cross validation, which is invariant unde r orthogonal transformations of th e data (unlike CV) is also available (Golub et al., 1979). If the Tikhonov regularization approach for regression is adopted, the regularization parameter can be identified using one or more of the following alternative approaches: CVcross validation (leave-one-out estimates), GCV (smoothed version of CV), or the L -curve (explained below). While CV and GCV can be computed very effici ently (Wahba, 1983; Hutchinson and de Hoog, 1985), they may lead to very small values of even for large samples (e.g., very flat GCV function). The L -curve (Hansen, 1992) is claimed to be more robust and have the same good properties of GCV. The L -curve is a plot of th e residual norm (first term) versus the norm m H Dy of the solution for different values of the regularization parameter and displays the compromise in the minimization of these two quantities. The best regularization parameter is associated with a characteristic L -shaped corner of the graph. Bootstrapping This approach has been shown to work bett er than cross-validation in many cases (Efron, 1983). In its simplest form, instead of splitting the data into subsets, subsamples of the data are considered. Each subsample is a random sample with replacement from the full sample, that is, it treats the data set as a populat ion from which samples can be dr awn. There are different variants of this approach (Hall, 1986; Efron and Tibshirani, 1993; Hesterbe rg et al., 2005) that can be

PAGE 65

65 used for model identification as well as for identifying confidence intervals for surrogate model outputs. However, this may require considering several dozens or even hundreds of subsamples. For example, in the case of polynomial res ponse surface approximation (given a model), regression parameters can be estimated for each of the subsamples and a probability distribution (and then confidence intervals) for the parameters can be identified. Once the parameter distributions are estimated, conf idence intervals on model output s of interest (e.g., mean) can also be obtained. Bootstrapping has been shown to be effective in the context of neural network modeling; recently, its performance in the context of model identification in regression analysis is also being explored (Ohtani, 2000, Kleijnen and Deflandre 2004).

PAGE 66

66 Design of Experiments Numerical Simulations at Selected Locations Construction of Surrogate Models (Model Selection and Identification) Model Validation If necessary Figure 2-1. Key stages of the su rrogate-based modeling approach. Simulationbased model y Data i y Analysis problem Estimation problem Estimated model y Appraisal problem ( ) Figure 2-2. Anatomy of surrogate modeling: mode l estimation + model appraisal. The former provides an estimate of function while the latter forecasts the associated error.

PAGE 67

67 y x () Ey ((),()) EyVy Figure 2-3. A surrogate modeling scheme provides the expected value of the prediction () E y (solid line) and the uncertainty associated w ith that prediction, illustrated here using a probability density function A B C D Figure 2-4. Alternative loss functions for the constr uction of surrogate models. A) Quadratic. B) Laplace. C) Huber. D) -loss function.

PAGE 68

68 Figure 2-5. A two-level full factorial desi gn of experiment for three variables. Figure 2-6. A central composite design for three-dimensional design space.

PAGE 69

69 x1x2 x1x2 Figure 2-7. A representative Latin hy percube sampling design with 6,2svNN for uniformly distributed variables in the unit square. A B C Figure 2-8. LHS designs with sign ificant differences in terms of uniformity (Leary et al., 2003). A) Random LHS. B) Corre lation optimized LHS. C) OA-based LHS. (Figure reprinted with kind permissi on of Taylor and Francis group, Leary et al., 2003, Figure 1).

PAGE 70

70 Table 2-1. Summary of main char acteristics of different DOEs. Name Main features Limitations Factorial designs Used to investigate main effect and interaction of variables for box-shaped domains, gives orthogonal designs, caters to noise Irregular domains not good for Nv > 6 CCD Cater to noise, applicable for box-shaped domains, repetition of points on center to improve stability Irregular domains, not good for Nv > 8, repetition of points not useful for simulations Variance optimal designs D -optimal A -optimal G -optimal I -optimal Cater to noise, Applicable to irregular domains too Maximize confidence in coefficients Minimize the sum of variances of coefficients Minimize the maximum of prediction variance Minimize the integral of prediction variance over design domain High computational cost, not good when noise is low Latin hypercube sampling (LHS) Caters to bias error, stratified sampling method, good for high number of variables Not good when noise is significant, occasional poor DOE due to random components Orthogonal arrays (OA) Box-shaped domains, the moment matrix is diagonal for monomial basis functions so coefficients of approximation are uncorrelated Limited number of orthogonal arrays, difficult to create OAs OA-based LHS Combine OA a nd LHS designs to improve distribution of points Limited OAs, may leave large holes in design space Table 2-2. Examples of kernel func tions and related estimation schemes. Kernel function Estimation scheme (,)(1)iidGxxxx Polynomial of degree d (PRD) (,)iiGxxxx Linear splines (LSP) 2(,)expi i ixx Gxx Gaussian radial basis function (GRF)

PAGE 71

71 Table 2-3. Summary of main characteris tics of different surrogate models. Name Main features Polynomial response surface approximation Global parametric approach, good for slow varying functions, easy to construct, good to handle noise, no t very good for simulations based data Kriging Global parametric approac h, handles smooth and fast varying functions, computationally expens ive for large amount of data Radial basis function Local, non-parametric ap proach, computationally expensive, good for fast varying functions Kernel-based functions Global, non-parametric approach, uses different loss functions, relatively new approach

PAGE 72

72 CHAPTER 3 PITFALLS OF USING A SINGLE CRITER ION FOR SELECTING EXPERIMENTAL DESIGNS Introduction Polynomial response surface (PRS) approximations are widely adopted for solving optimization problems with high computational or experimental cost as they offer a computationally less expensive way of evaluating de signs. It is important to ensure the accuracy of PRSs before using them for design and op timization. The accuracy of a PRS, constructed using a limited number of simulations, is primarily affected by two factors: (1) noise in the data; and (2) inadequacy of the fitting model (called modeling error or bias error). In experiments, noise may appear due to measurement errors and other experimental errors. Numerical noise in computer simulations is usually sm all, but it can be high for illconditioned problems, or if there are some unconverged solutions such as those en countered in computational fluid dynamics or structural optimization. The true model representi ng the data is rarely kn own, and due to limited data available, usually a simple model is fitted to the data. For simulation-based PRS, modeling/bias error due to an inad equate model is mainly responsib le for the error in prediction. In design of experiments techni ques, sampling of the points in design space seeks to reduce the effect of noise and reduce bias errors simu ltaneously. However, these objectives (noise and bias error) often conflict. For example, noise rejection criteria, such as D-optimality, usually produce designs with more points near the boundar y, whereas the bias erro r criteria tend to distribute points more evenly in design space. Thus, the problem of selecting an experimental design (also commonly known as design of experi ment or DOE) is a multi-objective problem with conflicting objectives (noise an d bias error). The solution to this problem would be a Pareto optimal front of experimental designs that yiel ds different tradeoffs between noise and bias errors. Seeking the optimal experimental desi gns considering only one criterion, though popular,

PAGE 73

73 may yield minor improvements in the selected cr iterion with significant de terioration in other criteria. In the past, the majority of the work related to the construction of experimental designs is done by considering only one design objective. When noise is the dominant source of error, there are a number of experimental designs that mi nimize the effect of variance (noise) on the resulting approximation, for example, the Doptimal design, that minimizes the variance associated with the estimates of coefficients of the response surface model. Traditional variancebased designs minimize the effect of noise and attempt to obtain uniformity (ratio of maximum to minimum error in design space) over the design space, but they do not address bias errors. Classical minimum bias designs consider only space-averaged or integrated error measures (Myers and Montegomery, 1995, pp. 208-279) in expe rimental designs. The bias component of the averaged or integrated mean squared error is minimized to obtain so-called minimum bias designs. The fundamentals of minimizing integrat ed mean squared error and its components can be found in Myers and Montgomery (1995, Chapter 9), and Khuri and Cornell (1996, Chapter 6). Venter and Haftka (1997) developed an algorith m implementing a minimum-bias criterion for irregularly shaped design spaces where no closed form solution exists for experimental design. They compared minimum-bias and D-optimal e xperimental designs for two problems with two and three variables. The minimum-bias experime ntal design was found to be more accurate than D-optimal for average error but not for maximum error. Qu et al. (2004) implemented Gaussian quadrature-based minimum bias design and presented minimum bias central composite designs for up to six variables. There is some work done on developing experimental designs by minimizing the integrated mean squared error accounting for bot h variance and bias errors. Box and Draper

PAGE 74

74 (1963) minimized integrated mean squared errors averaged over the design space by combining average weighted variance and average bias e rrors. Draper and Lawrence (1965) minimized the integrated mean square error to account for model inadequacies. Kupper and Meydrech (1973) specified bounds on the coefficients associated with the assumed true function to minimize integrated mean squared error. Welch (1983) us ed a linear combination of variance and bias errors to minimize mean squared error. Montepiedra and Fedorov (1997) investigated experimental designs minimizing the bias component of the integrated mean square error subject to a constraint on the variance component or vice-versa Fedorov et al. (1999) later studied design of experiments via weighted regression prioritizing re gions where the approximation is needed to predict the response. Their approach considered both variance and bias components of the estimation error. Bias error averaged over the design space has been studied extensively, but there is a relatively small amount of work to account for po intwise variation of bias errors because of inherent difficulties. An approach for estima ting bounds on bias errors in PRS by a pointwise decomposition of the mean squared error into va riance and the square of bias was developed by Papila and Haftka (2001). They used the bounds to obtain experimental designs (EDs) that minimize the maximum absolute bias error. Papila et al. (2005) extended the approach to account for the data and proposed data-dependent bounds. They assumed that the true model is a higher degree polynomial than the a pproximating polynomial, and that it satisfies the given data exactly. Goel et al. (2006a) generalized this bias error bounds estimation method to account for inconsistencies between the assumed true model and actual data. They demonstrated that the bounds can be used to develop adaptive experimental designs to reduce the effect of bias errors in the region of interest. Recently, Goel et al. (2006c) presented a method to estimate pointwise

PAGE 75

75 root mean square (RMS) bias errors in approx imation prior to the ge neration of data. They applied this method to construct experimental designs that minimize ma ximum RMS bias error (min-max RMS bias designs). Since minimum-bias designs do not achieve uniformity, designs that distribute points uniformly in design space (space filling designs like Latin hypercube sampling) are popular even though these designs have no claim to optim ality. Since Latin hypercube sampling (LHS) designs can create poor designs, as illustrated by Lear y et al. (2003), differe nt criteria like, maximization of minimum distance between point s, or minimization of correlation between points are used to improve its performance. We will demonstrate in this chapter that even optimized LHS designs can occasionally leave larg e holes in design space, which may lead to poor predictions. Thus, there is a need to consid er multiple criteria. Some previous efforts of considering multiple criteria are as follows. In an effort to account for variance, Tang (1993) and Ye (1998) presented orthogonal array based LHS desi gns that were shown to be better than the conventional LHS designs. Leary et al. (2003) presen ted strategies to find optimal orthogonal array based LHS designs in a more efficient manner. Palmer and Tsui (2001) generated minimum-bias Latin hypercube experimental designs for sampling from deterministic simulations by minimizing integrated squared bi as error. Combination of face-centered cubic design and LHS designs is quite wi dely used (Goel et al., 2006d). The primary objective of this work is to de monstrate the risks associated with using a single criterion to construct e xperimental designs. Firstly, we compare LHS and D-optimal designs, and demonstrate that both these design s can leave large unsampled regions in design space that may potentially yield high errors. In addition, we illustrate the need to consider multiple criteria to construct experimental de signs, as single-criterion based designs may

PAGE 76

76 represent extreme tradeoffs among different crit eria. Min-max RMS bias design, which yields a small reduction in the maximum bias error at the cost of a huge increase in the maximum variance, is used as an example. While the above issue of tradeoff among multiple criteria requires significant future research effort, we e xplore several strategies for the simultaneous use of multiple criteria to guard against selecting experimental designs that are optimal according to one criterion but may yield very poor performance on other criteria. In this context, we firstly discuss which criteria can be simultaneously us ed meaningfully; secondl y, we explore how to combine different criteria. We show that complimentary criter ia may cater to competing needs of experimental designs. Next, we demonstrate improvements by combining a geometry-based criterion LHS and a model-based D-optimality criter ion, to obtain experimental designs. We also show that poor experimental designs can be filtered out by creating multiple experimental designs and selecting one of them using an appr opriate error-based (pointwise) criterion. Finally, we combine the above mentioned strategi es to construct experimental designs. The chapter is organized as follows: Differe nt error measures used in this study are summarized in the next section. Following that we show major results of this study. We illustrate the issues associated with single crite rion-based experimental designs, and show a few strategies to accommodate multiple criteria. We close the chapter by recapitulating major findings. Error Measures for Experimental Designs Let the true response (x) at a design point x be represented by a polynomial Tf(x) where f(x) is the vector of basis functions and is the vector of coefficients. The vector f(x) has two components: (1)f(x) is the vector of basis functions us ed in the PRS or fitting model, and (2)f(x) is the vector of additional basis functions that are missing in the linear regression model

PAGE 77

77 (assuming that the true model is a polynom ial). Similarly, the coefficient vector can be written as a combination of vectors (1) and (2) that represent the true coefficients associated with the basis function vectors (1)f(x) and (2)f(x) respectively. Precisely, 1 121(1)22 2()()(())(()).TTTTT () ()()()()() () xfx fffx fx (3.1) Assuming normally distributed noise with zero mean and variance 2 (2(0,) N ), the observed response y(x) at a design point x is given as ()().y xx (3.2) The predicted response ()yx at a design point x is given as a linear combination of approximating basis functions vector (1)f(x) with corresponding estimated coefficient vector b: (1) ()(()).Tyxfxb (3.3) The estimated coefficient vector b is evaluated using the data ( y ) for s N design points as (Myers and Montgomery, 1995, Chapter 2): (1)(1)-1(1),TTXXXb() y (3.4) where (1) X is the Gramian matrix constructed using (1)f(x) (refer to Appendix A). The error at a ge neral design point x is the difference between the true response and the predicted response, ()()() ey xxx When noise is dominant, estimated standard error ()ese x used to appraise error, is gi ven as (Myers and Montogomery, 1995) (1)(1)(1)(1)1 2 ()()()(),TTesaeVaryXXxxfxfx (3.5) where 2a is the estimated variance of the noise, and a is the standard error in approximation.

PAGE 78

78 When bias error is dominant, the root mean square of bias error ()rms be x at design point x can be obtained as (Goel et al. 2006c, and Appendix A) (2)(1)(2)(2)(2)(1)2()()()()()(),TTTTrms bbeEeAEA xxfxfx fxfx (3.6) where (()) E gx is the expected value of ()gx with respect to and A is the alias matrix (1)(1)(1)(2)-1.TTAXXXX= However, the true model may not be known, and Equation (3.6) is only satisfied approximately fo r the assumed true model. Prior to generation of data, all components of the coefficient vector (2) are assumed to have a uniform distribution between and ( is a constant) such that 2 (2)(2)3T E I where I is an 22NN identity matrix. Substituting this in Equation (3.6), the pointwise RMS bias error (Goel et al. 2006c, and Appendix A) is (2)(1)(2)(1) (2)(1)(2)(1)2()()()()() 3 ()()()(). 3rmsTTT b TTTeAIA AA xfxfxfxfx fxfxfxfx (3.7) Since has a uniform scaling effect, prior to gene ration of data it can be taken as unity for the sake of simplicity. It is clear from Equation (3.7) that the RMS bias error at a design point x can be obtained from the location of data points (defines alias matrix A ) used to fit the response surface, the form ((1)f and (2)f ) of the assumed true function (which is a higher order polynomial than the approximating polynomial), and the constant Goel et al. (2006c) demonstrated with examples that this error predic tion model gives good estimates of actual error fields both when the true function is polynomial and when it is non-polynomial. Two representative examples of a

PAGE 79

79 polynomial true function and a non-polynomial true function (trigonometric function with multiple frequencies) are presented in Appendix B, respectively. Many criteria used to construct/compare diffe rent EDs are presented in the literature. A few commonly used error metrics as well as new bias error metrics are listed below. Maximum estimated standard er ror in design space (Equation (3.5) with 1a ) max() ()max.es es Ve e x (3.8) Space-averaged estimated standard error (Equation (3.5) with 1a ) 1 ()(). ()esavges Veed volV xx (3.9) Maximum absolute bias error bound (Pap ila et al., 2005) in design space max()max()II bbVee x (3.10) where c(2) = 1 and 2 1(2) (2)(1) 1 1max(). ()() xxN N I bj jiji j iec fAf (3.11) Maximum RMS bias error in design space (Equation (3.7) with = 1) max() ()max.rms rms b bVe e x (3.12) Space-averaged RMS bias error (Equation (3.7) with = 1) 1 ()(). ()rmsrms bbavg Veed volV xx (3.13) This criterion is the same as space-averaged bias error. Among all the above criteria, th e standard error ba sed criteria are the most commonly used. For all test problems, a design space coded as an Nv-dimensional cube 1,1vNV is used and

PAGE 80

80 bias errors are computed following the comm on assumption that the true function and the response surface model are cubic and quadratic polynomials, respectively. Besides the above error-metric based criteria the following criteria are also frequently used. D-efficiency (Myers and Montgomery, 1995, pp. 393) 1 1(1)(1) /1;||. maxT effsN NXX M DM M N (3.14) Here, max M in Equation (3.14) is taken as the maximu m of all experimental designs. This criterion is primarily used to cons truct D-optimal designs. A high value of Defficiency is desired to minimize the variance of the estimated coefficients b Radius of the largest unoccupied sphere ( rmax) We approximate the radius of the largest s phere that can be placed in the design space such that there are no experimental design poi nts inside this sphere. A large value of rmax indicates large holes in the design space and hence a potentially poor experimental design. This criterion is not us ed to construct experimental de signs, but this allows us to measure the space-filling capability of any experimental design. Test Problems and Results This section is divided into two parts. In the first subsect ion, we compare widely used experimental designs, like LHS designs, D-optimal designs, central composite designs and their minimum bias error counterparts. We show that different designs offer tradeoffs among multiple criteria and experimental desi gns based on a single error crite rion may be susceptible to high errors on other criteria. In the second subsect ion, we discuss a few possible strategies to simultaneously accommodate multiple criteria. Sp ecifically, we present two strategies, (1)

PAGE 81

81 combination of a geometry-based criterion (LHS ) with a model-based criterion (D-optimality), and (2) simultaneous use of multiple experiment al designs combined with pointwise error estimates as a filtering criterion to seek protection against poor designs. Comparison of Different Experimental Designs Space filling characteristics of D-optimal and LHS designs Popular experimental designs, like LHS designs that cater to bias errors by evenly distributing points in design sp ace or numerically obtained D-op timal designs that reduce the effect of noise by placing the de sign points as far apart as possibl e, can occasiona lly leave large holes in the design space due to the random na ture of the design (D-optimal) or due to convergence to local optimized LHS designs. This may lead to poor approximation. Firstly, we demonstrate that for D-optimal and optimized LHS designs, a large portion of design space may be left unsampled even for moderate dimensional spaces. For demonstration, we consider twoto four-dimensional spaces 1,1vNV The number of points in each experimental design is twice the number of coefficients in the corresponding quadratic pol ynomial, that is, 12 points for two dimensions, 20 points for three dimensi ons, and 30 points for four-dimensional design spaces. We also create experimental designs w ith 40 points in four-dimensional design space. We generate 100 designs in each group to alleviate the effect of randomness. D-optimal designs were obtained using th e MATLAB routine candexch such that duplicate points are not allowed (duplicate point s are not useful to approximate data from deterministic functions or numerical simulations ). We supplied a grid of points (grid includes corner/face points and points samp led at a grid spacing randomly selected between 0.15 and 0.30 units) and allocated a maximum of 50000 iterati ons to find a D-optimal design. LHS designs

PAGE 82

82 were constructed using the MA TLAB routine lhsdesign with the maximin criterion that maximizes the minimum distance between points. We allocated 40 iterations for optimization. For each experimental design, we estimated the radius ( rmax) of the largest unsampled sphere that fits inside the design space and su mmarized the results with the help of boxplots in Figure 3-1. The box encompasses the 25th to 75th percentiles and the horizontal line within the box shows the median value. The notches near th e median represent the 95% confidence interval of the median value. It is obvious from Figure 3-1 that rmax increases with the dimensionality of the problem, i.e., the distribution of points in high dimensional spaces tends to be sparse. As expected, an increase in th e density of points reduced rmax (compare four-dimensional space with 30and 40points) The reduction in rmax was more pronounced for D-optimal designs than LHS designs. LHS designs had a less sp arse distribution compared to D-optimal designs, however, the median rmax of approximately 0.75 units in four-d imensional space for LHS designs indicated that a very large region in the design space rema ined unsampled and data points are quite far from each other. The sparse distribution of points in the design space is illustrated with the help of a threedimensional example with 20 points in Figure 3-2, where the largest unsampled sphere is shown. For both D-optimal and LHS designs, the large si ze of the sphere clearly demonstrates the presence of large gaps in the design space that makes the surrogate predictions susceptible to errors. This problem is expected to become more severe in high dimensional spaces. The results indicate that a single criterion (D-optimality for D-optimal designs, and max-min distance for LHS designs) based experimental design may l ead to poor performance on other criteria. Tradeoffs among various experimental designs Next, we illustrate tradeoffs among different experimental designs by comparing min-max RMS bias design (refer to Appendix B), facecentered cubic design (FCCD), D-optimal design

PAGE 83

83 (obtained using JMP, Table 3-1), and LHS design (gen erated using MATLAB routine lhsdesign with maximin criterion, and allocating 1000 iterations to get a design, Table 3-2) in four-dimensional space. Note that all experi mental designs, except FCCD, optimize a single criterion, i.e., D-optimal designs optimize Defficiency, LHS designs maximize the minimum distance between points, and min-max RMS bias de signs minimize the influence of bias errors. On the other hand, FCCD is an intuitive desi gn obtained by placing the points on the faces and vertices. The designs were tested using a uniform 114 grid in the design space 41,1 V and different metrics are documented in Table 3-3. We observed that no single design (used in the generic sense, meaning a class of designs) outpe rformed other designs on all criteria. The Doptimal and the face-centered cubic design had high D-efficiency; the min-max RMS bias design and the LHS design had low D-efficiency. Th e min-max RMS bias design performed well on bias error based criteria but cause d a significant deterioration in st andard error based criteria, due to the peculiar placement of ax ial points near the center. Wh ile the D-optimal design was a good experimental design according to standard error based criteria, large holes in the design space ( rmax = 1) led to poor performan ce on bias error based criteri a. Since LHS designs neglect boundaries, they resulted in very high maximum standard error and bias errors. However, LHS designs yielded the least space-averaged bias er ror estimate. The FCCD design, which does not optimize any criterion, performed reasonably on all the metrics. However, we note that FCCD designs in high dimensional spaces are not prac tical due to the high ratio of the number of simulations to the number of polynomial coefficients. We used polynomial examples to illustrate the risks in using experimental designs constructed with a single criterion. Firstly we considered a simple quadratic function 1() F x

PAGE 84

84 (Equation (3.15) in four-dimensional space) with normally distributed noise (zero mean and unit variance), 1 22 3241()(), ()10(1). F x xxx xx x (3.15) We construct a quadratic polynomial response surface approximation using the data at 25 points sampled with different experimental designs (min-max RMS bias design; FCCD; Doptimal, Table 3-1; and LHS, Table 3-2) and compute actual absolu te error in approximation at a uniform grid of 411 points in the design space 41,1 V The accuracy measures based on the data, that are most commonly used, are the adjusted coefficient of determination 2adj R and the standard error normalized by the ra nge of the function (RMSE) in Table 3-4. Very high values of normalized maximum, root mean square, and average absolute errors (normalized by the range of the data for the respective experimental design) in Table 3-4 indicate that the min-max RMS bias design (also referred to as RMS bias CCD) is indeed a poor choice of experimental design when the error is due to noise, though all approximation accuracy measures (2adj R standard error) suggested otherwise. That is, the high errors come with no warning from the fitting process! High values of the ratio of space-averag ed, root mean square, or maximum actual errors to standard error indicate the risks asso ciated with relying on measures such as 2adj R to determine the accuracy of approximations (We pursue this issue in detail in a Chapter 5). Among other experimental designs, LHS design had a high norma lized maximum error near the corners, where no data is sampled. FCCD and D-optimal designs performed reasonably, with FCCD being the best design. Secondly, we illustrate errors due to large holes in design space observed in the previous section. A simple function that is likely to produce large errors would be a cosine with maximum

PAGE 85

85 at the center of the sphere. However, to ensure a reasonable approximati on of the true function with polynomials, we used a truncated Maclaurin se ries expansion of a translated radial cosine function cos()c lhsk xx, namely 24()20;, 1 24!c edrr Frk x xx (3.16) where c edx is a fixed point in design space, and k is a constant. We considered two instances of Equation (3.16) by specifying the center of the largest unoc cupied sphere associated with LHS design (Table 3-2) and D-optimal design (Table 3-1) as the fixed point. 24 22()20;,[0.168,0.168,0.141,0.167], 1 24!c c lhs lhsrr Frk xx xx (3.17) 24 33()20;,[0.0,0.0,0.0,0.0], 1 24!c c dopt doptrr Frk xx xx (3.18) The constants k2 and k3 were obtained by maximizing the normalized maximum actual absolute error in approximation of 2() Fx using LHS experimental design and approximation of 3() Fx using D-optimal experimental design, respectiv ely, subject to a reasonable approximation (determined by the condition, 20.90adjR) of the two functions by a ll considered experimental designs (FCCD, D-optimal, LHS, and RMS bias CCD ). As earlier, the errors were normalized by dividing the actual absolute errors by the range of data values used to construct the experimental design. Subsequently, the optimal value of the constants were 21.13 k and 31.18 k We used quadratic polynomials to approximate () Fx and errors are evaluated at a uniform grid of 411 points in the design space 41,1 V The quality of fit, maximum, root mean square, and average actual absolute errors in approximation for each experimental design are summarized in Table 3-4. We observed that despite a good quality of fit (high 2adj R and low

PAGE 86

86 normalized standard error), the normalized maxi mum actual absolute errors were high for all experimental designs. In particul ar, the approximations constructe d using data sampled at the Doptimal and the LHS designs performed very po orly. This means that the accuracy metrics, though widely trusted, can mislead the actual pe rformance of the experimental design. The high maximum error in approximations using the LHS de signs occurred at one of the corners that was not sampled (thus extrapolation error), however, we note that LHS designs yielded the smallest normalized space-averaged and root mean square error in approximation. On the other hand, the maximum error in approximations using D-optimal de signs appeared at a test point closest to the center c lhsx in the case of 2() Fx, and near c doptx in the case of 3() Fx. Besides, high normalized space-averaged errors indicated poor approximation of the true function () Fx. The other two experimental designs, FCCD and RMS bias CCD performed reasonably on maximal errors. The relatively poor performance of RMS bias CCD for average and RMS errors is explained by recalling that the experimental design was cons tructed by assuming the true function to be a cubic polynomial, whereas () Fx was a quartic polynomial. An important characteristic for all experimental designs is the ratio of space-averaged, root mean square, or maximum actual absolute error to estimated standard error. When this ratio is large, the errors are unexpected and therefor e, potentially damaging. The FCCD design provided a reasonable value of the ratio of actual to es timated standard errors, however, RMS bias design performed very poorly as the actual errors were much higher than the standard estimated error. This means that the estimated standard error is misleading about the actual magnitude of error that cannot be detected in an engineering example where we do not have the luxury of using a large number of data points to test the accuracy of approximation. Similarly, for all functions, the ratio of maximum actual absolute error to st andard error for LHS designs (29) was much

PAGE 87

87 higher than for D-optimal designs (about 9). The surprise element is also evident by the excellent values of 2adj R of 0.99 and 1.00 compared to 0.90 for the D-optimal design. The results presented here clearly suggest th at different experimental designs were nondominated with respect to each other and offere d multiple (sometimes extreme) tradeoffs, and that it might be dangerous to use a single crite rion based experimental design without thorough knowledge of the problem (which is rarely the case in practice). Extreme example of risks in single criterion based design: Min-max RMS bias CCD A more pathological case, demonstrating the risks in developing experimental designs using a single criterion, was encountered for moderate dimensional cases while developing the central composite design counterpart of the mi nimum bias design, i.e ., minimizing the maximum RMS bias error. The performance of the minmax RMS bias designs constructed using two parameters (refer to Appendix B) for twoto five-dimensional spaces on different metrics is given in Table 3-5. For twoand three-dimensi onal spaces, the axial points (given by 2) were located at the face and the vertex points (given by 1) were placed slightly inwards to minimize the maximum RMS bias errors. The RMS bias designs performed very reasonably on all the error metrics. A surprising result was obtained fo r optimal designs for fourand five-dimensional spaces: while the parameter corre sponding to vertex points ( 1) was at its upper limit (1.0), the parameter corresponding to th e location of axial points ( 2) hit the corresponding lower bound (0.1). This meant that to minimize maximum RMS bias error, the points should be placed near the center. The estimated standard error was expe ctedly very high for this design. Contrasting face-centered cubic design for four-dimensional cases with threeand four-dimensional min-max RMS bias designs (Table 3-5) isolated the effect of dimensionality and the change in experimental design (location of axial points) on different error me trics. The increase in bias

PAGE 88

88 errors (bounds and RMS error) was attributed to increase in dimensiona lity (small variation in bias errors with different experimental de signs in four-dimensiona l design space), and the increase in standard error for min-max RMS bi as design was the outcome of the change in experimental design (the locat ion of axial points given by 2). This unexpected result for fourand higher dimensional cases is supported by theoretical reasoning (A ppendix B), and very strong agreement between the predicted and the actual RMS bias errors for the min-max RMS bias design and the face-centered cen tral composite design (Appendix B). To further illustrate the severi ty of the risks in using a si ngle criterion, we show the tradeoffs among the maximum errors (RMS bias erro r, estimated standard error, and bias error bound) for a four-dimensional design space 41,1 obtained by varying the location of the axial points ( 2) from near the center ( 2=0.1, min-max RMS bias design) to the face of the design space ( 2=1.0, central composite design), while keeping the vertex locations ( 1=1.0) fixed. The tradeoff between maximum RMS bias error and ma ximum estimated standard error is shown in Figure 3-3(A), and the tradeoff between maximu m RMS bias error and maximum bias error bound is shown in Figure 3-3(B). Moving the axial point away from the center reduced the maximum bias error bound and the maximum es timated standard error but increased the maximum RMS bias error. The relatively small variation in maximum RMS bias error compared to the variation in maximum estimated st andard error and maximum bias error bound demonstrated the low sensitivity of maximum RMS bias error with respect to the location of axial points ( 2), and explains the su ccess of the popular central composite designs ( 2=1.0) in handling problems with bias errors. While we noted that each design on the curves in Figure 3-3 corresponds to a non-dominated (tradeoff) point, a small increase in maximum RMS bias error permits a large reduction in maximum estimated standard error, or in other words, the

PAGE 89

89 minimization with respect to a single criterion (here maximum RMS bias error) may lead to small gains at a cost of significant loss with re spect to other criteria. Tradeoff between maximum bias error bound and maximum RMS bias error also reflected similar results, though the gradients were relatively small. The most important implication of the results pres ented in this section is that it may not be wise to select experimental designs based on a single criterion. In stead, tradeoff between different metrics should be expl ored to find a reasonable experi mental design. While detailed exploration of this issue requires significantly more research, our initial attempts to simultaneously accommodate multiple criteria are illustrated next. Strategies to Address Multiple Cr iteria for Experimental Designs As discussed in the previous section, the e xperimental designs optimized using a single criterion may perform poorly on ot her criteria. While a bad experime ntal design can be identified by visual inspection in low dimensional spaces, we need additional measures to filter out bad designs in high dimensional spaces (Goel et al ., 2006e). We explored several strategies to simultaneously accommodate multiple criteria in an attempt to a void poor experimental designs. In this context, we discuss two issues: Which criteria are meaningful for di fferent experimental designs? and How can we combine different criteria? Since the experimental designs are constructed to minimize the influence of bias error and noise, a sensible choice of suitable criteria for any experimental design should seek balance among the two sources of errors, i.e ., bias and noise. Consequently, if we select an experimental design that primarily caters to one source of e rror, for example, noise, the secondary criterion should be introduced to address the other source of error, bi as error in this case, and vice-versa We elaborate on this idea in a following subsection.

PAGE 90

90 Once we have identified criteria to constr uct experimental designs, we seek ways to combine different criteria. Taking inspiration from multi-objective optimization problems, we can accommodate multiple crite ria according to several methods, for example, Optimize the experimental design to minimize a composite function that represents the weighted sum of criteria, Optimize the experimental design to minimize the primary criterion while satisfying constraints on the secondary criteria, and Solve a multi-objective optimization problem to identify different tradeoffs and then select a design that suits the requirements the most. Here, we show two ways to avoid poor expe rimental designs using a four-dimensional example. Firstly, we present a method to combine the model-based D-optimality criterion that caters to noise with the geometry-based LHS cr iterion that distributes points evenly in design space and reduces space-averaged bias errors. Sec ondly, we demonstrate that selecting one out of several experimental designs according to an appr opriate pointwise error-based criterion reduces the risk of obtaining poor experimental designs. Further, we show that the coupling of multiple criteria and multiple experimental designs may be effective to avoid poor designs. Combination of model-based Doptimality criterion with ge ometry based LHS criterion We used an example of constructing an experimental design for a four-dimensional problem with 30 points (response surface model a nd assumed true model were quadratic and cubic polynomials, respectiv ely). Three sets of experimental designs were generated as follows. The first set comprised 100 LHS experimental designs generated using the MATLAB routine lhsdesign with the maximin criterion (a maxi mum of 40 iterations were assigned to find an experimental design). The sec ond set comprised 100 D-optimal e xperimental designs generated using the MATLAB routine candexch with a ma ximum of 40 iterations for optimization. A grid of points, (with grid sp acing randomly selected between 0.15 and 0.30) including face and

PAGE 91

91 corner points, was used to search for the Doptimal experimental de signs. The third set of (combination) experimental designs was obtained by combin ing D-optimal (model-based criterion) and LHS designs (geometry-based cr iterion). We selected 30 design points from a 650 point2 LHS design (lhsdesign with maximin cr iterion and a maximum of 100 iterations for optimization) using the D-optimality criterion (candexch with a maximum of 50000 iterations for optimization). For each design, we computed the radius rmax of the largest unsampled sphere, D-efficiency, maximum and space-averaged RMS bias and estimated standard error using a uniform 114 grid in the design space 41,1 We show the tradeoff among different crite ria for D-optimal, LHS, and combination designs in Figure 3-4. As can be seen from Figure 3-4(A), the D-optimal designs were the best and LHS designs were the worst with respect to the maximum estimated standard and RMS bias error. Compared to the LHS designs, the co mbination designs signi ficantly reduced the maximum estimated standard error with margin al improvement on the maximum RMS bias error criterion (Figure 3-4(A)), and improved D-e fficiency without sacrificing rmax (Figure 3-4(D)). The advantages of using combination designs were more obvious in Figure 3-4(B), where we compared space-averaged bias and estimated sta ndard errors. We see that D-optimal designs performed well on space-averaged estimated sta ndard errors but yielded high space-averaged RMS bias errors. On the other hand, the LHS de signs had low space-averaged RMS bias errors but high space-averaged estimated standard e rrors. The combination designs simultaneously yielded low space-averaged RMS bias and estimate d standard errors. This result was expected because the Latin hypercube samp ling criterion allows relativel y uniform distribution of the 2 The average number of points in the uniform grid used to generate D-optimal designs was 1300. So to provide a fair comparison while keeping the computational cost low, we obtain 650 points using LHS and use this set of points to develop combination experimental designs.

PAGE 92

92 points by constraining the location of points that are used to gene rate combination designs using the D-optimality criterion. Similarly, we obser ved that unlike D-optimal designs, combination experimental designs performed very well on the space-averaged RMS bias error and the rmax criterion (refer to Figure 3-4(C)), and the performance wa s comparable to that of the LHS designs. Mean and coefficient of variation (COV) of different metrics for the three sets of experimental designs are tabulated in Table 3-6. D-optimal designs outperformed LHS designs in terms of the ratio of maximum to average erro r (stability), D-efficiency, maximum RMS bias error, and maximum estimated standard error. Also for most metrics, the variation in results due to sampling (COV) was the least among the three. As seen before, LHS designs performed the best on space-averaged RMS bias errors. The designs obtained by combining two criteria (Doptimality and LHS), were substantially closer to the best of the two except for max()rms be Thus, they reduced the risk of large errors. Furtherm ore, the variation with samples (COV) is also reduced. The results suggested that though differe nt experimental desi gns were non-dominated (tradeoffs) with respect to each other, simultane ously considering multiple criteria by combining the model-based D-optimality criterion and the ge ometry-based LHS criterion may be effective in producing more robust experime ntal designs with a reasonable tradeoff between bias errors and noise. Multiple experimental designs combined with pointwise error-based filtering Next, we demonstrate the poten tial of using multiple experi mental designs to reduce the risk of finding poor experimental designs. The ma in motivation is that the cost of generating experimental designs is not high so we can construct two or th ree experimental designs using LHS, or D-optimality, or a combination of the two criteria, and pick the best according to an

PAGE 93

93 appropriate criterion. To illustrate the improve ments by using three EDs over a single ED, each of the two criteriamaximum RMS bias error and maximum estimated standard errorwere used to select the best (least erro r) of the three EDs. For illustration, 100 such experiments were conducted with LHS designs, D-optimal designs, and the combination of LHS and D-optimal designs (as described above). Actual magnitudes of maximum RMS bias er ror and maximum estimated standard error for all 300 designs and the 100 designs obtained after filtering using min-max RMS bias or maximum estimated standard er ror criteria are plotted in Figure 3-5 for three sets of (100) experimental designs. As is evident by the s hortening of the upper ta il and the size of the boxplots in Figure 3-5, both the min-max RMS bias and maximum estimated standard error criteria helped eliminate poor e xperimental designs for all three sets. Filtering had more impact on the maximum error estimates than the spac e-averaged error estimates. The numerical quantification of improvements in actual magnitu de of maximum and space-averaged error based on 100 experiments is summarized in Table 3-7. We observed that the pointwise error-based (min-max RMS bias or estimated standard error) filtering significantly reduced the mean and COV of maximum errors. We also noted improve ments in the individual experimental designs using multiple criteria. LHS designs were most significantly improved by picking the best of three based on estimated maximum standard erro r. D-optimal designs combined with the minmax RMS bias error based filtering criterion he lped eliminate poor designs according to the RMS bias error criterion. It can be concluded fr om this exercise that potentially poor designs can be filtered out by considering a small number of experimental designs with an appropriate (minmax RMS bias or maximum estimated standard) e rror criterion. The filteri ng criterion should be complimentary to the criterion used for constructi on of the experimental design, i.e., if a group of

PAGE 94

94 EDs are constructed using a varian ce based criterion, then the sele ction of an ED from the group should be based on bias erro r criterion, and vice-versa. Results presented in this section indicate that use of multiple criteria (LHS and Doptimality) and multiple EDs help reduce maxi mum and space-averaged bias and estimated standard errors. Implementing th e above findings, we can obtai n experimental designs with reasonable tradeoff between bias error a nd noise in three steps as follows: Generate a large number of LHS experimental design points, Select a D-optimal subset within the LHS design (combine model-based and geometrybased criteria), Repeat first two steps three times and select th e design that is the be st according to one of the min-max RMS bias or maximum estimated standard error criteria (filtering using pointwise error-based criterion). Concluding Remarks In this chapter, we demonstrated the ri sks of using a single criterion to construct experimental designs. We showed that construc ting experimental designs by combining multiple (model, geometry, and error based) criteria and/ or using multiple experimental designs reduces the risk of using a poor experimental design. For four-dimensional space, comparison of co mputed LHS and D-optimal designs, that involve random components and may yield poo r approximation due to random components or convergence to local optima, revealed that th e D-optimal designs were better for maximum errors, and LHS designs were better for space-av eraged RMS bias errors. Both designs were susceptible to leaving large spheres in de sign space unsampled. A comparison of popular experimental designs (face-centered cubic design min-max RMS bias design, D-optimal design, and LHS design) revealed the non-dominated (tradeoff among di fferent criteria) nature of different designs. The min-max RMS bias design, obtained by placing the ax ial points close to

PAGE 95

95 the center, performed the best in reducing maxi mum RMS bias error, but was the worst design for estimated standard error metrics and D-effici ency. LHS design gave the best performance in terms of space-averaged bias errors. However, face-centered cubic design that is an intuitive design yielded a reasonable tradeoff between bias error and noise reduction on all metrics. The same conclusions were supported by approxi mation of three example polynomials that highlighted the susceptibility of different experi mental designs to the nature of the problem, despite the fact that the accuracy metrics s uggested a very good fit for each example. We concluded that different experi mental designs, constructed usi ng one error criterion, do not perform the best on all criteria. Instead, they offer tradeoffs. In moderate dimensional spaces these single criterion-based designs can often lead to extreme tradeoffs, particularly by using the maximum RMS bias error measure as a design criterion, such that small gains in the desire d criterion are achieved at the cost of significant deterioration of performance in other criteria A tradeoff study, conducted to study the variation of different error metrics with the location of axial points in central-composite designs, illustrated the perils of using a single criterion to construct experimental designs and emphasized the need to consider multiple criteria to tradeoff bias error and noise reduction. To address the risk of using a poor experime ntal design by consideri ng a single criterion, we explored a few strategies to accommodate multiple criteria. We demonstrated that the experimental design obtained by combining two criteria, the D-optimality criterion with LHS design, offered a reasonable tradeoff between spa ce-averaged RMS bias and estimated standard error, and space-filling criteria. Specifically, co mbination designs signif icantly improved the poor experimental designs. We s howed that the risk of getting a poor experimental design could be further reduced by choosing one out of three experimental designs us ing a pointwise error-

PAGE 96

96 based criterion, e.g., min-max RMS bias or ma ximum estimated standard error criterion. The combination of D-optimal designs and min-max RMS bias error was particularly helpful in reducing bias errors. Finally, we adopted select ion of experimental designs by combining the Doptimality criterion with LHS design and selecting one out of three such combination designs to cater to both bias error and noise reduction. However, since these results are based on a limited number of examples, we note the need of future research to address the issues related to accommodating multiple criteria while constructing experimental designs.

PAGE 97

97 A B Figure 3-1. Boxplots (based on 100 designs) of radius ( rmax) of the largest unoccupied sphere inside the design space [-1, 1]Nv (where Nv is the number of variables). x -axis shows the dimensionality of the design space and corresponding number of points in the experimental design. Smaller rmax is desired to avoid la rge unsampled regions. Doptimal designs are selected using MATLAB r outine candexch (we specify a grid of points with grid spacing between 0.15 a nd 0.30, and a maximum of 50000 iterations for optimization). LHS designs are gene rated using MATLAB routine lhsdesign with a maximum of 40 iterations for optim ization. A) D-optimal designs. B) LHS designs. A B Figure 3-2. Illustration of the largest spherica l empty space inside the 3D design space [-1, 1]3 (20 points). A) D-optimal designs. B) LHS designs.

PAGE 98

98 A Axial point near center 2 = 0.10 Axial point on face 2 = 1 2 Axial point on face 2= Axial point near center 2 2 B Figure 3-3. Tradeoffs between different error me trics. A) Maximum estimated standard error max()ese and maximum RMS bias error max()rms be. B) Maximum bias error bound max()I be and maximum RMS bias error max()rms be for four-dimensional space. 25 points were used to construct central composite experimental designs with vertex location fixed at 1= 1.0.

PAGE 99

99 A B C D Figure 3-4. Comparison of 100 D-optimal, LHS, and combination (D-optimality + LHS) experimental designs in four-dimensional space (30 points) using different metrics. rmax: radius of the largest unsampled sphere, ()rms be x: RMS bias error, ()ese x: estimated standard error, (.)max: maximum of the quantity inside parenthesis, (.)avg: spatial average of the quantity inside parenthesi s. All metrics, except D-efficiency ( Deff), are desired to be low. A) max()ese vs. max()rms be. B) ()esavge vs. ()rms avg be. C) maxr vs. ()rms avg be. D) eff D vs. maxr.

PAGE 100

100 A B C Figure 3-5. Simultaneous use of multiple experime ntal designs concept, where one out of three experimental designs is sele cted using appropriate criter ion (filtering). Boxplots show maximum and space-averaged RMS bias, and standard estimated errors in fourdimensional design space; considering a ll 300 designs, 100 designs filtered using min-max RMS bias error as a criterion and 100 designs filtered using min max estimated standard error as a criterion (m BE denotes maximum RMS bias error, mSE

PAGE 101

101 denotes maximum estimated standard error, aBE denotes space-averaged RMS bias error, aSE denotes space-averaged estimat ed standard error). Each experimental design has 30 points and errors are computed in design space [-1, 1]4 on 114 uniform grid. A) LHS designs. B) D-optimal design. C) Combination designs.

PAGE 102

102 Table 3-1. D-optimal design (25 points, 4-dimensional space) obtained using JMP. # no. x1 x2 x3 x4 # no. x1 X2x3 x4 1 -1 -1 -1 -1 13 0 0 -1-1 2 -1 -1 -1 1 15 0 1 0 0 3 -1 -1 0 -1 16 1 -1 -1-14 -1 -1 1 0 17 1 -1 -10 5 -1 -1 1 1 18 1 -1 0 1 6 -1 0 0 1 19 1 -1 1 -17 -1 0 1 -1 20 1 -1 1 1 l8 -1 1 -1 -1 21 1 0 1 0 9 -1 1 -1 1 22 1 1 -1-110 -1 1 1 -1 23 1 1 -11 11 -1 1 1 1 24 1 1 1 -112 0 -1 -1 1 25 1 1 1 1 13 0 -1 1 -1 Table 3-2. LHS designs (25 points, 4-dimensional space) obtained using the MATLAB routine lhsdesign with maximin criterion and a maximum of 1000 iterations for optimization. # no. x1 x2 x3 x4 # no. x1 x2 x3 x4 1 -0.970 -0.344 0.055 -0.80913 -0.717-0.812-0.1250.396 2 0.679 0.123 0.709 0.520 15 -0.3760.986 -0.686-0.463 3 -0.093 -0.233 -0.929 -0.65916 0.002 -0.361-0.101-0.936 4 -0.578 0.659 0.943 -0.53117 -0.464-0.862-0.589-0.256 5 0.798 -0.657 -0.829 0.355 18 0.972 -0.5470.424 -0.411 6 0.328 0.282 -0.619 0.051 19 0.689 0.557 -0.3240.979 7 0.908 -0.059 -0.207 -0.71420 0.591 -0.1750.010 0.471 8 -0.142 0.027 0.879 0.271 21 0.065 0.457 0.255 -0.281 9 -0.679 0.427 -0.507 0.700 22 0.160 -0.935-0.901-0.075 10 0.427 0.265 0.552 -0.90023 -0.8390.702 0.329 0.618 11 0.470 0.090 0.604 -0.16024 -0.916-0.6910.777 0.808 12 0.226 -0.474 0.504 0.006 25 -0.3270.806 -0.4080.142 13 -0.224 0.873 0.150 0.872 Table 3-3. Comparison of RMS bias CCD, FCCD, D-optimal, and LHS designs for 4dimensional space (all designs have 25 point s), errors are computed on a uniform 114 grid in space V = [-1, 1]4. (**Table 3-1, *Table 3-2) Experimental design Defficiency max()ese ()esavge max||I bemax()rms be ()rms avg be rmax RMS bias CCD 0.148 70.71 35.22 6.996 1.155 0.927 0.65 FCCD 0.932 0.877 0.585 6.208 1.176 0.827 0.67 LHS* 0.256 3.655 1.032 21.48 3.108 0.588 0.83 D-optimal** 1.000 0.933 0.710 12.00 1.996 1.004 1.00

PAGE 103

103 Table 3-4. Prediction performance of different 25-point experimental designs in approximation of example functions F1 (Equation (3.15)), F2 (Equation (3.17)), and F3 (Equation (3.18)) in four-dimensional spaces. We use quadratic polynomial response surface approximations and errors ar e computed on a uniform 114 grid in space V = [-1, 1]4. 2 adjR indicates the quality of approximation. The true sta ndard error, space-averaged actual absolute error, root mean square error in space, and maximum actual absolute error are normalized by the range of the da ta at respective experimental designs. Doptimal design is shown in Table 3-1 and LHS design is given in Table 3-2. ED 2 adjR Standard error Average error RMS error Maximum error RMS bias CCD 1.00 0.021 0.84 1.05 3.22 FCCD 1.00 0.018 0.0081 0.010 0.044 LHS 0.99 0.032 0.032 0.043 0.26 F1 (noise) D-optimal 0.99 0.022 0.023 0.025 0.042 RMS bias CCD 1.00 0.035 0.15 0.18 0.36 FCCD 0.91 0.094 0.090 0.11 0.25 LHS 0.99 0.024 0.042 0.072 1.29 F2 (LHS sphere) D-optimal 0.90 0.090 0.25 0.31 0.82 RMS bias CCD 1.00 0.00065 0.17 0.18 0.22 FCCD 0.96 0.062 0.077 0.088 0.18 LHS 0.99 0.024 0.035 0.055 0.69 F3 (Doptimal sphere) D-optimal 0.90 0.075 0.17 0.22 0.65 Table 3-5. Min-max RMS bias central com posite designs for 2-5 dimensional spaces and corresponding design metrics ( Nv is the number of variable s), errors are computed on a uniform 412 grid ( Nv = 2), 213 grid ( Nv = 3) and, 11Nv grid ( Nv > 3), in space V = [-1, 1]Nv. Refer to Appendix B for details of creating CCD. Nv 1 2 max()ese ()esavgemax||I bemax()rms be()rms avg be 2 0.954 1.000 0.973 0.688 1.029 0.341 0.269 3 0.987 1.000 0.913 0.607 2.832 0.659 0.518 4* 1.000 1.000 0.877 0.585 6.208 1.176 0.827 4 1.000 0.100 70.71 35.22 6.996 1.155 0.927 5 1.000 0.100 77.46 41.60 12.31 1.826 1.200 *FCCD for four variables (not min-max RMS bias design)

PAGE 104

104 Table 3-6. Mean and coefficient of variation (based on 100 instances) of different error metrics for various experimental designs in four-d imensional space (30 points). Errors are computed on a uniform 114 grid in space V = [-1, 1]4. Deff: D-efficiency, rmax: radius of the largest unsampled sphere, rms be: RMS bias error, ese: estimated standard error, (.)max: maximum of the quantity in side parenthesis, (.)avg: average of the quantity inside parenthesis. ED max()ese ()esavgemax()rms be()rms avg bermax Deff LHS 3.82 0.94 3.02 0.57 0.76 0.26 D-optimal 0.80 0.61 1.67 0.89 0.90 0.99 Mean Combination 2.02 0.68 2.67 0.59 0.75 0.47 LHS 0.209 0.074 0.105 0.058 0.0660.056 D-optimal 0.025 0.042 0.075 0.037 0.15 0.006 COV Combination 0.126 0.026 0.102 0.038 0.0640.053 Table 3-7. Reduction in errors by considering multiple experimental designs and picking one experimental design using appropriate cr iterion (filtering). We summarize actual magnitudes of maximum and space-average d errors for LHS/D-optimal/combination designs in four-dimensional space with 30 points. All-ED refers to error data corresponding to all 300 LHS/ D-optimal/combination designs. BE and SE refer to error data from 100 EDs selected using min-max RMS bias error and min-max estimated standard error criterion, respect ively. Errors are computed on a uniform 11Nv grid in space V = [-1, 1]Nv. Mean and coefficient of variation (COV) are based on 300 (ALL-ED) or 100 (BE/SE) EDs. LHS designs D-optimal designs Combination designs All-ED BE SE All-EDBE SE All-ED BE SE Mean-max()ese 3.85 3.62 3.29 0.80 0.80 0.78 2.00 1.96 1.80 COV-max()ese 0.21 0.19 0.11 0.025 0.0200.0160.12 0.12 0.066 Mean-()esavge 0.94 0.92 0.90 0.62 0.62 0.61 0.68 0.67 0.67 COV-()esavge 0.071 0.063 0.0530.043 0.0450.0390.024 0.024 0.019 Mean-max()rms be 3.08 2.83 2.95 1.67 1.57 1.69 2.68 2.46 2.60 COV-max()rms be 0.099 0.053 0.0830.074 0.0490.0690.094 0.066 0.091 Mean-()rms avg be 0.58 0.57 0.57 0.89 0.88 0.89 0.59 0.58 0.59 COV-()rms avg be 0.049 0.036 0.0430.037 0.0270.0320.037 0.037 0.036

PAGE 105

105 CHAPTER 4 ENSEMBLE OF SURROGATES Surrogate models have been extensively used in the design and optimization of computationally expensive problems. Different su rrogate models have been shown to perform well in different conditions. Barthelemy and Ha ftka (1993) reviewed the application of metamodeling techniques in structur al optimization. Sobieszczansk i-Sobieski and Haftka (1997) reviewed different surrogate modeling applications in multi-disciplinary optimization. Giunta and Watson (1998) compared polynomial response surface approximations and kriging on analytical example problems of varying dimensi ons. Simpson et al. (2001 a) reviewed different surrogates and gave recommendations on the us age of different surrogates for different problems. Jin et al. (2001) compared different surrogate models based on multiple performance criteria such as accuracy, robustness, efficiency, transparency, and conceptual simplicity. They recommended using radial basis function for hi gh-order nonlinear problems, kriging for loworder nonlinear problems in high dimension sp aces, and polynomial response surfaces for loworder nonlinear problems. They also noted difficulties in construc ting different surrogate models. Li and Padula (2005) and Queipo et al. (2005) recently reviewed di fferent surrogate models used in the aerospace industry. There are also a number of studies comparing di fferent surrogates for specific applications. Papila et al. (2001), Shyy et al (2001a-b), Vaidyanathan et al (2004), Mack et al. (2005a-b, 2006) presented studies comparing radial basis neural networks and response surfaces while designing the propulsion systems lik e, liquid rocket injector, supersonic turbines, and the shape of bluff body for mixing enhancement. For crashw orthiness optimization, Stander et al. (2004) compared polynomial response surfa ce approximation, kriging, a nd neural networks, while Fang

PAGE 106

106 et al. (2005) compared polynomial response surfa ce approximation and ra dial basis functions. Most researchers found that no single surr ogate model was superior in general. While most researchers have primarily been concerned with the choice among different surrogates, there has been relativel y very little work about the use of an ensemble of surrogates. Zerpa et al. (2005) presented one application of simultaneously using multiple surrogates to construct a weighted average surrogate model for the optimization of an alkali-surfactantpolymer flooding process. They su ggested that the weighted aver age surrogate model has better modeling capabilities than individual surrogates. Typically, the cost of obtaining data required for developing su rrogate models is high, and it is desired to extract as much information as possible from the data. Using an ensemble of surrogates, which can be constructed without a significant expense compar ed to the cost of acquiring data, can prove effective in distilling correct trends from the data and may protect against bad surrogate models. Averaging surrogates is one approach motivated by our inability to find a unique solution to the non-linear inverse pr oblem of identifying the model from a limited set of data (Queipo et al., 2005). In this contex t, model averaging essentially serves as an approach to account for model un certainty. In this work, we e xplore methods to exploit the potential of using an ensemble of surrogates. Sp ecifically, we present th e following two aspects: Ensemble of surrogates can be used to identify regions, where we expect large uncertainties (contrast). Use of an ensemble of surrogates via weighted averaging (combi nation) or selection of best surrogate model based on error statis tics to induce robustness in approximation compared to individual surrogates. We demonstrate the advantages of an ensemb le of surrogates using analytical problems and an engineering problem of ra dial turbine design for space la unch vehicle. This chapter is organized as follows. In the next section, we pr esent a method to use an ensemble of surrogates

PAGE 107

107 to identify the regions with large uncertainty, and the conceptual framework of constructing weighted average surrogate models. Thereafter, we discuss the test problems, numerical procedure, and results supporti ng our claims. We close the ch apter by recapitulating salient points. Conceptual Framework Identification of Regi on of Large Uncertainty Surrogate models are used to predict the re sponse in unsampled regions. There is an uncertainty associated with the predictions. An en semble of surrogates can be used to identify the regions of large uncertainty. The concept is described as follows: Let there be NSM surrogate models. We compute the standard deviati on of the predictions at a design point x as, 21 1() () () 1 () where().SM SMSM i SMN i i resp N iyy sy N y y N x xx x x (4.1) The standard deviation of the predictions wi ll be high in regions where the surrogates differ greatly. A high standard deviation may i ndicate a region of hi gh uncertainty in the predictions of any of the surroga tes, and additional sampling point s in this region can reduce that uncertainty. Note that, while high standard devi ation indicates high uncertainty, low standard deviation does not guarantee high accuracy. It is possible for all surrogate models to predict similar response (yielding low standard de viation) yet perform poorly in a region. Weighted Average Surrogate Model Concept We develop a weighted average surrogate model as, ()()()SMN wtaii iywy xxx (4.2)

PAGE 108

108 where ()wtay x is the predicted response by the wei ghted average of surrogate models, ()iy x is the predicted response by the ith surrogate model, and ()iw x is the weight associated with the ith surrogate model at design point x Furthermore, the sum of the weights must be one 11SMN i iw so that if all the surrogates agree, ()wtay x will also be the same. A surrogate model, deemed more accurate, should be assigned a large weight, and conversely, a less accurate model should ha ve lower influence on the predictions. The confidence in surrogate models is given by diffe rent measures of goodne ss (quality of fit), which can be broadly characterized as: (i) global vs. local measur es, and (ii) measures based on surrogate models vs. measures based on data. We ights associated with each surrogate, based on the local measures of goodness, are functions of space ()iiww x For example, weights, which are based on the pointwise error measures, li ke prediction variance, mean squared error (surrogate based), or weights based on the inte rpolated cross-validati on errors (data based). When weights are selected based on the global m easures of goodness, they are fixed in design space (),iiwC xx For example, weights based on RMS error a for polynomial response surface approximation, process variance for kriging (surrogate based), or weights based on crossvalidation error (data based). While variable we ights may capture local behavior better than constant weights, reasonable selection of weight functions is a formidable task. Zerpa et al. (2005) constructed a local wei ghted average model from three surrogates (polynomial response surface approximation, kriging, and radial basis functions) for the optimization of an alkali-surfactant-polymer fl ooding process. Their approach was based on the pointwise estimate of the variance pred icted by the three surrogate models. There are different strategies of selecting weights. A few can be enumerated as follows:

PAGE 109

109 Non-parametric surrogate filter (NPSF) Weights are a function of relative magnitude of (global data-based) errors. The weight associated with ith surrogate is given as: 1, 1(1)j SMjSM SMjji i jN NE w NE (4.3) where Ej is a global or a local data-based error measure for the jth surrogate model. This choice of weights gives only a small premium to the better surrogates when NSM is large. For example, the best surrogate has a weight equal to or less than 1(1)SMN which becomes unreasonably low when NSM is large. On the positive side, the weight s selected this way protect against errors induced by the surrogate models, which perform extremely well at the sampled data points but give poor predictions at unsampled locations. Best PRESS for exclusive assignments The traditional method of using an ensemble of surrogates is to select the best model among all considered surrogate models. However, once the choice is made it is usually kept even as the design of experiment is refined. If the choice is revisite d for each new design of experiment, we consider it as a weighting scheme where the model with least (global data-based) error is assigned a weight of one and all other models are assigned zero weight. In this study, we call this strategy the best PRESS model. Parametric surrogate filter (PSF) As discussed above, there are two issues asso ciated with the selec tion of weights: (i) weights should reflect our confidence in the su rrogate model, and (ii) weights should filter out

PAGE 110

110 adverse effects of the model, wh ich represents the data well bu t performs poorly in unexplored regions. A strategy to select weig hts, which addresses both issues, can be formulated as follows: * 1, ;0,1.SMi iiavgi i i N avgi SM iw wEEw w EEN (4.4) This weighting scheme requires the user to specify two parameters and which control the importance of averag ing and importance of individual surrogate, respectively. Small values of and large negative values of impart high weights to the best surrogate model. Large values and small negative values represent high confidence in the averaging scheme. In this study, we have used 0.05and1 The sensitivity to these parameters is studied in a section on parameter sensitivity. The above-mentioned formulation of weighting schemes is used with generalized mean square cross-validation error (G MSE; leave-one-out cross vali dation or PRESS in polynomial response surface approximation terminology), defined in Chapter 2, as global data-based error measure, by replacing j E with j GMSE We have used three su rrogate models, polynomial response surface approximation (PRS), kriging (KRG), and radial basis neural networks (RBNN) (Orr, 1996), to construct the weighted average surrogate model. The PRESS-based weighted surrogate model (PWS) can th en be given as follows: pwsprsprs krgkrgrbnnrbnn y wywywy (4.5) where weights are selected according to the parametric surrogate filter PSF (Equation (4.4)). The rationale behind selecting these su rrogate models to demonstrate th e proposed approach was, (i)

PAGE 111

111 these surrogate models are commonly used by pract itioners, and (ii) they represent different parametric and non-parametric appr oaches (Queipo et al., 2005). The cost of constructing surrogate models is usually low compared to that of analysis. If this cost is not small (for example, when using a kriging model and GMSE for large data sets), the user may want to explore surrogate models that provide a compromise solution between accuracy and construction cost. In general, th e choice of surrogate models, which are most amenable to averaging and uncertainty identific ation, remains a question of future research (Sanchez et al., 2006). Since global measures of error depend on the data and design of experiments, weights implicitly depend on the choice of the design of experiments. This dependence can be seen from Figure 4-1, where we show boxpl ots of weights obtained for 1000 instances of Latin hypercube sampling (LHS) design of experiments (DOEs) for Camelback function (described in next section). The center line of each boxplot shows the 50th-percentile (median) value and the box encompasses the 25thand 75thpercentile of the data. The leader lines (horizontal lines) are plotted at a distance of 1.5 times the inter-quartile range in each direction or the limit of the data (if the limit of the data falls within 1.5 times the inter-quartile range). The data points outside the horizontal lines are shown by placing a + sign for each point. We can see that the weights for different su rrogates vary over a wide range with DOEs. The weights also give an assessment of relative contribution of different surrogate models to the weighted average surrogate model. In this example, polynomial response surface approximation had the highest weight most of the time (880 time s), but not all the times (59 times kriging had the highest weight and 61 times RBNN had the highest weight).

PAGE 112

112 Test Problems, Numerical Procedure, and Prediction Metrics Test Problems To test the predictive capabil ities of the proposed approach of using an ensemble of surrogates, we employ two types of problems: (i) analytical (DixonSzeg, 1978), which are often used to test global optimization methods, and (ii) engineering, a radial turbine design problem (Mack et al., 2006), motivated by space la unch. The details of each test problem are given as follows: Branin-Hoo function 2 2 2 5.15 1 (,) 6101cos()10, 8 4 [5,10],[0,15]. xx fxyyx xy (4.6) Camelback function 4 2222 (,) 42.1 44, 3 [3,3],[2,2]. x f xyxxxyyy xy (4.7) Goldstein-Price function 2 22 2 22 (,) 1119431463 3023183212483627, ,[2,2]. fxyxyxxyxyy xyxxyxyy xy (4.8) Figure 4-2 depicts these two-va riable test problems and shows zones of high gradients. Hartman functions 2 1211()exp(), where,,,,[0,1]. x xijjij nimn i ijfcaxp xxxx (4.9) Two instances of this problem are considered based on the number of design variables. For the chosen examples, m = 4

PAGE 113

113 Hartman-3 This problem has three variables. Th e choice of parameters is given in Table 4-1 (DixonSzeg, 1978). Hartman-6 This instance of the problem has six design variables and the parameters used in the function are tabulated in Table 4-2 (Dixon-Szeg, 1978). Figure 4-3 illustrates the comple xity of the analytical probl ems. It shows the boxplots of function values at a uniform grid of points with 21 points in each directi on (for Hartman problem with six variables, we used five points in each di rection); the mean, coeffi cient of variation, and median are given in Table 4-3. We can see that for all the problems, the coefficient of variation was close to one or more, which indicates large va riation in the function va lues. It is clear from Figure 4-3 that the function valu es follow nonuniform distribution, which is also reflected by large differences in the mean and median. These conditions translate into high gradients in the functions and may pose difficulties in accurate m odeling of the responses. Goldstein-Price and Hartman problem with six variables had a sign ificant number of points, which had higher function values than the inter-quart ile range of the data. This is reflected in the high coefficient of variation of these two functions. Radial turbine design for space launch As described by Mack et al. ( 2006), this six-variable problem is motivated by the design of compact radial turbine used to drive pumps that deliver liqui d hydrogen and liquid oxygen to the combustion chamber of a spacecraft. The objective of the design is to increase the work output of a turbine in the liquid rocket expander cycle e ngine while keeping the overall weight of the turbine low. If the turbine inlet temperature is held constant, the increase in turbine work is directly proportional to the increase in effici ency. Thus, the design goal is to maximize the turbine efficiency while minimizing the turbine wei ght. Our interest in this problem is to develop

PAGE 114

114 accurate surrogate model(s) of the efficiency as a function of six design variables. The description of design variables and th eir corresponding rang es are given in Table 4-4 (Mack et al., 2006). The objectives of the design were calcu lated using a one-dimensional flow analysis meanline code (Huber, 2001). Mack et al. (2006) identified the a ppropriate region of interest by iteratively refining the design sp ace. They also identified the mo st important variables using global sensitivity analysis. Numerical Procedure For all analytical problems, Latin hypercube sampling (LHS) was used to pick design points such that the minimum distance between the design points is maximized. We used MATLAB routine lhsdesign with maximin criterion (maximize the minimum distance between points) and a maximum of 40 iterations to obtain optimal configuration of points. For the radial turbine design problem Mack et al. (2006) sampled 323 designs in the six-dimensional region of interest, using LHS and a five level fa ctorial design on the thre e most important design variables (identified by global sensitivity analys is). Out of these 323 designs, 13 designs were found infeasible. The remaining 310 design points were used to construct and test the surrogate models. For this study, we randomly selected 56 poi nts to construct the su rrogate model and used the remaining 254 points to test the surrogate mo del. To reduce the effect of random sampling for both analytical and radial turbine design problems, we present results based on 1000 instances of design of experiments for all the problems in low dimension spaces. However, to keep computational cost low fo r six-variable problems, we used 100 design of experiments and then, used 1000 bootstrap (Hes terberg et al., 2005) samp les to estimate results. The numerical settings used to fit different surrogate models for each problem are given in Table 4-5. The total number of test points (on a uniform grid) is Nv p where Nv is the number of

PAGE 115

115 variables and p is the number of point s along each direction (Table 4-5), except for the radial turbine problem, where the number represents the total number of test points. We used reducedquadratic or reduced-cubic pol ynomials for PRS. A Gaussian correlation function and a linear trend model were used in kriging approximation of all test problems. Parameters spread and goal for radial basis neural network were selected accordin g to problem characteristics (spread controls the decay rate of radial basis function; and goal is the desired level of accuracy of the RBNN model on tr aining points). It should be pointed out that no attempt was made to improve the predictions of any surrogate model. Prediction Metrics The following metrics were used to compar e the prediction capabilities of different surrogate models: Correlation coefficient The correlation coefficient between actual and predicted response at the test points (,) ryy is given as 1 ()() (,) ()()V y yyydv V ryy yy (4.10) It is numerically evaluated from the data for test points by implementing quadrature3 for integration (Ueberhuber, 1997) as given in Equation (4.11). 2 211 1 1 ; () 1 ()()testtest testNN iiiii testtest ii V N iii test i Vyyy yydvy VNN yy yyydv N V (4.11) 3 Here we used trapezoidal rule for integration.

PAGE 116

116 where y is the mean of actual response, y is the mean of predicted response, testN is the number of test points, and i is the weight used for integrat ion using the trapezoidal rule. For radial turbine problem, we used a nonuniform set of data points so the correlation coefficient is obtained using Equation (4.11) with weight 1i For a high-quality surrogate model, the correlation coefficient should be as hi gh as possible. The maximum value of (,)ryy is one that defines exact linear relationship between the predicted and the actual response. RMS error For all the test problems, the actual response at test points was known, which allowed us to compute error at all test points. The root mean square error (R MSE) in the design domain, as defined in Equation (4.12), was used to assess the goodness of the predictions. 21 ()V R MSEyydv V (4.12) Equation (4.12) can be evaluated using trapez oidal rule as denoted in Equation (4.13). 2 1 ()testN iii test iyy RMSE N (4.13) For radial turbine problem, we used Equation (4.13) with weight 1i to estimate the RMS error. Of course, a good surrogate model gives low RMS error. Maximum error Another measure of the quality of prediction of a surrogate is the maximum absolute error at the test points. This is required to be low. A combination of high correlation coefficien t, low RMS error, and low maximum error indicates a good prediction.

PAGE 117

117 Results and Discussion In this section, we present some numerica l results to demonstrate the capabilities of multiple surrogate models using the test prob lems discussed in previous section. Identification of Zones of High Uncertainty We demonstrate the application of an ensemble of surrogates to identify the region of high uncertainty with the help of different test pr oblems. Results for a singl e instance of a DOE for Branin-Hoo example are presented in detail. Figure 4-4 shows the cont our plots of absolute errors in the prediction ( ()() xx yy ) due to different surrogate models and the standard deviation of the responses. Figure 4-4(A)-(C) shows contour plots of actua l absolute errors in different surrogate models. It can be seen that the middle section of the design space was approximated very well (errors are low) but the left boundary was poorly represented by different surrogate models. The errors (and hence, responses) from PRS, KRG, and RBNN differed in the region close to the top left corner. The contour plot of the standard deviation (Figure 4-4(D)) of predicted responses correctly indicated the re gion of high uncertainty near the top left corner due to high standard deviation. It also appropriately identified good pr edictions in the central region of design space. The predictions in the region of high uncertainty can be improved by sampling additional points. It is also noted that altho ugh all surrogate models had high errors near the bottom left corner of the design space, (Figure 4-4(A)-(C)), the standard devi ation of the predicted responses was not high. This means that we can use the stan dard deviation of surrogate models to identify regions of high uncertainty, but we cannot use it to identify regions of high fidelity. This particular situation demands further investigatio n, if the objective of using an ensemble of surrogates was to identify a region of high error in the predictions.

PAGE 118

118 To further show the independence of the result with respect to the de sign of experiments, we simulated the Branin-Hoo function with 1000 DOEs. For each DOE, we computed the standard deviation of responses in design space. At the location of maximum standard deviation for each DOE, we computed actual e rrors in the predictions of diffe rent surrogates. Similarly, we calculated actual errors in the predictions of different surrogates at the location of minimum standard deviation. Figure 4-5(A) shows the magnitude of maximum standard deviation and actual errors in predictions using di fferent surrogates for 1000 DOEs, and Figure 4-5(B) shows the magnitude of minimum standard deviation a nd actual errors in predictions using different surrogates from 1000 DOEs. By comparing Figure 4-5(A) and (B), it is clear that high standard deviation of responses corresponded to the regions with large uncerta inties in the predictions and low standard deviation corresponded to regions with low uncertain ty and there was an order of magnitude difference. To generalize the findings, we simulated all te st problems and identified the actual errors at the locations of maximum and minimum standa rd deviation of responses. The results are summarized in Table 4-6 and Table 4-7. A one-to-one comparison of the results for different test problems shows that when the standard deviation of responses was highest, the actual errors in predictions were high, and when the standard devi ation of responses was lo west, the act ual errors in predictions were low. We note that the resu lts are more useful for a qualitative comparison than quantitative, i.e., identifying the regions where we expect large uncertainties in prediction rather than quantifying the magnitude of actual errors. We also estimated the maximum (over the enti re design space) error due to each surrogate model for different test problems and compared with the maximum standard deviation of responses. The results are presented in Table 4-8. While the maximum standard deviation of

PAGE 119

119 responses was the same order of magnitude as the maximum actual error for all surrogate models, it underestimated the maximum error by a factor of 2.5.0. When the number of data points to construct the surrogate model was in creased (Branin-Hoo function was modeled with 31 points and Camelback function was modeled with 40 points, refer to a following section for details about modeling with incr eased sample density), the un derestimation of the maximum actual error was reduced. The main conclusions of the results presented in this section are: (1) dissimilar predictions of surrogate models (high standard deviation of responses) indicate regi ons of high errors, (2) similar predictions of surrogate models (low st andard deviation of responses) do not necessarily imply small errors, and (3) the maximum standa rd deviation of respons es underestimates the actual maximum error. Robust Approximation via Ensemble of Surrogates Next, we demonstrate the need of ro bust approximation with the help of Table 4-9 that enlists the number of times each surrogate yields the least PRESS error for all test problems. As can be seen, no surrogate model is universally the best for all pr oblems. Besides, for any given problem, the choice of best surrogate model is a ffected by the design of experiment (except the radial turbine design problem). The results presented in Table 4-9 clearly establish the need to search robust approximation models (i.e., the same surrogate model can be applied to different problems, and the results produced are not signi ficantly influenced by the choice of DOE). We present results to reflect the advantages of using an ensemble of surrogates. Firstly, we quantify the number of points (r eflects portion of design space), where different surrogates yield errors of opposite signs in Table 4-10. The predictions at these locations can be potentially improved due to error cancellation via proposed PRESS-based weight ed surrogate (PWS) model. Next, we compare the performance of the PWS model and the surrogate model corresponding to

PAGE 120

120 the best generalization error among the three su rrogates (best PRESS mo del) with individual surrogate models (PRS, kriging, and RBNN). Fo r each problem, the summary of the results based on 1000 DOEs is shown with the help of b oxplots. A small size of the box suggests small variation in results with respect to the choice of design of experiment. Correlations Figure 4-6 shows the correlation coefficient (between actual and predicted responses) for different test problems. The results were statistically significant ( p -value is smaller than 1e-4 ) for all problems and DOEs. It is evident that no singl e surrogate worked the best for all problems, and the correlation coefficient for individual su rrogates varied with DOE. Both, the best PRESS and the PWS, models were better than the worst surrogate model and at par with the corresponding best surrogate for most problems. The PWS model generally performed better than the best PRESS model. The variation in resu lts with respect to the design of experiments for both, the PWS model and the best PRESS model, wa s also comparable to the best surrogate for all problems, except Hartman problem with six variables. For all problems, we observed that some desi gn of experiments (DOEs) yielded very poor correlations. Analysis of the corresponding experi ments revealed two scenarios: (1) Sometimes, the DOE was not satisfactory and a large portion of the design space was unsampled. This led to poor performance of all the surrogate models. (2 ) For a few poor correlation cases, despite a good DOE, one or more surrogates failed to capture the correct trends. The PWS model and the best PRESS model were able to correct the anoma lies in these scenarios to some extent. The tail of the boxplot corresponding to the PWS model and the best PRESS model was shorter compared to the worst surrogate (Figure 4-6). Table 4-11 shows the mean and the coefficient of variation for differe nt test problems to assess the performance of different surrogate mo dels. It is clear that the average correlation

PAGE 121

121 coefficient for the PWS model was either the best or the second best for all the test problems. Also, the low coefficient of variation undersco red the relatively low sensitivity of the PWS model with respect to the choice of design of experiments. Performance of the best PRESS model was also comparable to the best su rrogate model for each problem. The overall performance of all three su rrogates was comparable. It can also be seen from Table 4-11 that the PWS model outperformed the best PRESS model fo r all cases, but radial turbine design problem. The mean of the correlation coefficient for di fferent problems is reported based on one set of 1000 DOEs. Since the distributio n of mean is approximately Gaussian, the coefficient of variation of the mean (of correlati on coefficient) can be given as 0.5()DOECOVN, where COV is the coefficient of variation (of corr elation coefficient) based on 1000 DOEs (1000DOEN ), leading to a coefficient of vari ation of the mean that is about 30 times lower than the native coefficient of variation. The number of digits in the table is based on this estimate of the coefficient of variation. We verified the results by performing the bootstrap analysis (Hesterberg et al., 2005) by considering 1000 samples of 1000 DOEs each. The distribution of the mean for one representative case (mean corr elation coefficient predicted using kriging approximation for Branin-Hoo function) is plotted in Figure 4-7. The mean correlation coefficient evidently follows the Gaussian distribution as the data falls on th e straight line depicting the normal distribution. Similar results were observed for all other ca ses. Bootstrapping also confirmed that the coefficient of variation of the mean valu e followed the simple expression given above. RMS errors Next, we compared different surrogate models based on the RMS errors in predictions at test points. Figure 4-8 shows the results on different test problems. While no single surrogate

PAGE 122

122 performed the best on all problems, individua l surrogate models approximated different problems better than others. The PRESS-based we ighted surrogate (PWS) model and the best PRESS model performed reasonably for all test problems. The resu lts indicate that if we know that a particular surrogate perfor ms the best for a given problem, it is best to use that surrogate model for approximation. However, for most pr oblems, the best surrogate model is not known a priori or the choice of best surrogate may get affected by choice of DOE (Table 4-9). Then an ensemble of surrogates ( via the PWS or the best PRESS model) may prove beneficial to protect against the worst surrogate model. The mean and coefficient of variation of RMS errors using different surrogates on different problems are tabulated in Table 4-12. Note that, kriging most often had the lowest RMS errors compared to other surrogates. When the RMS erro rs due to all surrogates were comparable, as was the case for Branin-Hoo and Camelback functions, the predictions using the PWS model were more accurate (lower RMS error) than a ny individual surrogate. However, when one or more surrogate models were much more inaccurate than others, the predictions using the PWS model were only reasonably close to the accurate surrogate model(s). We also observed that both the best PRESS model and the PWS model were able to significantly reduce the errors compared to the worst surrogate. This sugge sts that using an ensemble of surrogate models, we can protect against poor choice of a surrogate. The PWS model generally yielded lower RM S errors than the best PRESS model. Relatively poor performance of the PWS model (compared to th e best PRESS model) for sixvariable Hartman problem and the radial turbine problem was attributed to accurate modeling of the response by one surrogate or inaccuracy in the representation of weights (see section on the role of generalized cr oss-validation errors).

PAGE 123

123 Maximum absolute errors Figure 4-9 shows the maximum absolute e rror for 1000 DOEs using different surrogate models on different test problems. As was obser ved for RMS errors, the PWS model and the best PRESS model performed reasonably for all test problems, though individu al surrogate models performed better for different test problems. Numerical quantification of the results is given in Table 4-13. The maximum absolute error obtained using the PWS model and the best PRESS model were comparable to the maximum absolute error obtained us ing the best surrogate model for that test problem. For most cases, the PWS model also delivered a lower ma ximum absolute error than the best PRESS model. Relatively poor performance of the PWS model for the Goldstein-Price test problem was attributed to the poor performance of one of the surrogate models (RBNN) on the prediction points. The results presented in this section suggest that the strategy of us ing an ensemble of surrogate models potentially yields robust approximation (good correlation, low RMS, and maximum errors) for problems of varying complexities and dimensions, and the results are less sensitive to the choice of DOE. The PWS model ma y have an advantage compared to the best PRESS model. Studying the role of generalized cross-validation errors We observed that the PWS model did not perf orm well for Camelback and Goldstein-Price function where the RBNN model noticeably yiel ded large variations. To investigate the underlying issue, we studied the weights and, hence, the role of PRESS error, which is used to determine the weights. Our initial assumption was that the PRESS error is a good estimate of the actual RMS errors for all surrogate models. To validate this assumption, we computed the ratio of actual RMS errors and PRESS for different su rrogate models over 1000 DOEs. The results are

PAGE 124

124 summarized in Figure 4-10, and the corresponding mean and standard deviation (based on 1000 DOEs) are given in Table 4-14. It is observed from the results that PRESS (g eneralized cross-validation error), on average, underestimated actual RMS errors for polynomial response surface approximation but overestimated RMS error in kriging and RBNN. Fo r Goldstein-Price, the mean was skewed for RBNN because of three simulations, which gave a very large ratio of RMS error and PRESS (the median is 0.44). The implication of this undere stimation/overestimation was that the weights associated with polynomial response surface model were overestimated, and weights for kriging and radial basis neural network were underestimat ed. Noticeably, there were a large number of instances for Camelback and Goldstein-Price functions where PRESS underestimated the RMS errors for RBNN (see long tail of points, with RM S error to PRESS ratio gr eater than two). This indicated wrong emphasis of RBNN model for thes e models compared to other more accurate surrogates; hence, a relatively poor performance of the parametric weighted surrogate model was observed. This anomaly in accurately representing the actual errors or developing measures to correct the weight to account for the underestim ation/overestimation is a scope of future research. Effect of sampling density Often an initial DOE identifies regions of inte rest, and then the DOE is refined in these regions. At other times, the initial DOE is f ound insufficient for good approximation, so that it must be refined. The refinement of the DOE can be carried out in two ways: (i) increasing the number of points in the original design space, and (ii) reducing the size of design space. The refinement of the DOE may change the identity of the best surrogate model, so that even if a single surrogate model is used, it may be useful to switch surrogates. In addition, the choice between the best PRESS and the PWS model ma y depend on sampling density. To investigate

PAGE 125

125 these issues, we study two representative pr oblems: Branin-Hoo function and Camelback function, which were not adequately approxi mated by different surrogate models (low correlations). Both problems are now modeled with increased number of points (31 points were used for Branin-Hoo function and Camelback functi on was modeled with 40 poi nts) such that all regions were adequately modeled. We used a cubic polynomial to mo del Branin-Hoo function and a quartic polynomial to model Camelback functi on. All other parameters were kept the same. The results obtained for the increased number of points were compared with the previously presented results for smaller number of points in Table 4-15 and Table 4-16. As can be seen from Table 4-15 and Table 4-16, the predictions improved with increasing number of points. The improvement in kriging (which models the local behavior better) was significantly more than the othe r two surrogates. The performan ce of both the best PRESS model and the PWS model was comparable to the best individual surrogate model and significantly better than the worst surrogate model. For the problems considered here, the best PRESS model outperformed the PWS model. This result is ex pected because of much improved modeling of the objective function by one or more of the surrogates. The results corroborate our earlier findings: (1) if we a priori know the best surrogate model for a given problem, that surrogate should be used for approximation; and (2) ensemb le of surrogates protects us against the worst surrogate model. These results were evident irre spective of the number of points used to model the response. However, we also note that even if a single surrogate is used, its choice depends on sampling density. For Branin-Hoo function wi th 12 points, the polynomial response surface approximation had the best correlation and lowe st maximum error. Its mean RMS error is slightly higher than kriging but standard deviation is much better. With 31 points, kriging is the best surrogate.

PAGE 126

126 Sensitivity analysis of PWS parameters To study the effect of variation in the parameters and (see Equation (4.4)), we constructed the PWS model for the GoldsteinPrice function with different values of and This problem was selected because of significan t differences in the performance of different surrogate models. All other parameters were kept the same. The comparison of correlation coefficient and errors based on 1000 DOE samples is given in Table 4-17. To eliminate the skewness of the data due to a few spurious results, we show median, first, and third quartile data for all cases. When we increased keeping constant, we observed modest decrease in errors. This was expected because by increasing we reduced the importance of individual surrogates and assigned more importance to the averaging, wh ich helped in reduci ng the effect of bad surrogates. However, it is notew orthy that a few designs, which gave poor performance of one surrogate, deteriorated the performance of the PWS model for respective cases. By increasing keeping constant, we emphasized the importance of individual surrogates more than the averaging. For this case, the ove rall effect was the deterioration of correlation and increase in errors. The effect of variation in on the results was more pronounced than the effect of variation in The above results indicated that the parameters and should be chosen according to the performance of the individual surrogates. Conclusions In this chapter, we presented a case to simultaneously use multiple surrogates (1) to identify regions of high uncertainty in predic tions, and (2) to devel op a robust approximation strategy. The main findings can be summarized as follows. Regions of high standard deviation in the pr edicted response of th e surrogates correspond to high errors in the predictions of the surr ogates. However, we caution the user not to

PAGE 127

127 interpret the regions of low st andard deviation (uncertainty) as regions of low error. The standard deviation of responses usually underestimates the error. Simultaneous use of multiple surrogate models can improve robustness of the predictions by reducing the impact of a poor surrogate mode l (which may be an artifact of choice of design of experiment or the i nherent unsuitability of the su rrogate to the problem). Two suggested ways of using an ensemble of surr ogates are to construct PRESS-based weighted average surrogate model or to select the su rrogate model that has the least PRESS error among all considered surrogate models. The proposed PRESS error based selection of multiple surrogates performed at par with the best individual surrogate model for all test problems, and showed relatively low sensitivity to the choice of DOE, sampling density, and dimensionality of the problem. The PRESS-based weighted surrogate model yi elded best correlati on between actual and predicted response for diffe rent test problems. While different surrogates performed the be st for reducing error (RMS and maximum absolute error) in different test problems the performance of surrogate models was influenced by the selection of DOE. Ensemble of surrogates ( via the PRESS-based weighted surrogate (PWS) and the best PRESS model) performed at par with the corresponding best surrogate model for all test problems. The PWS model in general outperformed the surrogate model with best PRESS error. It was also observed that PRESS in gene ral underestimated the actual RMS error for polynomial response surface approximation and ove restimated the actual RMS error for kriging and radial basis neur al network. The correction in weights to account for the underestimation/overestimation of RMS errors by PRESS is a scope of future research. Though the best individual surrogate can change with increase in sampling density, the ensemble of surrogates performs comp arably with the best surrogate. We conclude that for most practical probl ems, where the best surrogate is not known beforehand, use of an ensemble of surroga tes may prove a robust approximation method.

PAGE 128

128 Figure 4-1. Boxplots of wei ghts for 1000 DOE instances (Camelback function). W-PRS, WKRG and W-RBNN are weights associat ed with polynomial response surface approximation, kriging, and radial basi s neural network models, respectively.

PAGE 129

129 A B C Figure 4-2. Contour plots of two variable test functions. A) Branin-Hoo. B) Camelback. C) Goldstein-Price.

PAGE 130

130 Figure 4-3. Boxplots of function values of different analytical functions. A B C D Figure 4-4. Contour plots of erro rs and standard deviation of pr edictions considering PRS, KRG, and RBNN surrogate models for Branin-Hoo function.

PAGE 131

131 A B Figure 4-5. Boxplots of standard deviation of responses and ac tual errors in prediction of different surrogates at corresponding locat ions (based on 1000 DOEs using BraninHoo function). sresp is standard deviation of responses, e_PRS, e_KRG, e_RBNN are actual errors in PRS, KRG and RBNN. A) Maximum standard deviation of responses and corresponding actual errors in approxi mations. B) Minimum standard deviation of responses and corresponding act ual errors in approximations.

PAGE 132

132 A B C D E F Figure 4-6. Correlations between actual and predicted response for different test problems. 1000 instances of DOEs were considered for all test problems, except Hartman-6 and radial turbine design problem for which we show results based on 100 samples. The center line of each boxplot shows the median value and the box encompasses the 25thand 75th-percentile of the data. The leader lines ( horizontal lines) are plotted at a distance of 1.5 times the inter-quartile range in each direction or the limit of the data (if the limit of the data falls within 1.5 times the inter-quartile range). A) Branin-Hoo. B) Camelback. C) Goldstein-Price. D) Hartma n-3. E) Hartman-6. F) Radial turbine design problem.

PAGE 133

133 Figure 4-7. Normal distribution approximation of the sample mean correlation coefficient data obtained using 1000 bootstrap samples (kriging, Branin-Hoo function).

PAGE 134

134 A B C D E F Figure 4-8. RMS errors in design space for diffe rent surrogate models. 1000 instances of DOEs were considered for all test problems, except Hartman-6 and radial turbine design problem for which we show results ba sed on 100 samples. A) Branin-Hoo. B) Camelback. C) Goldstein-Price. D) Hartma n-3. E) Hartman-6. F) Radial turbine design problem.

PAGE 135

135 A B C D E F Figure 4-9. Maximum absolute error in de sign space for different surrogate models. 1000 instances of DOEs were considered for all test problems, except Hartman-6 and radial turbine design problem for which we show results based on 100 samples. A) BraninHoo. B) Camelback. C) Goldstein-Price. D) Hartman-3. E) Hartman-6. F) Radial turbine design problem.

PAGE 136

136 A B C D E F Figure 4-10. Boxplots of ratio of RMS error and PRESS over 1000 DOEs for different problems. *For Branin-Hoo function, one simulation yielded RMSE/PRESS ratio ~O(20) for PRS, **For Goldstein-Price problem, three si mulations yielded high ratio of RMS error and PRESS error (20-80) for RBNN. A) Branin-Hoo*. B) Camelback. C) Goldstein-Price**. D) Hartman-3. E) Hartman-6. F) Radial turbine design problem.

PAGE 137

137 Table 4-1. Parameters used in Hart man function with three variables. i aij ci pij 1 3.0 10.0 30.0 1.0 0.3689 0.11700.2673 2 0.1 10.0 35.0 1.2 0.4699 0.43870.7470 3 3.0 10.0 30.0 3.0 0.1091 0.87320.5547 4 0.1 10.0 35.0 3.2 0.03815 0.57430.8828 Table 4-2. Parameters used in Ha rtman function with six variables. i aij ci 1 10.0 3.0 17.0 3.5 1.7 8.0 1.0 2 0.05 10.0 17.0 0.1 8.0 14.01.2 3 3.0 3.5 1.7 10.0 17.0 8.0 3.0 4 17.0 8.0 0.05 10.0 0.1 14.03.2 i pij 1 0.1312 0.1696 0.5569 0.01240.82830.5886 2 0.2329 0.4135 0.8307 0.37360.10040.9991 3 0.2348 0.1451 0.3522 0.28830.30470.6650 4 0.4047 0.8828 0.8732 0.57430.10910.0381 Table 4-3. Mean, coefficient of variation (COV), and median of different analytical functions. Branin-Hoo Camelback Goldstein-PriceHartman-3Hartman-6 Mean 49.5 19.1 49179 -0.8 -0.06 COV 1.0 1.8 3.9 -1.2 -5.1 Median 36.7 11.8 8114 -0.5 -0.04 Table 4-4. Range of variables fo r radial turbine design problem. Variable Description Minimum Maximum RPM Rotational speed 100000 150000 Reaction Percentage of stage pr essure drop across rotor 0.45 0.57 U/Cisen Isentropic velocity ratio 0.56 0.63 Tip flow Ratio of flow parameter to a choked flow parameter0.30 0.53 Dhex% Exit hub diameter as a % of inlet diameter 0.1 0.4 AN2Frac Used to calculate annulus area (stress indicator) 0.68 0.85

PAGE 138

138 Table 4-5. Numerical setup for the test problems. BraninHoo CamelbackGoldsteinPrice Hartman3 Hartman6 Radial turbine # of variables 2 2 2 3 6 6 # of design points 12 20 25 40 150 56 # of test points* 21 21 21 21 5 254 Order of polynomial 2 3 3 3 3 2 Spread 0.2 0.3 0.5 0.4 0.5 1 goal 10 10 2500 0.05 0.05 0.01 *Total number of points is number of points along a direction raised to th e power of the number of variables (e.g., 213 for Hartman problem with three variab les). For the radial turbine problem, 254 indicate total number of test points. spread controls the decay rate of radial basis function and goal is the desired level of accur acy of the RBNN model on training points. Table 4-6. Median, 1st, and 3rd quartile of the maximum standard deviation and actual errors in predictions of different surrogates at the location corresponding to maximum standard deviation over 1000 DOEs for different test problems. Branin -Hoo Camel back GoldsteinPrice Hartman -3 Hartman -6 Radial turbine Median (Max std dev. of response) 105 53 2.7e5 2.5 2.2 0.020 Median (Actual error in PRS) 114 61 2.9e5 3.9 3.9 0.0016 Median (Actual error in KRG) 42 111 3.6e5 0.7 0.2 0.004 Median (Actual error in RBNN) 110 95 2.5e5 0.6 0.1 0.033 1st/3rd quartile (Max std dev. of response) 77/ 134 38/ 85 1.0e5/ 4.2e5 2.0/ 3.2 1.9/ 2.7 0.017/ 0.022 1st/3rd quartile (Actual error in PRS) 78/ 158 32/ 92 1.0e5/ 4.7e5 2.8/ 5.2 3.3/ 4.9 0.0008/ 0.0027 1st/3rd quartile (Actual error in KRG) 21/ 71 66/ 131 1.4e5/ 6.5e5 0.3/ 1.4 0.1/ 0.4 0.002/ 0.006 1st/3rd quartile (Actual error in RBNN) 76/ 132 42/ 161 1.9e5/ 5.7e5 0.3/ 1.1 0.1/ 0.3 0.028/ 0.038

PAGE 139

139 Table 4-7. Median, 1st, and 3rd quartile of the minimum standard deviation and actual errors in predictions of different surrogates at the location corresponding to minimum standard deviation over 1000 DOEs for different test problems. Branin -Hoo Camel back GoldsteinPrice Hartman -3 Hartman -6 Radial turbine Median (Min std dev. of response) 0.41 0.26 492 0.0019 0.0011 2.1e-4 Median (Actual error in PRS) 4.7 1.7 1630 0.063 0.06 1.0-3 Median (Actual error in KRG) 4.6 1.7 1513 0.062 0.07 1.1e-3 Median (Actual error in RBNN) 4.7 1.7 1510 0.064 0.07 1.0e-3 1st/3rd quartile (Min std dev. of response) 0.25/ 0.67 0.15/ 0.40 280/ 770 0.0012/0 .0029 0.0007/ 0.0017 1.5e-4/ 3.2e-4 1st/3rd quartile (Actual error in PRS) 1.7/ 9.8 0.7/ 4.4 697/ 3854 0.025/ 0.143 0.03/ 0.11 5.02e-4/ 1.9e-3 1st/3rd quartile (Actual error in KRG) 1.8/ 9.9 0.6/ 4.2 525/ 3842 0.025/ 0.143 0.03/ 0.11 5.02e-4/ 1.9e-3 1st/3rd quartile (Actual error in RBNN) 1.8/ 9.7 0.6/ 4.2 535/ 3871 0.024/ 0.142 0.03/ 0.11 5.02e-4/ 2.1e-3 Table 4-8. Median, 1st, and 3rd quartile of the maximum standard deviation and maximum actual errors in predictions of different su rrogates over 1000 DOEs for different test problems. (Number after Branin-Hoo and Camelback functions indicates the number of data points used to model the function). BraninHoo12 BraninHoo31 Camel back20 Camel back40 Goldstein -Price Hartman -3 Hartman6 Radial turbine Median (Min std dev. of response) 105 88 53 42 2.7E+05 2.5 2.2 0.020 Median (Actual error in PRS) 175 32 122 37 4.5E+05 4.1 4.0 0.087 Median (Actual error in KRG) 232 25 135 37 5.3E+05 1.9 1.9 0.087 Median (Actual error in RBNN) 268 173 135 80 3.9E+05 2.3 1.8 0.082 1st/3rd quartile (Min std dev. of response) 77/ 134 61/ 116 38/ 85 31/ 58 1.0e5/ 4.2e5 2.0/ 3.2 1.9/ 2.7 0.017/ 0.022 1st/3rd quartile (Actual error in PRS) 150/ 209 27/ 39 106/ 127 31/ 44 3.7e5/ 5.5e5 3.2/ 5.3 3.4/ 4.9 0.082/ 0.093 1st/3rd quartile (Actual error in KRG) 146/ 298 16/ 38 123/ 145 26/ 59 3.9e5/ 7.5e5 1.7/ 2.2 1.7/ 2.0 0.082/ 0.093 1st/3rd quartile (Actual error in RBNN) 214/ 294 119/ 233 100/ 181 61/ 107 2.7e5/ 6.7e5 2.0/ 2.6 1.7/ 1.9 0.077/ 0.087

PAGE 140

140 Table 4-9. Effect of design of experiment: Numb er of cases when an individual surrogate model yielded the least PRESS error (based on 1000 DOEs). PRS KRG RBNN Branin-Hoo 715 131 154 Camelback 880 59 61 Goldstein-Price 659 143 198 Hartman-3 229 511 260 Hartman-6 400 119 481 Radial turbine 1000 0 0 Table 4-10. Opportuniti es of improvement via PWS: Number of poi nts when individual surrogates yield errors of opposite signs (based on 1000 DOEs). Mean COV Total # of points Branin-Hoo 218 0.23 441 Camelback 227 0.26 441 Goldstein-Price 278 0.17 441 Hartman-3 5216 0.13 9261 Hartman-6 9710 0.09 15625 Radial turbine 37 0.41 254 Table 4-11. Mean and coefficient of variation (in parenthesis) of correlation coefficient between actual and predicted response (based on 1000 DOEs) for different surrogate models. PRS KRG RBNN Best PRESS PWS Branin-Hoo 0.79 (0.08) 0.76 (0.25) 0.75 (0.18) 0.78 (0.12) 0.83 (0.11) Camelback 0.69 (0.13) 0.69 (0.19) 0.61 (0.50) 0.69 (0.14) 0.73 (0.20) Goldstein-Price 0.88 (0.041) 0.87 (0.11) 0.86 (0.28) 0.88 (0.083) 0.91 (0.12) Hartman-3 0.80 (0.073) 0.92 (0.052) 0.89 (0.059) 0.89 (0.074) 0.92 (0.028) Hartman-6 0.56 (0.084) 0.73 (0.12) 0.81 (0.023) 0.71 (0.17) 0.77 (0.042) Radial turbine 0.9951 (0.0015) 0.9814 (0.0088) 0.8495 (0.062) 0.9951 (0.0015) 0.9946 (0.0013)

PAGE 141

141 Table 4-12. Mean and coefficient of variation (in parenthesis) of RMS errors in design space (based on 1000 instances of DOEs) for different surrogate models. PRS KRG RBNN Best PRESS PWS Branin-Hoo 34.4 (0.15) 32.3 (0.38) 37.9 (1.70) 34.1 (0.20) 29.1 (0.46) Camelback 22.0 (0.17) 21.0 (0.16) 38.0 (2.27) 21.8 (0.17) 20.4 (0.30) Goldstein-Price 6.72e4 (0.17) 6.31e4 (0.33) 1.18e5 (3.52) 7.32e4 (3.32) 6.28e4 (1.66) Hartman-3 0.64 (0.20) 0.37 (0.28) 0.47 (0.55) 0.44 (0.34) 0.38 (0.16) Hartman-6 0.45 (0.14) 0.25 (0.13) 0.22 (0.053) 0.31 (0.34) 0.25 (0.074) Radial turbine 0.0023 (0.15) 0.0043 (0.23) 0.0120 (0.18) 0.0023 (0.15) 0.0025 (0.13) Table 4-13. Mean and coefficient of variation (in parenthesis) of maximum absolute error in design space (based on 1000 instances of DOEs). PRS KRG RBNN Best PRESS PWS Branin-Hoo 182 (0.25) 222 (0.41) 258 (0.75) 199 (0.29) 202 (0.35) Camelback 127 (0.24) 133 (0.12) 236 (2.41) 126 (0.23) 128 (0.33) Goldstein-Price 4.74e5 (0.28) 5.63e5 (0.37) 1.08e6 (3.55) 5.56e5 (2.96) 5.31e5 (1.64) Hartman-3 4.40 (0.38) 1.94 (0.21) 2.47 (0.86) 2.59 (0.54) 2.05 (0.28) Hartman-6 4.24 (0.29) 1.89 (0.11) 1.84 (0.092) 2.62 (0.43) 1.90 (0.17) Radial turbine 0.0120 (0.28) 0.0196 (0.21) 0.0346 (0.22) 0.0120 (0.28) 0.0118 (0.20)

PAGE 142

142 Table 4-14. Mean and coefficient of variation of the ratio of RMS error and PRESS over 1000 DOEs. PRS KRG RBNN Branin-Hoo* 1.02 (0.57) 0.70 (0.60)0.76 (1.07) Camelback 1.26 (0.34) 0.80 (0.31)0.77 (0.98) Goldstein-Price* 1.38 (0.75) 1.04 (0.83)0.93 (3.33) Hartman-3 1.31 (0.31) 0.84 (0.33)0.92 (0.32) Hartman-6 1.95 (0.12) 1.00 (0.17)0.99 (0.14) Radial turbine 1.39 (0.25) 1.02 (0.21)0.97 (0.14) *Branin-Hoo and Goldstein-Price functions had significant difference in the mean and median values of RBNN. Table 4-15. The impact of sampling density (m odeling high gradients) in approximation of Branin-Hoo function (Branin-Hoo12 is the cas e when we used 12 points for modeling response and Branin-Hoo31 is the case when we used 31 points to approximate function). We used 1000 DOEs samples to get mean and COV. PRS KRG RBNN Best PRESS PWS Branin-Hoo12 0.79 (0.08) 0.76 (0.24) 0.75 (0.18) 0.78 (0.12) 0.83 (0.11) Correlations Branin-Hoo31 0.988 (0.003) 0.999 (0.001) 0.93 (0.076) 0.998 (0.003) 0.996 (0.014) Branin-Hoo12 34 (0.15) 32 (0.38) 38 (1.70) 34 (0.20) 29 (0.46) RMS Error Branin-Hoo31 7.9 (0.11) 2.4 (0.53) 20.3 (1.27) 2.7 (0.64) 4.3 (0.54) Branin-Hoo12 182 (0.25) 222 (0.41) 258 (0.75) 199 (0.29) 202 (0.35) Max Error Branin-Hoo31 34 (0.31) 30 (0.63) 183 (0.80) 31 (0.60) 41 (0.53)

PAGE 143

143 Table 4-16. The impact of sampling density (m odeling high gradients) in approximation of Camelback function (Camelback20 is the case when we used 20 points for modeling response and Camelback40 is the case when we used 40 points to approximate function). We used 1000 DOEs to get mean and COV. PRS KRG RBNN Best PRESS PWS Camelback20 0.69 (0.13) 0.69 (0.19) 0.61 (0.50) 0.69 (0.14) 0.73 (0.20) Correlations Camelback40 0.97 (0.010) 0.98 (0.039) 0.92 (0.080) 0.98 (0.015) 0.98 (0.010) Camelback20 22 (0.17) 21 (0.16) 38 (2.27) 22 (0.17) 20 (0.30) RMS Error Camelback40 7.3 (0.15) 4.9 (0.74) 12 (0.35) 5.3 (0.42) 5.7 (0.27) Camelback20 127 (0.24) 133 (0.12) 236 (2.41) 126 (0.23) 128 (0.33) Max Error Camelback40 39 (0.34) 48 (0.64) 90 (0.52) 40 (0.47) 43 (0.39) Table 4-17. Effect of parameters in parametric su rrogate filter used for PWS. Three settings of parameters and were selected. We show median, 1st, and 3rd quartile data based on 1000 DOEs for Goldstein-Price problem. PRS KRG RBNN Best PRESS PWS, =0.05, =-1 PWS, =0.5, =-1 PWS, =0.05, =-5 Median 0.89 0.90 0.94 0.89 0.93 0.94 0.93 1st quartile 0.86 0.84 0.88 0.86 0.90 0.91 0.91 Correlations 3rd quartile 0.90 0.94 0.97 0.91 0.95 0.96 0.94 Median 6.45e4 5.84e4 4.57e4 6.21e4 5.15e4 4.88e4 5.20e4 1st quartile 5.85e4 4.74e4 3.56e4 5.44e4 4.31e4 4.11e4 4.55e4 RMS Error 3rd quartile 7.27e4 7.57e4 6.98e4 7.21e4 6.37e4 6.14e4 6.29e4 Median 4.52e5 5.32e5 3.88e5 4.54e5 4.35e5 4.32e5 4.52e5 1st quartile 3.74e5 3.91e5 2.65e5 3.56e5 3.38e5 3.33e5 3.47e5 Max Error 3rd quartile 5.52e5 7.49e5 6.68e5 5.87e5 5.75e5 5.73e5 5.82e5

PAGE 144

144 CHAPTER 5 ACCURACY OF ERROR ESTIMATES FOR SURROGATE APPROXIMATION OF NOISEFREE FUNCTIONS Introduction Surrogate based design and optimization proces s is very attractive for computationally expensive problems. We construc t surrogate models to evaluate the performance of designs using a limited amount of data, and couple surrogate models with optimization methods. Different error measures are used to assess the quality of such surrogate models (Queipo et al., 2005). In addition, these measures are used fo r determining the sampling locations in many surrogate-based optimization methods like EGO (J ones et al., 1998), and for adaptive sampling. The success of these approaches depends on the ab ility of these error estimation measures to reflect the true errors. Error measures can be broadly classified as model based or model independent. Model based error measures are typically based on some statistical assumptions. For example, prediction variance for polynomial response surfa ce approximation is developed assuming that the data used to construct the surrogate contain no ise that is normally distributed with zero mean and same variance 2 in all data and no correlation. When these statistical assumptions are not satisfied, as is usually the case, the accuracy of the resulting error esti mates is questionable. Model-independent error measures are not based on any statistical assumption. However, they may have high computational cost. Error measures can further be characterized as global or local error estimates. Global error measures provide a single number for the entire design space, for example, process variance (model based) for kriging or PRESS (model independent) estimate for polynomial response surface approximation. Local or pointwise error measur es estimate error at any point in the entire design space. Examples of model based pointwi se error measures are standard error for

PAGE 145

145 polynomial response surfaces, and mean square erro rs for kriging, etc. Recently, Goel et al. (2006b) proposed using standard deviation of responses as a m odel independent pointwise error measure. The primary goal of this chapter is to appraise the performance of different error measures with the help of a variety of test proble ms in context of simulation based surrogate approximation. We account for the influence of experimental designs on error estimates by considering a large number of samples obt ained using Latin hype rcube sampling and Doptimality criterion. Noting the difficulties in identifying regions of high uncertainty using a single error estimation measure, we explore th e idea of simultaneously using multiple error measures that is motivated from our previous work on ensemble of surrogates (Goel et al., 2006b). Here, we examine the ideas of: (1) combini ng different error measures to better estimate actual errors, (2) simultaneously using multiple error measures to increase the probability of identifying regions of high uncertainty, and (3) identifying the appropriate error measure for a given problem. The chapter is organized as follows. We briefly describe relevant error estimation measures in next section. Description of di fferent test problems and numerical procedures followed are delineated next. Then, we present the results of appraisal of different error estimators for different test problems. Subsequent ly, we demonstrate the concept of simultaneous application of multiple error estimators. Fina lly, we recapitulate the major conclusions. Error Estimation Measures Error in approximation is define d as the difference between ac tual and predicted response. We can compute the error at each design point, but the quality of diffe rent approximations is meaningfully compared by looking at the global pr ediction metrics, for ex ample, average error, maximum error, or root mean square error in th e entire design space. The choice of appropriate

PAGE 146

146 measure depends on the application. Maximum e rror may be more important for design of critical components but for most applications ro ot mean squared error in the entire design space serves as a good measure of accuracy of approximation. The main issue in computation of errors is that the actual response at any point is not known (that is the primary reason of developi ng surrogates). Consequently, we cannot assess actual errors but we use different error measures to estimate either pointwise bounds or root mean square of actual errors (local error measures) or to assess the root mean square error in the entire design space (global error measures). Ge nerally, root mean square error estimation measures are more popular and more practical than the bounds on the errors. These error measures are usually based on certain assumptions on the actual functions, for example noise in data or statistical distribution of coe fficient vector for polynomial response surface approximations, Gaussian process for kriging, et c. Now, error measures that do not make any assumption on the data or surrogate model are also being developed. In this section, we briefly describe the fo rmulation of relevant global and local error estimation measures for popular surrogate models (based on assumptions on data or surrogate model) as well as some model independent error measures. Error Measures for Polynomial Response Surface Approximation Polynomial response surface approximation (PRS, Myers and Montgomery 1995) is the most popular surrogate model among pr actitioners. The observed response y (x) of a function at point x is represented as a linear comb ination of basis function vector ()fx (mostly monomials are selected as basis functions), and true coefficient vector and error Error in the approximation is assumed to be uncorrelated, and normally distributed with zero mean and 2 variance. That is,

PAGE 147

147 2();()0,().x fTyEV (5.1) The vector ()fx has two components: (1)f(x) is the vector of basis functions used in the polynomial response surface model, and (2)f(x) is the vector of additi onal basis functions that are missing in the linear regression mode l. Similarly, the coefficient vector can be written as a combination of vectors (1) and (2) that represent the true coeffici ents associated with the basis function vectors (1)f(x) and (2)f(x), respectively. Then, (1)(1)(2)(2)()(())(()).xfx fx TTy (5.2) The polynomial response surface appr oximation of the observed response y (x) is, (1) ()(),xxjj jybf (5.3) where jb is the estimated value of the coefficient associated with the jth basis function (1)() xjf. Then, the error in approximation at ith design point is given as ()xiieyy The coefficient vector b can be obtained by mini mizing a loss function L defined as 1,N p i isLe (5.4) where p is the order of loss function, and Ns is the number of sampled design points. We use the conventional quadratic loss function ( p = 2) that minimizes the variance of the error in approximation, unless specified otherwise. This loss function is most popular because we can obtain coefficient vector b from the solution of a linear sy stem of algebraic equations, (1)(1)1(1)(),b y TTXXX (5.5) where (1) X is the Gramian design matrix c onstructed using basis functions (1)()fx, and y is the vector of responses at Ns design points. The estimated varian ce of noise (or root mean square error) in approximation is given by,

PAGE 148

148 12 ()() ()yyyyT asNN (5.6) where 1N is the number of coefficients in polynomial response surface model. For more details on polynomial response surface approximation, we refer the reader to the texts by Myers and Montgomery (1995), and Khuri and Cornell (199 6). The square root of estimated variance of noise is often used as an estimate of root m ean square of actual erro r in approximation in the entire design space (global error measure). The absolute error in PRS approximation at a point x is given as () ()()x xxprse yy Two models to predict error in PRS a pproximation are explained as follows. Estimated standard error (ESE, Myers and Montgomery, 1995) When the response surface model (1)()fx and the true function ()fx are the same (Equation (5.2)), the standard error ()xese is used to characterize approximation error due to noise. This root mean square error estimate is given as, 1 2(1)(1)T(1)(1) ()()()(),xxfxfxT aeseVaryXX (5.7) where 2a is the estimated variance of noise, (1)()fx is the vector of ba sis functions used in approximation, and (1) X is the matrix of linear equations constructed using (1)()fx. Root mean square bias error (RMSBE, Goel et al., 2006c, Appendix A) When noise is small (0 Equation (5.2)), the bias error or modeling error ()bex appears due to approximating a higher order polynomial by a lo wer order function. The bias error is given by, (1)()()(()),bxfx fxTT be (5.8)

PAGE 149

149 where ()fx is the vector of basis functions in assumed true function, and is the coefficient vector associated with ()fx. Since, the coefficient vector is unknown, we estimate the root mean square of the bias error ()rms bex by assuming appropriate stat istical distribution of the coefficient vector as follows. 2()(),xxrms bbeEe (5.9) 2(1)(1)(1)(1) (2)(2)(1)(1) (1)(1)(2)(2)(2)(2)(2)(2)()(())(()) (())(()) (())(())(())(()),bb b bxfx fx fx fx fx fxfx fxT T b T T TTTTEeE E EE (5.10) 2 2 (2)(1)(2)(1)*(1)(2) 1, 3ijjijpjpi peNEbVV (5.11) where is the norm of the solution (of system of linear equations y X ) that satisfies the data y with least deviation, is a constant used to define the distribution of the coefficient vector Ne is the number of null eigenvectors of matrix of li near equations X constructed using ()fx, Vi represents the ith null eigenvector of X and (())yEg is the expected value of () g with respect to random variable y Note that, the two pointwise error estimation measures, described above, characterize pointwise root mean square actual error. Error Measures for Kriging Kriging (KRG) is named after the pioneering wo rk of D.G. Krige (a South African mining engineer) and is formally developed by Mather on in 1963. Kriging estimates the value of an objective function y (x) at design point x as the sum of a polynomial trend model ()xjj jf, and

PAGE 150

150 a systematic departure Z (x) representing low (large scale) and high frequency (small scale) variations around the trend model, ()()()().xxxxjj jyyfZ (5.12) The systematic departure components are assu med to be correlated as a function of distance between the locations under consideration. The Gaussian function is the most commonly used correlation function. Th e Gaussian correlation between point x and the ith design point x(i) is given below. ()2 () 1exp. () (),(),xx i i jjj jvNr xx ZZ (5.13) Martin and Simpson (2005) compared the maximum likelihood approach and a crossvalidation based approach to estimate kriging parameters ,jj and found that the maximum likelihood approach to estimate kriging parameters was the best. So we adopt the maximum likelihood approach to estim ate kriging parameters ,jj The predicted response at point x is given as follows. 1(1)1(1)(1)1(1)1(1)1 ()(())(()())(),xrx y rxfx y TTTTTyRXRXRXXR (5.14) where ()rx is the vector of correlations between point x and design points (whose components are given by Equation (5.13)), R is the matrix of correlations among design points, y is the vector of predicted responses at design points, (1) X is the Gramian design matrix constructed using basis functions in trend m odel at design points, and (1)()fx is the vector of basis functions in trend model at point x. Note that, kriging is an interpolati ng function that reproduces the data but may yield errors at locations not used to construct the surrogate model. The estimated process variance associated with the kriging approximation is given as,

PAGE 151

151 2(1)*1(1)*1 ()(),y y TsXRX N (5.15) where is the approximation of the coefficient vector in Equation (5.12), and is given as *(1)1(1)1(1)1(). y TTXRXXR Process variance should be low for a good approximation. We explore if this error measure can be used as a gl obal estimate of root mean square of actual error in the entire design space. The pointwise estim ate of actual error in kriging approximation is given by computing the mean squared error ()x as follows. 2(1)1(1)11()()()()()(),x1uxuxrxrxTTTXRXR (5.16) (1)1(1)()()(),uxrxfxTXR (5.17) where 2 is the process variance (Equation (5.15)), and 1 is the vector of ones. As was the case for error estimation measures for polynomial response surface approxi mation, mean squared error also is a root mean square error estimate of the actual error. So to compare with the actual error, we use the square root of mean square error (MSE). ()(), xxmsee (5.18) Model Independent Error Estimation Models While the error estimation measures discussed so far are suitable for particular surrogate models, we discuss error measures that can be used with any surrogate model (model independent error measures). Generalized cross-validation error (GCV) Generalized cross-validation e rror (GCV), also known as PR ESS (predicted residual sum of squares) in the polynomial response surface approximation terminology (/ s GCVPRESSN), is estimated by using data at s N points as follows. We fit surrogate

PAGE 152

152 models to 1sN points by leaving one design point at a time, and predict response at the left out point. Then, GCV is defined as, ()2 11 (),i ii isNsGCVyy N (5.19) where ()i i y represents the prediction at ()xi using the surrogate cons tructed using all design points, except (()xi, i y ). We use the square root of generali zed cross-validation error to estimate actual root mean square error in approximation and compare its performa nce with other global error estimation measures. Even though we have shown a global GCV, a local counterpart of the GCV can also be developed. The analytical estimate of GCV is av ailable for polynomial response surface approximation (Myers and Montgomery, 1995). Mitc hell and Morris (1992) and Currin et al. (1998) provided computationally inexpensive expressions to eval uate cross-validation error for kriging using a constant trend model while holdi ng other model parameters constant. Martin (2005) extended this analytical estimate of cro ss-validation error to account for more complex trend functions while keeping the model paramete rs constant. However, here we use a first principle based method to estimate GCV for kriging. Standard deviation of responses (sresp) Goel et al. (2006b) have recently showed that we can estimate uncertainty in predictions for any surrogate model by using an ensemble of surrogates, and that a convex combination of surrogates provides a more robust approximation than the individual surrogate s. The main idea is as follows. Let there be NSM surrogate models such that there are NSM predictions ( i y ) at any test point. We compute the standard deviation of responses as,

PAGE 153

153 2 111 ()(), 1 1 where()(). xxx xxSMSMi SM N i i SMN resp isyyy N yy N (5.20) In this study, we compute sresp using four surrogate models, krig ing, radial basis neural network (Orr, 1996), and two polynomial response surface approximations ; one with quadratic loss function ( p=2 Equation (5.4)), and second with si xth order loss function ( p=6 Equation (5.4)). The different error estimation measures used in this paper are summarized in Table 5-1. PRESS-based weighted average surrogate (PWS) We construct a PRESS-based weighted average surrogate (PWS) using multiple approximations as follows (Goel et al., 2006b). The predicted response () xpwsy at a data point using PWS is given as, ()(), xxSMN pwsii iywy (5.21) where SMN is the number of surrogate models, () xiy is the predicted response by the ith surrogate model, and iw is the weight associated with the ith surrogate model at design point x The weights are determined as follows. * 1*,, ;0,1,i iSMii i avg i i N avgi SMEw ww E w EEN (5.22) where Ei is the square root of generali zed cross-validation error (GCV) for ith surrogate model. We use 0.05and1 in this study. More details about PWS can be found in Chapter 4.

PAGE 154

154 Ensemble of Error Estimation Measures Finally, we discuss the concept of an ensemble of error measures that is inspired by our previous work on ensemble of surrogates (G oel et al., 2006b). Since there are many error estimation measures for different surrogates, we can simultaneously use the information to estimate actual errors. In this context, we explore three approaches as follows. Averaging of multiple error measures A simple way of combining differe nt error measures is to take an arithmetic or geometric average of appropriate error models. To this end, we can combine standard error with root mean square bias error for polynomi al response surface approximation, and mean square error with standard deviation of responses for kriging. We fu rnish details about averaging of error measures in the results section. Identification of best error measure We cannot determine the suitability of any error measure a priori without testing the performance against actual data. However, we can assess the performa nce of various error estimation measures for a given problem by us ing the information obtained from the global cross-validation (GCV). The trends observed in comparing the actual error and the predicted error at the omitted design points are likely to refl ect the accuracy of error estimation measure(s) at other points. This GCV based method to identify the appropriate error measure is computationally very attractive since the inform ation required to make the decision does not require any additional simulations. The deta ils of the procedure are as follows. We fit surrogate models by leaving one desi gn point at a time. We estimate responses, actual and predicted errors, and st andard deviation of responses at the design point not used in constructing the surrogate model. Now, we ch aracterize the performance of different error estimation measures by comparing the actual errors with the predicted errors according to an

PAGE 155

155 appropriate criterion. A few rele vant criteria are the correla tion between predicted and actual error, the ratio of actual and pr edicted root mean square error, and the ratio of maximum actual and predicted error. The best of a ll error measures can be used to characterize the actual errors in entire design space. Simultaneous application of multiple error measures The primary application of error estimation measures is to identify the locations where predictions may have high errors. So the risk asso ciated with the applica tion of error estimation measures is the inability to identify the high error zones. We propose to simultaneously use different error estimation measures to reduce th e probability of missing regions of high errors. This is accomplished by consider ing a region to have high errors if at least one error estimation model identified it as a high error region. Global Prediction Metrics We compare different global and pointwise er ror metrics with actual errors using test problems described in next secti on. The global error measures (root mean square error, process variance, and GCV) for different surrogates are compared with th e respective actual root mean square (RMS) errors in approxi mation. Since our goal is to asse ss the overall cap abilities of different pointwise error measures, we use following global metrics for comparison. Root mean square error Pointwise error estimates in design space are converted into global measures like root mean square error (RMSE), as follows, 21 ,V R MSEedV V (5.23)

PAGE 156

156 where e is the error estimate. In low dimensional sp aces, we used a uniform grid of test points so the root mean square error is nume rically evaluated by implementing quadrature4 for integration (Ueberhuber, 1997) as, 21,jjtestN j teste RMSE N (5.24) where j is the weight used for integration, and Ntest is the number of test points. For high dimensional problems, where using a uniform grid of test points is computationally expensive, we used a quasi-random set of test points so we choose 1j to get root mean square error. We compare the predicted and actual RMSE. Correlation between predicted and actual errors Correlation between pointwise actual absolute errors and predicted errors at the test points ree is given as 1 ,. ()()VeeeedV V ree ee (5.25) where e is the actual error, e is the predicted error, e is the mean of actual error, () e is the variance of the actual error, e is the mean of pr edicted error, and () e is the variance of predicted error. The correlation coefficient is nume rically evaluated from the data for test points by implementing quadrature for integration (Ueberhuber, 1997) as given in Equation (5.26). 4 We used trapezoidal rule for integration.

PAGE 157

157 2 211 1111 ;, 11 (),iiiii V testtest ii V testNN testtest ii N test ieedVeeee VNN eeedvee VN (5.26) where i is the weight associated with trapezoidal in tegration rule when uniform grid is used as test points, and Ntest is the number of test points. As for root mean square estimate for fiveand six-variable problems we choose 1i to estimate correlation using Equation (5.26). For a high quality error prediction measure, the correlation coefficient shoul d be as high as possible. The maximum value of ree is one that defines exact linear re lationship between the predicted and the actual errors. Maximum absolute error Maximum error is computed using the error data at all test points. We compare the magnitude of maximum actual absolute error with maximum predicted error. Test Problems and Testing Procedure Test Problems To test the predictive capabilities of differe nt error estimators, we employ two types of problems ranging from two to six variables: (1) an alytical functions (Dix on-Szeg, 1978) that are often used to test the global optimization methods and (2) engineering probl ems: a radial turbine design problem (Mack et al., 2006) that is a new concept desi gn, and cantilever beam design problem (Wu et al., 2001) that is extensively used as a test problem in reliability analysis. The details of each test problem are given as follows.

PAGE 158

158 Branin-Hoo function 2 1 2 1221 122 1 5.1 5 1 (,) 6101cos()10, 8 4 [5,10],[0,15]. x x fxxxx xx (5.27) Camelback function 4 2222 1 12111222 12 (,) 42.144, 3 [3,3],[2,2]. x f xxxxxxxx xx (5.28) Goldstein-Price function 2 22 1212112122 2 22 12112122 12 (,) 1119431463 3023183212483627, ,[2,2]. fxxxxxxxxxx xxxxxxxx xx (5.29) The graphical representation of these tw o-variable test problems is given in Figure 5-1 illustrating zones of high gradients. Hartman functions 122 11()exp, where,,,[0,1].Niv vN m iijjij ijfcaxp xxxx x x (5.30) Two instances of this problem are considered based on the number of design variables. For the chosen examples, m = 4 Hartman-3 This problem has three variables. Th e choice of parameters is given in Table 5-2 (DixonSzeg, 1978). Hartman-6

PAGE 159

159 This instance of the problem has six design variables and the parameters used in the function are tabulated in Table 5-3 (Dixon-Szeg, 1978). For this case, all design variables were in range [0, 0.5] instead of [0, 1]. Radial turbine design problem As described by Mack et al. ( 2006), this six-variable proble m is a new conceptual design of a compact radial turbine used to drive pumps that deliver liquid hydrogen and liquid oxygen to the combustion chamber of a spacecraft. The objectiv e of the design is to increase the efficiency of a turbine in the liquid rocket expander cycle engine, while keep ing the overall weight of the turbine low. Our interest in this problem is to develop accurate surrogate model(s) of the efficiency as a function of six design variables. The descripti on of design variables and their corresponding ranges are given in Table 5-4 (Mack et al., 2006). The objectives of the design were calculated using a one-dimensional flow an alysis meanline code (Huber, 2001). Mack et al. (2006) identified the appropriate region of in terest by iteratively refining the design space. They also identified the most important va riables using global sensitivity analysis. Cantilever beam design problem (Wu et al., 2001) The cantilever beam design problem (Figure 5-2) that is widely used for reliability analysis is illustrated as follows. The displacement of the beam, Gd, is given as: 2 2 3 00 224 ,y x dF F L GDDD Ewt tw (5.31) where D0 is the allowable initial deflecti on of the beam (taken as 2.25), E is the Youngs modulus, L w, and, t, are the length, width, and thickness of the beam, respectively. The length of the beam is fixed at 100 and the displacement Gd is approximated using different surrogate models. Based on the number of design variables, two instances of this problem are given as follows.

PAGE 160

160 Two variables The design variables are the horizontal loading ( Fx), and the vertical loading ( Fy). In this case, we use the Young modulus E = 29e06 psi, width w = 2.65354, and thickness t = 3.97917. The ranges of the design variables are 700,1300x F lbs and 900,1300y F lbs Five variables The displacement is given as a f unction of the horiz ontal loading ( Fx), vertical loading ( Fy), Youngs modulus ( E ), width ( w ), and thickness ( t ). The ranges of the design variables are given in Table 5-5. Testing Procedure The main steps in the estimation and testing of errors in surrogate modeling are given as follows. Design of experiments (DOE) As we showed in Chapter 3 (Goel et al., 2007b), a combination of Latin hypercube sampling (LHS) and D-optimality criterion (Mye rs and Montgomery, 1995) to construct design of experiments is helpful in reducing noise and bias errors. In this procedure, we generate a large sample of points using LHS, a nd then pick the most optimal c onfiguration for the desired order of polynomial using D-optimality criterion without using replicate points. For all analytical and cantilever beam desi gn problems, we adopt this LHS+D-optimality criterion based approach to sample design poi nts for surrogate construction. The number of design points ( Ns), order of polynomial for D-optimality crit erion, and the number of points used in the large LHS sample ( Nlhs) are given in Table 5-6. The large optimal LHS sample is obtained using MATLAB routine lhsdesign with maxim in criterion (maximize the minimum distance between points). We allocate a maximum of 100 iterations for optimization. This LHS design serves as the initial grid of points fo r MATLAB routine candexch to select Ns design points

PAGE 161

161 without duplication, using D-optimality criteri on. We allow a maximum of 40 iterations for optimization. For the radial turbine design problem, Mack et al. (2006) sampled 232 designs in the sixdimensional region of interest using a fractio nal factorial design. Eight designs were found infeasible, and the 224 feasible design points were used to cons truct, and to test different surrogate models and error estim ation measures. In this study, we randomly select 56 points to construct surrogate models, and the rema ining 168 points are used for testing. We note that there is uncertain ty associated with the DOE due to random components, and the possibility of convergence to local optima fo r both LHS and D-optimal design. To reduce the uncertainty due to choice of DOE, we presen t results based on 1000 instances of design of experiments for all problems. Test points For twoand threedimensional spaces, the su rrogate models were tested using a uniform grid of vN p points where Nv is the number of variables, and p is the number of points along each direction (given in Table 5-6). We used 2000 points, selected via optimized LHS, to assess the performance of error estimation measures for Cantilever-5 and Hartman-6 problems. For radial turbine example, we compared error estimation m easures using 168 points that were not used to construct surrogate models. Surrogate construction The order of polynomial used for polynomial response surface approximation is given in Table 5-6. While we used Equation (5.5) for PRS approximation in least squares sense, we employed MATLAB routine lsqnon lin to construct PRS with sixth order loss function (Equation (5.4)). We used the kriging software deve loped by Lophaven et al. (2002) with a linear

PAGE 162

162 trend model, and the Gaussian correlation function. The bounds on the parameters governing the Gaussian distribution were taken as [0.01, 200] Radial basis neural network was constructed using MATLAB routine newrb, with the parame ter spread specified as 0.5, and the mean square error goal was taken as square of five -percent of the mean value of the response at training points. For all problems, we normalized design variables between zero and one, such that the minimum value of any variable was scaled to zero. The su rrogate models were constructed in normalized design space. Error estimation Firstly, we computed pointwise actual absolute errors at test points for different surrogate models. Next, we estimated global error measures, namely, estimated root mean square error for PRS (Equation (5.6)), process varian ce for kriging (Equation (5.15)), and generalized crossvalidation errors for each surrogate model using Equation (5.19). Finally, we evaluated pointwise errors namely, standard error (Equation (5.7)), and RMS bias error (Equation (5.9)) for PRS, mean square error (Equation (5.18)) for kriging, and standard deviation of responses (Equation (5.20)) using four surroga tes, two instances of polynomial res ponse surfaces with loss functions p = 2 and p = 6 respectively, kriging, and radial basis neural ne twork. We used 0.2 for bias error computations, and the true function was as sumed to be the lowest order polynomial that had more coefficient than the number of data points ( Ns). We computed maximum and RMS values of pointwise error m easures as defined earlier. Results: Accuracy of Error Estimates Firstly, we normalized actual RMS error in approximation by the range of any individual function in the entire design space, and show results in Table 5-7. A low value of normalized actual RMS error indicates good approximation of the actual function. Polynomial response surface approximation using a quadratic loss fu nction was a good surrogate for all problems,

PAGE 163

163 except the Hartman problem with three variables ( 10% or more errors in approximation). Kriging performed reasonably well for all problems, ex cept the Camelback function and the Hartman problem with six variables. RBNN was the poorest of all surrogates overall, but it was the best surrogate for Goldstein-Price problem. Polyno mial response surface approximation using a sixorder loss function was slightly wo rse compared to the quadratic loss function. For radial turbine design problem, the performance of PRS with sixorder loss function was particularly poor and, this resulted in the poor perfor mance of PWS model (PRS with p=6 was assigned a high weight due to a good PRESS). For all problems, except ra dial turbine design prob lem, PWS model could significantly reduce the influence of the worst su rrogate. Also notably, th e PRS models yielded much lower coefficient of variation compared to other surrogate models. The results suggested that the PRS model is indeed a more reliable approximation. The higher variability in kriging approximation is attributed to the variability in estimating the model parameters using the maximum likelihood estimates. The variability in RBNN arises from the complex fitting process that results in different number a nd location of neurons with each DOE. Next, we compared the accuracy of different types of error estimates on various test problems in following two subsections. First subsect ion presents results of comparison of global error estimators with actual er rors. We compared the performa nce of local (pointwise) error estimators in second subsection. Global Error Measures We assessed the accuracy of global error estim ates by computing the ratio of global error estimates and relevant actual root mean square er ror in approximation. The desired value of this ratio is one. A value of less than one indicates that the actual RMS error is underestimated by the error predictor, and a value greater than one m eans that the global error estimator overestimates the magnitude of actual RMS error. We su mmarized the results based on 1000 DOEs in Figure

PAGE 164

164 5-3 using boxplots of different ratios. The box encompasses the 1st and 3rd quartile of the data. Median of the data is shown by the line inside th e box. The leader lines on the ends are placed at either the minimum/maximum of the data or at a distance of 1.5 times th e width of the box. The data points that fall beyond the range of leader lines are shown by a symbol +. The mean and coefficient of variation (C OV) for different problems are given in Table 5-8. We observed that the global er ror estimates for polynomial re sponse surface approximation (PRS) provided more accurate estimates of the actu al RMS error than their kriging counterparts. Both generalized cross validation error (GCV) a nd estimated RMSE for PRS were reliable error estimates for different test problems, as they yi elded small variation with the choice of DOE (low COV on individual problems), and the problem (low variation of the mean, and low COV for all problems). We noted that estimated RMSE typi cally underestimated the actual RMS errors in approximation, and GCV usually overestimated the errors. GCV was better than the proce ss variance estimate for kriging, as GCV yielded smaller variation with the choice of desi gn of experiment (higher mean) and the nature of the problem (lower COV). GCV for kriging, usually, provided higher overestim ate of actual error than GCV for PRS. Besides, the variation of GCV for kr iging with different pr oblems, and DOEs was higher compared to GCV for PRS (compare COV fo r all problems). Process variance for kriging performed the poorest among all global error m easures. This error measure showed large variability with the choice of problem, and design of experime nts (high COV for all problems). Usually, process variance overestimated the actua l errors and the overestimate was much higher for Branin-Hoo function, and the cantilever beam design problem with two variables. Overall, the conclusions are that the mode l independent global error measure (GCV, computationally expensive error measure) provides an equal or better estimate of actual root

PAGE 165

165 mean square error than the model based erro r measures (computationally inexpensive error measure), namely prediction variance for PRS and process variance for kriging. The variability with the choice of DOE and the na ture of the problem was also lower for GCV. Relatively, the performance of error estimation measures for PRS was better than their kriging counterparts. This is remarkable because the error in models cons idered here is primarily bias error rather than noise error, where kriging is supposed to perform better. Pointwise Error Measures Root mean square errors We computed the ratio of predicted root mean square error (obtained via different pointwise error estimation measures) and actual RMS errors in the entire design space. The results for 1000 DOEs are summarized in Figure 5-4 with the help of boxplots, and corresponding mean and COV are given in Table 5-9. Although there was a single estimate of standard deviation of responses we computed the ratio of pr edicted RMS and actual RMS error in PRS, KRG, and PWS, separately due to the di fferences in the actual errors. As before, the desired value of any ratio is one. The standard error consistently underestimated the actual RMS error for all test problems. However, this error estimate yielded the least variation in the results with the choice of DOE (low COV for individual problems) and the natu re of problem (lowest COV for all problems among all error measures). Thus, one can get a fairly good estimate of actual RMS error by inflating the root mean square of the estimated st andard error by a factor of 1.5 (at least that is the case with the examples considered in this study). Root mean square of the RMS bias error overestimated the actual RMS error (except Hartman-6 problem) but provided a reasonable estimate of the actual error. The RMS bias er ror measure significantly overestimated actual errors for a few DOEs for Branin-Hoo func tion and radial turbine design problem, and

PAGE 166

166 consequently the mean value was influenced (als o large COV). The median values of the ratio for Branin-Hoo function and radial turbine desi gn problem were 2.13 and 1.66, respectively. Overall, this error estimate usually had low vari ability with the choice of design of experiments and test problems. As expected, this error esti mate performed the best when the assumed true function and the actual true func tion were very close (Camelback and Goldstein-Price examples). However, we note that RMS bias error measur e involves two assumptions: (1) order of the assumed true polynomial, and (2) magnitude of parameter that might influence its performance. The standard devia tion of responses, in general, performed well in characterizing the actual RMS error and often, the predicted RMS error was cl ose to the actual RMS error, except for the cantilever beam de sign with two variables example. The standard deviation of responses on an average (with respect to test problems) provided a reasonable estimate (median ratio is closed to one) of actual errors in PRS, but this estimate had significant variation in the results both with the choice of test problems (h igh COV, when all probl ems were simultaneously considered) and the choice of design of experiment. An interesting observation from Table 5-7, Table 5-8, and Table 5-9 is that the variation in the actual RMS error in PRS approximation with resp ect to DOE is lower than the variation in its error estimates (estimated root mean square error, and standard error). This is because the fitting process is not influenced by the statistical assumptions while th e error estimate is governed by assumptions that do not apply to the data. The performance of the predicte d root mean square error for kriging in characterizing the actual RMS errors, as denoted by the closeness of the ratio of predicted and actual RMS errors to one, was very good in the mean value of the ratio. However, the predictio n capabilities of this error measure often depended on the choice of de sign of experiments (rela tively large COV). For

PAGE 167

167 Goldstein-Price problem, this error measur e overestimated the actual RMS errors. The performance of standard deviati on of responses in predicting krig ing errors was similar to that observed for polynomial response surface approxima tion. For Branin-Hoo and cantilever beam design problem with two variables, the mean and median values of the ratios were significantly different (median values were 1.97 and 80.10, re spectively). We had larg e variability with the choice of design of experiment, and the nature of problem. An interest ing observation was that often when mean square error overpredicted the ac tual error, the standard deviation of responses underpredicted and vice-versa The root mean square of the standard devi ation of responses usua lly overpredicted the actual RMS error incurred by PRES S-based weighted average su rrogate model except for sixdimensional problems. The variation in results with DOE (COV) was high but comparable to other error estimation measures. However, we noted significant variation in the results with the choice of test problem (highest COV, when a ll problems data was considered simultaneously). The performance of standard deviation of responses for cantilever beam design problem with two variables was particularly poor becau se one surrogate model, radial basis neural network, performed very poorly. Since we have in formation from four surrogate models, we can make a judicious decision to ignore predictions of radial basis neural network model. We computed the standard deviation of responses us ing remaining three surrogate models. The mean (number given in parenthesis is COV) of the ra tio of root mean square of the re-calculated standard deviation of responses and actual RMS error for PRS, KRG, and PWS based on 1000 DOEs is 0.65 (0.31), 3.22 (0.85), and 0.85 (0.41), respectively. Compared to the performance of standard deviation of responses in Table 5-9, we observed sign ificant improvements in the predictions. The improvement in performance e xplained the anomaly observed previously (very

PAGE 168

168 high sresp), and cautioned us against the possi bility of getting poor estimate of sresp, when predictions of one surrogate model were aberrant. Correlations between actual and predicted errors Next, we studied the localized performance of pointwise error estimates by computing correlations between pointwise pr edicted and actual errors. The mean and COV of correlations based on 1000 DOE experiments are given in Table 5-10. The data is summarized in the boxplots shown in Figure 5-5. In general, all error estimators had only modest correlations with the actual error field in the entire design space, and there was significant va riation with the choice of DOE for most test problems. The standard error (ESE) was the wors t error estimator for all test problems, except radial turbine design problem. ESE had large va riation (COV) with desi gn of experiments and with the choice of the problem (highest COV wh en all problems were considered). RMS bias error estimates were good in characterizing the act ual error field for Camelback, Goldstein-Price, where the assumed true models were close to the true functions, and cantilever beam design problem with two variables. We noted significant variation in the correl ations with design of experiment, and the choice of test problems. For all problems except radial turbine design, RMSBE performed better than ESE. The kriging mean squared predicted error also showed significant varia tions with design of experiment, and the nature of the test problem. But the correla tions were good for the BraninHoo function, and the cantilever beam with two variables. The st andard deviation of responses performed at par with other er ror estimation measures for PRS, KRG, and PWS models and, showed relatively smaller variations (COV) when all problems data was considered simultaneously.

PAGE 169

169 The poor characterizatio n of the entire error field was not unexpected because often the assumptions in the surrogate models were not satisfied. Besides, most error measures characterize the average error considering num erous functions, and comparison with a single function was bound to yield errors. The most in teresting result was that overall the model independent error measures performed at par wi th the model based error estimations measures. Maximum absolute errors Next, we show the boxplots of ratio of predic ted maximum error to the actual absolute maximum error at test points for 1000 DOEs in Figure 5-6. The mean and coefficient of variation (based on 1000 DOEs) of the ra tio of predicted maximum and actual absolute maximum error are summarized in Table 5-11. As was observed for RMS errors, the standard error underestimated the maximum absolute actual errors in polynomial response surface approximation. The magnitude of underestimate varied with the choice of test problem, though variation due to DOE was moderate. On the other hand, RMS bias error, typically, overestimated the maximum actual error for all but six-dimens ional test problems (Hartman-6 and radial turbine). This error measure had significant variability with the choice of DOE but results were more consistent with the nature of the problem The standard deviation of responses yielded mixed behavior in characterizing the maximum actual absolute error. For some surrogate models, it underestimated the actual errors and for others it overestimated, though this result is not unexpected since this error measur e is dependent on the surrogate predictions. Besides, there was significant variability with DOE and test problem Mean square error for kriging underestimated the maximum actual absolute error. The underes timate was significantly hi gh for six dimensional problems. For some problems, the variation in the results with DOE was significantly high. We note that model based global error estimation measures are computationally inexpensive but they do not charact erize the actual root mean s quare error very well. On the

PAGE 170

170 other hand, local error estimation measures give a more reasonable estimate of actual RMS error in the entire design space. This is particularly true for kriging error estimation measures. In addition, we can use local error estimate to id entify regions of high erro rs. We conclude that despite relatively higher computational cost compar ed to model based global error measures, it is useful to estimate local errors, especially when the cost of error prediction is significantly lower than the cost of acquiring the data. Ensemble of Multiple Error Estimators We observed that no single error estimation measure correctly characterized the entire error field for all problems. We proposed usi ng multiple error measures to improve the error prediction capabilities. First, based on our observations in Table 5-8 and Table 5-9, we used an average of different RMS error estimates. S econd, for kriging (though the approach can be applied to other surrogates as well), we used GCV-based measure to identify the best error measure for a given problem. Finally, we simu ltaneously used different error prediction measures to identify regions of high errors. The detailed results from these methods of ensemble of error estimation measures are as follows. Averaging of Errors We observed in previous section (Table 5-8 and Table 5-9), that for PRS approximations some error measures overestimated and others u nderestimated actual errors This indicated that we can better estimate the magnitude of actual er rors by averaging different error estimates. To verify this observation, we compared the actu al RMS errors with the geometric mean of GCV and estimated root mean square error (global erro r measures), as well as with the geometric mean of standard error and RMS bias e rror (local error measures). The results of ratio of root mean square of averaged error measures and actual RMS error for different problems are summarized in Table 5-12.

PAGE 171

171 A comparison of individual global error measures (Table 5-8) and averaged global error measure (Table 5-12, column 1) for PRS showed that average of errors was relatively more robust with the choice of problem (ratio is clos er to one), and DOE (slightly smaller COV). Similarly, as is exemplified by the ratio of pr edicted and actual root mean square errors (Table 5-12, column 2), and correlation be tween actual and predicted error (Table 5-12, column 3), pointwise geometric averaging of estimated standard error and root mean square bias error for PRS was a better choice in charact erizing the actual errors than the individual error estimates (Table 5-9). The geometric averaged error perfor med significantly better than the worst of the individual error estimates, and was at par with the better of the two. Besides, the performance was also more consistent for different problems (ratio closer to one and smaller COV when all problems were considered). Similar results were obtained for pointwise geometric averaging of mean square error and standard de viation of responses for kriging (Table 5-12, columns 4 and 5). In fact, the correlations between predicted and actual errors using the geometric average error were much better than individual error measures for kriging. Thus, we can say that averaging of the error estimates induces robustness in predictions. At the same time, we reiterate that the c hoice of true polynomial a nd the distribution of polynomial coefficients were arbitr ary for RMS bias error estimate. Further research is required to explore the influence of these parameters on the averaging of the errors. Besides, in future one may also consider different ways of combini ng the pointwise error estimates e.g., weighted averaging of errors, etc. Identification of Suitable Error Estimator for Kriging We observed that sometimes the mean square error characterized the actual error field better than the standard de viation of responses and vice-versa but we cannot pre-determine the suitability of either error measure. In this secti on, we investigated the a pplicability of generalized

PAGE 172

172 cross-validation approach to identify the better er ror measure. That is, whether the mean square error or standard deviation of responses was mo re appropriate error meas ure to characterize the actual error field. We fitted surrogate models to Ns 1 data points, where Ns is the total number of samples. We estimated response, actual and predicted errors, and st andard deviation of responses at the design point not used in constructing the surr ogate model. Since we knew the actual response at the left out point, we could compute actual abso lute error in prediction. Now, we characterized the performance of any error measure using the data at Ns points. We compared different error estimation measures using a suitable criterion, and identified the better of the two error measures to characterize the actual error fi eld. In this study, we used the correlations between predicted and actual error, and the ratio of the predicted and actual root mean square error, as two criteria to identify the better erro r measure. Note that, the choice of error measure may vary with the nature of the problem as well as with design of experiment. The mean performance of individual and c hosen error measures based on 100 DOEs for different test problems is summarized in Table 5-13. We also tabulate the number of times the predicted best error measure failed to identify the correct better error estimation measure. Although the results for individu al error estimation measure we re obtained with only 100 DOEs to reduce high computational cost, the mean correla tions and RMS errors were comparable to the data obtained with 1000 DOEs (Table 5-9 and Table 5-10). The results clearly indicated that the e rror measure chosen using the GCV based information was often the better measure for th at problem and DOE; and as expected, the mean of the chosen error measure was close to the bett er error measure. For the cantilever beam design problem with two variables, the standard deviation of responses performed very poorly (due to poor approximation of one surroga te model, RBNN) in character izing the root mean square

PAGE 173

173 errors and hence, almost always correctly discarded. So the performance of the predicted best error measure was comparable to mean square er ror measure. We note that the choice of error measure according to GCV-based criterion might be wrong up to 42% (average 25%) of the time, when the selection criterion was the correla tion between the actual a nd predicted error; and up to 50% (average 30%) times, when the criter ion was the ratio of pred icted and actual root mean square error. Nevertheless, the accuracy in identifying the correct error estimation measure for any problem and DOE was encouraging. Detection of High Error Regions us ing Multiple Errors Estimators As discussed earlier, error measures are of ten used to identify the locations where predictions may have high errors/uncertainty. Hen ce, the failure of error estimation measures to detect regions of high error is undesirable. We can reduce the risk of missing regions of high errors by simultaneously using multiple error measures. We compared the performance of different individual error estimators, and combin ation of error estimators against the actual absolute errors using the suite of analytical a nd cantilever beam design problems. Different steps in the test procedure are given as follows. We scale the actual and predicted errors by the corresponding maximum errors in the entire design space to avoid disparity in the ma gnitude of different error estimates. We subdivide the entire design space in 2v N orthants. We define an orthant as high error orthant if the maximum s caled actual error in that orthant is equal to or greater than 0.7. A high error orthant is considered detected by an error measure, if the maximum scaled predicted error in the orthant is greater than 50% of the maximum scaled actual error in this orthant (0.7+). If we fail to detect at least one high error orthan t, we consider that case as failure to detect high error region. If we fail to detect the orthant with the maxi mum actual error, the case is denoted as failure to detect maximum actual error.

PAGE 174

174 The test set-up for error estimation is the same as described earlier. The number of cases out of 1000 that failed to detect high error regions using different error estimators is given in Table 5-14 for different test problems. We showed the results of indivi dual and combination of multiple error estimators. For all problems, except Goldstein-Price and Hartman-3, the standard error yielded the least number of failures fo r polynomial response surface approximation. The mean square error for kriging worked well in identifying regions of hi gh error. The standard deviation of responses performed reasonably in ch aracterizing the actual errors in PRESS-based weighted average surrogate, considering the fact that the present formulation is only a nave attempt to address a large class of problem of combination of multiple surrogates. Combination of different error estimators was effective in reducing the number of failures. The combination of standard deviation of res ponses with standard error for polynomial response surface approximation, and with mean square error for kriging, was most effective in predicting high error regions. By combining more error es timation measures, we reduced the chance of missing high error regions. The results for the de tection of maximum error using different error estimators, given in Table 5-15, also follow the same conc lusions. The standard error for PRS and mean square error for KRG were the best individual error measur es in detecting the maximum error regions and combination of error m easures were effective in reducing the risk of missing the maximum error regions. While the combination of multiple error estimators was useful to identify high error regions, they increased the chance of wrongly id entifying low error regions as high error. The steps in the detection of false representation of high error regions using different error estimates are as follows. We scaled actual and predicted errors by corresponding maximum erro rs in the entire design space.

PAGE 175

175 We considered an orthant to be falsely marked as high error region by an error estimator, if the maximum scaled predicted error in the orthan t was greater than or equal to 0.9, and the maximum scaled actual error in this orthan t was less than 60% of the maximum scaled predicted error (0.9+). As before, we marked a case failed, if it wrongl y identified at least one low error region as high error. We considered a case to falsely represent th e maximum high error region, if the orthant wrongly marked as high error region had the ma ximum scaled predicted error in the entire design space. The number of cases with wrong representation of high errors for different test problems while using different error estimators is given in Table 5-16. Different error estimators performed better for different test problems, how ever most often, the standard deviation of responses created the least number of false high error regions for any surrogate approximation. The root mean square bias error for PRS also pe rformed quite well. As expected, combination of multiple error estimators increased the number of cases, for which false high error regions were created. We cannot conclude if one combinati on was better than others for all problems. The results for wrong representation of maximum errors, shown in Table 5-17, follow the same conclusions. A high level summary of the performance of all pointwise error estimation measures is given in Table 5-18. Conclusions We compared the accuracy of different error estimators using a suite of analytical and industrial test problems. The main findings can be summarized as follows. Global Error Estimators Global error estimators yielded a reasonably accu rate estimate of the actual root mean square error for different test problems and DOEs.

PAGE 176

176 Estimated root mean square error in pol ynomial response surface approximation predicted the actual root mean square error in design space consis tently; however, this measure underpredicted the actual errors. Process variance in kriging significan tly overpredicted th e actual errors. Model independent generalized cross-validation error was a g ood measure of assessing the actual root mean square error in design space for both polynomial response surface and kriging approximations, though GCV usually over predicted the actual root mean square error. Pointwise Error Estimation Models Pointwise error estimators performed slightly worse compared to the global measures but this was expected due to the constraints on the assumptions. Estimated standard error underestimated th e actual errors by approximately 50%. Though this error estimator performed the worst in ch aracterizing the entire error field, it had the best capability to detect th e regions of high errors among all the error estimators for polynomial response surface approximation. This error estimator was least influenced by the choice of design of experiment, and the nature of the problem. RMS bias error resulted in a good approximati on of the root mean square of the actual absolute error for all test problems, though it overpredicted the act ual errors. When the assumed true model was close to the actual f unction, the RMS bias error characterized the actual error field quite well. This error estimate was mostly better than the standard error in characterizing the entire error field. Mean square error estimator t ypically underpredicted the actua l root mean square errors, and showed low variation with the nature of the problem; but the pred iction of entire error field was highly dependent on the function. Howe ver, this error estimate performed very accurately in predicting th e high error regions. Model based local error estimation measures pr ovided a better estimate of actual root mean square errors compared to the model based global error estimation measures. The standard deviation of responses, which is a model independent local error measure, usually gave a reasonable estimate of actual RMS errors for different problems, DOEs, and surrogates. However, the performance of this error estimator in characterizing the entire error field was highly problem dependent. Also this error estimate performed poorly, when one constituent surrogate model predictions we re very poor. It should be noted that while model based error measures have been deve loped over many years, there is a need to develop model independent local error measures.

PAGE 177

177 Simultaneous Application of Multiple Error Measures We showed that a geometric averaging of diffe rent error measures for PRS (standard error and root mean square bias error) provided a r obust estimate (with respect to problems, and choice of DOE) of magnitude of actual RMS er rors. The same concept worked well for global measures (GCV and estimated RMS error) as well. We obtained encouraging results for kriging (mean squared error and st andard deviation of responses) too. We can use the errors and predictions at th e design points while constructing the surrogate model using generalized cross validation proced ure to identify the suitable error measure for any problem and design of experiment. The predicted best error measure based on GCV was correct about 75% of the simulations, and the predictions of the selected error estimation measure were at par with the best individual error measure for any test problem. We further showed that the combination of multiple error estimators improved the ability to detect high error regions for all problems. The combination of standard deviation of responses with the standard error for polynomial response surface approximation, and with mean square error for kriging was the most effective combination. More error estimators increased the probabi lity of detecting the high error region. However, we noted that the combination of mu ltiple error estimators increased the chances of wrongly representing a low error region as high error region. Nevertheless, we still benefit against the risk of failing to identify high error regions.

PAGE 178

178 A B C Figure 5-1. Contour plots of two va riable analytical functions. A) Branin-Hoo. B) Camelback. C) Goldstein-Price. L=100 FxFyt w L=100 FxFyt w Figure 5-2. Cantilever beam subjected to horizontal and vertical random loads.

PAGE 179

179 A B C D E F

PAGE 180

180 G H Figure 5-3. Ratio of global error measures a nd relevant actual RMS error (logscale). GCV_PR: ratio of square root of generalized cro ss-validation error and actual RMS error for PRS, GCV_KR: ratio of square root of ge neralized cross-valida tion error and actual RMS error for KRG, Sigma_PR: ratio of squa re root of estimated root mean square error and actual RMS error for PRS, Sigm a_KR: ratio of square root of process variance and actual RMS error for kriging. A) Branin-Hoo. B) Camelback. C) Goldstein-Price. D) Hartman-3. E) Hartma n-6. F) Radial turbine design problem. G) Cantilever-2. H) Cantilever-5.

PAGE 181

181 A B C D E F

PAGE 182

182 G H Figure 5-4. Ratio of root mean square values of pointwise predicted and actual errors for different problems, as denoted by predicted error measure. ESE: Estimated standard error, RMSBE: RMS bias error, s_resp_PR: ratio of root mean square of standard deviation of responses and actual RMS erro r in PRS, MSE: square root of mean square error, s_resp_KR: rati o of root mean square of st andard deviation of responses and actual RMS error in kriging and. A) Branin-Hoo. B) Camelback. C) GoldsteinPrice. D) Hartman-3. E) Hartman-6. F) Radial turbine design problem. G) Cantilever2. H) Cantilever-6.

PAGE 183

183 A B C D E F

PAGE 184

184 G H Figure 5-5. Correlation between ac tual and predicted error meas ures for different problems, denoted by predicted error measure. ESE: Estimated standard error, RMSBE: RMS bias error, s_resp_PR: correlation between actual error in PRS and standard deviation of responses, MSE: square root of mean s quare error, s_resp_KR: correlation between actual error in kriging and standard devi ation of responses, s_resp_PW: correlation between actual error in PWS and standard deviation of responses. A) Branin-Hoo. B) Camelback. C) Goldstein-Pri ce. D) Hartman-3. E) Hart man-6. F) Radial turbine design problem. G) Cantilever-2. H) Camelback-6.

PAGE 185

185 A B C D E F

PAGE 186

186 G H Figure 5-6. Ratio of maximum predicted and actua l absolute errors in design space for different problems, denoted by predicted error meas ure. ESE: Estimated standard error, RMSBE: RMS bias error, s_resp_PR: correlation between actual error in PRS and standard deviation of responses, MSE: square root of mean square error, s_resp_KR: correlation between actual erro r in kriging and standard de viation of responses. A) Branin-Hoo. B) Camelback. C) Goldstein-Pr ice. D) Hartman-3. E) Hartman-6. F) Radial turbine design problem. G) Cantilever-2. H) Cantilever-6.

PAGE 187

187 Table 5-1. Summary of different e rror measures used in this study. Error measure Description Type NatureFor surrogate a (PRS) Root mean square error, Estimate of variance of noise Model based GlobalPolynomial response surface (KRG) (square root of) Process variance Model based GlobalKriging GCV Leave-one out error measure Model independent GlobalAll surrogates ese Standard error characterizes to noise Model based Local Polynomial response surface rms be RMS bias error characterizes modeling error Model based Local Polynomial response surface msee Mean squared error characterizes approximation error Model based Local Kriging resps Standard deviation of responses characterize approximation uncertainty Model independent Local All surrogates Table 5-2. Parameters used in Hart man function with three variables. I aij ci pij 1 3.0 10.0 30.0 1.0 0.3689 0.11700.2673 2 0.1 10.0 35.0 1.2 0.4699 0.43870.7470 3 3.0 10.0 30.0 3.0 0.1091 0.87320.5547 4 0.1 10.0 35.0 3.2 0.03815 0.57430.8828 Table 5-3. Parameters used in Ha rtman function with six variables. i aij ci 1 10.0 3.0 17.0 3.5 1.7 8.0 1.0 2 0.05 10.0 17.0 0.1 8.0 14.01.2 3 3.0 3.5 1.7 10.0 17.0 8.0 3.0 4 17.0 8.0 0.05 10.0 0.1 14.03.2 i pij 1 0.1312 0.1696 0.5569 0.01240.82830.5886 2 0.2329 0.4135 0.8307 0.37360.10040.9991 3 0.2348 0.1451 0.3522 0.28830.30470.6650 4 0.4047 0.8828 0.8732 0.57430.10910.0381

PAGE 188

188 Table 5-4. Range of variables fo r radial turbine design problem. Variable Description Minimum Maximum RPM Rotational speed 80000 150000 Reaction Percentage of stage pr essure drop across rotor 0.45 0.68 U/Cisen Isentropic velocity ratio 0.50 0.63 Tip Flow Ratio of flow parameter to a choked flow parameter0.30 0.65 Dhex% Exit hub diameter as a % of inlet diameter 0.1 0.4 AN2Frac Used to calculate annulus area (stress indicator) 0.50 0.85 Table 5-5. Ranges of variables for cantilever beam design problem (five variables case). Variable Minimum Maximum Units Fx 700 1300 lbs Fy 900 1300 lbs E 20e6 35e6 psi W 2.0 3.0 inch T 3.0 5.0 inch Table 5-6. Numerical setup fo r different test problems. Nv: number of variables, Ns: number of training points, Ntest is number of test points, for analytical and Cantilever-2 problems Ntest = pNv, where p is the number of points along each direction. For Hartman-6, Cantilever-5, and radial turbine problem, we specify the number of test points Ntest in table. Nlhs: the number of points in the large LHS sample. Problem Nv Ns p or NtestOrder of polynomial Nlhs Branin-Hoo 2 20 16 3 100 Camelback 2 30 16 4 150 GoldStein-Price 2 42 16 5 200 Hartman-3 3 70 11 4 250 Hartman-6 6 56 2000 2 200 Radial Turbine 6 56 168 2 NA Cantilever-2 2 12 16 2 60 Cantilever-5 5 42 2000 2 210

PAGE 189

189 Table 5-7. Mean and coefficient of variati on (COV) (based on 1000 desi gn of experiments) of normalized actual RMS error in the entire de sign space. We use the range of different functions to normalize the respective actual RMS errors. PRS: polynomial response surface approximation; the value in parenthesi s is the order of loss function used to estimate coefficients, KRG: kriging, PWS: PRESS-based weighted surrogate model (using PRS with p=2 and p=6 loss functions, kriging, and radial basis neural network). PRS ( p=2 )KRG RBNN PRS ( p=6 )PWS Mean 0.028 0.019 0.098 0.029 0.025 Branin-Hoo COV 0.075 0.744 3.43 0.088 0.643 Mean 0.041 0.122 0.162 0.041 0.045 Camelback COV 0.050 0.264 5.92 0.073 1.035 Mean 0.024 0.022 0.0093 0.025 0.012 Goldstein-Price COV 0.096 0.290 0.473 0.094 0.197 Mean 0.109 0.060 0.121 0.118 0.075 Hartman-3 COV 0.065 0.357 2.42 0.077 0.496 Mean 0.073 0.098 0.068 0.076 0.066 Hartman-6 COV 0.072 0.129 0.110 0.097 0.101 Mean 0.083 0.072 0.349 0.267 0.257 Radial turbine COV 0.159 0.114 0.463 0.070 0.055 Mean 1.03e-3 4.26e-49.71e-21.06e-3 1.22e-3 Cantilever-2 COV 0.117 1.17 5.186 0.112 3.251 Mean 0.015 0.032 0.061 0.016 0.018 Cantilever-5 COV 0.149 0.393 0.162 0.186 0.147

PAGE 190

190 Table 5-8. Mean and coefficient of variati on (COV) (based on 1000 desi gn of experiments) of ratio of global error measures and corre sponding actual RMS error in design space. P RSa: estimated RMS error for PRS, P RSGCV : square root of generalized crossvalidation error for polynomial resp onse surface approximation (PRS), K RG : square root of process variance for kriging, K RGGCV : square root of generalized crossvalidation error for kriging. (All proble ms: indicate the mean and COV for all problems. i.e., 8000 designs of experiments).*Median is 3.08. P RSa P RSGCV K RG K RGGCV Mean 0.74 1.17 62.09 3.63 Branin-Hoo COV 0.24 0.29 0.58 0.96 Mean 0.75 1.22 2.32 1.27 Camelback COV 0.17 0.21 0.47 0.28 Mean 1.09 1.93 10.74 2.30 Goldstein-Price COV 0.20 0.26 1.14 0.46 Mean 0.82 1.30 3.46 1.48 Hartman-3 COV 0.15 0.18 0.35 0.36 Mean 0.80 1.18 1.00 1.07 Hartman-6 COV 0.18 0.19 0.27 0.21 Mean 0.64 0.97 0.86 0.96 Radial turbine COV 0.32 0.28 0.21 0.34 Mean 1.04 1.94 744.2 18.32 Cantilever-2 COV 0.26 0.34 1.01 1.38 Mean 0.97 1.45 2.69 1.73 Cantilever-5 COV 0.18 0.20 1.37 0.42 Mean 0.86 1.40 103.42*3.84 All problems COV 0.28 0.36 3.49 2.75

PAGE 191

191 Table 5-9. Mean and COV (based on 1000 design of experiments) of ratio of root mean squared predicted and actual errors for different test problems, denoted by predicted error measure. ese : ratio of root mean square of es timated standard error and actual RMS error in PRS approximation, rms be : ratio of root mean square of RMS bias error and actual RMS error in PRS approximation, P RS resps : ratio of root mean square of standard deviation of responses and actual RMS error in PRS approximation, msee : ratio of RMS of square root of mean square error and actual RMS error in kriging approximation, K RG resps : ratio of root mean square of standard deviation of responses and actual RMS error in kriging, P WS resps : ratio of root mean square of standard deviation of responses and actual RMS erro r obtained using PRESS-based weighted average surrogate (PWS). We used four surrogates, PRS with p=2 and p=6 loss functions, KRG, and RBNN models, to cons truct PWS, and to estimate standard deviation of responses. (All problems: indi cate the mean and C OV for all problems. i.e., 8000 designs of experiments). *Median is 1.03. **Median is 0.98. #Median is 1.31. ese rms be P RS resps msee K RG resps P WS resps Mean 0.51 3.04 1.75 0.794.21 1.68 Branin-Hoo COV 0.24 1.04 3.45 0.486.00 1.31 Mean 0.54 1.15 2.73 1.040.91 2.03 Camelback COV 0.17 0.27 4.36 0.203.81 0.82 Mean 0.81 1.10 0.78 1.710.87 1.53 Goldstein-Price COV 0.20 0.17 0.15 0.420.21 0.21 Mean 0.64 1.74 0.73 1.101.50 1.00 Hartman-3 COV 0.15 0.29 1.67 0.381.71 0.54 Mean 0.58 0.78 0.70 0.870.52 0.78 Hartman-6 COV 0.18 0.20 0.14 0.170.17 0.20 Mean 0.68 2.03 1.56 0.761.82 0.50 Radial turbine COV 0.32 0.62 0.74 0.280.75 0.74 Mean 0.71 2.14 47.290.83230.7 30.57 Cantilever-2 COV 0.26 0.40 4.77 0.774.09 0.58 Mean 0.66 1.56 2.16 1.121.13 1.75 Cantilever-5 COV 0.18 0.16 0.20 0.240.42 0.18 Mean 0.64 1.69 7.21* 1.0330.20**4.98# All problems COV 0.26 0.85 11.280.5011.34 2.33

PAGE 192

192 Table 5-10. Mean and COV (over 1000 design of experiments) of correlations between actual and predicted errors for different test problems. P RSacte: actual error in polynomial response surface (PRS) approximation, ese : estimated standard error, rms be : root mean square bias error, K RGacte: actual error in kriging (KRG), msee : square root of mean square error, resps : standard deviation of responses, P WSacte: actual error in PRESSbased weighted average surrogate (PWS) mode l. We used four surrogates, PRS with p=2 and p=6 loss functions, KRG, and RBNN mode ls, to construct PWS, and to estimate standard deviation of responses. (All problems: indicate the mean and COV for all problems. i.e., 8000 designs of experiments). Correlation between PRSacte and Correlation between K RGacte and Correlation between PWSacte and Error measure ese rms be resps msee resps resps Mean 0.083 0.18 0.35 0.66 0.36 0.45 Branin-Hoo COV 1.32 0.90 0.61 0.25 0.66 0.46 Mean 0.27 0.70 0.27 0.43 0.77 0.24 Camelback COV 0.41 0.24 0.59 0.37 0.20 0.95 Mean 0.25 0.89 0.73 0.29 0.64 0.57 Goldstein-Price COV 0.54 0.13 0.14 0.60 0.24 0.23 Mean 0.26 0.33 0.64 0.34 0.28 0.45 Hartman-3 COV 0.42 0.31 0.19 0.24 0.44 0.31 Mean 0.10 0.26 0.48 0.073 0.54 0.39 Hartman-6 COV 0.71 0.39 0.16 1.06 0.19 0.21 Mean 0.24 0.16 0.19 0.17 0.084 0.066 Radial turbine COV 0.55 0.75 0.75 0.70 1.32 1.53 Mean 0.16 0.52 0.17 0.72 0.47 0.24 Cantilever-2 COV 1.05 0.44 1.45 0.29 0.57 1.27 Mean 0.10 0.14 0.39 0.20 0.52 0.60 Cantilever-5 COV 0.80 0.69 0.33 0.60 0.23 0.22 # of problems with mean correlation > 0.5 0 3 2 2 4 2 Mean 0.18 0.40 0.40 0.36 0.46 0.38 All problems COV 0.76 0.75 0.61 0.72 0.57 0.66

PAGE 193

193 Table 5-11. Mean and COV (based on 1000 design of experiments) of ratio of maximum predicted and actual erro rs for different test problems, denoted as ese : ratio of maximum estimated standard error and ma ximum actual error in PRS approximation, rms be : ratio of maximum RMS bias error and maximum actual error in PRS approximation, P RS resps : ratio of maximum standard deviation of responses and maximum actual error in PRS approximation, msee : ratio of maximum square root of mean square error and maximum actua l error in kriging approximation, K RG resps : Ratio of maximum standard deviation of respons es and maximum actual error in kriging, P WS resps : Ratio of maximum standard deviation of responses and maximum actual error obtained using PRESS-based weighted aver age surrogate (PWS). We used four surrogates, PRS with p =2 and p =6 loss functions, KRG, and RBNN models, to construct PWS, and to estimate standard deviation of responses. (All problems: indicate the mean and COV for all probl ems. i.e., 8000 designs of experiments). ese rms be P RS resps msee K RG resps P WS resps Mean 0.46 7.35 3.02 0.654.53 2.42 Branin-Hoo COV 0.37 1.32 3.22 0.615.56 1.49 Mean 0.43 1.86 4.18 0.441.34 2.05 Camelback COV 0.36 0.73 6.72 0.337.50 1.46 Mean 0.70 1.49 0.89 0.610.60 1.46 Goldstein-Price COV 0.47 0.75 0.47 0.630.40 0.44 Mean 0.51 3.45 0.93 0.522.69 1.51 Hartman-3 COV 0.40 0.60 2.56 0.312.49 0.77 Mean 0.24 0.67 0.46 0.200.33 0.46 Hartman-6 COV 0.27 0.32 0.24 0.250.26 0.28 Mean 0.29 1.45 0.89 0.170.87 0.51 Radial Turbine COV 0.63 1.02 0.78 0.460.85 0.67 Mean 0.39 2.34 51.270.71193.031.26 Cantilever-2 COV 0.37 0.54 4.62 0.963.78 0.67 Mean 0.25 1.16 1.69 0.320.93 1.30 Cantilever-5 COV 0.42 0.38 0.25 0.410.50 0.30 Mean 0.41 2.47 7.92 0.4525.545.12 All problems COV 0.58 1.67 10.860.8410.412.44

PAGE 194

194 Table 5-12. Mean and COV (based on 1000 design of experiments) of ratio of root mean square average error and actual RMS errors for diffe rent test problems. GCV: square root of generalized cross va lidation error, P RSa: square root of prediction variance, ese : estimated standard error (PRS), rmsbe : root mean square bias error (PRS), msee : square root of mean squared error (KRG), resps : standard deviation of responses. We used four surrogates, PRS with p=2 and p=6 loss functions, KRG, and RBNN models, to construct PWS, and to estimate standard deviation of responses. We used four surrogates, PRS with p=2 and p=6 loss functions, KRG, and RBNN models, to estimate standard deviation of responses. (All problems: indicate the mean and COV for all problems. i.e., 8000 designs of experiments). *Median is 0.90. P RSaGCV rmses bee (RMSE) rmses bee Correlation respmsees (RMSE) respmsees Correlation Mean COV Mean COV Mean COV Mean COV Mean COV Branin-Hoo 0.93 0.25 1.16 0.43 0.19 0.80 1.47 1.09 0.59 0.29 Camelback 0.96 0.18 0.78 0.16 0.70 0.21 0.85 0.54 0.71 0.20 Goldstein-Price 1.45 0.21 0.94 0.14 0.83 0.13 1.19 0.25 0.62 0.18 Hartman-3 1.04 0.16 1.05 0.18 0.35 0.28 1.22 0.48 0.38 0.29 Hartman-6 0.97 0.18 0.67 0.18 0.24 0.37 0.67 0.16 0.51 0.17 Radial turbine 0.79 0.30 1.16 0.43 0.21 0.59 1.12 0.44 0.13 0.86 Cantilever-2 1.41 0.28 1.20 0.24 0.52 0.39 10.53 1.22 0.63 0.35 Cantilever-5 1.18 0.19 1.01 0.14 0.14 0.61 1.10 0.26 0.54 0.18 All problems 1.09 0.31 0.91 0.34 0.40 0.69 2.07* 2.49 0.51 0.43 Table 5-13. Comparison of performance of indi vidual error measures and GCV chosen error measure for kriging. We show mean of errors based on 100 DOEs. For any DOE, predicted best measure is the one that yi elds better performance on the selected criterion. msee: square root of mean squared error in kriging, resps: standard deviation of responses. We used fo ur surrogates, PRS with p=2 and p=6 loss functions, KRG, and RBNN models, to estimate standard devi ation of responses. We consider a case failure if the GCV chosen error measure performs poorly in the entire design space compared to other error measure. *Median is 0.69. Criterion Correlations Root mean square error Error model msee resps Predicted best Times failed msee resps Predicted best Times failed Branin-Hoo 0.68 0.35 0.59 33 0.79 3.83 1.33 50 Camelback 0.42 0.77 0.65 35 1.03 0.80 0.99 21 Goldstein-Price 0.30 0.62 0.62 14 1.79 0.90 1.28 38 Hartman-3 0.35 0.28 0.35 30 1.03 1.45 1.07 47 Hartman-6 0.083 0.54 0.50 9 0.87 0.53 0.87 5 Radial turbine 0.16 0.10 0.15 42 0.78 2.03 0.94 37 Cantilever-2 0.73 0.46 0.68 29 0.80 491.7 8.02* 5 Cantilever-5 0.21 0.51 0.50 7 1.19 1.16 1.21 40

PAGE 195

195 Table 5-14. Number of cases out of 1000 for which error estimators failed to detect high error regions. ese : estimated standard error, rmsbe : root mean square bias error, P RS resp s : standard deviation of responses as an error measure for PRS, msee: mean square error (kriging), KRG resp s : standard deviation of responses as an error measure for kriging, P WS resp s : standard deviation of responses as an error measure for PWS, + indicates that multiple error estimators are combined. We used four surrogates, PRS with p=2 and p=6 loss functions, KRG, and RBNN models, to construct PWS, and to estimate standard deviation of responses. Error measure(s) BraninHoo Camel back Goldstein -Price Hartman -3 Hartman -6 Cantilev er-2 Cantilev er-5 ese 31 52 252 197 11 4 23 rmsbe 641 410 128 561 681 245 508 P RS resp s 357 243 226 88 119 501 157 P RS esrespes 19 37 62 28 1 4 2 rmsPRS brespes 222 110 46 57 54 170 59 rms esbee 22 32 61 146 6 4 14 rmsPRS esbrespees 14 22 22 21 1 4 0 msee 99 2 15 31 0 87 11 KRG resp s 272 145 31 634 69 302 84 KRG respmsees 30 1 3 13 0 49 1 P WS resp s 237 186 168 399 106 433 33

PAGE 196

196 Table 5-15. Number of cases out of 1000 for whic h error estimators failed to detect maximum error regions. ese : estimated standard error, rmsbe : root mean square bias error, P RS resp s : standard deviation of responses as an error measure for PRS, msee: mean square error (kriging), KRG resp s : standard deviation of responses as an error measure for kriging, P WS resp s : standard deviation of responses as an error measure for PWS, + indicates that multiple error estimators are combined. We used four surrogates, PRS with p=2 and p=6 loss functions, KRG, and RBNN models, to construct PWS, and to estimate standard deviation of responses. Error measure(s) BraninHoo Camel back Goldstei n-Price Hartm an-3 Hartm an-6 Cantil ever-2 Cantil ever-5 ese 17 28 199 120 6 3 20 rmsbe 400 212 72 379 346 130 436 P RS resp s 202 141 56 34 30 357 54 P RS esrespes 11 22 26 11 0 3 0 rmsPRS brespes 120 53 9 19 11 95 19 rms esbee 12 14 44 80 2 3 13 rmsPRS esbrespees 7 11 7 7 0 3 0 msee 66 1 13 19 0 63 6 KRG resp s 189 69 16 339 16 235 20 KRG respmsees 20 1 2 8 0 40 0 P WS resp s 139 101 65 189 34 279 11

PAGE 197

197 Table 5-16. Number of cases out of 1000 for which different error estimators wrongly marked low error regions as high error. ese : estimated standard error, rmsbe : root mean square bias error, P RS resp s : standard deviation of responses as an error measure for PRS, msee: mean square error (kriging), KRG resp s : standard deviation of responses as an error measure for kriging, P WS resp s : standard deviation of respon ses as an error measure for PWS, + indicates that multiple error es timators are combined. We used four surrogates, PRS with p=2 and p=6 loss functions, KRG, and RBNN models, to construct PWS, and to estimate st andard deviation of responses. Error measure(s) BraninHoo Camel back Goldstei n-Price Hartm an-3 Hartm an-6 Cantil ever-2 Cantil ever-5 ese 293 164 479 585 950 497 963 rmsbe 270 172 175 545 844 238 871 P RS resp s 186 220 240 124 554 433 529 P RS esrespes 358 271 542 618 973 628 975 rmsPRS brespes 345 315 336 577 922 494 924 rms esbee 386 271 517 729 988 550 983 rmsPRS esbrespees 425 350 572 742 993 649 989 msee 253 579 797 739 999 262 975 KRG resp s 270 78 110 269 509 405 279 KRG respmsees 418 596 812 772 1000 486 981 P WS resp s 128 189 171 167 643 244 319

PAGE 198

198 Table 5-17. Number of cases out of 1000 for which different error estimators wrongly marked low error regions as the maximum error region. ese : estimated standard error, rmsbe : root mean square bias error, P RS resp s : standard deviation of responses as an error measure for PRS, msee: mean square error (kriging), KRG resp s : standard deviation of responses as an error measure for kriging, P WS resp s : standard deviation of responses as an error measure for PWS, + indicates that multiple error estimators are combined. We used four surrogates, PRS with p=2 and p=6 loss functions, KRG, and RBNN models, to construct PWS, and to esti mate standard deviation of responses. Error measure(s) BraninHoo Camel back Goldstei n-Price Hartm an-3 Hartm an-6 Cantil ever-2 Cantil ever-5 ese 220 118 399 483 774 313 910 rmsbe 247 146 147 487 728 198 798 P RS resp s 151 177 193 88 302 386 394 P RS esrespes 285 224 461 516 824 505 936 rmsPRS brespes 311 264 281 519 790 443 860 rms esbee 335 214 439 646 928 386 956 rmsPRS esbrespees 373 292 492 662 941 534 969 msee 195 314 573 474 845 207 862 KRG resp s 237 58 71 214 275 362 174 KRG respmsees 359 342 599 545 868 424 884 P WS resp s 110 146 127 129 372 211 203 Table 5-18. High level summary of performance of different pointwise error estimators. Error measure Relative to actual error Characterization actual error field Variation with DOE Variation with problem Detect high error region ese Underestimate Poor Low Low Good rms be Overestimate Moderate Low Low Poor msee Underestimate Moderate High Moderate Good P RS resp s Varies/ underestimate Moderate Moderate High Moderate KRG resp s Varies/ overestimate Moderate High High Moderate P WS resp s Varies/ underestimate Poor Moderate High Moderate

PAGE 199

199 CHAPTER 6 CRYOGENIC CAVITATION MOD EL VALIDATION AND SENSITIVITY EVALUATION Introduction Code validation and verificati on is a complex and time-consum ing, but essential exercise to ensure accuracy of the predictions of CF D codes (Roache et al., 1986; AIAA editorial board, 1994; ASME editorial board, 1994; AIAA guide 1998; Roache, 1998; Oberkampf et al., 2004). For computational verification and validation exerci ses, multiple aspects need to be addressed. First of all, one needs to ensure that the num erical representation of the analytical model approaches the correct solution as the grids and ti me step sizes approach the limiting values; this is the so-called verification. Verificati on deals with programming errors, algorithmic insufficiencies, and inconsistencies. The second aspect is to investigate whether and how a particular physical model can reproduce, or at least, satisfactorily approximate the observed phenomena, and reproduce the experimental meas urements. That is, one should examine the propriety of the mathematical models and assumptions; this is the so-called validation. Code validation or model validation is furthe r complicated when the mathematical model involves adjustable parameters beca use there is a danger of fitting the experimental errors rather than the physical reality. We demonstrate for cr yogenic cavitation, how the tools of surrogate modeling and global sensitivity analysis, which ar e extensively used in design and optimization of computationally expensive pr oblems (Li and Padula, 2005; Qu eipo et al., 2005), can help with model validation and calibration. Cavitating Flows: Significance an d Previous Computational Efforts Cavitation is one of the foremost problem s observed in the turbomachinery such as inducers, pumps, turbines, marine propellers, nozzles, hydrofoils, etc. due to wide ranging pressure variations in the flow. Cavitation occurs when the local pressure in the flow falls below

PAGE 200

200 the vapor pressure, and consequently, the flui d undergoes a phase ch ange (Batchelor, 1967; Brennen, 1994, 1995). Cavitation induces noise, mechan ical vibrations, materi al erosion, and can severely impact the performance as well as the st ructural integrity of fl uid machinery. The study of cavitating flows is complicated by simultaneous presence of turbulence, multiple timescales, large density variations or phase change, interfacial dyna mics etc. Due to its practical importance and rich physics, cavitating flow is a topic of substantial interest and challenge to the computational community. The study of cavitating flows in cryogenic en vironment has practical importance for space applications because cryogens often serve as fu els for space launch vehicles (NASA online facts, 1991). A key design issue related to such liquid rocket fuel and oxidizer pumps is the minimum pressure that the design can tolerate for a given inlet temperature and rotating speed. At low inlet pressure (to reduce tank weight ) and high pump rotational speeds (to reduce engine weight), cavitation is prone to appear in the inducer section. To date, there is no established method in industry to estimate the actual loads due to cavitation on inducer blad es. There have been methods proposed, each with its limited validity and challenges (Garcia, 2001). Most rocket engine systems designed in the U.S. have experi enced issues with cavitating elements in the pump. This includes recent programs such as, al ternate turbo pump (ATP ) for the space shuttle main engine (SSME), the Fastrac LOX pump, a nd the RS-68 commercial engine (Garcia, 2001). An integrated framework based on computational m odeling and control strate gies is desirable to treat this critical and difficult issue. It is clear that the design of efficient turbomachinery components requires understanding and accurate prediction of the cryogenic cavitating flows. Cavitating flow computations have been c onducted using both density-based (Merkle et al., 1998; Kunz et al., 2000; Ahuj a et al., 2001; Venkateswaran et al., 2002) and pressure-based

PAGE 201

201 numerical approaches (Athaval e and Singhal, 2001; Singhal et al., 2002; Senocak and Shyy, 2002, 2004a-b), with the cavitation models devel oped based on: (1) Rayl eigh-Plesset type of bubble formulation (Kubota et al., 1992), which separates the liqui d and the vapor regions based on the force balance notion, and (2) homogene ous fluid approach (Senocak and Shyy, 2002), which treats the cavity as a re gion consisting of continuous mixt ure of liquid and vapor phases. In the homogeneous fluid model, the density field is commonly modeled via either a generalized equation of state (Edwards et al ., 2000, Ventikos and Tzabiras, 2000) or a transport equation of the liquid/vapor phase fraction (M erkle et al, 1998; Kunz et al., 2000; Senocak and Shyy, 2002, 2004a-b; Utturkar et al., 2005b). Recent efforts made in computational and modeling aspects of cavitating flows are discussed by Wang et al. (20 01), Ahuja et al. (2001), Preston et al. (2001), Venkateswaran et al. (2002), Senocak and S hyy (2004a-b), Utturkar et al. (2005a-b), and references within. Influence of Thermal Environment on Cavitation Modeling To date, the majority of the cavitation m odeling efforts have focused on the assumption that cavitation occurs with negligible energy in teractions (isothermal condition). This assumption is reasonable for cavitation in non-cryogenic flui ds but fails for thermo-sensible fluids, like liquid hydrogen and liquid oxygen (cr yogens), due to the differences in material properties (low liquid-vapor density ratio, low thermal conductiv ities, steep slope of pressure-temperature saturation curves etc.), and th e coupling of thermal effects such as, variation in vapor pressure/density with temperat ure etc. (Utturkar et al., 2005a-b; Utturkar, 2005; Hosangadi and Ahuja, 2005). Figure 6-1 (Lemmon et al., 2002) illustrates the behavior of th e physical properties of two representative cryogens, liquid nitrog en and liquid hydrogen, in the liquid-vapor saturation regime. The temperatur e range in the plots is chosen based on the general operating condition of the fluids, which is close to the critic al point. We observe substantial variation in the

PAGE 202

202 material properties with changes in the temperat ures. Relatively, the variation in the material properties (vapor pressure, liquidvapor density ratio, and latent h eat of vaporization, etc.) with temperature for liquid hydrogen is higher th an that observed fo r liquid nitrogen. As summarized by Utturkar (2005), dynamic simila rity in case of isothermal cavitation is dictated by the cavitation number (Equation (6.1) with constant vapor pressure pv). In the context of cryogenic cavitation, the actual cavit ation number needs to be defined as follows (Brennen, 1994): 2() 0.5vc lU p pT (6.1) where p is the reference pressure, U is the reference velocity, l is liquid density, and cT is the temperature in the cavity. The local cavita tion number can be related to the far-field cavitation number (based on the vapor pressure there) by the following fi rst-order approximation (Brennen, 1994): 21 ()(). 2v lcdp UTT dT (6.2) Equation (6.2) clearly indicates that the cumulativ e effect of the afor esaid factors would produce a noticeable rise in the local cavitation number, and subsequently, suppress the intensity of cavitation. Representative values of the vapor pressure gradients ( vdp dT ) in the operating temperature regime for liquid nitrogen and hydrogen are 20kPa/K and 37kPa/K, respectively. The influence of thermal eff ects on cavitation has been numer ically and experimentally investigated as early as 1956. Stahl and Stepanoff (1956) introduced a B-factor method to estimate temperature drop in terms of the ra tio of vapor volume to liquid volume during vaporization process; and used it to appraise head depression due to thermodynamic effects in

PAGE 203

203 cryogenic cavitation. Gelder et al. (1966), Ruggeri and Moore (1969), and Hord (1973a) simplified, and extended this B-factor theory to account for dynamic effects via bubble growth, varying cavity thickness, and convective heat transfer. Holl et al (1975) presented an entrainment theory to correlate the temperatur e depression and flow parameters. Cooper (1967) used a non-dimensional vaporization parameter along with a baratropic equation of state to incorporate pressure depression due to therma l effects while numerically simulating liquid hydrogen pumps. Brennen (1994, 1995), and Franc et al. (2003) presented methods of assessing thermodynamic effects on bubble dynamics by incor porating it into Raylei gh-Plesset equation. We refer the reader to the works by Hosangadi and Ahuja (2005), and Uttu rkar (2005) for more insight in the applicat ion regime and pros/cons of these methods. Experimental and Numerical Mode ling of Cryogenic Cavitation Hord (1973a-b) conducted by far the most comprehensive experiments on cryogenic cavitation with liquid nitrogen and liquid hydrogen under different sets of inlet velocities and temperature conditions, and employi ng a variety of geometries ( hydrofoil and ogives of varying diameters). Temperature and pressure data in the cavitating region, wh ich have been commonly employed for numerical valida tion (Hosangadi and Ahuja, 2003), was acquired over the geometries at regular spatial intervals us ing thermocouples and pressure sensors. There have been limited computational stud ies for cryogenic cavitating flows. The key challenges for numerical computations are the presence of strong non-lin earity in the energy equation and the temperature dependence of physic al properties (Lemmon et al., 2002) such as, vapor pressure and densit y ratio (as seen from Figure 6-1(A) and Figure 6-1(B)). The main features of a few selected numerical studies (Reboud et al ., 1990; Delannoy and Reboud, 1993; Deshpande et al., 1997; Lertnuwat et al., 2001; Tokumasu et al., 2002, 2003; Hosangadi and

PAGE 204

204 Ahuja, 2003; Hosangadi et al., 2003; Rachid, 2003; Rapposelli a nd Agostino, 2003; Utturkar et al., 2005a-b) are summarized in Table 6-1. A transport-based cavitation mode l, proposed by Merkle et al (1998), has been adopted in multiple efforts for non-cryogenic conditions. The sa me basic framework can also be used to simulate cryogenic cavitating flow s, subject to proper modificati on of the model parameters to better reflect the transport propert ies of cryogenic fluids and phys ical mechanisms of the flow environment. Utturkar et al. ( 2005b) showed that the accuracy of predictions is affected by the model parameters, and there is a need to calibra te the model parameters to account for cavitation in cryogenic conditions. As discusse d earlier, the temperature depende nt material properties also play a significant role in the predictions. These material properties are typically obtained from the models developed using the experimental da ta, and, naturally, contain uncertainties. The numerical approach employed in the present study has been previously tested, and documented against different flow problems (Utturkar et al 2005a-b; Utturkar, 2005). Our focus here is to address the validation aspect, namely, to what extent a transport-based cavitation model can reproduce the cryogenic cavitati on physics, and how can we improve its performance. Furthermore, realizing that the fluid properties and thermal envir onments add further challenges to cavitation models, the interplay between flui d and flow characteristics will also be probed. Surrogate Modeling Framework To facilitate the formulation of a suitable mathematical framework to probe the global sensitivity (Sobol, 1993) of the above-mentioned cavitation model and uncertainties in fluid properties in cryogenic environment, we first construct suitable su rrogate models (Queipo et al., 2005). Since the fidelity of surr ogate models is critical in determining the success of the sensitivity analysis and model validation, we ad opt multiple surrogate models to help ascertain the performance measures. There are alternativ e surrogate models (for example, polynomial

PAGE 205

205 response surface, kriging etc.) but the model that re presents a particular function the best is not known a priori Consequently, the predictions using diffe rent surrogate models have a certain amount of uncertainty. Goel et al. (2006b) sugg ested that the simultaneous use of multiple surrogate models may be beneficial to quantify, and to reduce uncertainties in predictions. They also proposed a cross-validationerror based weighted average su rrogate model that was shown to represent a wide variety of te st problems very well. The global cross validation error used in this study is also known as predicted residual sum of squares (PRESS) in polynomial response surface approximation terminology. Here, we used four surrogate models, polynomial response surface approximation (PRS), kriging (KRG), radi al basis neural netw ork (RBNN), and PRESSbased weighted average surrogate (PWS) model constructed using these three surrogates. These surrogate models are used to calibrate the model parameters of the present transport based cavitation model (Merkle et al., 1998) in cryogenic conditions. While the surrogate model approach has become popular for fluid device optimization (Li and Padula, 2005; Queipo et al., 2005; Goel et al., 2006d), its appl ication in CFD model validati on and improvement has not yet been actively pursued. The present wo rk represents such an endeavor. Scope and Organization Specifically, the objectives of this paper are: To study the physical aspects of cavitati on dynamics in cryogenic environment and perform model (and code) validation, To conduct a global sensitivity analysis to a ssess the sensitivity of the response to temperature dependent material prop erties and model parameters, and To calibrate the parameters of a transport-based cryogenic cavitation model for suitable flow conditions, and to account for different fluid properties. The organization of this chapter is as follo ws. The governing equations and the numerical approach followed in this paper are described in next section. Afterwards we present results of

PAGE 206

206 global sensitivity analysis to measure the relati ve importance of different model parameters and uncertainties in material properties; and calib ration of model parameters. The influence of thermal effects on the cavitation model is studied in detail in a following section. Finally, we summarize the major outcome of this study. Governing Equations and Numerical Approach The set of governing equations for cryogeni c cavitation under th e homogeneous-fluid modeling strategy comprises the conservative form of the Favre-averaged Navier-Stokes equations, the enthalpy equation, the k two-equation turbulence closure, and a transport equation for the liquid volume fraction. Th e mass-continuity, momentum, enthalpy, and cavitation model equations are given below: () 0,mj m ju tx (6.3) () () 2 [()()], 3mijj miik tij jijjikuuu uuu p txxxxxx (6.4) [()][()][()], PrPrmt mvmjv jjmtjh hfLuhfL txxx (6.5) () ,lj l ju mm tx (6.6) where m is the density of the fluid-vapor mixture, ju denotes the components of velocity, p is pressure, ,mt are mixture laminar and turbul ent viscosities, respectively, h is sensible enthalpy, fv is the vapor mass fraction, L is the latent heat of vaporization, Pr is the Prandtl number, l is the fraction of liquid in the mixture, and mm are the source terms for the cavitation model. The subscript t denotes turbulen t properties, l represents the liquid state, v

PAGE 207

207 represents the vapor state, and m denotes the mixture proper ties. The mixture property m sensible enthalpy, and the vapor mass fraction are respectively expressed as 1(),mllvl (6.7) ,PmhCT (6.8) (1) .vl v mf (6.9) For the problems studied here, we neglect the effects of kine tic energy and viscous dissipation terms in the energy equation (6.5) (0.5(Re) O,6Re~(10) O ), because the temperature field in cryogenic cavitation is mainly dictated by the phenomenon of evaporative cooling (refer to the section on the effect of thermal environment). Transport-based Cavitation Model Physically, the cavitation pro cess is governed by thermodynami cs and kinetics of the phase change process. The liquid-vapor conversion asso ciated with the cavitat ion process is modeled through m and m terms in Equation (6.6), which represent condensation and evaporation, respectively. The particular form of these phase transformation ra tes, which in case of cryogenic fluids also dictate the heat transfer process, forms the basis of the cavitation model. The liquidvapor condensation rates for the present trans port based cavitation model (Merkle et al., 1998) are: 22max(0,)(1) min(0,) ;, (0.5)(0.5)prodvl destlvl vllCpp Cpp mm UtUt (6.10) where destC and prodC are the empirical model parameters controlling the evaporation and condensation rates, pv is the vapor pressure, ,vl are the vapor and liquid densities, U is the reference velocity scale, and t is the reference time scale, defined as the ratio of the

PAGE 208

208 characteristic length scale D to the reference velocity scale U (/ tDU ). Merkle et al. (1998) validated this cavitation m odel with the experimental data for non-cryogenic fluids (e.g., water) and specified 1.0destC and 80.0prodC as optimal model parameters (referred here as original parameters). However, Utturkar ( 2005), and Hosangadi and Ahuja (2005) found that the previously calibrated va lues of the Merkle et al (1998) cavitation model (1.0destC and 80.0prodC ) are inadequate to provide a good match with the experiment al data under the cryogenic condition. Consequently, Uttu rkar et al. (2005) suggested 0.68destC and 54.4prodC (obtained via numerical experimentation) as more appropriate model parameters for liquid nitrogen. However, they noted difficulties in the simultaneous prediction of the temperature and pressure profiles on the surface of the test geometry. The present efforts represent advances in the practice of multi-surrogate model approach for code validation. Thermodynamic Effects The evaporation and condensation processes resu lt in absorption and release of the latent heat of vaporization that regulat es the thermal effects. Furthe rmore, there is a significant variation in the physical properties (,,,,,,and )lvvP p CKL with temperature (Lemmon et al., 2002) in the operating range that manifests coupling between different governing equations, and underscores the importance of thermal effect s in cryogenic cavitation. As indicated by phase diagram in Figure 6-1(D), the physical pr operties (liquid and vapor de nsities) are much stronger functions of temperature than pressure, and one can fairly assume the respective phase values on the liquid-vapor saturation cu rve at a given temperature. We illustrate the impact of thermal effects in cryogenic environment due to phase change on temperature predictions, and thermo-sensible ma terial properties on te mperature and pressure

PAGE 209

209 predictions by analyzing energy eq uation and cavitation sources term s. Firstly, we separate the latent heat terms in th e energy equation (Equation (6.5)) onto the right-h and-side to obtain temperature-based form of the energy equation as follows, energy source/sink term [][][] PrPr .t mPmmjPmP jjLmtj mvmjv jm mfLT CTuCTC txxx ufL tx (6.11) As can be seen from Equation (6.11), the lumped latent he at terms manifest as a nonlinear source term into the energy equation, and physically represent the latent heat transfer rate or the influence of phase change during evaporation and condensat ion. The spatial variation of thermodynamic properties and the evaporative cooli ng effect are intrinsica lly embedded into this transport-based source term causing a c oupling of all the governing equations. The influence of thermal effects due to thermo -sensible material properties can be further analyzed by studying the cavit ation source terms (Equation (6.10)) more closely. Firstly, we consider a case when 0vpp i.e., 0 m and the evaporation source term can be written as, 2()(), 0.5lv destl vlpp C mRTT U t (6.12) where destlC t R is the temperature dependent liquid-vapor density ratio, and is the cavitation number. Expanding Equation (6.12) using the Taylors se ries and utilizing Equation (6.2), we get, 2()() 0.5v T T lT dp dR RTTT m U dT dT (6.13)

PAGE 210

210 2 2()()()()() 0.5v T T lT dp dR RTTTTRTOT m U dT dT (6.14) Similarly, we can do an analysis of c ondensation source term for the condition 0vpp such that 0 m. Then, 2(1) (), 0.5v lprodlC pp mT U t (6.15) where (1)prodlC t As before, using Taylors series, 2 2()() 0.5v T lT dp TOT m U dT (6.16) It can be concluded from Equations (6.14) and (6.16) that the thermal effects influence the cavitation source terms in two ways, (1) thermal rate of change of li quid-vapor density ratio TdR dT, which is negative (Figure 6-1(B)), and (2) thermal rate of change of vapor pressure v Tdp dT, which is positive (Figure 6-1(A)), thus illustrating competing influences of thermal effects. It is obvious that the degree of influe nce of thermal effects depends on the choice of operating fluid, and th e operating conditions (, Tp ) due to the non-linear variation of material properties with temperature. Speed of Sound (SoS) Model Numerical modeling of sound propa gation is a very important f actor in accurate prediction of cavitation in liquid-vapor multiphase mixtur e. The speed of sound affects the numerical calculation via the pressure correction equation by c onditionally endowing it with a convectivediffusive form in the mixture region. Past st udies (Senocak, 2002; Senocak and Shyy, 2002; Wu

PAGE 211

211 et al., 2003; Senocak and Shyy, 2004a ) discuss in detail the mode ling options, their impact and issues. The SoS model used here is outlined below, SoS(1).lCC (6.17) The density correction term in th e continuity equation is thus c oupled to the pressure correction term as shown below. ''. Cp (6.18) In the pure liquid region, we recover the diffusi ve form of the pressure equation. Senocak and Shyy (2002, 2004a-b) suggested an O (1) value for the constant C to expedite the convergence of the iterative computational algo rithm. However, their recommendation is valid under normalized values for inlet velocity and liq uid density. Since we employ dimensional form of equations for cryogenic fluids, we suggest an O(21/ U ) value for C (Utturkar, 2005), which is consistent with the above suggestion in terms of the Mach number regime. Turbulence Model The k two-equation turbulence model with wa ll functions is presented as follows (Launder and Spalding, 1974): () () [()],jmj mt tm jkjuk k k txxx (6.19) 122() () P[()].mj m t tm jjju CC txkkxx (6.20) The turbulence production term (Pt) and the Reynolds stress tensor are defined as: P;, 2 (). 3i tijijmij j mijj i mijt j iu uu x ku u uu x x (6.21)

PAGE 212

212 The parameters for this model, 11.44 C, 21.92 C 1.3 1.0k are adopted from the equilibrium shear flow calibration (Shyy et al., 1 997) and the turbulent viscosity is defined as: 2,0.09.m tCk C (6.22) Of course, the turbulence closure and the eddy viscosity levels can affect the outcome of the simulated cavitation dynamics, especially, in case of unsteady simulations. For detailed investigations of turbulence mode ling on cavitating flow computati ons, we refer to recent works by Wu (2005), Wu et al. (2003a-c, 2005), and Utturk ar et al. (2005). Vaidya nathan et al. (2003) conducted a sensitivity analysis to assess the inte rplay between turbulence model parameters and the cavitation model parameters in non-cryogenic environment. They observed that multiple combinations of turbulence parame ters and cavitation model paramete rs yield same performance. To appraise the influence of turbulence m odeling on the current problem, we follow the previous investigation by Vaidyanathan et al. (2003); and compare the standard k turbulence model (Launder and Spalding, 1974) and a non-equilibrium k turbulence model developed by Shyy et al. (1997) that accounts for absenc e of equilibrium between the production and destruction of dissipation of tu rbulent kinetic energy. Both turbulence models offered very similar predictions within the experimental uncertain ties (refer to the supp lementary results at the end of this chapter). Hence, we restrict the sc ope of this study to th e calibration of cryogenic cavitation model parameters with the standard k turbulence model. Numerical Approach The governing equations are numerically solv ed using a CFD code STREAM (Thakur et al., 2002) based on a pressure-based algorithm a nd the finite-volume approach. We use multiblock, structured, curvilinear grids to analyze flow over different geometries in this chapter. The viscous terms are discretized by second-order acc urate central differencing while the convective

PAGE 213

213 terms are approximated by the second-order accurate controlled variations scheme (CVS) (Shyy, 1994; Shyy and Thakur, 1994). The use of CVS sc heme prevents oscillations under sharp gradients caused by the evaporation source term in the cavitation model while retaining second order of formal accuracy. The pressure-velocit y coupling is implemente d through the extension of the SIMPLEC (Patankar, 1980; Versteeg and Ma lalasekara, 1995) type of algorithm cast in a combined Cartesian-contravariant formulation (Thakur et al., 2002) for the dependent and flux variables, respectively, followe d by adequate relaxation for each governing equation, to obtain steady-state results. The temperat ure dependent material properti es are updated from the NIST (Lemmon et al., 2002) database at the end of each computational iteration. Results and Discussion Test Geometry, Boundary Conditions, and Performance Indicators We simulate flows over a 2-D hydrofoil and an ogive in cryogenic environment, which serve as the benchmark problem s for validating the cryogenic cav itation models. Hord (1973a-b) experimentally investigated the flow over these geometries inside suitably designed wind-tunnels (Figure 6-2(A)). He reported average pressure an d temperature data at five probe locations over the body surface for different cases that are refere nced alpha-numerically in two reports (Hord 1973a-b). We employ (1) Case C for liquid nitrogen (6Re9.110, 1.7, 83.06, TK liquid-vapor density ratio = 95, hydrof oil), and (2) Case D for liquid hydrogen (7Re2.010, 1.57, 20.70, TK liquid-vapor density ratio = 47, hydrofoil), to conduct optimization and sensitivity studies. A simplified geometry, schematic computati onal domain, and the boundary conditions for the two test problems are shown in Figure 6-2. The computational grids consists of 32070 and 34070 non-uniformly distributed grid points for the hydrofoil and the ogive, respectively, such

PAGE 214

214 that the cavitation regime is adequately resolv ed, and the deployment of wall functions (Rogallo and Moin, 1984) near the no-slip boundary conditi ons is allowed (Utturkar et al., 2005a-b). The inlet boundary conditions are implemented by stipula ting the values of the velocity components, phase fraction, temperature, and turbulence quant ities from the experimental data (Hord, 1973ab). At the walls, pressure, phase fraction, and turbulence quantities are extrapolated, along with applying the no-slip (in the form of the wall function, Verste eg and Malalasekara, 1995) on velocity, and adiabatic conditions on temperature. Pressure and ot her variables are extrapolated at the outlet boundaries while enforcing global mass conserva tion by rectifying the outlet velocity components. In addition, we hold the pressure at the refe rence point (illustrated in the experimental reports, Hord, 1973a-b) constant at the reference value specified in the experiments. Though the cavitating flows are unst eady in nature, no time-dependent data was reported by Hord. Utturkar (2005) showed that the flows considered here can be modeled as steady state. Furthermore, for sheet cavitati ng flows, it has been shown by Senocak and Shyy (2002) that steady state computations can well capture the es sential flow features, and reach close agreement with the measurements. Conseque ntly, we modeled the flow as steady state. The quality of the predictions is num erically quantified by computing the L2 norms of the differences between computed and e xperimental values of pressure (diffP ) and temperature (diffT ) at each of the five probe locati ons on the surface of hydrofoils. Th ese metrics are desired to be low to obtain good prediction quality. Surrogates-based Global Sensitivity Assessment and Calibration Since minor changes in flow environment can l ead to substantial changes in the predictions in cryogenic environment (Utturkar et al., 2005b), it is imperative to appraise the role of model parameters and uncertainties in material proper ties on the predictions. In this section, we

PAGE 215

215 characterize the parameters that significantly a ffect predictions using surrogate-based global sensitivity analysis (GSA), and then calibrate the cr yogenic cavitation model parameters. In the following, we present in detail the process of model parameter optimization and sensitivity evaluations based on Case C for liquid nitrogen. A corresponding study based on Case D for liquid hydrogen has also been carried out. To save space, we do not repeat the detailed information and only report the outcome. Global Sensitivity Assessment We employ variance-based, non-parametric GS A method, proposed by Sobol (1993) (refer to Appendix C), to evaluate the sensitivity of cryogenic cavitation model with respect to model parameters and material propertie s, and to get an insight about the factors that influence the accuracy of predictions. We can study the infl uence of uncertainty in different material properties ,,,,and vlpLKC and model parameters ,,anddestprodCCt However, to keep Re (Reynolds number based on upstream flow) and (cavitation number based on upstream flow) constant for the given case, and to keep the co mputational expense reasonable, we select one material property each from ener gy equation and cavitation transport -equation. Consequently, we choose ,,,anddestprodvCCL as variables. The model parameters, anddestprodCC are perturbed on either side of the valu es proposed by Utturkar (2005) (0.68destC ;54.4prodC ) by 15%, and the material properties, andvL are perturbed within 10% of the values they assume from the NIST database (Lemmon et al ., 2002) (perturbations are denoted as ** andvL). The ranges of the variable s are given in the Table 6-2. The performance of the cryogenic cavitation model is characterized by prediction errors diffP and diffT defined in the previous section.

PAGE 216

216 To conduct a global sensitivity analysis, the re sponse function is decomposed into additive functions of variables and inte ractions among variables. This allows the to tal variance ( V ) in the response function to be expressed as a combina tion of the main effect of each variable ( Vi), and its interactions with other variables ( ViZ). The sensitivity of the response function with respect to any variable is measured by computing its sensit ivity indices. The sensitivity indices of main effect ( Si) and total effect (total iS ) of a variable are given as follows: () ,.total iiiZ iiVVV SS VV (6.23) Surrogate construction In the absence of a closed form soluti on characterizing the objective functions (diffP and diffT ), different components of vari ances are evaluated using numeric al integration. Since direct coupling of CFD simulations with numerical integration schemes is computationally expensive, we use surrogate models of the performan ce indicators. We evaluate the responses diffP and diffT using CFD simulations at 70 data points (combinations of variables) selected via face-centered cubic composite design (FCCD, 25 points), and Latin hypercube sampling (LHS, 45 points) experimental designs. We construct four su rrogates: polynomial response surface approximation (PRS), kriging (KRG), radial basis neural network (RBNN), and a PRESS-based weighted average surrogate (PWS) model (refer to Chapters 2 and 4) of both responses in scaled variable space (all variables are scaled between zero and one such that zero corresponds to the minimum value). We use reduced cubic polynomials for PRS, and specify spread value of 0.5 for RBNN. Relevant details of the quality of fit of surrogate models are summarized in Table 6-3. Low PRESS and low root mean square error compared to the range of the function indicate that the two responses are adequately a pproximated by the surrogate m odels. For both objectives, RBNN

PAGE 217

217 surrogate is the worst of the three surrogates, a nd kriging is the best (c ompare PRESS errors). The contribution of different surrogate models to the PWS model, given by the weights in Table 6-3, accounts for the poor performance of RBNN by assigning it a low weight. We employ Gauss-quadrature integration scheme, with ten Gau ssian points along each di rection, to evaluate sensitivity indices. Responses at the Gaussian points are evaluated us ing surrogate models. The influence of the choice of surrogate mode l on the prediction of sensitivity indices is illustrated in Figure 6-3 with the help of main effects for diffP Since all surrogates predict similar trend about the importance of different variable s, we may conclude that the variability in predictions due to the choice of surrogate model is small. Main and interaction effe cts of different variables We show sensitivity indices of main effects and total effects (estimated via PWS model) in Figure 6-4 to quantify the relative im portance of different parameters on diffP and diffT The sensitivity indices of the main e ffects (pie-charts) suggest that destC is the most influential, and prodC is the least influential parameter within the selected range of variation, i.e., the cavity morphology is more influenced by the evaporation rate term, compared to the condensation term that is influential in determining the pressure recovery rate posterior to cavity closure. The variability due to material properties as indicated by the sensitivity indice s associated with vapor density v and latent heat L is smaller compared to the model parameters, but is not negligible. The variability in vapor density v influences both pressure and temperature predictions, but has more significant impact on pressure predictions. On the other hand, the variability in latent heat ( L ) within the selected uncertainty range aff ects temperature predic tions only. Relatively moderate influence of variati on in latent heat on temperatur e predictions does not lead to significant variation in pressure predictions, because the latter is more significantly influenced by

PAGE 218

218 the parameters that directly ap pear in cavitation source terms, and have more pronounced effect. The differences between the main and the total sensitivity indices for both diffP and diffT highlight the importance of interaction among parameters. The interaction between destC and v is particularly stronger than other parameters. Validation of global sensitivity analysis We validate the results of global sensitivity analysis by evaluating the variation in responses diffP and diffT when only one parameter is changed at a time and remaining parameters are fixed at their mean values (mean of the sele cted range). We assign si x equi-spaced levels for each variable and calculate the variation in re sponses using Abramowitz and Stegun (1972) sixpoint numerical integration scheme that has seventh order accur acy. The sensitivity indices of main effects of different parameters on diffP and diffT are shown in Figure 6-5. The results obtained by actual computations ar e in-sync with the findings of the global sensitivity analysis, that is, the model parameter destC and uncertainty in vapor pressure v are the most influential parameters for accurate pressure and temperatur e predictions, and uncerta inty in latent heat L is important only for predicting temperature accurately The differences in the actual magnitude of sensitivity analysis results can be explained by acc ounting for, (1) the small number of points used for actual sensitivity computations, (2) neglect of interaction terms, and (3) the errors in surrogate modeling. Nevertheless, the important trends in the results are captured adequately. The results indicate that the performance of cryogenic cavitation model is more susceptible to the variability in temper ature-dependent vapor density v compared to the variability in latent heat L This calls for more attention in developing accurate models of v Also, the variables, which appear in cavitation source terms ( m or m ), may tend to register greater influence on

PAGE 219

219 the computed results. Thus, in tuitively, reference velocity U reference time scale t, and liquid density l which are omitted from the present GSA, are expected to induce large variability in the computation, as compared to other omitted properties such as thermal conductivity K and specific heat pC Furthermore, as depicted by sensitivity indices in Figure 6-4, largely the impact of different parameters is expected to be th e same on pressure and temperature due to the tight couplin g between various flow variables. Calibration of Cryogenic Cavitation Model In the previous section, we observe d that one of the model parameters destC significantly influences the performance of the present cryoge nic cavitation model. This information can be used to calibrate the present cavitation model parame ters associated with di fferent fluids. Firstly, we optimize the model parameter (destC ) of the present cryogenic cavitation model using the benchmark case of liquid nitrogen flow ove r a hydrofoil C, while fixing the model parameter prodC at 54.4 (minimal influence on predictions) and assuming the temperature dependent material properties obtained from th e NIST database (Lemmon et al., 2002) to be accurate. We observed that increasing destC increases diffP and decreases diffT As shown in Figure 6-6, the parameters, which yield good pressure predictions (low diffP ), produce large errors in temperature predictions (high diffT ) and vice-versa (low diffT but high diffP ). So this model calibration/system identification problem is a multi-objective optimization to simultaneously minimize diffP and diffT by varying the model parameter destC Since the cavitation dynamics primarily impacts pressure fluctuations, we seek to improve the pressure prediction capabilities of the present cryogenic cavitation model with out incurring a significa nt deterioration of

PAGE 220

220 temperature predictions. Consequent ly, we allow the model parameter destC to vary between 0.578 and 0.68. Surrogate modeling of responses To represent the responses diffP and diffT using surrogate models, we sample data using CFD simulations at nine locations. Th e location of points, and corresponding diffP and diffT shown in Figure 6-7, clearly exhibit the conflicting na ture of the two objectives. As before, we construct PRS, KRG, RBNN, a nd PWS models. We approximate diffP with a reduced cubic PRS and diffT with a reduced quintic PRS. The relevant metrics, depicting the quality of surrogate models, and the weights associated with differe nt surrogates in PWS model are summarized in Table 6-4. Low PRESS and low RMS error indicate that the two responses are well represented by all surrogate models. While no single surrogate model performs the best for both responses, RBNN is the worst of the three surrogates consider ed here. The weights associated with different surrogates in PWS model also reflect the same PRESS and maximum error measures indicate that the PWS model obtained by averaging different surrogate models performs significantly better than the worst surrogate, and the perfor mance is comparable to the best surrogate. Multi-objective optimization Different methods to solve multi-objective op timization problems can be obtained from different texts (Chankong and Haimes, 1983; Steuer, 1986; Sen and Yang, 1998; Miettinen, 1999; Deb, 2001). We convert the present multiobjective optimization problem into a single objective optimization problem by combin ing the two performance metrics (diffP and diffT ) using weights (weighted sum strategy, Deb 2001); or by treating one pe rformance metric as objective function, and the second performanc e metric as constraint function ( -constraint strategy, Chankong and Haimes, 1983). We obtain many candi date Pareto optimal solutions by varying

PAGE 221

221 the weights for weighted sum stra tegy, and constraint values for -constraint strategy. After removing dominated and duplicate solutions from the set of candidate solutions, the function space and the variable space illust ration of Pareto optimal front (POF) obtained from different surrogate models is shown in Figure 6-8. We observe that different POF obtained by using multiple surrogate models are close to one another in both function and variable space. All surrogate models predict that a small increase in diffT will lead to significant reduction in the diffP (Figure 6-8(A)). However, we note that the pressure fluctuations play more important role in determining the cavity morphology and loadings on turbomachinery. Consequently, accurate pressure prediction is our primary objective. We select a tradeoff solution on the POF for va lidation, such that noticeable reduction in diffP can be realized without incurr ing significant deterioration of diffT Corresponding destC (referred as best-compromise parameter), computed ( via CFD simulations), and surrogate predictions of the two responses (diffP and diffT ) are given in Table 6-5. The errors in predictions of diffP and diffT are small for all surrogates, except RB NN. Clearly, PWS model yields the best predictions on both objectives. A graphical comparison of surf ace pressure and temperature profiles obtained with the original (Merkle et al., 1998) and optimal parameters of present transport based cavitatio n model is shown in Figure 6-9(A). The calibrated model parameters yield 72% reduction in diffP by allowing 3.8% increase in diffT compared to the original parameters (Merkle et al., 1998). The improvements in the surface pressure prediction, that is the more important criterion to estimate loadings due to cavitation, are obvious whereas the deterioration in the temperature predictions is small. From cavitation dynamics point of view, the main issue with the predictions using original parameters was the poor prediction of the cavity closure region. The best-compromise model

PAGE 222

222 parameters reduce the evaporation source term by reducing the model parameter destC This change brings favorable changes in the cavit y closure region by allowi ng an earlier onset of condensation, and hence, faster recovery of the pressure as was observed in experiments. Optimization outcome for hydrogen We repeat the model calibration exercise fo r liquid hydrogen fluid considering case D (hydrofoil) as the benchmark case. The corresponding best-compromise destC parameter is found to be 0.767. Notably, the ratio of best-compromise and baseline value of destC for both nitrogen and hydrogen is 0.94. The surface pressure and temperature profiles shown in Figure 6-9(B) clearly demonstrate improvements in pressu re predictions with the calibrated parameters compared to the original para meters (Merkle et al., 1998). Validation of the calibrated cavitation model The calibrated model parameters of the pres ent cryogenic cavitation model are validated by simulating additional benchmark cases for two geometries (hydrofoil, Hord 1973a; and ogive Hord, 1973b) using different working fluids, liq uid nitrogen and liquid hydrogen. The cases considered in this study, along with the corr esponding best-compromise model parameters, are enlisted in Table 6-6. We compared the surface pressure and temperature profiles predicted using the cryogenic cavitation model w ith the calibrated (best-compromise) and the original model parameters (Merkle et al., 1998) in Figure 6-10. The model with best-compromise parameters exhibits substantially more robust performan ce for different geometri es, fluids, and flow environments. The results presented here cl early spell the merits of employing a systematic methodology to examine the role of cavitation model parameters. In the present case, the implications of the optimization on pressure and thermal fields are in consistent. While this indicates the merits of

PAGE 223

223 adopting a multi-objective optimization framework, as has been conducted here, it also suggests that there is a need for further investigation of the effect of thermal variations on cryogenic cavitating flows. It should also be reiterated th at in terms of practical impact, the pressure prediction is our primary objective because pressure fluc tuation is what caus es poor performance or even catastrophic situ ation of fluid machinery. In the fo llowing, we offer further assessment of the thermal effect. Investigation of Thermal Effe cts and Boundary Conditions In the previous section, we observed disc repancies in simultaneous predictions of temperature and pressure. To understand the underl ying issues related to thermodynamic effects, we study the influence of thermo-sensitive materi al properties and the role of thermal boundary condition on the hydrofoil wall for Case 290C. We use the best-compromise values of model parameters (liquid N2) in all ca ses. Again, we use standard k turbulence model (refer to supplementary results at th e end of this chapter). Influence of Thermo-sensit ive Material Properties Firstly, we highlight the influence of thermal effects via phase change and thermo-sensitive properties on the temperature and pressure predictions in Figure 6-11. The difference of pressure and free stream vapor pressure (()v p pT ), and the difference of pr essure and the actual vapor pressure (based on temperature, ()v p pT ) are shown in Figure 6-11(A). The cavitation in cryogenic environment differs fr om non-cryogenic environment in two ways: (1) under-shoot at the leading edge of the hydrofoil indicates slower pressure recove ry in cryogenic environment, and the influence of cooling due to heat abso rption than that observe d in the non-cryogenic environment, and (2) the vapor pr essure in the cavity in cryogenic environment is not constant (continuous increase) due to the va riation in temperature. This increase in vapor pressure (as

PAGE 224

224 marked by v p in Figure 6-11(A)) is attributed to the variation in temperature (Figure 6-1). The change in vapor pressure affects the cavitation source terms (Equations (6.14) and (6.16)) and resultant liquid-vapor fraction, which impacts the source terms in energy equation to enforce coupling of thermal effects in governing equatio ns. To contrast the thermal effect on the cavitation dynamics, we also show a solution obtained by assigning a zero latent heat in Figure 6-11(A). With zero latent heat and an adiabatic wall condition, the fluid field exhibits a constant temperature throughout, resulting in a constant vapor pressure. Th is isothermal cavitation case yields a substantially larg er cavity with near constant pressure on the surface inside cavity, which is quite different from the experimental measurement. The temperature on the surface of hydrofo il in cavitating conditions is shown in Figure 6-11(B). The significant drop in te mperature near the leading edge of the cavity is explained as follows. The phase change, as modeled, is dictated by the vapor pressure. When the local pressure in the flow falls below the vapor pr essure, evaporation begins instantaneously as indicated by the transport model. This results in absorption of the latent heat of vaporization to facilitate the phase change. However, unlike boili ng heat transfer where heat is continuously supplied through an external heat source, the heat transfer in ca vitating flow largely stems from the convective and conductive heat transfer, and the latent heat releas e/absorption within the fluid, with external heat s ource playing minor roles. Cons equently, a decrease in fluid temperature is observed in the cavity region. As we approach the cavity closure region, the condensation of fluid releases la tent heat, increasing the fluid temperature locally. Furthermore, since the condensation process is dictated by th e vapor pressure (with the local temperature effect exerted indirectly via the change in vapor pressure in response to the temperature field), the rate of latent heat release can be fast in comparison to th e rate of convective and conductive

PAGE 225

225 heat transfer, and consequently, in simulations, we observe an ove rshoot in temperature profile. The experiments also show an increase in temp erature of the fluid in the closure region but probably due to the lack of sufficient number of probes on the surface, the existence of the overshoot could not be ascertained. Overall, the pressure predictions on the hydr ofoil surface follow the same trends as observed in experiments. However, we note differences in predic tions with experimental data near the closure region of the cavity. Impact of Boundary Conditions To investigate the discrepancy between experi mental and predicted surface pressure and temperature profiles, we also assess the impact of different thermal boundary conditions on the predictions. While all the walls on the wind tunnel are modele d as adiabatic, the hydrofoil surface is modeled as either adiabatic (Neumann boundary), or specified temperature (Dirichlet boundary) wall. The temperature profile required for implementing Dirichlet boundary condition is obtained by interpolating/extr apolating the experimental temper ature at five probe locations on the surface of hydrofoil. The predicted pressure and temperature profil es on the surface of hydr ofoil, obtained with different thermal boundary conditions, are compared with the experimental data (Hord, 1973a) in Figure 6-12. The introduction of heat transf er through the hydrofo il surface by Dirichlet boundary condition has little influence on the pressure distribution. With the given Reynolds number, the heat transfer at th e hydrofoil surface is relatively sma ll compared to the impact of latent heat, and subsequently, only minor variatio ns in the vapor pressures are observed. In the cavity closure region, the latent heat released during condens ation cannot be redistributed via convection and conduction fast enough, resulting in an overshoot in temperature there. The temperature profile on the first computation point above the hydrofoil surface, shown in Figure

PAGE 226

226 6-12(C), also indicates that the effect of heat transfer due to Dirichlet boundary condition is largely restricted to the bounda ry, and has minimal influence on the flow inside the cavity. Overall, it can be said that the effect of thermal boundary condition on the hydrofoil surface has little impact on th e performance of the present cryogenic cavitation model. Conclusions In this chapter, we presented results of m odel validation and improvement of a transportbased cryogenic cavitati on model using benchmark experiment al data for 2-D hydrofoils and ogives provided by Hord (1973a-b). We used surr ogate-based global sensitivity analysis to study the role of model parameters a nd uncertainties in temperature de pendent material properties. The model parameters, originally used in present transport based cavitation model (Merkle et al., 1998), were calibrated for cryogenic environment using multiple surrogates, and optimization techniques. The main conclusions of this study are as follows. Performance of the current cryogenic cavitati on model was more infl uenced by the model parameter associated with the evaporation source term (destC ) than the uncertainty in material properties. The high sensitivity in dex associated with temperature dependent vapor density indicated significant impact on the accuracy of pressure and temperature predictions. The variations in the latent heat of vaporization influen ced the accuracy of temperature predictions only. The model parameter associat ed with the production source term in the present cryogenic cavitation model (prodC ) did not influence predictions. The best-compromise model parameters select ed for present transport based cavitation model (Merkle et al., 1998) were ,20.6392destLNC ,20.767destLHC and 54.4prodC The choice of these parameters reduced the im portance of evaporation source term, which resulted in earlier onset of th e condensation and hence, the cav ity closure. Utturkar et al. (2005b) had made adjustment based on trial-and-error (,20.68destLNC ,20.816destLHC and 54.4prodC ) and limited optimization processes. In their approach, there was a lack of probing in regard to the sensitivity and r obustness of the outcome. The merits of the present effort lie in a systematic use of the optimization and sensitivity methodology, a detailed assessment of the thermal boundary condition, and a reasonably broad range of fluid and flow cases. Simultaneous use of multiple surrogate models evidently helped in increasing confidence in the results of global sensitivity analysis and optimization. The predictions using PRESS-

PAGE 227

227 based weighted average surrogate model were more accurate than individual surrogate models. The impact of thermal boundary conditions on the prediction of flow was apparently not significant. However, the thermal effect cause d by the phase change (latent heat) clearly affects the cavitation dynamics including the vapor pressure, and consequently, the cavity size. As we have shown here, the thermal effect s play a very significant role in the accurate prediction of the pressure via phase change, and thermo-sensitive material properties in cryogenic environment, with little im pact caused by wall heat transfer. The trends of the optimization on pressure and thermal fields follow opposite directions. While this indicates the usefulness of adop ting a multi-objective optimization framework, as has been conducted here, it should also be pointed out again that in terms of practical impact, the pressure prediction is our primar y objective because pressure fluctuation is what causes poor performance or even catas trophic situation of fluid machinery. Though advancements in the pressure predic tion capabilities of th e present cavitation model has been made in this work, further mode l development at a con ceptual level should be pursued to better address the discrepancies betw een measurements and computations, especially in the thermal field. Clearly, more experimental investigation is al so needed to better quantify the measurement uncertainty, and to offer insight into flow structures.

PAGE 228

228 A B C D Figure 6-1. Variation of physic al properties for liquid nitr ogen (solid line, relevant x -axis is on bottom and y -axis is on left), and liquid hydroge n (Lemmon et al., 2002) (dashed line, relevant x -axis is on top and y -axis is on right) with temperature. A) Vapor pressure vs. temperature along saturation line. B) Ratio of liquid density to vapor density vs. temperature along saturation li ne. C) Liquid density vs. temperature along saturation line. D) Pressure-density chart lines denote isotherms (liquid N2).

PAGE 229

229 A B INLET OUTLE T NO-SLIP SYMMETRYh y dro f oilsur f ace ( NO-SLIP ) C D Figure 6-2. Experimental setup and computational geometries. A) Experimental setup used by Hord (1973a-b) to conduct cr yogenic cavitation experime nts over hydrofoil and ogive geometries. (B) A schematic of the comput ational setup. C) The geometry of the adopted hydrofoil. D) The geometry of the adopted 0.357-inch ogive.

PAGE 230

230 A v *37% Cdes t 62 % C prod0% L*1% v *36% Cdest63% L*1% Cprod0% B C v *37% Cdest61% Cprod0% L*2% v *36% Cdest63% L*1% Cprod0% D Figure 6-3. Sensitivity indices of main effects us ing multiple surrogates of prediction metric (liquid N2, Case 290C). A) Polynomial response surface approximation. B) Kriging. C) Radial basis neural network. D) PRESS-based weighted average surrogate.

PAGE 231

231 A v *36% Cdest63% L*1% Cprod0% 0.00.20.40.60.8 Cdest v* L* Cprod Total effect Main effect B v 1 0% Cdest80% L*10% Cprod0% 0.00.20.40.60.81.0 Cdest v* L* Cprod Total effect Main effect Figure 6-4. Influence of different variables on performance metrics quantified using sensitivity indices of main and total effects. We show results obtained using PWS surrogate (liquid N2, Case 290C). A) Sensitivity i ndices of main and total effects for diffP B) Sensitivity indices of main and total effects for diffT

PAGE 232

232 A L*1% v *44% Cdest55% Cprod0% L*13% v *5% Cdest82% Cprod0% B Figure 6-5. Validation of global sensitivity analysis results for main effects of different variables (liquid N2, Case 290C). A) diffP B) diffT A B Figure 6-6. Surface pressure and temperature pred ictions using the model parameters for liquid N2 that minimized diffP and diffT respectively (Case 290C). The number on each surface pressure or temperature profile represents diffP or diffT value associated with appropriate model parameters. A) Surf ace pressure. B) Surface temperature.

PAGE 233

233 1.50 2.00 2.50 3.00 3.50 4.00 0.5780.5920.6050.6190.6320.6460.6600.6730.680CdestPdiff0.44 0.46 0.48 0.50 0.52 0.54 0.56Tdiff Pdiff Tdiff Figure 6-7. Loca tion of points (destC ) and corresponding responses (diffP is shown on the left y axis, and diffT is shown on the right y -axis) used for calibra tion of the cryogenic cavitation model (liquid N2, Case 290C). A B Figure 6-8. Pareto optimal front (POF) and corresponding optimal points for liquid N2 (Case 290C) using different surrogates. PRS: Pol ynomial response surface, KRG: Kriging, RBNN: Radial basis neural network, PWS: PRESSbased weighted average surrogate. A) Function space representation of POF. B) POF in design variable space.

PAGE 234

234 A B Figure 6-9. Surface pressure and temperature pr edictions on benchmark test cases using the model parameters corresponding to original and best-compr omise values for different fluids. The number on each surface pressure/temperature profile represents diffP or diffT value associated with appropriate model parameters. A) Case 290C, liquid nitrogen, hydrofoil. B) Case 249D, liquid hydrogen, hydrofoil.

PAGE 235

235 A B C

PAGE 236

236 D Figure 6-10. Surface pressure and temperature predictions using the original parameters (1.0destC 80.0prodC ), and best-compromise parameters (54.4prodC and ,20.6392destLNC ; or ,20.767destLHC ) for a variety of geometries and operating conditions. The number next to each su rface pressure or temperature profile represents diffP or diffT value associated with appropria te model parameters. A) Case 296B, liq N2, hydrofoil. B) Case 312D, liq N2, ogive. C) Case 255C, liq H2, hydrofoil. D) Case 349B, liq H2, ogive.

PAGE 237

237 A B Figure 6-11. Surface pressure and temperature pr ofile on 2D hydrofoil for Case 290C where the cavitation is controlled by, (1) temperaturedependent vapor pressure (designated as L>0 ), and (2) zero latent heat, and hence, isothermal flow fi eld (designated as L=0 ). The range indicated by pv shows the level of variations in vapor pressure caused by the temperature variations inside th e cavity. (We use best-compromise model parameters ,20.6392destLNC 54.4prodC to perform simulations). A) Surface pressure. B) Surface temperature.

PAGE 238

238 A B C Figure 6-12. Impact of different boundary conditi ons on surface pressure and temperature profile on 2D hydrofoil (Case 290C, li quid N2) and predictions on first computational point next to boundary. We use the best-compromise model parameters ,20.6392destLNC 54.4prodC for simulations. A) Surface pressure. B) Surface temperature. C) Temperature on the first computational point.

PAGE 239

239 Table 6-1. Summary of a few relevant numerical studies on cryogenic cavitation. Reference Main features Reboud et al. (1990) Delannoy and Reboud (1993) a) Potential flow equations with semi-empirical formulation b) Simplistic interfacial heat transf er equation (suitable only for sheet cavitation) c) Energy equation not solved Deshpande et al. (1994, 1997) a) Explicit interface tracking b) Simplistic model for vapor flow insi de cavity (suitable only for sheet cavitation) c) Energy equation solved only in the liquid region Lertnuwat et al. (2001) a) Incorporated energy balance in Rayleigh-Plesset equation to model bubble oscillations b) Good agreement with DNS but de viations under isothermal and adiabatic conditions Tokumasu et al. (2002, 2003) a) Explicit interface tracking b) Improved model for vapor flow in side the cavity (suitable only for sheet cavitation) c) Energy equation solved only in the liquid region Hosangadi and Ahuja (2003) Hosangadi et al. (2003) a) Solved energy equation in the en tire domain with dynamic update of material properties b) Some inconsistency with experimental results is noted c) Noticed significant changes in th e cavitation model parameters in cryogenic and non-cryogenic conditions Rachid (2003) a) Theoretical model to account for compressi bility effects in a liquidvapor mixture b) Introduced dissipative effects in phase transformation intermediate between two extreme revers ible thermodynamic phenomena Rapposelli & Agostino (2003) a) Employed thermodynamic relations to extract speed of sound for various liquids b) Captures most featur es of bubble dynamics well Utturkar et al. (2005a-b) a) Solved energy equation in the en tire domain with dynamic update of material properties b) Test results for different flui ds and reference conditions were consistent with the experimental results

PAGE 240

240 Table 6-2. Ranges of variables for global sensitivity analyses. destC and prodC are the model parameters associated with the cavitati on model source terms for liquid nitrogen, *v and L are the multiplication factors of vapor density and latent heat obtained from NIST database (Lemmon et al., 2002), respectively. Variable Minimum Baseline Maximum destC 0.578 0.68 0.782 prodC 46.24 54.4 62.56 *v 0.90 1.0 1.10 L* 0.90 1.0 1.10 Table 6-3. Performance indica tors and corresponding weights in surrogate appr oximations of prediction metrics diffP and diffT PRS: Polynomial response surface, KRG: Kriging, RBNN: Radial basis neural network, PWS: PRESS-ba sed weighted surrogate. (PRESS is the square root of predicted resi dual sum of squares). The test example is Case 290C (liquid nitrogen flow over hydrofoil). diffP diffT Surrogate Parameter ValuesWeightsValues Weights # of training points 70 70 Minimum of data 1.653 0.334 Mean of data 3.984 0.462 Maximum of data 9.000 0.673 PRS # of coefficients 23 0.320 19 0.335 2 adj R 0.979 0.954 PRESS 0.344 0.0136 Maximum error 0.609 0.0285 RMS error 0.297 0.0121 KRG Process variance 1.277 0.603 1.67e-30.597 PRESS 0.166 6.92e-3 RBNN PRESS 1.538 0.077 0.0726 0.068 Maximum error 0.0905 8.64e-3 PWS PRESS 0.227 9.56e-3 Maximum error 0.199 9.64e-3

PAGE 241

241 Table 6-4. Performance indica tors and corresponding weights in surrogate appr oximations of prediction metrics diffP and diffT in model-parameter space. PRS: Polynomial response surface, KRG: Kriging, RBNN: Radi al basis neural network, PWS: PRESSbased weighted surrogate. (PRESS is the s quare root of predicted residual sum of squares). The test example is Case 290C (liquid nitrogen flow over hydrofoil). diffP diffT Surrogate Parameter ValuesWeightsValues Weights # of training points 9 9 Minimum of data 1.657 0.449 Mean of data 2.222 0.483 Maximum of data 3.465 0.549 PRS # of coefficients 3 0.239 5 0.666 2 adj R 0.999 1.000 PRESS 0.032 1.05e-4 Maximum error 0.037 7.00e-5 RMS error 0.021 6.00e-5 KRG Process variance 0.098 0.659 1.02e-30.315 PRESS 0.010 3.98e-4 RBNN PRESS 0.077 0.101 9.00e-30.019 Maximum error 0.018 6.28e-3 PWS PRESS 0.025 3.25e-4 Maximum error 0.011 1.49e-4 Table 6-5. Predicted and actual diffP and diffT at best-compromise model parameter for liquid N2 (Case 290C). PRS: Polynomial response surface approximation. KRG: Kriging, RBNN: Radial basis neural network. PWS: PRESSbased weighted average surrogate. destC Response Simulation PRS KRG RBNNPWS diffP 2.012 2.0172.0122.003 2.012 0.6392 diffT 0.466 0.4660.4650.463 0.465

PAGE 242

242 Table 6-6. Description of flow cases chosen for the vali dation of the calibrated cryogenic cavitation model. is the cavitation number, Re is the freestream Reynolds number, T is the inlet temperature, l is the liquid density, v is the vapor density, m and m are evaporation and condensation te rms in transport-based cavitation model, and destC is the best-compromise model parameter. ( 54.4prodC ) Fluid Geometry Case # () TKRe l v T % change in m % change in m destC Liq. N2 Hydrofoil 290C 83.06 69.010 1.7094.90 -7.58 7.53 0.6392 Liq. N2 Hydrofoil 296B 88.54 71.110 1.6156.25 -1.00 12.34 0.6392 Liq. N2 Ogive 312D 83.00 69.010 0.4695.47 9.12 19.18 0.6392 Liq. H2 Hydrofoil 249D 20.70 72.010 1.5746.97 -14.79 26.57 0.767 Liq. H2 Hydrofoil 255C 22.20 72.510 1.4931.60 -8.96 29.01 0.767 Liq. H2 Ogive 349B 21.33 72.310 0.3839.91 20.96 34.28 0.767

PAGE 243

243 Influence of Turbulence Modeling on Predictions We compare the influence of turbulence modeli ng on the predictions with the help of two benchmark cases of flow over a hydrofoil with liquid nitrogen (Case 29 0C), and liquid hydrogen (Case 249D). We compare the pe rformance of the standard k two-equation turbulence model (Launder and Spalding, 1974) with the non-equilibrium k turbulence model (Shyy et al., 1997). While the governing equations for th e two models are the same (Equations (6.19)-(6.22)), the model constants are given in Table 6-7. We use 10.9 and 21.15 Table 6-7. Model parameters in Launder-Spalding and non-equilibrium k turbulence models. Model C 1C 2C k Standard k (Launder and Spalding, 1974) 0.09 1.44 1.92 1.0 1.3 Non-equilibrium k model (Shyy et al., 1997) 0.09 11(1.4)(/)tP 22(1.9)(/)tP 0.89271.15 The predicted surface pressure and temper ature for the two test cases, shown in Figure 6-13, clearly demonstrate only moderate influence of turbul ence models on the predictions.

PAGE 244

244 A B Figure 6-13. Influence of turbulence modeling on surface pressure and temperature predictions in cryogenic cavitating conditions. A) 290C, liqui d nitrogen, hydrofoil. B) 249D, liquid hydrogen, hydrofoil.

PAGE 245

245 CHAPTER 7 IMPROVING HYDRODYNAMIC PERFOR MANCE OF DIFFUSER VIA SHAPE OPTIMIZATION Introduction The space shuttle main engine (SSME) is requi red to operate over a wide range of flow conditions. This requirement imposes numerous challenges on the design of turbomachinery components. One concept that is being explored is the use of an expander cycle for an upper stage engine. A schematic of a representative ex pander cycle for a conceptual upper stage engine is shown in Figure 7-1. Oxidizer and fu el pumps are used to feed LOX (liquid oxygen) and LH2 (liquid hydrogen) to the combustion chamber of the main engine. The combustion products are discharged from the nozzle. The pumps are driven by turbines that use the gasified fuel as the working fluid. There is a continuing effort to develop subsystems, like turbopu mps and turbines, used in a typical expander cycle based upper stage engine. The requirements on the design of subsystems are influenced by size, weight, e fficiency, and manufacturability of the system. In addition, often the requirements for subsystems are coupl ed, for example, the above-mentioned design constraints require turbopumps to operate at high speeds to have high efficiency with low weight and compact design. This may require seeking alternate designs for the turbopumps, as the current designs may not prove adequate over a wi de range of operating c onditions. Mack et al. (2005b, 2006) optimized the design of a radial turbine that allows high turbopump speeds, performs comparable to an axia l turbine at design conditions, and yields good performance at off-design conditions. Dorney et al. (2006a) have been exploring different concepts for turbopump design. A simplified schematic of a pump is shown in Figure 7-2. Oxidizer or fuel enters from the left. The pressure increases as the flui d passes through the impeller. The fluid emerging from the impeller

PAGE 246

246 periphery typically has high tangential velocity, which is partially conve rted into pressure by passing it over a diffuser. While conducting the tests with wate r, Dorney et al. (2006a) have found that a diffuser with vanes is more effi cient than a vaneless diffuser at off-design conditions. Over a range of opera ting conditions, the performance of a pump is driven by the flow in the diffuser. Generally, the diffuser will stall before the impeller, and the performance of the diffuser will drop off more rapidly at off-desi gn operating conditions. The main objectives of the current effort are, (1) to improve the hydrody namic performance of a diffuser using advanced optimization techniques, and (2) to study the features of the di ffuser vanes that influence its performance. Improvement in performance via shape optimization has been successfully achieved in many areas. For example, there are numerous instances of improvement in lift to drag ratio via airfoil or wing shape design (Obayashi et al., 2000; Sasaki et al., 2000, 20 01; Papila et al., 2002; Emmerich et al., 2002; Huyse et al., 2002; Giannakoglou, 2002), shap e of blades is optimized to increase the efficiency of turb ines (Papila et al. 2002, Mengist u et al. 2006) and pumps (Samad et al., 2006) in past. Shape desi gn typically involves optimization that requ ires a significant number of function evaluations to explore different concepts. If the cost of evaluating a single design is high, as is usually the case, surroga te models of objectives and constraints are frequently used to reduce the computational burden. Surrogate based optimization approach is wide ly used in the design of space propulsion systems, such as diffuser (Madsen et al. 2000), su personic turbines (Papil a et al. 2002), rocket injectors (Vaidyanathan et al. 2000, Goel et al. 2007a), comb ustion chambers (Mack et al., 2005a), and radial turbines (M ack et al. 2005b, 2006). Detaile d reviews on surrogate based optimization are provided by Li and Padula (200 5), and Queipo et al. (2005). There are many

PAGE 247

247 surrogate models but it is not clear which surroga te model performs the best for any particular problem. In such a scenario, one possible approach to account for uncertainties in predictions is to use an ensemble of surrogates (Goel et al., 2006b). This multiple surrogate approach has been demonstrated to work well for several problems including system identification (Goel et al., 2006d), and hardware design (Samad et al., 2006). Specifically, the objectives of the present study are: (1) to improve the hydrodynamic performance of the diffuser via shape optimization of vanes, (2) to identify important regions in diffuser vanes that help optimize pressure ratio and (3) to demonstrat e the application of multiple surrogates based strategy for space propulsion systems. The chapter is organized as follows. We defi ne the geometry of the vanes and numerical tools to evaluate the diffuser vane shapes in next section. Then we describe the relevant details in surrogate model based optimization and the re sults obtained for the current optimization problem. The analysis of the physics involved in flow over optimal vane and empirical considerations are discus sed afterwards. Finally, we summari ze the major findings of this work. Problem Description Representative radial locations in the meanlin e pump flow path (not-to-scale) are shown in Figure 7-3 (Dorney et al., 2006a). The fluid enters from the left, and is guided to the unshrouded impeller via inlet guide vanes (IGV) assembly. The flow from the impeller passes through the diffuser before being co llected in the discha rge collector. In this study, we focus our efforts on a configuration of 15 inlet guide vane s, seven main and seven splitter blades, and 17 diffuser vanes. The length of the diffuser vanes is also fixed according to the location of the collector. Our goal is to maximize the performance of the diffuser, characterized by the ratio of pressure at the inlet and the outlet, hereafter ca lled as pressure ratio. The performance of the diffuser is governed by the shape of the diffuser vane.

PAGE 248

248 Vane Shape Definition The description of geometry is the most im portant step in the sh ape optimization. The current shape of the diffuser vanes, referred as baseline design, (shown in Figure 7-4(A)) has been created using meanline and geometry generation codes at NASA (Dorney, 2006b); and yields a pressure-ratio of 1.074. It is obvious from Figure 7-4(B) that the existing shape allows the flow to remain attached while passing on th e vanes, but causes sign ificant flow separation while leaving the vane. The separa tion induces significant loss of pr essure recovery that is the primary goal of using a diffuser. The baseline design has been created subject to several constraints, including: (1) the number of vanes is set at 17 based on the number of bolts used to attach the two sides of the experi mental rig, (2) the vanes accomm odate a 3/8-inch diameter bolt, with enough excess material to a llow manufacturing, and (3) the le ngth of the vanes is set based on the location of the collector. These constraints resulted in an initial optimized design that looked quite similar to the baseline design, a nd yielded similar performance in both isolated component and full stage simulations. In an e ffort to more thoroughly explore the design space for diffusers (and allowing for future improve ments in materials and manufacturing), the constraints resulting from the bo lts (the number and thickness of the vanes) were relaxed. Other factors that must be considered in the followi ng optimization include: (1 ) the designs resulting from the optimization techniques shown in this paper are based on diffu ser-alone simulations (different results may be obtained if the optimi zations were based on full stage simulations), and (2) no effort has been made to determine if the proposed designs would meet stress and/or manufacturing requirements. To reduce the separation, we represent the ge ometry of a vane by five sections using a circular arc, and Bezier curves as shown in Figure 7-5. Sections 1, 3, and 4 are Bezier curves, and Section 2 is a circular arc. The sh ape of the inlet nose (between points B1 and B2) is obtained

PAGE 249

249 from the existing baseline vane shape. The circul ar arc in Section 2 is described by fixing the radius r ( r = 0.08), the location of the center C1 (-3.9, 6.1), and the star t and end angles (based on the baseline vane shape provided by Dr Daniel Dorney from NAS A). The arc begins at angle /2r (point A1) and ends at /2r (point A2). Here, we use 22.5o r and is a design variable. A typical parametric Bezier curve f ( x ), shown in Figure 7-6, is defined with the help of two end points P0 and P1, and two control points P2 and P3 as follows. 3223 0123()(1)3(1)3(1),[0,1]fxPxPxxPxxPxx (7.1) The coordinate of any point on the Bezier cu rve is obtained by substituting the value of x accordingly. Here, the coordinates of control poi nts are obtained by supplying the slope and the length of the tangents at end points (Papila, 2001) such that, different Bezier curves are generated by varying the length of tangents. The points A1 and A2, shown by dots in Figure 7-5, and the tangents to the arc, serve as the end points location and tangents us ed to define the Bezier cu rves in Sections 1 and 3. Coordinates and the sl opes at points B1 and B2 (obtained from the data of inlet nose of the baseline design, courtesy Dr Daniel Dorney) are used to define Bezi er curves in Sections 1 and 4. The Bezier curve in Section 1 (Figure 7-5) is parameterized usi ng the coordinates and slope of tangents at B2 and A2. The lengths of the tangents t1 and t2 control the shape of the curve. While one end point coordinates and slope for Bezier curves in Secti ons 3 and 4 are known at points A1 and B1, the second end point and slope are obtained by defining the point P ( Py, Pz). The slope of tangent at point P is taken as five degrees more than the slope of the line joining points P and B1. The additional slope is specified to avoid all the points in Section 4 fall in a straight line. The values of the fixed parameters are decided based on the inputs from designer (Dr Daniel Dorney,

PAGE 250

250 NASA). The lengths of tangents t1 t6 serve as variables to generate Bezier curves in Sections 1, 3 and 4. The ranges of the design variables, summarized in Table 7-1, are selected such that we obtain practically feasible vane geometries and the corresponding grid s (discussed in next section). We note that present Bezier curve and circular arc based defi nition of diffuser vane shapes provides significantly different vane geomet ries than the baseline de sign, particularly near the outlet region. The main rati onale behind this choi ce of vane shape is the relaxation of manufacturing and stress constraint s, ease of parameterization, and better control of the curvature of the vane that allows the flow to remain attached. Mesh Generation, Boundary Conditions, and Numerical Simulation Performance of diffuser vane geometries is analyzed using the NASA PHANTOM code (Dorney and Sondak, 2006b) developed at Ma rshall Space Flight Center to analyze turbomachinery flows. This 3D, unsteady, Navier-S tokes code utilizes structured, overset Oand H-grids to discretize and to analyze the unstea dy flow resulting from the relative motion of rotating components. The code is based on th e Generalized Equations Set methodology (Sondak and Dorney, 2003), and implements a modified Ba ldwin-Lomax turbulence model (Baldwin and Lomax, 1978). The inviscid and viscous fluxes ar e discretized using a third-order spatially accurate Roes scheme, and second-order cent ral differencing (Tannehill et al., 1997), respectively. The unsteady te rms are modeled using a sec ond order accurate scheme. For this problem, we solve incompressible, un steady, non-rotating, turbulent, single phase, constant-material-property flow over diffuser vane. The working fluid is water. By taking the advantage of periodicity, only a si ngle vane is analyzed here. A combination of Hand O-grids with 13065 grid points has been used to analyze di ffuser vane shapes. A typi cal grid is shown in Figure 7-7. The boundary conditions imposed on the flow domain are as follows. Mass flux, total

PAGE 251

251 temperature, and flow angles (c ircumferential and radial) are speci fied at the inlet. Mass flux is fixed at the outlet. All solid bounda ries are modeled as no slip, ad iabatic walls, with zero normal derivative of pressure. Periodic boundary condition is enforced at outer boundaries. With this setup, it takes approximately 15 minutes on a si ngle Intel Xeon proce ssor (2.0 GHz, 1.0 GB RAM) to simulate each design. Surrogate-Based Design and Optimization As discussed earlier, the surrogate model based approach is suitable to reduce the computational cost of optimization. The stepwi se procedure of surroga te based analysis and optimization is explained with the help of Figure 7-8. Firstly, we iden tify objectives, constraints, and design variables. Next, we develop a procedur e to evaluate different designs. Subsequently, we construct multiple surrogate models of the obj ectives and constraints. We use these surrogate models to optimize the performance of the system and to characterize the influence of different design variables on the objectives and constraint s using global sensitivity analysis (GSA). The optimal design(s) is (are) validated by performing numerical simulation. If we are satisfied with the performance of the subsystem, we terminate th e search procedure. Otherwise, we refine the design space in the region of interest. We can also fix the least important variables at the optimal values (as realized by optimization) or mean values to reduce the complexity of the surrogate modeling, and repeat this pro cedure till convergence. Details of different steps in surrogate model based analysis and optimi zation method, in context of th e current problem, are given as follows. Surrogate Modeling The major steps in surrogate modeling are shown in Figure 7-9. 1. Design of Experiments (DOEs). The design of experiment is the sampling plan in design variable space and is effective in reducing the computational expense of

PAGE 252

252 generating high-fidelity surrogate mode ls. We use Latin hypercube sampling (LHS), D-optimality criterion, and face-centered cubic central composite designs to select the design sites (poi nts) for conducting simulations. 2. Numerical Simulations at Selected Locations. The computationally expensive model is executed at all designs points selected using the DOE. 3. Construction of Surrogate Model. Surrogate models are relatively computationally inexpensive models to ev aluate designs. We use an ensemble of surrogates as proposed by Goel et al. ( 2006b). The details of different surrogate models are given in Chapter 2. 4. Model Validation. The purpose of this step is to establish the predictive capabilities of the surrogate model away from the available data (generalization error). This step can also be used to refine DOE. Since we have nine design variables, we se lected 110 design points (a llows adequate data to evaluate 55 coefficients of a quadratic po lynomial response surface) using Latin hypercube sampling (LHS) to construct surrogate mode ls. We generated LHS designs using MATLAB routine lhsdesign with 100 ite rations for maximization of the minimum distance between points. We evaluated each diffuser vane shape usi ng PHANTOM. This dataset is referred as Set A. The range of the data, given in Table 7-2, indicated potential of improvement in the performance of the diffuser by shape optimization. We constructed four surrogate models, polynomial response surface approximation (PRS), kriging (KRG), radial basis neural network (RBNN), and a PRESS-based weighted average surrogate (PWS) of the objective pressure ratio We used quadratic polynomial for PRS, and linear trend model with the Gaussian correlati on function for kriging. For RBNN, the spread

PAGE 253

253 coefficient was taken as 0.5 and th e error goal was the square of five-percent of the mean value of the response at data points. The parameters, and for PWS model (refer to Chapter 4) were 0.05 and -1, respectively. The summary of quality indicators for different surrogate models is given in Table 7-2. All error indicators are desired to be low compared to the response data, except 2 adj R which is desired to be close to one. Th e PRESS (Chapter 2) and RMS error (~1.0e2) were very high compared to the range of data This indicated that a ll surrogate models poorly approximated the actual response, and were likely to yield inaccurate results if used for global sensitivity analysis and optimization. To iden tify the cause of poor surrogate modeling, we conducted a lack-of-fit test (refer to Appendix D) for PRS. A low p-value (~0.017) indicated that the chosen order of the polynomial was inadequate in the selected design space. Since, the data available at 110 points is insuffi cient to estimate 220 coefficien ts in a cubic polynomial, this issue of model inadequacy also reflected the lack of data. We addressed the issue of model accuracy or data inadequacy using two parallel approaches. Firstly, we added more data in th e design space to improve th e quality of fit. We sampled 330 additional points using Latin hyperc ube sampling, such that we had 440 design points to fit a cubic polynomial ( 220 coefficients). We call this da taset as Set B. Secondly, the low mean value of the response data (1.041) at 110 points compared to the baseline design (1.074) indicated that large por tion of the current design space was undesirable due to inferior performance. Hence, it might be appropriate to identify the region where we expect improvements in the performance of the designs and construct surrogate models by sampling additional design points in that region (reasonable design sp ace approach, Balabanov et al., 1999). To identify the regi on of interest, we used the surroga te models, constructed with Set A data (110 design points), to evaluate response at a large number of points in design space.

PAGE 254

254 Specifically, we evaluated responses at a grid of four Gaussian poi nts in each direction (total 49= 262,144 points). We chose the Gaussian points, in stead of usual uniform grids, because the Gaussian points lie inside the de sign domain, and are less susceptible to extrapolation errors than the corners of uniform gr ids that might fall outside the convex hull of LHS design points used to construct surrogate models. Any point with a predicted performance (due to any surrogate model) of 1.080 units or better (0.5% improvement over the baseli ne design) was considered to belong to the potential region of interest. This process id entified 29,681 unique points (~11%) in the potential good region. We select ed 110 points from this data se t using D-optimality criterion in this smaller region. D-optimal designs we re generated using MATLA B routine candexch with a maximum of 100 iterati ons to maximize D-efficiency (Myers and Montgomery, 1995, pp. 93). This 110 points data set is called Set C. As before, we conducted simulations at data points in the Sets B and C using PHANTOM. One point in each set failed to provide an a ppropriate mesh. The mean, minimum, and maximum values of the pressure ratio for the two datasets are summarized in Table 7-3. We observed only minor differences in the mean pressure-rat io of the Set B compared to the Set A (Table 7-2), but the responses in the Set C data indicated high potential of improvement. This clearly demonstrates the effectiveness of the reasonable design space approach used to identify the region of interest. We approximated the data in the Set B, and th e Set C, using four surrogates. We employed a reduced cubic, and a reduced quadratic polynomial for PRS approximation of the Set B and the Set C data, respectively. As can be se en from different error measures in Table 7-3, the quality of surrogate models fitted to the Set B, and the Se t C data was significantly better than the surrogate models fitted to the Set A data. This improvement in surrogate approximation was attributed to

PAGE 255

255 the increase in the sampling density (Set B) al lowing a cubic model, and the reduction of the design space (Set C). Both PRS models did not fail the lack-o f-fit test (p-value ~0.90+) indicating the adequacy of fitting surrogate mode ls in the respective design spaces. The PRESS metric and weights associated with different surrogates sugges ted that the PRS was the best surrogate model for the Set B and the Set C data, unlike kriging for the Set A data. We used the surrogate models fit to the Set B data for global sensit ivity analysis and the surrogate models fitted to the Set C data for optimization. The optimization of the performance using the surrogate models fitted to the Set B data was inferior to th e optimal design obtained using the Set C data based surrogates. Some of the optimal designs from the Set B data based surrogates could not be analyzed. This anomal y arose because large design space was sampled with limited data, such that large regions re mained unsampled; and hence, susceptible to significant errors, partic ularly near the corners where optima were found. The same issue restricted the use of surrogate models constructe d using Set C data for gl obal sensitivity analysis as there were large extrapolation errors outsi de the region of intere st where no point was sampled. Hence, surrogate models constructed using Set B data were more suited for conducting global sensitivity analysis. Global Sensitivity Assessment Global sensitivity analysis (G SA) is useful to characteri ze the importance of different design variables. This information can be used to identify the most, and the least important design variables. We fix the least important desi gn variables to reduce the complexity of the problem. Here, we use a variance based, non-para metric approach to perform global sensitivity analysis. In this approach, the re sponse function is decomposed in to unique additive functions of variables, and their interactions, such that th e mean of each additive function is zero. This decomposition allows the variance ( V ) to be computed as a sum of individual part ial variance of

PAGE 256

256 each variable ( Vi), and partial variance of interactions ( Vij) of different variab les. The sensitivity of the response function with respect to each va riable is assessed by comparing the sensitivity indices ( Si, Sij) that is the relative magnit ude of partial and total variance of each variable. Using Sobols (1993) approach of decomposing the vari ables into two groups; fi rst group with a single variable i , and second group Z with all variables, except ith variable, we compute the main ( Si) and total effect ( Si total) of sensitivity indices as follows. () ,totali iiZ iiVV V SS VV (7.2) In the above equation, ViZ denotes the partial variance of all interactions of ith variable. A detailed description of the global sensitivity analysis approach is given in Appendix C. We used Gauss quadrature numerical integrat ion scheme with four Gauss points along each direction (total 49= 262,144 points) to evaluate different integrals in GSA. The response at each point was evaluated using surrogate models fit to the Set B data. Corresponding sensitivity indices of main effects of di fferent variables are shown in Figure 7-10. Although there were differences in the exact magnitude of sensitivity indices from various surrogates, all surrogates indicated that the pressure ratio was most influenced by three variables, Pz, t2, and Py. A comparison of sensitivity indices of total and main effect of design variables (using PWS) in Figure 7-11 suggested that the interactions between variable s are small bu t non-trivial. To validate the findings of the global sensitivit y analysis, we evaluate d the variation in the response function (pressure ratio) by varying one variable at a tim e, while keeping the remaining variables at the mean values. We specified five equi-spaced levels for e ach design variable, and used trapezoidal rule to com pute the actual variance. The re sponses at design points were evaluated by performing actua l numerical simulations. The results of actual variance computations are shown in Figure 7-12.

PAGE 257

257 The one-dimensional variance computation re sults also indicated that variables, Pz, t2, and Py, were more important than all other variable s. This validated the findings of the global sensitivity analysis. The differences in the re sults of one-dimensional variance computation and global sensitivity analysis can be explained as follows: (1) the nu mber of points used to compute one-dimensional variance is small, (2) one-d imensional variance computation does not account for interactions between variab les, and (3) there are approxima tion errors in using surrogate models for global sensitivity analysis. Neverthele ss, the main implications of the finding clearly show that the performance of the diffuser vane was pre-dominantly affected by the location of point P and the length of tangent t2 in Figure 7-5. Optimization of Diffuser Vane Performance Next, we used surrogate models, fit to the Set C data, to maximize the pressure ratio by exploring different diffuser vane shapes. To avoid the danger of la rge extrapolation errors in the unsampled region, we employed the surrogate model (PRS surrogate) from the Set A as a constraint (all points in the f easible region should have pred icted response greater than a threshold value of 1.080). Our use of PRS from th e Set A as the constraint was motivated by the simplicity of the constraint function, and the f act that PRS contributed to the most number of points in the potential region of interest. We used sequential quadratic programming op timizer to find the optimal shapes. The optimal configuration of blade shapes, obtained using different surrogate models as function evaluators, is shown in Figure 7-13, and the corresponding optimal design variables are given in Table 7-4. The optimal designs obtained from al l surrogates were close to each other in both function and design space. A few minor differences were observe d in relatively insignificant design variables (refer to the results of global sensitivity analysis). Notably, all design variables touched the bounds for PRS, and were close to th e corner for other surrogate models. The small

PAGE 258

258 value of indicated sharper nose, as was used in the baseline design. Also, mo st tangents were at their lower bounds that resulted in low curvature sections. Near the point P, the tangents were on their upper limits to facil itate gradual transition in the slope. The optimal vane was thinner in the middle section, and was longer comp ared to the baseline design (Figure 7-13). The central region of the optimal design was non-convex compared to the convex section for the baseline design. We simulated all four candidate optimal de signs from different surrogate models to evaluate the improvements. The actual and predic ted performances from different surrogates are compared in Table 7-5. We observed that the error in approximation for different surrogates was comparable to their respective PRESS errors. Ne vertheless, PRS was the most accurate surrogate model and furnished the best performance shap e. RBNN was the worst surrogate model. PRESSbased weighted average surrogate model performe d significantly better than the worst surrogate. The best predicted diffuser vane yielded significant improvements in the performance (1.117) compared to the baseline design (1.074). We re fer to this design as intermediate optimal design. It is obvious, that the design space refinement in the region of interest based on multiple surrogates has pay-offs in the improved performa nce of surrogates, and the identification of optimal design. The high confiden ce in the optimal predictions was also derived from the similar performance of all surrogate models. The results also showed the incentives (protection against the worst design, proper identifi cation of the reasonable design space) of investing a small amount of computational resources in constructing multiple surrogate models (less than the cost of a single simulation) for computationally expe nsive problems, and then the extra cost of evaluating multiple optima.

PAGE 259

259 We compared instantaneous (at the end of simu lation) and time-averaged flow fields from the intermediate optimal design (from PRS) with the baseline design in Figure 7-14. The intermediate optimal design allowed smoother turning of the flow compared to the baseline design, and reduced the losses due to separation of the flow, whic h were significantly high in the baseline design. Consequently, th e pressure at the outlet was higher for this intermediate optimal design. We also noted an increase in pressure loading on the vane for the intermediate optimal design. Design Space Refinement Dimensionality Reduction We used first design space refinement by identif ying the region of interest. This helped in identifying the intermediate optimal design. Since most design variables in the intermediate optimal design were at the boundary of the design space (Table 7-4), further improvements in the performance of the diffuser might be obtained by relaxing the limits of the design variables. To reduce the computational expense, we reduce th e design space by utilizi ng the findings of the global sensitivity analysis. We fi xed six relatively insignificant design variables at the optimal design (predicted using PRS), and expanded the range of the three most important design variables. The modified ranges of the design va riables and the fixed parameters are given in Table 7-6. We selected 20 design points using a combination of face-centered central composite design (15 points FCCD), and LHS designs (five poi nts) of experiments. The range of pressure ratio at 20 design poi nts is given in Table 7-7. Note that, all tested designs in the refined design space performed equal to or better th an the intermediate optimal design. As before, we constructed four surrogate models in the refined design space. The performance metrics, specified in Table 7-7, indicated that all su rrogate models approximated the response function very well. The weights associat ed with different surrogate models suggested that a quadratic PRS approximation represents the data the best. This result is not unexpected

PAGE 260

260 since any smooth function can be represented by a second order polynomial, if the domain of application is small enough. As before, PWS model was comparable to the best surrogate model. Final Optimization The design variables for the four optimal de signs of the diffuser vane obtained using different surrogate models and corresponding surr ogate-predicted, and actu al (CFD simulation) pressure ratio are listed in Table 7-8. The error in predictions of surrogate models compared well with the quality indicators, and all surrogate models had only minor differences in the performance. Also, the optimal vane shapes from different surrogates were similar. In this case also, polynomial response surface approximation conceded the smallest errors in prediction. While the performance of the optimal diffuser va ne has improved compared to the intermediate optimal design (compare to Table 7-5), the contribution of the optimization process was insignificant. One of the data points resulted in a better performance than the predicted optimal. This result was not surprising because the optimal design existed at a corner that was already sampled leaving little scope for further impr ovement. Nevertheless, the optimizers correctly concentrated on the best region. As expected, the optimized design was thinner and streamlined to further reduce the losses, and to improve the pressure recovery. Analysis of Optimal Diffuser Vane Shape We analyzed the optimal diffuser vane shape ( t2 = 0.60, Py = -2.00, Pz = 6.00; P-ratio = 1.151 ) according to the flow structure and th e other considerations as follows. Flow Structure The instantaneous and time-averaged pressure contours for the optimal vane shape (best data point) are shown in Figure 7-15. We observed further re duction in separation losses and smoother turning of the flow compared to the inte rmediate optimal design obtained before design space refinement (Table 7-5). Consequently, the pressure rise in the diffuser was higher. The

PAGE 261

261 optimal vane shape had a notable curvature in th e middle section on the lower side of the vane (near the point P). This curvature decelerated th e flow, and led to faster increase in pressure (notice the shift of higher pressure region towards the inlet in Figure 7-14 and Figure 7-15). Vane Loadings The shapes and pressure loads on the baseli ne, intermediate optimal, and final optimal vanes are shown in Figure 7-16. We noted the increase in the mean pressure on the diffuser vane for the optimal design. The pressure loads near the inlet tip, and the pressure loading on the diffuser vane, given by the area bound by the pressu re profile on the two sides, had reduced by optimization. However, the optimized diffuser vane might be susceptible to high stresses as the optimal design was thinner compared to the baseline vane. The intermediate optimal design served as a compromise design with relatively higher pressure ratio (~4%) compared to the baseline design, and lower pressure loading on the vanes compared to the optimal design. In future, this problem would be studied by accounting for manufacturing and structural considerations, like stresses in the diffuser vane s. One can either specify a constraint to limit stress to be less than the feasible value or alternatively, one can solve a multi-objective optimization problem with two competing objectiv es, minimization of stress or pressure loading in the vane, and maximization of pressure ratio. Empirical Considerations Typically, the vane shape design is carried ou t using empirical considerations on the gaps between adjacent diffuser vanes as shown in Figure 7-17. The empirical suggestions on the ratio of different gaps (Stepanoff, 1993) and actual values obtained fo r different vanes are given in Table 7-9. Contrary to the empirical rela tions, the ratio of length to width gap ( L/W1), and ratio of width gaps ( W2/W1) decreased as the pressu re ratio increases, though the actual magnitude of length and width of the gaps increase. The di screpancies between the optimal design and the

PAGE 262

262 empirical optimal ratios (Stepanoff, 1993) are explained by the multiple design considerations used for the empirical optimal. We note that th e optimization was carried out for a 2D diffuser vane not for the combination of vanes and flow for which the empirical ratios are provided. This allowed a variable height of the vane for optimization. Howe ver, the empirical ratios are obtained by assuming a constant cha nnel height, so that the area ra tio of the channel reduces to the ratio of width gaps ( W2/W1). Nevertheless, this requires fu rther investigation to understand the cause of discrepancies between the empirical ratios and that obtained for the optimal design. Summary and Conclusions We used surrogate model based optimization strategy to maximize the hydrodynamic performance of a diffuser, characterized by the in crease in pressure ratio, by modifying the shape of the vanes. The shape of the diffuser vanes wa s defined by a combination of Bezier curves and a circular arc. We defined the shape of the vane using nine design variab les and used surrogate models to represent the pressure ratio. We used lack-of-fit test to identify the issues of model inadequacy, and insufficiency of the data to represent the pressure ratio. We addressed these issues by, (1) adding more data points, and (2) identifying the region of in terest using the lessaccurate surrogate models. More samples were added in the region of interest using the information from multiple surrogate models. The surrogate models, constructed with increased data and/or in smaller design space, were significantly more accurate than the initial surro gate models. Also, during the course of design space refinement, the best surrogate model change d from kriging (initial data) to polynomial response surface approximation (all subsequent results). Had we followed the conventional approach of identifying the best surrogate model with the first design of experiments, and then used that surrogate model for optimization, we mi ght have not captured the best design. Thus, we can say that the results reflect the improveme nts in the performance using the design space

PAGE 263

263 refinement approach, and using multiple surrogates constructed by incurring a low computational cost. We conducted a surrogate model based sensitivit y analysis to identify the most important design variables in the entire design space. Thr ee design variables controlling the shape of the upper and lower side of the vane were found to be most influential. We used surrogate model in the reduced design space to identify the optimal design in nine variable design space. This intermediate optimal design improved the pressure ratio by more than four percent compared to the baseline design. Since all the design variables for intermedia te optimal design hit the bounds, we further refined the design space by fixing the least importa nt variables on optimal values to reduce the design space, and relaxing the bounds on the mo st important design variables. The optimal design obtained using the surrogate models in the refined design space further improved the performance of the diffuser by more than seven percent compared to the baseline design. The pressure losses in the flow were reduced, and a more uniform pressure increase on the vane was obtained. However, the optimal vane shape might be susceptible to failure due to high stresses. This behavior was attributed to the absence of stress constraint that allowed using thinner vanes to maximize the performance. In the future, the optimization would be ca rried out by considering the multi-disciplinary analysis accounting for st ress constraint, manufacturability, and pressure increase. In terms of the vane shapes, as expected thin vanes helped improve the hydrodynamic performance significantly. The inte resting aspect was the change in the sign of curvature of the vane on the suction side that allows an init ial speeding of the flow followed by a continuous pressure recovery without flow separation.

PAGE 264

264 Figure 7-1. A representative expa nder cycle used in the upper stag e engine (courtesy: wikipedia entry on expander cycles). Diffuser vane Main impeller blade Splitter blade Inlet guide vane Diffuser vane Main impeller blade Splitter blade Inlet guide vane Figure 7-2. Schematic of a pump. IGV Main/Splitter blade Flow path IGV Main/Splitter blade Flow path Figure 7-3. Meanline pump flow path. IGV is inlet guide vane.

PAGE 265

265 A B Figure 7-4. Baseline diffuser vane shape and timeaveraged flow. A) Diffuser vane shape. B) Streamlines and time-averaged pressure. Figure 7-5. Definition of the geometry of the diffuser vane (refer to Table 7-1 for variable description). Figure 7-6. Parametric Bezier curve.

PAGE 266

266 Figure 7-7. A combination of Hand O-grids to analyze diffuser vane. Body-fitted O-grids are shown in green and algebraic H-grid is shown in red. Figure 7-8. Surrogate based desi gn and optimization procedure.

PAGE 267

267 Design of experiments Numerical simulations at selected locations Construction of surrogate models (Model selection and identification) Model Validation If necessary Figure 7-9. Surrogate modeling. A t30.5% t40.1% t50.1% t211.7% t60.1% Py6.8% Pz79.8% 0.4% t10.4% t10.2% 0.8% Pz79.1% Py6.0% t60.0% t213.1% t50.0% t40.0% t30.9% B C t31.5% t40.9% t51.0% t212.9% t61.3% Py7.5% Pz72.6% 1.3% t11.0% t10.3% 0.6% Pz79.2% Py6.5% t60.1% t212.5% t50.1 % t40.1% t30.6% D Figure 7-10. Sensitivity indices of main effect using various surrogate models (Set B). A) Polynomial response surface approximation. B) Kriging. C) Radi al basis neural network. D) PRESS-based weight ed average surrogate model.

PAGE 268

268 0.000.200.400.600.801.00 t1 t2 t3 t4 t5 t6 Py PzVariablesSensitivity index Total effect Main effect Figure 7-11. Sensitivity indices of main and total effects of different variables using PWS (Set B). Pz63.2% t50.0% t30.8% t220.3% t60.2% t40.4% 0.6% t10.7% Py13.7% Figure 7-12. Actual partial varian ce of different design variables ( no interactions are considered).

PAGE 269

269 Figure 7-13. Baseline and optimal diffuser vane sh ape obtained using different surrogate models. PRS indicates function evaluator is pol ynomial response surface, KRG stands for kriging, RBNN is radial basis neural ne twork, and PWS is PRESS-based weighted surrogate model as function evaluator.

PAGE 270

270 A B C D Figure 7-14. Comparison of instantaneous and time-av eraged flow fields of intermediate optimal (PRS) and baseline designs. A) Instantane ous Intermediate optimal design. B) Time-averaged Intermediate optimal design. C) Instantaneous Baseline design. D) Time-averaged Baseline design.

PAGE 271

271 A B Figure 7-15. Instantaneous and time-averaged pressure for the final optimal diffuser vane shape. A) Instantaneous pressure and stream lines. B) Time-averaged pressure and streamlines. A B Figure 7-16. Pressure loadings on different vanes. A) Different vane shapes. B) Corresponding pressure loadings.

PAGE 272

272 W1W2 L W1W2 L Figure 7-17. Gaps between adjacent vanes.

PAGE 273

273 Table 7-1. Design variables and corresponding ranges. Angle is given in degrees and all other dimensions are scaled accordi ng to the baseline design. Minimum Maximum 60o 110o t1 1.25 2.50 t2 0.75 1.50 t3 0.05 0.30 t4 0.50 1.00 t5 0.50 1.00 t6 1.00 2.00 Py -2.50 -2.00 Pz 5.70 5.85 Table 7-2. Summary of pressu re ratio on data points and pe rformance metrics for different surrogate models fitted to Set A. We tabulate the weights associated with the surrogates used to construct PRESS-ba sed weighted surrogate (PWS). PRS: Polynomial response surface, KRG: Kriging, RBNN: Radial basi s neural networks, RMSE: Root mean square error, PRESS: Pr edicted residual sum of squares. Here we give the square root of PRESS so as to facilitate easy comparison with RMSE. Surrogate Parameter Value Weight # of points 110 Minimum of data 1.001 Mean of data 1.041 Pressure ratio Maximum of data 1.093 2 adj R 0.863 RMSE 8.00e-3 PRESS 1.33e-2 Max absolute error 2.62e-2 PRS Mean absolute error 4.40e-3 0.32 Process variance 1.09e-4 KRG PRESS 1.02e-2 0.41 PRESS 1.58e-2 Max absolute error 2.03e-2 RBNN Mean absolute error 3.33e-3 0.27 Max absolute error 1.39e-2 PWS Mean absolute error 1.95e-3

PAGE 274

274 Table 7-3. Range of data, qua lity indicators for different surrogate models, and weights associated with the components of PWS. PRS: Polynomial response surface, KRG: Kriging, RBNN: Radial ba sis neural networks, PWS: PRESS-based weighted surrogate, RMSE: Root mean square error, PRESS: Predicted resi dual sum of squares (in PRS terminology), Here, we give the square root of PRESS so as to facilitate easy comparison with RMSE. We used a reduced cubic and a reduced quadratic polynomial to approximate the Set B and Set C data, respectively. Set B Set C Surrogate Parameter Value WeightValue Weight # of points 439 109 Minimum of data 1.000 1.052 Mean of data 1.040 1.075 Pressure ratio Maximum of data 1.097 1.105 2 adj R 0.959 0.978 RMSE 4.07e-31.65e-3 PRESS 4.84e-38.74e-3 Max absolute error 1.02e-23.84e-3 PRS Mean absolute error 2.67e-3 0.44 1.05e-3 0.36 Process variance 1.11e-41.05e-4 KRG PRESS 5.89e-3 0.37 9.47e-3 0.34 PRESS 1.17e-21.07e-2 Max absolute error 1.53e-22.57e-2 RBNN Mean absolute error 1.47e-3 0.19 2.89e-3 0.30 Max absolute error 6.85e-37.91e-3 PWS Mean absolute error 1.30e-3 1.04e-3 Table 7-4. Optimal design variables and pre ssure ratio (P-ratio) obtained using different surrogates constructed using Set C data. PR S is polynomial response surface, KRG is kriging, RBNN is radial basis neural ne twork, and PWS is PRESS-based weighted surrogate. Surrogate T1 t2 t3 t4 t5 t6 Py Pz Predicted P-ratio PRS 60.00 1.25 0.75 0.051.001.001.00-2.005.851.120 KRG 63.58 1.25 0.80 0.050.810.881.43-2.005.851.111 RBNN 64.49 1.32 0.78 0.070.940.541.08-2.045.841.105 PWS 63.65 1.25 0.75 0.051.000.531.09-2.005.851.109

PAGE 275

275 Table 7-5. Comparison of actual and predicted pressure ratio of optimal designs obtained from multiple surrogate models (Set C). Each row shows the results of optimal design using a particular surrogate, and different columns show the prediction of different surrogate models at each optimal design. PRS is polynomial response surface, KRG is kriging, RBNN is radial basis neural ne twork, and PWS is PRESS-based weighted surrogate. Surrogate prediction using Design by Actual P-ratio PRS KRG RBNNPWS PRS 1.117 1.120 1.103 1.084 1.103 KRG 1.113 1.112 1.111 1.085 1.104 RBNN 1.106 1.106 1.104 1.105 1.105 PWS 1.114 1.116 1.108 1.102 1.109 Average absolute error 0.0014 0.00600.01860.0072 Table 7-6. Modified ranges of de sign variables and fixed parameters in refined design space. Variables Min Max Fixed parameters ValueFixed parameters Value T2 0.60 0.75 60o t4 1.00 Py -2.00 -1.50 t1 1.25 t5 1.00 Pz 5.85 6.00 t3 0.05 t6 1.00

PAGE 276

276 Table 7-7. Range of data, summary of performa nce indicators, and weighted associated with different surrogate models in the refined design space. PRS: Polynomial response surface, KRG: Kriging, RBNN: Radial basis neural networks, PWS: PRESS-based weighted surrogate, RMSE: Root mean squa re error, PRESS: Predicted residual sum of squares (in PRS terminology). Here we gi ve the square root of PRESS so as to facilitate easy comparison with RMSE. Surrogate Parameter Value Weight # of points 20 Minimum of data 1.117 Mean of data 1.136 Pressure ratio Maximum of data 1.151 2 adj R 0.956 RMSE 1.75e-3 PRESS 2.74e-3 Max absolute error 2.53e-3 PRS Mean absolute error 1.11e-3 0.59 Process variance 1.21e-4 KRG PRESS 7.14e-3 0.25 PRESS 1.11e-2 Max absolute error <1.0e-6 RBNN Mean absolute error <1.0e-6 0.16 PRESS 4.21e-3 Max absolute error 1.51e-3 PWS Mean absolute error 6.62e-4 Table 7-8. Design variables and pressure ratio at the optimal designs predicted by different surrogates. PRS is polynomial response surf ace, KRG is kriging, RBNN is radial basis neural network, and PWS is PRESS-based weighted surrogate. Surrogate predictions using T2 Py Pz Actual P-ratioPRS KRG RBNN PWS PRS 0.60 -1.99 6.00 1.150 1.151 1.151 1.155 1.152 KRG 0.60 -1.87 6.00 1.149 1.150 1.152 1.152 1.151 RBNN 0.61 -1.97 5.99 1.150 1.150 1.150 1.155 1.151 PWS 0.60 -1.96 6.00 1.150 1.151 1.152 1.154 1.152 Average absolute error 7.5E-041.50E-034.25E-03 1.75E-03 Table 7-9. Actual and empiri cal ratios of gaps between adjacent diffuser vanes. W1 W2 L L/W1W2/W1P-ratio Empirical relations 4.00 1.60 Baseline vane 0.66 0.94 2.093.15 1.42 1.074 Intermediate optimal 0.79 1.07 2.032.55 1.35 1.117 Optimal vane 0.94 1.11 2.022.15 1.19 1.150

PAGE 277

277 CHAPTER 8 SUMMARY AND FUTURE WORK In this chapter, we summarize the main cont ributions in the form of major conclusions derived from this work and discu ss the prospects of future work. As we stated in the beginning, the main goa l of this work was to develop methodologies for optimal design of space propulsion systems. Firstly, we revisited the challenges in the optimal design of space propulsion systems and illustra ted the need of using surrogate models to alleviate high computational expense. Then we hi ghlighted the issues that influence the effective use of surrogate models for design and optimization. To this end, we illustrated the risks in design of experiments based on a single criterion and difficulties in th e choice of surrogate model, and proposed some remedies. We also offe red insight into a somewhat neglected topic of appraising the accuracy of error estimation mode ls that can aid in optimization by correctly identifying regions of high uncerta inties. Finally, we showed the application of above strategies to two applications of relevan ce to space propulsion systems: (1) we used surrogate model based strategy for model validation and sensitivity eval uation of a cryogenic cavitation model, and (2) we optimized the hydrodynamic performance of the diffuser. We briefly recapitulate important lessons from this work and the sc ope of future work as follows. Pitfalls of Using a Single Crit erion for Experimental Designs Summary and Learnings We illustrated the non-dominated nature of diffe rent types of experimental designs while considering multiple criteria, and demonstrated the risks in using a single criterion to construct experimental designs. Particularly min-max RMS bias design, which minimizes the maximum RMS bias error in the entire design space, may yield a design that is very sensitive to noise. Popular experimental designs like LHS and D-optimal designs can leave large holes in design space that may potentially cause la rge errors in approximation without any indication of poor quality of surrogate model.

PAGE 278

278 Face-centered central composite design, which is an intuitive design, performs very well on multiple criteria but this design is not practical for high-dimensional spaces. D-optimal designs are better than LHS designs in reducing maximum errors but the latter is better for space-averaged bias errors. To alleviate the risk of using a single crit erion based experimental design, we explored multiple strategies. We demonstrated possibl e advantages of simultaneously considering complimentary criteria. In part icular, we showed improvement s in space-averaged errors by combining two such criteria, model based D-optimality criterion, which caters to variance, and geometry based LHS design, wh ich improves the distri bution of points in design space. We further evidenced the elimination of poor experimental designs by selecting one out of three experimental designs accordin g to an appropriate criterion. Future Work We have posed the problem of need to simultaneously consider multiple criteria to reduce risk of running into poor experimental designs. However, a significant amount of research is required to answer the following tw o questions, (1) which criteria should be c onsidered?, and (2) how to simultaneously accommodate multiple criteria? Ensemble of Surrogates Summary and Learnings We demonstrated that a singl e surrogate model may not be appropriate for all problems. Instead the best surrogate depe nds on the design of experiment nature of the problem, and the amount of data used to develop surrogate model. We showed that simultaneously using multiple surrogate models may prove more effective than using a single surrogate model. We proposed a method to develop a weight ed-average of surrogates using a crossvalidation estimate of surrogate performance. The proposed weighted surrogate performed at par with the best individual surrogate and protected us from the worst surrogate with lower sensitivity to the choice of design of experiment, sampling density, and the dimensionality of the problem. Using multiple surrogates, we could identify regions where uncertainty in predictions is significant, and measures like, adaptive samp ling can be taken to improve predictions.

PAGE 279

279 Future Work The simultaneous use of multiple surrogate s is a low cost method to account for uncertainties due to choice of approximation model. Some of the possible areas that can be explored in future are as follo ws: (1) to identify the suitabilit y of many different surrogates for model averaging, (2) to develop different schemes for model averag ing, (3) to develop different methods for selection of weights, and (4) to develop error prediction models for weighted average surrogates. Accuracy of Error Estimate s for Noise-free Functions Summary and Learnings Though practically very useful error models have not re ceived much attention in engineering optimization, possibly because of lack of confidence in thei r accuracy. We compared different error estimation models with th e help of various example problems. The main finding was that model independent er ror measures perform equal to or better than the model dependent error estimators. Generalized cross-validation error yields a reas onable estimate of actual root mean square error in the entire design space, though it usua lly overestimates the errors. While estimated root mean square error for polynomial respon se surface (PRS) approximation consistently underpredicts actual errors, it pe rforms much better than proc ess variance for kriging. Among local error estimation models, no error model characterizes th e entire error field well for all problems. Estimated standard error for PRS underestimat es actual RMS errors and identifies the high error regions quite accurately. This error estim ation model is least influenced by the choice of design of experiment and nature of the pr oblem. On the other hand, root mean square bias error model performs the best when the discrepancy between assumed true model and actual function is small. RMS bias error mostly overestimates actual errors in PRS. Mean square error measure for kriging unde restimates actual errors, and shows little variation with the nature of the problem. This error mode l very accurately identifies potential high error regions. The standard deviation of responses, whic h is a model-independe nt pointwise error measure, also characterizes the actual root m ean square errors reasonably well for different

PAGE 280

280 designs of experiments, problems, and surroga tes. However, this model performs very poorly when any constituent surroga te predictions are very bad. We also explored possibilities of simultaneously using multiple error models to improve the error prediction characteristics. A geom etric averaging of error measures, like estimated RMS error and PRESS for PRS, estimat ed standard error and RMS bias error for PRS, mean square error and standard devi ation of responses for kriging, improves robustness with respect to designs of experime nts and nature of the problem, in prediction of actual root mean square error in the entire design space. We showed that simultaneous use of multiple error models for locating the high error regions reduces the risk of failure, th ough it causes false alarms quite often. We observed encouraging results in identifyi ng the appropriate error model using the error prediction capabilities at data points using a generalized cr oss validation based approach. Future Work We showed that while global error estimation models are reasonably well developed, there is a need to improve the capabi lity of local error modeling, part icularly in the area of modelindependent error measures that can be used for any problem. Besides, we can significantly benefit by developing better ways to use an en semble of error models. Some areas worthexploring are weighted averaging of error m odels, reduction in the false alarm while using multiple error models, and so on. System Identification of Cryogenic Cavitation Model Summary and Learnings We studied one cryogenic cavitation model (Merkl e et al., 1998) in de tail to assess the influence of thermo-sensitive material properties, model parameters, and thermal effects on the prediction of cavitating flow va riables in cryogenic environmen ts; and used surrogate based approach to calibrate the model parameters. Firs tly, we studied the influence of variation in model parameters Cdest and Cprod, and uncertainties in material properties, late nt heat of vaporization L and vapor density v, on the prediction of pressure and temperature in a liquid nitrogen flow over a hydrofoil in a suitabl y designed tunnel. This benchmark case was

PAGE 281

281 extensively studied by Hord (1973a) for different fluids and flow conditions. The performance of cavitation model was characterized by L2-norm of the deviation of predicted surface pressure ( Pdiff) and temperature ( Tdiff) data from the experimental data. We approximated the two prediction indicators us ing surrogate models to limit the computational cost. The main conclusions are enumerated as follows. Using a global sensitivity an alysis approach proposed by Sobol (1993), we found that the model parameter Cdest influenced the performance of the cavitation model the most, and Cprod was the least influential parameter. Relatively, uncertainties in material properties were less significant but not negligible. Uncer tainty in vapor density was more important of the two selected material properties. Further, applying the information from the sens itivity analysis, we calibrated the cryogenic cavitation model for different fl uids using the model parameter Cdest. The objective of calibration was to simultaneously minimize the competing responses Pdiff and Tdiff. Again, using a multiple surrogate based optimization strategy, and noting the importance of pressure predictions that influence cavitation more than the temperature predictions, we found ,20.6392destLNC ,20.767destLHC and 54.4prodC as the best compromise model parameters. The choice of calibrated model parameters im proved robustness with respect to different geometries and operating condi tions. From physical point of view, the reduction in the parameter Cdest, instigated an earlier ons et of condensation, and hence the cavity closure, which was difficult to predict using original parameters. The application of multiple surrogate models was found very effective in this model validation and calibration exercise. The role of thermal environment on predic tions was assessed using analysis and simulations. We presented an analytical framew ork to assess the influence of variation in material properties on the cavitation model performance. We found that the wall heat transfer due to thermal boundary condition does not affect the flow variables significantly, but the thermal effects via phase change were very influe ntial in determining the cavity morphology. Future Work While the current effort clearly showed impr ovements in the prediction capabilities of the present cryogenic cavitation model, we noted difficulties in simultaneous prediction of pressure and temperature fields. To this end, one needs to critically probe the current cryogenic cavitation

PAGE 282

282 model and physics of cavitation in cryogenic environment to devel op better models. Besides, the surrogate-based approach of model validation can be extende d to other flow problems. Shape Optimization of Diffuser Vanes Summary and Learnings We used surrogate based methodology to improve the hydrodynamic performance of diffuser that is an important propulsion component. The optimized design has thinner, no n-convex shaped diffuser vanes. While these vanes may be susceptible to the manufacturing and structural difficulties, the improvements in the hydrodynamic pe rformance are significant. We observed that initial speedi ng of the flow can help avoid separation at a later stage, which helps reduce pressure loss. The design approach was aided by different as pects of surrogate modeling. Specifically, the reasonable design space approach helped us identify the high performance region, the dimensionality of the problem was reduced following a global sensitivity analysis, and the use of multiple surrogate models protected us from obtaining a sub-optimal design as the best surrogate modeled changed during the course of optimization. Future Work This problem served as a proof of concept for the future development of space propulsion systems that may involve cavitati on or other complex flows. With respect to the diffuser design with the current operating conditions one can improve the practical utility of the current effort by considering manufacturing and st ructural constraints. The cu rrent analysis was based on a vane-only analysis. In future, we can combin e flow path with the vane to improve the performance. Though the methodologies developed in this work are applied to the design of spacepropulsion systems, they are generic in natu re and can be utili zed for any specialty.

PAGE 283

283 APPENDIX A THEORETICAL MODELS FOR ESTIMA TING POINTWISE BIAS ERRORS Let the true response (x) at a design point x be represented by a polynomial Tf(x) where f(x) is the vector of basis functions, and is the vector of coefficients. The vector f(x) has two components: (1)f(x) is the vector of basis functions used in the polynomial response surface model, and (2)f(x) is the vector of additional basis functions that are missing in the linear regression model. Similarly, the coefficient vector can be written as a combination of vectors (1) and (2) that represent the true coefficients a ssociated with the basis function vectors (1)f(x) and (2)f(x), respectively. Precisely, 1 121(1)22 2()()(())(()).TTTTT () ()()()()() () xfx fffx fx (A1) Assuming normally distributed noise with zero mean and variance 2 (2(0,) N ), the observed response y(x) at a design point x is given as ()(). y xx (A2) If there is no noise the true response () x is the same as the observed response () yx. Then, the true response for Ns design points (,1,,)i s iN ()x in matrix notation is (1) (1)(2)(1)(1)(2)(2) (2), XXXXX y + (A3) where y is the vector of observed re sponses at the data points, X(1) is the Gramian matrix constructed using the basis functions corresponding to (1)f(x), and X(2) is constructed using the missing basis functions corresponding to (2)f(x). As an example, a Gramian matrix in two

PAGE 284

284 variables when the PRS model is quadratic and the true response is cubic (with monomial basis functions) (1)(1)(1)(1)2(1)(1)(2)2(1)3(1)2( 121122112 (2)(2)(2)2(2)(2)(2)2 121122 ()()()2()()()2 121122 ()()()2()()()2 1211221 1 1 1ssssssiiiiii NNNNNN Xxxxxxxxxx xxxxxx X xxxxxx xxxxxx (2)1)(1)(1)2(1)3 122 (2)3(2)2(2)(2)(2)2(2)3 112122 ()3()2()()()2()3 112122 ()3()2()()()2()3 112122.ssssssiiiiii NNNNNN Xxxx xxxxxx xxxxxx xxxxxx (A4) The predicted response () y x at a design point x is given as a linear combination of approximating basis function vector (1)f(x) with corresponding estimated coefficient vector b: (1) ()(()).Ty xfxb (A5) The estimated coefficient vector b is evaluated using the data for Ns design points as (Myers and Montgomery, 1995, Chapter 2) (1)(1)(1)-1.TTXXX b() y (A6) Substituting for y from Equation (A3) in Equation (A6) gives (1)(1)(1)(1)(1)(2)(2)-1,TTXXXXX b()[ + ] (A7) that can be rearranged as (1)(2)A b where (1)(1)(1)(2)-1,TTAXXXX= (A8) is called the alias matrix. Equation (A8) can be rearranged as (1)(2)A b (A9) Note that this relation is valid only if Equation (A3) is satisfied (i.e., no noise). The bias error at Ns design points is defined as (1) .bX e yyy b (A10)

PAGE 285

285 Substituting for y from Equation (A3) and for b from Equation (A8) gives (1)(1)(2)(2)(1)12(2)(1)2.bXXXAXXA ()()()e (A11) Thus, the bias error at Ns design points is a function of the coefficient vector 2() only. The error at a general design point x is the difference between the true response and the predicted response, ()()() ey xxx. When bias error is dominant, ()()bee xx, where ()bex is the bias error at design point x. Substituting values from Equations (A1) and (A5) gives (1)(1)(2)(2)(1) ()()()(())(())(()).TTTbeyxxxfx fx fxb (A12) This expression can be used to estimate pointwise estimates of RMS bias error and/or bounds on bias error. Data-Independent Error Measures Firstly, we develop measures to estimate e rror prior to generation of data that is no experiment/simulation is conducted. Such erro r measures are useful for determining the experimental designs. We assume that there is no noise in data, and the true function (()fx) and the polynomial response surface model ((1)()fx) are known such that Equation (A3) is satisfied. Substituting Equation (A9) in Equation (A12) and rearranging terms, we get, (2)(1)(2)()(())(()).xfxfx T TbeA (A13) This shows that for a given experimental design (alias matrix A is fixed), the bias error at a general design point x depends only on the true coefficient vector (2). So, pointwise bounds or root mean square estimates of bias error can be obtained by supplying the distribution of the true coefficient vector (2).

PAGE 286

286 Data-Independent Bias Error Bounds The bound on bias errors characterize the ma ximum error considering all possible true functions. Mathematically, pointwise data-i ndependent bias error bounds are given as, (2)(2)(1)(2)() ()maxmax. (())(()) x x fxfx T TI b be e A (A14) For a given experimental design, the term (2)(1)(())(())fxfxTA is a function of x only. Define, (2)(1)()(())(()).mxfxfxTA (A15) Then, Equation (A14) can be written as, (2)(2)()max. ()x mx TI be (A16) Obviously Equation (A16) is maximized, when each term of the coefficient vector (2) takes an extreme value. With no data on the distribution, the pr inciple of maximum entropy was followed by assuming that all components of the coefficient vector (2) have a uniform distribution between and ( is a constant). Then, 21()().xxN i iI bem (A17) Data-Independent RMS Bias Error The data-independent root m ean square of bias error ()xrmsI be at design point x, that characterizes the average error, is obtained by computing its L2 norm 2()bEex (() Ex denotes the expected value with respect to ), where (2)(1)(2)(2)(1)(2)2()()()()()()().T TTTTTbbbEeEeeEAA xxxfxfx fxfx (A18)

PAGE 287

287 Since the term (2)(1)()()TA fxfx depends only on the design point x and the experimental design, the above equation can be rewritten as (2)(1)(2)(2)(2)(1) (2)(1)(2)(2)(2)(1)2()()()()() ()()().()TTTT TTTTbEeEAA AEA xfxfx fxfx fxfx ffxx (A19) Then, (2)(1)(2)(2)(2)(1)2()()()()()(). xxfxfx fxfxTTTTrmsI bbeEeAEA (A20) It is obvious from Equation (A20) that the RMS bias error at any point depends on the distribution of the coefficient vector (2) For a uniform distribution between and for all components of the coefficient vector (2) with simple algebra it can be shown that 2 (2)(2)3TEI where I is an 22NN identity matrix. Substituting this and Equation (A15) in Equation (A20), the pointwise RMS bias error is 2 (2)(1)(2)(1)()()()()() 3 () 3 xfxfxfxfx mxTTTrmsI beAIA (A21) where ||.|| represents the norm of the quantity. Data-Dependent Error Measures Next, we develop the error measures posterior to generation of data such that the error measures yield an estimate of actual error in approximation. Unlike data-independent error measures, the choice of the true coefficient vector is restricted by the constraint that the data at sampled locations must be satisfied.

PAGE 288

288 Bias Error Bound Formulation Papila et al. (2005) presented a method to co mpute pointwise bias error bounds, when data was exactly satisfied by the assumed true m odel, and there was no noise in data. This formulation was generalized by Goel et al. (2006a) to estima te bias error bounds while accounting for noise in the data, and rank deficiencies in the ma trix of equations. Their method to estimate pointwise bias error bound ()beb be x is given as follows: (1)(2), (1)(1)(2)(2) (1)(1)(1) lu (2)(2)(2) lu()Maximize() :,beb bbee XX xx Subjectto y1 y 1 c c c c (A22) where (called as relaxation) is the acceptable deviation from the data y, and ,lucc are the lower and upper bounds on the coefficient vector respectively. The minimum relaxation required to get a feasible solution of th is problem can be obtained by solving, (1)(2)minmin ,, (1)(1)(2)(2) minmin min (1)(1)(1) (2)(2)(2)Minimize 0 .lu luXX Subjectto: y1 y1 c c c c (A23) The value of relaxation min is then chosen as suggested by Goel et al. (2006a). Equation (A22) can be cast as: max(max(()be x ), -min(()be x )), where ()be x is computed using (A12). So, bias error bounds at point x can be obtained by solvi ng two linear programming problems, one to maximize bias error, and second to minimize bias error, subject to data and appropriate bounds on the coefficient vector

PAGE 289

289 Root Mean Square Bias Error Formulation The root mean square of bias error () xrms be at design point x is obtained by computing its L2 norm (Goel et al. 2006c, 2007b) as follows: 2())()((()()),rmsT bbbbeeEEeexxxx (A24) where (()) E gx is the expected value of () gx with respect to Substituting for ()be x from Equation (A12), 2 (1)(1)(2)(2)(1)(1)(2)(2)()()() (())(())(())(()).T bbb T TTTTEeEee E bbxxx fx fx fx fx (A25) Since, (1)() fx and (2)() fx depend only on x Equation (A25) can be rearranged as, 2(1)(1)(1)(1) 1 (2)(2)(1)(1) 2 (1)(1)(2)(2)(2)(2)(2)(2) 4 3()(())(()) (())(()) (())(())(())(()).T T b T T TTTTEeE E EE bb b bxfx fx fx fx fx fxfx fx (A26) This expression can be estimated if the distribution of the coefficient vector is known. Determining the Distribution of Coefficient Vector We know that RMS bias errors (Equation (A24)), and bias error bounds (Equation (A22)) depend on the information about the coefficient vector The distribution of coefficient vector can be obtained using the data at sampling points as follows. Since the rank of matrix X is lesser than the number of co efficients, there are (),eNNrankX zero eigenvalues, and correspondingly, eN null eigenvectors. Using the pr operties of null eigenvectors ( V ) 0X V,

PAGE 290

290 we obtain the distribution of coefficient vector as a linear combination of eN null eigenvectors, given as, 1iieN i V (A27) where coefficient vector satisfies the data within the limit of minimum relaxation min is the norm of vector and i is the random coefficient associated with ith null eigenvector Vi. For a given experimental design, and Vi are fixed, so the distribution of is related to the di stribution of i Using the maximum entropy prin ciple, we assume that all i follow a uniform distribution , where is a constant. For a given experimental design, the coefficient vector is obtained in two steps: We identify the minimum-norm solution (m ) of under-determined system of equations X y If matrix X is full rank, solution m exactly satisfies the system of equations, and obviates next step. Otherwise mmX y ; and the difference between y and m y denotes imperfections in modeling of the data. We specify bounds ,lucc in Equation (A23) using m as, lmmc and ummc (A28) and using m as initial guess in Equation (A23), we find Substituting Equation (A27) in Equation (A12), we can rewrite the expression to evaluate bias error at a design point x as, (1)(1)1()()(())()(()).TTTT b iieN ieff bbxx fxx Vfx (A29) This can be rearranged as,

PAGE 291

291 (1) )1()()(())().TTT biieN ieff0Constant(ebxx fx xV (A30) The first term in above expression is a constant 0e for given data (fixed b ), and experimental design (fixed ). Then, the bias error depe nds only on the distribution of i 01()().T biieN ieefx xV (A31) The advantages of the above formulation of bias error is that, now we can develop analytical expressions for bias error bounds and root mean squa re bias errors, if we know i Analytical Expression for Pointwise Bias Error Bound The pointwise bounds on bias error are obtained by maximizing Equation (A31) over all possible values of coefficients i That is, 01()max().ibebT biieN ieef x xV (A32) This can be rearranged as, 00 001 1max();0 (). min();0T ii beb b T iie i e iN i N iefe e efe xV x xV (A33) In contrast to Equation (A22), this optimization problem (Equation (A33)) has only side constraints on i because all coefficient vectors sa tisfy the data within limit of min Since all i follow a uniform distribution , we rewrite the second term in the above expression, 11max(), ()T T ii ieeNN iif f xV xV (A34) and,

PAGE 292

292 11min()max().TT iiiieeNN iiffxVxV (A35) The expression in Equation (A33) is easy to evaluate even when, the bounds on i are nonuniform. Thus, with the new formulation, we have an analytical expression for pointwise bias error bounds that significantly reduc es the computational expense. Analytical Estimate of R oot Mean Square Bias Error Next, we develop analyti cal estimates of the expected valu es of different terms in Equation (A26) by using the assumpti on on the distribution of i as follows. Lets consider ( i, j ) component (2)(1)(())jji E b of matrix (2)(1)(())TE b. Substituting Equation (A27), (2)(1)(2)(2)(1)(1)11(()).jjppiqqjjeeNN iij pqEbEVVb (A36) Denoting (1)(1)*jjjb and expanding terms, (2)(1)*(2)(1)(1)*(2) 1 (2)(1) (1)(2)1 11(()).qqjjppi p jj qqjppiee eeNN iji q i NN qpVV EbE VV (A37) Rearranging terms, (2)(1)(2)(1)*(2)(1) 2 (1)*(2)(1)(2)1 111(()) .jjqqj ppiqqjppie eeeN iiji q NNN j pqpEbEEV EVEVV (A38) Since (2)(1)*,,ij are constant,

PAGE 293

293 (2)(1)(2)(1)*(2)(1) 2 (1)*(2)(1)(2)1 111(()) .jjqqj ppiqqjppie eeeN iiji q NNN j pqpEbEV EVEVV (A39) Using the property E xEx and noting that (1)(2),qjpiVV are constant and 0iE Equation (A39) can be further simplified as, (2)(1)(2)(1)*(2)(())jjqiijiEbE (1) (1)*1qj peN q jV E 2 (2)(1)(2)111.piqqjppieeeNNN pqpVEVV (A40) Using the property x yxy 2 (2)(1)(2)(1)(1)(2)* 11(()),jjqpqjpieeNN iij pqEbEVV (A41) 2 (2)(1)(2)(1)*(1)(2)11(()).jjqpqjpieeNN iij pqEbEVV (A42) Since 23 0qp p q E p q 2 2 (2)(1)(2)(1)*(1)(2)1(()). 3jjpjpieN iij pEbVV (A43) Similarly, we can write analyti cal expressions for ot her terms and estimate RMS bias errors using Equation (A26).

PAGE 294

294 APPENDIX B APPLICATIONS OF DATA-INDEPENDE NT RMS BIAS ERROR MEASURES Recap True function : 1 121(1)22 2()()(())(()).TTTTT () ()()()()() () xfx fffx fx (B1) Approximation : (1) ()(()), xfxbTy (B2) where (1)(1)(1)-1.TTXXX b() y (B3) Data-independent root mean square bias error : (), () 3 x mxrmsI be (B4) where (2)(1)()(())(()). mxfxfxTA (B5) Construction of Experimental Designs We construct a central composite design (CCD ), a popular minimum variance design, in an Nv-dimensional cube [1,1]NvV for Nv ranging from two to five. For Nv dimensions, the CCD has 221NvvN points. Minimum-bias experimental designs are cons tructed using two parameters 120,1 that define the coordinates of sampled data points. Points corresponding to vertices are placed at 1ix for 1,...,viN and axial points are located at 2,0ijxx for all j i for each 1,...,viN In addition, one point is placed at the origin. Figure B-1 shows an example of an ED as a function of parameters 12, in two dimensions. The min-max RMS bias design can be obtained by identifying parameters 12, such that the maximum value of (data independen t) RMS bias error in design space max()rmsI be is minimized. Mathematically, this is a two-level optimization problem:

PAGE 295

295 11max min 1,...,0,...,min()minmax().(||)xbmmbrmsIrmsI b Veee (B6) The inner problem is the maximization of RMS bias error over the design space (this is a function of i and pointwise errors are estimated by Equation (B4)), and the outer level problem is minimization over the parameters i Note that knowledge of (1)(2),, b is not required to generate an ED. We used the conventional assumption of a qua dratic PRS and a true model being a cubic polynomial to compute bias errors. The maximum RMS bias error for a given combination of parameters 1 and 2 was obtained by evaluating pointwise RMS bias errors on a uniform 11Nv grid. The min-max RMS bias designs for di fferent dimensional spaces are given in Table 3-5. In low dimensions, the optimal min-max RMS de sign was obtained by placing vertex points inwards while keeping the axial point on the face However, for higher dimensions the optimal design required the axial point to be close to the cen ter while forcing the vertex point on the corner. This design has a very low maximum and average RMS bias error but this configuration leads to very standard errors, that is, this de sign would be very sensitive to noise errors. Why Min-max RMS Bias Designs Place Points near Center for Four-dimensional Space? The explanation of this une xpected result obtained for highe r dimensional cases is as follows. There are three types of cubic terms: 32,,iijij k x xxxxx. The experimental design is a compromise to minimize errors due to the three. The term 3i x is modeled as xi, no matter where the vertex and axial points are located. Similarly, the ij k x xx term will always be modeled by zero. The term 2ij x x will be modeled by cxi, where the constant c is determined by the position of axial and vertex poin ts. Vertex points favor 2 1c and axial points favor 0 c Because the

PAGE 296

296 ij k x xx term is modeled by zero, it generates large bi as errors. As the dimension of the problem increases, there are relatively more of these terms. For Nv = 2 their number is zero out of four terms; for Nv = 3 it is one out of 10 terms; for Nv = 4 it is four out of 20 and for Nv = 5 it is 10 out of 35. Furthermore, unlik e the other terms, all the ij k x xx terms have a common point where they peak: the all-positive or all-negative corn ers. These terms then dominate the choice of experimental design for high dimensional spaces. If axial points are placed inside the design space, the term 2ij x x contributes to the error at allpositive and all-negative corners (2 1c ). However, when the axial points (hence 2 ) go to zero, we can have 2 1c which minimizes additional errors at the all-posi tive and all-negative corners. Verification of Experimental Designs To verify the results for min-max RMS bias ED, we compared predicted RMS bias errors with actual RMS bias errors in four-dimensi onal design space. A cubic true function was approximated by a quadratic polynomial. Response and errors were predic ted on a uniform grid of 114 points. Pointwise RMS bias (predict ed) error was estimated using Equation (B4) with constant = 1 To compute actual RMS errors, a large number of true polynomials were generated by randomly selec ting the coefficient vectors (1) and (2) from a uniform distribution over [-1, 1]. For each true polynomial, the true response was obtained using Equation (B1) and the vector b was evaluated using Equation (A6). The actual error at a point due to each true polynomial was computed by taking the differen ce between actual and predicted response as (1)(1)(2)(2)(1)cccc ()()()()()(),TTTeyy bxxxfx fx fx (B7) where the subscript c represents an instance of the true polynomial.

PAGE 297

297 Pointwise actual RMS bias errors ()rms acte x were estimated by averaging actual errors over a large number of polynomials in the root mean squares sense as 21()().PN rms actcP ceeNxx (B8) NP (= 100,000) true polynomials were used to compute the actual erro rs. Two experimental designs with extreme values of axial location parameter 2 were compared. The actual errors and the predicted errors along with the correlations between actual and predic ted error estimates are given in Table B-3. Maximum and space-averaged va lues of actual RMS bias errors and predicted RMS bias errors compared very well. The correlations betwee n pointwise actual RMS bias errors and predicted RMS bias errors were also high. The small change in maximal actual RMS bias error with the location of axial desi gn point confirmed the outcome that maximal RMS bias errors had low variation with respect to the parameter 2 and placing the ax ial points close to the center minimized the maximal actual RMS error. Comparison of Experimental Designs Different standard designs, for two-dimensi onal design spaces, availa ble in the literature minimum variance design ED 1 (minimizes maximum variance, Myers and Montgomery, 1995), minimum space-averaged bias design ED 2 (minim izes space-averaged bias error, Qu et al., 2004), min-max bias error bound design ED 3 (minim izes maximum bias error bound, Papila et al., 2005), and min-max RMS bias design ED 4 (minimizes maximum RMS bias error, Goel et al., 2006c)were compared using metrics defined in Chapter 3. Errors were computed using a uniform 41x41 grid and resu lts are summarized in Table B-2. Interestingly, the design based on the most co mmonly used bias error metric, the minimum space-averaged bias design (ED 2), performed mo st poorly on all the metr ics except the space-

PAGE 298

298 averaged bias errors (RMS and bound). Since for ED 2 the sampled points were located in the interior, there was a significant extrapolati on region that explained the very high maximum errors. This suggested that averaging of bias error over the entire space may not be the best criterion to create experimental designs. On th e other hand, all other designs gave comparable performance on all metrics. As expected, the expe rimental designs based on bias errors (minmax bias error bound design, ED 3; and min-max RMS bias design, ED 4) performed better on bias errors and the ED based on min-variance (ED 1) reduced estimat ed standard error more than the other designs. The differences between the min-max bias error bound design (ED 3) and the min-max RMS bias error design (ED 4) were small. As expected, the results show that a design based on a single criterion does not perform the best on all metric s. Instead we obtain different non-dominated tradeoff solutions. RMS Bias Error Estimates for Trigonometric Example Bias error estimates are sometimes criticized because of the assumption that a higher order polynomial than the fitting model is the true f unction. We demonstrate that this assumption on the true model is practically us eful if it captures th e main characteristics of the true unknown function. Suppose the true function is a trigonometric polynomial given by 12126 ,0()Real,,[1,1].ijxikx jkjk jkiaeexx x (B9) The coefficients 0660, aa were assumed to be uniformly di stributed over [-10, 10] and the remaining jka were assumed to be uniformly distributed over [-1, 1]. This function represents the cumulative response of sine waves of different wavelengths. For the given range of parameters j k x1, and x2, the highest frequency (shorte st wavelength corresponding to j = 6 and

PAGE 299

299 k = 6 ) components in the true function have more than one cycle in design space. To estimate actual errors, 10000 ( Np = 10000) combinations of jka were used. This function was approximated using a cubic polynomial and bias errors were estimated assuming the true model to be quintic. A uniform 4x4 grid was used for sampling (16 points) and a uniform 21x21 grid was used to estimate errors The distribution of actual RMS bias error and predicted RMS bias error in design space is shown in Figure B-2. The predicted RMS bias error was scaled by a factor of 55 to compare with the ac tual errors in design space. Note that prior to generation of data, the actual ma gnitude of the error was not important as the estimated errors were scaled by an unknown factor (refer to Equation (B4)) that was arbitrarily set to one. The predicted RMS bias errors correctly identifi ed the presence of hi gh error zones along the diagonals. However, predictions were inaccurate near the center and close to the boundary, where the effect of high frequency terms in th e true function was signi ficant. The correlation between the actual RMS bias erro rs and the predicted RMS bias errors was 0.68. The error contours and correlation coefficient indicated a reasonable agreement between actual RMS bias error and predicted RMS bias error estimates cons idering the fact that the true function was a high frequency trigonometric polynomial. For this example problem, the actual di stribution of the coefficient vector (2) can be obtained from the distribution of constants j ka in the true function by expanding each sine term in Maclaurin series as 1 21 1!(1) sin(), (21)n n nxx n (B10) and observing the coefficients of the quintic terms (() x is odd, so there are no quartic terms). It was found that the coefficients (2) 6 and (2) 11 (corresponding to 5 1 x and 5 2 x respectively)

PAGE 300

300 approximately followed a uniform distribution wi th range [-700, 700] and the coefficients (2) 7 (2) 8 (2) 9 and (2) 10 (corresponding to 4 12 x x 32 12 x x 23 12 x x and 4 12 x x respectively) followed approximately a uniform distribut ion with range [-70, 70]. Using the modified distribution of coefficients and hence (2)(2) TE, RMS bias errors were estimated. Corresponding actual RMS bias error and scaled predicted RMS bias error (scaled by 0.177) contours are shown in Figure B-3. For this case, the agreement between the actual RMS bias error and the predicted RMS bias error improved significantly (compare Figure B-2(A) and Figure B-3(A)) and the correlation between the actual RMS bias errors and the predicted RMS bias errors increased to 0.94. This indicates that supplyi ng the correct distributi on of the coefficient vector for the basis functions missing from the response surface model (2)() f x is helpful to assess the errors.

PAGE 301

301 (1, 1) (0.0, 2) ( 2, 0.0) (2,0.0) ( 1, 1) (1, 1) ( 1, 1) (0.0, 2) (0.0, 0.0) (1, 1) (0.0, 2) ( 2, 0.0) (2,0.0) ( 1, 1) (1, 1) ( 1, 1) (0.0, 2) (0.0, 0.0) Figure B-1. Two-dimensional illust ration of central composite experimental design constructed using two parameters 1 and 2. A B Figure B-2. Contours of scaled predicted RMS bias error and actual RMS error when assumed true model to compute bias error was quint ic while the true model was trigonometric (Equation (B9)) (scaled bias error = 55*predic ted RMS bias error). A) Scaled predicted RMS bias error. B) Actual RMS error.

PAGE 302

302 A B Figure B-3. Contours of scaled predicted RMS bias error and actual RMS error when different distributions of (2) were specified (assumed true model was quintic while the true model was trigonometric (Equation (B9)), (scaled bias error = 0.177*RMS bias error). A) Scaled predicted RMS bias error. B) Actual RMS error.

PAGE 303

303 Table B-1. Design variables and maximum RMS bias errors for min-max RMS bias central composite designs in Nv=2-5 dimensional spaces. Errors were computed on a uniform 412 grid ( Nv = 2), 213 grid ( Nv = 3) and, 11Nv grid ( Nv > 3), in space V = [-1, 1]Nv. Nv 1 2 max()rms be ()rms avg be 2 0.954 1.000 0.341 0.269 3 0.987 1.000 0.659 0.518 4 1.000 0.100 1.155 0.927 5 1.000 0.100 1.826 1.200 Table B-2. Comparison of different experimental designs for two dimensions. (1) Minimum maximum variance design. (2) Minimum space-averaged bias design. (3) Min-max bias error bound design. (4) Min-max RMS bias design. Metrics of measurement were maximum estimated standard error, space-averaged estimated standard error, maximum bias error bound (BEB), maximum RMS bias error, space-averaged RMS bias error (BE). Errors ar e computed on a uniform 41x41 grid in space V = [-1, 1]2. 1 and 2 define the location of sampled points in Figure B-1. ED 1 2 max()ese()esavgemax||I bemax()rms be ()rms avg be (1) Min max variance 1.000 1.0000.898 0.670 1.170 0.385 0.302 (2) Min avg BE 0.700 0.7071.931 0.869 2.364 0.690 0.168 (3) Min max BEB 0.949 0.9490.993 0.681 1.001 0.351 0.261 (4) Min max RMS BE 0.954 1.0000.973 0.688 1.029 0.341 0.269 Table B-3. Comparison of actual RMS bias errors and predicted RMS bias errors for min-max RMS bias central composite experimental designs in four-dimensional space. Errors are computed on a uniform 114 grid. 1 2 max()rms be max()rms acte ()rms avg be ()rms actavge(,)rmsrms act bere 1.000 0.100 1.155 1.158 0.927 0.927 1.000 1.000 1.000 1.176 1.180 0.827 0.827 1.000

PAGE 304

304 APPENDIX C GLOBAL SENSITIVITY ANALYSIS Global sensitivity analysis was first presente d by Sobol in 1993. This method is used to estimate the effect of different variables on the total variability of the function. Some of the advantages of conducting a global sensitivity anal ysis include, (1) assessing importance of the variables, (2) fixing non-e ssential variables (which do not affect the variability of the function) thus, reducing the problem dimensionality. Homma and Saltelli (1996) (analytical functions and study of a chemical kinetics model), Saltelli et al. (1999) (analytical functions), Vaidyanathan et al. (2004b) (liquid rocket injector shape design), Jin et al. (2004) (piston shape design), Jacques et al. (2004) (flow parameters in a nuclear reactor), and Mack et al. (2005a) (bluff body shape optimization) presented some applications of the global sensitivity analysis. The theoretical formulation of the global sensitivit y analysis is given as follows: A function f ( x ) of a square integrable objective as a function of a vector of independent uniformly distributed random input variables x in domain [0, 1] is assumed. The function can be decomposed as the sum of functions of increasing dimensionality as 01212()()(,)(),,,,xiiijijN iijvvNfffxfxxf x xx (C1) where 01 x0x d f f. If the following condition 11 ... 00,iiskfdx (C2) is imposed for k = i1, is, then the decomposition described in Equation (C1) is unique. In context of global sensitivity analys is, the total variance denoted as V ( f ) can be shown equal to 1... 11,()...,vN iij iijv v N NVfVVV (C3)

PAGE 305

305 where 2 0()(())VfEff, and each of the terms in Equation (C3) represents the partial contribution or partial variance of the independent variables ( Vi) or set of variables to the total variance, and provides an indicati on of their relative importance. The partial variances can be calculated using the following expressions: ([|]), ([|,]), ([|,,]),ii ijijij ijjijij ijkikjkkVVEfx VVEfxxVV VVEfxxxVVVVVV (C4) and so on, where V and E denote variance and expected value respectively. Note that, 1 0|iiiEfxfdx and 1 2 0([|])iiiVEfxfdx. This formulation facilitates the computation of the sensitivity indices correspondi ng to the independent variable s and set of variables. For example, the first and second order sensitivity indices can be computed as ,. ()()iji iijV V SS VfVf (C5) Under the independent model inputs assumption, the sum of all the sensitivity indices is equal to one. The first order sensitivity index for a given variable represents the main effect of the variable, but it does not take into account the effect of the interaction of the variables. The total contribution of a variable to the total variance is given as the sum of all the interactions and the main effect of the variable. The total sensi tivity index of a variable is then defined as ,,,... ()iij ijk jjijjikki total iVVV S Vf (C6) Note that the above referenced expressions ca n be easily evaluated us ing surrogate models of the objective functions. Sobol (1993) has prop osed a variance-based non-parametric approach to estimate the global sensitivity for any comb ination of design variables using Monte Carlo

PAGE 306

306 methods. To calculate th e total sensitivity of any design variable xi, the design variable set is divided into two complementary subsets of xi and Z ,1,;jv Z xjNji The purpose of using these subsets is to isolate the influence of xi from the influence of the remaining design variables included in Z The total sensitivity index for xi is then defined as ()totaltotal i iV S Vf (C7) ,, Z total iiiVVV (C8) where iV is the partial variance of the objective with respect to xi, and Z iV is the measure of the objective variance that is depe ndent on interactions between xi and Z Similarly, the partial variance for Z can be defined as Vz. Therefore the total objective variability can be written as ,. Z ZiiVVVV (C9) While Sobol had used Monte Carlo simulations to conduct the global sensitivity analysis, the expressions given above can be easily computed analytically if f ( x ) can be represented in a closed form (e.g., polynomial response surface approximation).

PAGE 307

307 APPENDIX D LACK-OF-FIT TEST WITH NON-REPLICA TE DATA FOR POLYNOMIAL RESPONSE SURFACE APPROXIMATION A standard lack-of-fit test is a statistical tool to determine the influence of bias error (order of polynomial) on the predictions (Myers a nd Montgomery, 1995). The test compares the estimated magnitudes of the error variance and the residuals unaccounted for by the fitted model. Lets say, we have M unique locations of the data and at jth location, we repeat the experiment nj times, such that total number of points used to construct surrogate model is 1 M s j jNn. The sum of squares due to pure error is given by, 2 11,jkjjM pe jknyy SS (D1) where j y is the mean response at the jth sample location, given as, 11jj k k jn y y n. The sum of square of residuals due to lack -of-fit of the polynomial response surface model is, 2 1 () xM jj j lof jyy SSn (D2) where () x j y is the predicted respons e at the sampled location x j In matrix form, the above expressions are given as, 1, 11 yy T Tjj j j jM nn p enn n j jSS I n (D3) 11() 11 yyyy T TT TTjj j j jM nn N nn lof n j jsIXXXX SS I n (D4)

PAGE 308

308 where 1 j n is the (1jn ) vector of ones, j n I is ( j jnn ) identity matrix, N s I is ( s sNN ) identity matrix. We formulate F-ratio using the tw o residual sum of squares as, ,loflof p epeSSd F SSd (D5) where lofsdNN and pesdNM are the degrees of freedom associated with lofSS and p eSS respectively. The lack-of-fit in the surrogate model is detected with -level of significance, if the value of F in Equation (D5), exceeds the tabulated ,,pe lofddF value, where the latter quantity is the upper 100 percentile of the central F -distribution. When the data is obtained from the numerical simulations, the replication of simulations does not provide an estimate of noise ( SSpe), since all replications return exactly the same value. In such scenario, the variance of noise can be estimated by treating the observations at neighboring designs as near replicates (Hart, 1997, pp. 123). We adopt the method proposed by Neill and Johnson (1985), and Papila (2002) to estimate the lack-of-fit for non-replicate simulation. In this method, we denote a near-replicate design point xjk (as the kth replicate of the jth point x j ) such that xx j j kjk (D6) where j k represents the disturbance vector. Th en, the Gramian matrix is written as XX (D7) where X matrix is constructed using x j for near replicate points and matrix X X Now the estimated response at the design points (including near-replicat es) is given as,

PAGE 309

309 yy b (D8) where b is the estimated coefficien t vector. Now, we compute SSpe and SSlof by replacing y y j n, and X in Equations (D1) and (D2) with y y j n, and X

PAGE 310

310 LIST OF REFERENCES Abramowitz M, Stegun IA, (Eds.), 1972, Integration, .4 in Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables New York: Dover, pp 885887. Ahuja V, Hosangadi A, Arunajatesan S, 2001, Simulations of cavitating flows using hybrid unstructured meshes. Journal of Fluids Engineering 123(2) :331-340. AIAA, 1994, Editorial policy statement on numeric al accuracy and experimental uncertainty. AIAA Journal 32 :3. AIAA, 1998, Guide for the verification and va lidation of computational fluid dynamics simulations. AIAA G-077-1998 Ariew R, 1976, Ockhams razor: A historical and philosophical analysis of Ockhams razor principle of parsimony. Ph.D. Thesis Philosophy, The University of Illinois, UrbanaChampaign. ASME Editorial Board, 1994, Journal of heat tr ansfer editorial policy statement on numerical accuracy. ASME Journal of Heat Transfer 116 :797-798. Athavale MM, Singhal AK, July 2001, Numerical analysis of cavitating flows in rocket turbopump elements. Proceedings of the 37th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit Salt Lake City UT, AIAA-2001-3400. Balabanov VO, Kaufman M, Knill DL, Haim D, Golovidov O, Giunta AA, Grossman B, Mason WH, Watson LT, Haftka RT, September 1996, De pendence of optimal structural weight on aerodynamic shape for a high-speed civil transport. Proceedings of the 6th AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis and Optimization Bellevue WA, AIAA-96-4046:599-612. Balabanov VO, Giunta AA, Golovi dov O, Grossman B, Mason WH, Watson LT, Haftka RT, 1999, Reasonable design space approach to response surface approximation. Journal of Aircraft 36(1) :308-315. Baker ML, Munson MJ, Duchow E, Hoppus GW, Alston KY, January 2004a, System level optimization in the integrated hypers onic aeromechanics tool (IHAT). Proceedings of the 42nd AIAA Aerospace Sciences Meeting and Exhibit Reno NV, AIAA-2004-0618. Baker ML, Munson MJ, Hoppus GW, Alston KY, September 2004b, Weapon system optimization in the integrated hypers onic aeromechanics tool (IHAT). Proceedings of the 10th AIAA/ISSMO Multidisciplinary Anal ysis and Optimization Conference Albany NY, AIAA-2004-4316. Baldwin BS, Lomax H, January 1978, Thin-lay er approximation and algebraic model for separated turbulent flow. Proceedings of the 16th Aerospace Science Meeting Huntsville AL, AIAA-1978-257.

PAGE 311

311 Barthelemy J-FM, Haftka RT, 1993, Approximation concepts for optimum structural design A review. Structural Optimization 5 :129-144. Batchelor GK, 1967, An Introduction to Fluid Dynamics Cambridge University Press, New York. Bishop C, 1995, Neural Networks for Pattern Recognition Oxford University Press, Oxford. Box GEP, Draper N, 1959, A basis for th e selection of a re sponse surface design. Journal of the American Statistical Association 54 :622-654. Box GEP, Draper NR, 1963, The choice of a second order rotatable design. Biometrika 50(3) :335. Bramantia A, Barbaa PD, Farinaa MB, Savini a A, 2001, Combining response surfaces and evolutionary strategies for multi-objective Pareto-optimization in electromagnetics. International Journal of Applied Electromagnetics and Mechanics 15 :231-236. Brennen CE, 1994, Hydrodynamics of Pumps Oxford University Press. Brennen CE, 1995, Cavitation and Bubble Dynamics Oxford University Press. Burman J, Papila N, Shyy W, Gebart BR, September 2002, Assessment of response surfacebased optimization techniques for unsteady flow around bluff bodies. Proceedings of the 9th AIAA/ISSMO Symposium on Multidisci plinary Analysis and Optimization Atlanta GA, AIAA-2002-5596. Charania AC, Bradford JE, Olds JR, Graham M, April 2002, System level uncertainty assessment for collaborative RLV design. Proceedings of the 38th Combustion, Airbreathing Propulsion, Pr opulsion Systems Hazards, and Modeling and Simulation Subcommittees Meeting, (sponsored by JANNAF), Destin FL. Chankong V, Haimes YY, 1983, Multiobjective Decision Making Theory and Methodology Elsevier Science, New York. Chung HS, Alonso JJ, September 2000, Comparis on of approximation models with merit functions for design optimization. Proceedings of the 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinar y Analysis and Optimization Long Beach CA, AIAA-20004754. Cooper P, 1967, Analysis of single and two-phase flows in turbopump inducers, ASME Journal of Engineering Power 89 :577-588. Craig KJ, Stander N, Dooge D, Varadappa S, September 2002, MDO of automotive vehicle for crashworthiness and NVH usi ng response surface methods. Proceedings of the 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization Conference Atlanta GA, AIAA-2002-5607.

PAGE 312

312 Currin C, Mitchell TJ, Morris MD Ylvisaker D, 1998, A Bayesian approach to the design and analysis of computer experiments. Technical Report Oak Ridge National Laboratory Oak Ridge TN, ORNL-6498. Deb K, 2001, Multi-objective Optimization using Evolutionary Algorithms Wiley: Chichester UK. Delannoy Y, Reboud JL, 1993, Heat and mass transfer on a vapor cavity, Proceedings of the ASME Fluids Engineering Conference Washington DC, 165 :209-214. Deshpande M, Feng J, Merkle CL, 1997, Numeri cal modeling of the thermodynamic effects of cavitation. Journal of Fluids Engineering 119(2) :420-427. Dixon LCW, Szeg GP, 1978, Towards Global Optimization 2 North-Holland, Amsterdam. Dornberger R, Bche D, Stoll P, September 2000, Multidisciplinary optimization in turbomachinery design. Proceedings of the European Congress on Computational Methods in Applied Scie nces and Engineering Barcelona. Dorney DJ, Rothermel J, Griffin LW, Thornton RJ, Forbes JC, Skelley SE, Huber FW, July 2006a, Design and analysis of a turbopump fo r a conceptual expander cycle upper-stage engine. Proceedings of the Symposium of Ad vances in Numerical Modeling of Aerodynamics and Hydrodynamics in Turbomachinery Miami FL, FEDSM 2006-98101. Dorney DJ, Sondak DL, 2006b, PHANTOM: Program users manual, Version 20. Dorney DJ, 2006c, NASA Marshall Space Flight Center, Personal Communications Draper NR, Lawrence WE, 1965, Designs whic h minimize model inadequacies: Cuboidal regions of interest. Biometrika 52(1-2) :111-118. Draper NR, Smith H, 1998, Applied Regression Analysis Third Edition, John Wiley & Sons Inc., New York. Efron B, 1983, Estimating the error rate of a prediction rule: Improvement on cross-validation. Journal of the American Statistical Association 78 :316-331. Efron B, Tibshirani R, 1993, Introduction to the Bootstrap Chapman & Hall, New York. Emmerich MJ, Giotis A, Ozdemir M, Back T, Giannakoglou K, September 2002, Metamodelassisted evolutionary strategies, Proceedings of the Parallel Problem Solving from Nature VII Conference Granada Spain, pp 361-370. Fang H, Rais-Rohani M, Liu Z, Horstemeyer MF, 2005, A comparative study of meta-modeling methods for multi-objective crashworthiness optimization. Computers and Structures 83 :2121-2136.

PAGE 313

313 Farina M, Sykulski JK, 2001, Comparative stud y of evolution strate gies combined with approximation techniques for practical electromagnetic optimization problems. IEEE Transactions on Magnetics 37(5) :3216-3220. Fedorov VV, Montepiedra G, Nachtsheim CJ, 1999, Design of experiments for locally weighted regression. Journal of Statistica l Planning and Inference 81 :363-383. Franc JP, Rebattet C, Coulon A, 2003, An experime ntal investigation of thermal effects in a cavitating inducer, Proceedings of the Fifth International Symposium on Cavitation Osaka, Japan. Friedman J, Stuetzle W, 1981, Projection pursuit regression. JASA: Theory and Methods 76 :817-823. Garcia R, 2001, NASA Marshall Space Flight Center, Personal Communications Gelder TF, Ruggeri RS, Moore RD, 1966, Cavita tion similarity considerations based on measured pressure and temperature depres sions in cavitated re gions of Freon-114. NASA Technical Note D-3509 Giannakoglou KC, 2002, Design of optimal aerodyna mic shapes using stochastic optimization methods and computational intelligence. Progress in Aerospace Sciences 38 :43-76. Gibbs M, 1997, Bayesian Gaussian Proce sses for Regression and Classification. Ph.D. Thesis Cambridge University. Girosi F, 1998, An equivalence between spar se approximation and support vector machines. Neural Computation 10(6) :1455-1480. Giunta AA, Watson LT, September 1998, A compar ison of approximation modeling techniques: Polynomial versus interpolating models. Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis & Optimization St. Louis MO, 1 :392, AIAA-98-4758. Goel T, Vaidyanathan RV, Haftka RT, Qu eipo NV, Shyy W, Tucker PK, September 2004, Response surface approximation of Pareto optimal front in multi-objective optimization. Proceedings of the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference Albany NY, AIAA-2004-4501. Goel T, Mack Y, Haftka RT, Shyy W, Queipo NV, January 2005, Interaction between grid and design space refinement for bl uff-body facilitated mixing. Proceedings of the 43rd AIAA Aerospace Sciences Meeting and Exhibit Reno NV, AIAA-2005-0125. Goel T, Haftka RT, Papila M, Shyy W, 2006a Generalized bias error bounds for response surface approximation. International Journal of Nume rical Methods in Engineering 65(12) :2035-2059.

PAGE 314

314 Goel T, Haftka RT, Shyy W, Quei po NV, 2006b, Ensemble of surrogates. Structural and Multidisciplinary Optimization Journal doi: 10.1007/s00158-006-0051-9 (a different version was presented at 11th AIAA/ISSMO Multi-disciplinar y Analysis and Optimization Conference Portsmouth VA, 6-8 Se ptember 2006, AIAA-2006-7047). Goel T, Haftka RT, Shyy W, Watson LT, January 2006c, Pointwise RMS bias error estimates for design of experiments. Proceedings of the 44th AIAA Aerospace Sciences Meeting and Exhibit Reno NV, AIAA-2006-0724. Goel T, Zhao J, Thakur SS, Haftka RT, Shyy W, July 2006d, Surrogate model-based strategy for cryogenic cavitation model validati on and sensitivity evaluation. Proceedings of the 42nd AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit Sacramento CA, AIAA2006-5047. Goel T, Haftka RT, Shyy W, Watson LT, July 2006e, Combining bi as and variance based criteria for selecting experimental designs. Proceedings of the NSF Design, Service, and Manufacturing Grantees and Research Conference St. Louis MO. Goel T, Vaidyanathan R, Haftka RT, Shyy W, Queipo NV, Tucker PK, 2007, Response surface approximation of Pareto optimal front in multi-objective optimization. Computer Methods in Applied Mechanics and Engineering 196 :879-893. Goel T, Haftka RT, Shyy W, Watson LT, 2007, Pitfalls of usin g a single criterion for selecting experimental designs. Submitted to International Journal of Numerical Methods in Engineering Golub G, Heath M, Wahba G, 1979, Generalize d cross-validation as a method for choosing a Goodridge parameter. Technometrics 21 :215-223. Griffin MD, French JR, 1991, Space Vehicle Design AIAA, Washington DC. Gupta A, Ruffin SM, 2000, Aerothermodynamic performance enhancement of sphere-cones using the artificially blun ted leading-edge concept. Journal of Spacecraft and Rockets 37(2) :235-241. Hall P, 1986, On the bootstrap and confidence intervals. Annals of Statistics 14 :1431-1452. Hansen PC, 1992, Analysis of discrete ill-posed problems by means of the L-curve. SIAM Review 34 :561-580. Hart JD, 1997, Non-parametric Smoothing and Lack of Fit Tests Springer-Verlag, NewYork. Hedayat A, Sloane N, Stufken J, 1999, Orthogonal Arrays: Theory and Applications Springer Series in Statistics, Sp ringer Verlag, New York. Hesterberg T, Moore DS, Monaghan S, Clipson A, Epstein R, 2005, Bootstrap Methods and Permutation Tests W H Freeman, New York, Chapter 14.

PAGE 315

315 Hill GA, Olson ED, September 2004, Application of response surface-based methods to noise analysis in the conceptual de sign of revolutionary aircraft. Proceedings of the 10th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization Conference Albany NY, AIAA-2004-4437. Holl JW, Billet ML, Weir DS, 1975, Therm odynamic effects on developed cavitation, Journal of Fluids Engineering 97(4) :507-516. Homma T, Saltelli A, 1996, Importance measures in global sensitivity analysis of nonlinear models. Reliability Engineering and System Safety 52(1) :1-17. Hord J, 1973a, Cavitation in liquid cryogens, II-Hydrofoil. NASA CR-2156 Hord J, 1973b, Cavitation in liquid cryogens, III-Ogives. NASA CR-2156 Hosangadi A, Ahuja V, Ungew itter RJ, 2002, Simulations of cav itating inducer flow fields. Proceedings of the 38th Combustion, Airbreathing Propulsion, Propulsion Systems Hazards, and Modeling and Simulation Subcommittees Meeting (sponsored by JANNAF), Destin FL. Hosangadi A, Ahuja V, June 2003, A generalized multi-phase framework for modeling cavitation in cryogenic fluids. Proceedings of the 33rd AIAA Fluid Dynamics Conference and Exhibit Orlando FL, AIAA-2003-4000. Hosangadi A, Ahuja V, Ungewitter RJ, July 2003, Generalized numerical framework for cavitation in inducers. Proceedings of the 4th ASME/JSME Joint Fluids Engineering Conference Honolulu HI, FEDSM-2003-45408. Hosangadi A, Ahuja V, 2005, Numerical study of cavitati on in cryogenic fluids. Journal of Fluids Engineering 127(2) :267-281. Hosder S, Watson LT, Grossman B, Mason WH, Kim H, Haftka RT, Cox SE, 2001, Polynomial response surface approximations for the multidisciplinary design optimization of a high speed civil transport. Optimization and Engineering 2(4) :431-452. Huber F, 2001, Turbine aerodynamic design tool development. In: Space Transportation Fluids Workshop Marshall Space Flight Center AL. Humble RW, Henry GN, Larson WJ, 1995, Space Propulsion Analysis and Design Mcgraw Hill Inc., New York, Chapter 5. Huque Z, Jahingir N, July 2002, Applica tion of collaborative optimization on a RBCC inlet/ejector system. Proceedings of the 38th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit Indianapolis IN, AIAA-2002-3604. Hutchinson M, de Hoog F, 1985, Smoothing noisy data with spline functions. Numerical Mathematics 47 :99-106.

PAGE 316

316 Huyse L, Padula SL, Lewis RM, Li W, 2002, Proba bilistic approach to free-form airfoil shape optimization under uncertainty. AIAA Journal 40(9) :1764-1772. Iman R, Conover W, 1982, A di stribution-free approach to inducing rank correlation among input variables. Communications in Statistics, Pa rt B-Simulation and Computation 11 :311-334. Jacques J, Lavergne C, Devict or N, March 2004, Sensitivity anal ysis in presence of model uncertainty and correlated inputs. Proceedings of 4th International Conference on Sensitivity of Model Output (SAMO 2004) Santa Fe NM, pp 317-323. Jin R, Chen W, Simpson TW, 2001, Comparativ e studies of meta-modeling techniques under multiple modeling criteria. Structural and Multi-di sciplinary Optimization 23(1) :1-13. Jin R, Chen W, Sudijanto A, March 2004, An alytical meta-model based global sensitivity analysis and uncertainty pr opagation for robust design. Proceedings of the SAE World Congress and Exhibition Detroit MI, Paper 2004-01-0429. JMP, The statistical discovery softwareTM, Version 5, Copyright 1989-2002, SAS Institute Inc., Cary,NC, USA. Johnson M, Moore L, Ylvisaker D, 1990, Minimax and maximin distance designs. Journal of Statistical Planning and Inference 26 :131-148. Jones D, Schonlau M, Welch W, 1998, Expens ive global optimization of expensive black-box functions. Journal of Global Optimization 13(4) :455-492. Jun S, Jeon Y, Rho J, Lee D, September 2004, A pplication of collaborative optimization using response surface methodology to an aircraft wing design. Proceedings of the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference Albany NY, AIAA-2004-4442. Kaufman M, Balabanov V, Burgee SL, Giunta AA, Grossman B, Haftka RT, Mason WH, Watson LT, 1996, Variable-complexity re sponse surface approximations for wing structural weight in HSCT design. Computational Mechanics 18(2) :112-126. Keane AJ, 2003, Wing optimization using design of experiments, response surface and data fusion methods. Journal of Aircraft 40(4) :741-750. Khuri AI, Cornell JA, 1996, Response Surfaces: Designs and Analyses 2nd edition, Marcel Dekker Inc., New York. Kim Y, Lee D, Kim Y, Yee K, September 2002, Multidisciplinary design optimization of supersonic fighter wing using response surface methodology. Proceedings of the 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization Conference Atlanta GA, AIAA-2002-5408.

PAGE 317

317 Kleijnen J, Deflandre D, 2004, Validation of regression meta-mode ls in simulation: Bootstrap approach. European Journal of Operational Research 170(1) :120-131. Knill DL, Giunta AA, Baker CA, Grossman B, Mason WH, Haftka RT, Watson LT, 1999, Response surface models combining linear and Euler aerodynamics for supersonic transport design. Journal of Aircraft 36(1) :75-86. Kubota A, Kato H, Yamaguchi H, 1992, A new modeling of cavitati ng flows: A numerical study of unsteady cavitation on a hydrofoil section, Journal of Fluid Mechanics 240 :59-96. Kunz RF, Boger DA, Stinebring DR, Chyczewski TS, Lindau JW, Gibeling HJ, 2000, A preconditioned Navier-Stokes method for two-phase flows with application to cavitation. Computers and Fluids 29(8) :849-875. Kupper LL, Meydrech EF, 1973, A new approach to mean squared error estimation of response surfaces. Biometrika 60(3) :573-579. Laslett G, 1994, Kriging and splines: An empirica l comparison of their predictive performance in some applications. JASA: Applications and Case Studies 89 :391-400. Launder BE, Spalding DB, 1974, The numerical computation of turbulent flows. Computer Methods in Applied Mechanics and Engineering 3(2) :269-289. Leary S, Bhaskar A, Keane A, 2003, Optim al orthogonal array-based Latin hypercubes. Journal of Applied Statistics 30 :585-598. Lemmon EW, McLinden MO, Huber ML, 2002, REFPROP: Reference Fluid Thermodynamic and Transport Properties NIST Standard Database 23, version 7.0. Lepsch RA Jr, Stanley DO, Unal R, 1995, Dual-fuel propulsion in single-stage advanced manned launch system vehicle. Journal of Spacecraft and Rockets 32(3) :417-425. Lertnuwat B, Sugiyama K, Matsumoto Y, 2001, Modeling of thermal behavior inside a bubble. Proceedings of the Fourth International Symposium on Cavitation Pasadena, CA. Levy Y, Fan H-Y, Sherbaum V, 2005, A numerical investigation of mixing processes in a novel combustor application. Journal of Heat Transfer 127(12) :1334-1343. Li W, Padula S, May 2005, Approxi mation methods for conceptual design of complex systems. Approximation Theory XI : Gatlinburg, (eds. Chui C, Neaumtu M, Schumaker L), Nashboro Press, Brentwood, TN, pp 241-278 (also appeared in Proceedings of the 11th International Conference on Approximation Theory ). Lophaven SN, Nielsen HB, Sondergaard J, 2002, DAC E: A Matlab kriging toolbox, version 2.0, Information and Mathematical Modeling Technical University of Denmark.

PAGE 318

318 Mack Y, Goel T, Shyy W, Haftka RT, Queipo NV, January 2005a, Multiple surrogates for shape optimization of bluff bod y-facilitated mixing. Proceedings of the 43rd AIAA Aerospace Sciences Meeting and Exhibit Reno NV, AIAA-2005-333. Mack Y, Goel T, Shyy W, Haftka RT, 2005b, Su rrogate model based optimization framework: A case study in aerospace design. Evolutionary Computation in Dynamic and Uncertain Environments (eds. Yang S, Ong YS, Jin Y), Springe r Kluwer Academic Press (in press). Mack Y, Shyy W, Haftka RT, Griffin LW, Snellg rove L, Huber F, July 2006, Radial turbine preliminary aerodynamic design optimization fo r expander cycle liquid rocket engine. Proceedings of the 42nd AIAA/ASME/SAE/ASEE Joint Propu lsion Conference and Exhibit Sacramento CA, AIAA-2006-5046. Madavan NK, Rai MM, Huber FW, 2001, Redesi gning gas-generator turbines for improved unsteady aerodynamic performance using neural networks. Journal of Propulsion and Power 17(3) :669-677, (also presented at the AIAA/ASME/SAE/ ASEE 35th Joint Propulsion Conference 20-24 June, 1999, Los A ngeles CA, AIAA-99-2522). Madsen JI, Shyy W, Haftka RT, 2000, Res ponse surface techniques for diffuser shape optimization. AIAA Journal 38(9) :1512-1518. Martin JD, 2005, A Methodology for Evaluating Syst em-Level Uncertainty in the Conceptual Design of Complex Multidisciplinary Systems. PhD Thesis The Pennsylvania State University, University Park PA. Martin JD, Simpson TW, 2005, Use of kriging m odels to approximate deterministic computer models. AIAA Journal 43(4) :853-863. Matheron G, 1963, Principles of geostatistics. Economic Geology 58 :1246-1266. Matlab, The language of technical compu ting, Version 6.5 Release 13. 1984-2002, The MathWorks Inc. McDonald D, Grantham W, Tabor W, Murphy M, September 2000, Response surface model development for global/local optimiza tion using radial basis functions. Proceedings of the 8th AIAA/USAF/NASA/ISSMO Symposium on Multid isciplinary Analysis and Optimization Long Beach CA, AIAA-2000-4776. McKay M, Conover W, Beckman R, 1979, A compar ison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21 :239245. Mengistu T, Ghaly W, Mansour T, September 2006, Global and local shape aerodynamic optimization of turbine blades. Proceedings of the 11th Multi-disciplinar y Analysis and Optimization Conference Portsmouth VA, AIAA-2006-6933.

PAGE 319

319 Merkle CL, Feng J, Buelow PEO, April 1998, Computational modeling of dynamics of sheet cavitation. Proceedings of the 3rd International Symposium on Cavitation Grenoble France. Miettinen KM, 1999, Nonlinear Multiobjective Optimization Kluwer:Boston. Mitchell TJ, Morris MD, 1992, Bayesian design a nd analysis of computer experiments: Two examples. Statistica Sinica 2 ( 2 ): 359-379. Montepiedra G, Fedorov VV, 1997, Minimum bias design with constraints. Journal of Statistical Planning and Inference 63(1) :97-111. Morozov VA, 1984, Methods for Solving Incorrectly Posed Problems Springer-Verlag: Berlin. Myers RH, Montgomery DC, 1995, Response Surface Methodology-Process and Product Optimization using Designed Experiments John Wiley & Sons Inc: New York. NASA online facts, 1991, http://www-pao.ksc.nasa.gov/kscpao/nasafact/count2.htm (last accessed March 28, 2007). Neill JW, Johnson DE, 1985, Testing linear regressi on function adequacy without replications. The Annals of Statistics 13(4) :1482-1489. Obayashi S, Sasaki D, Takeguc hi Y, Hirose N, 2000, Multi-obj ective evolutionary computation for supersonic wing-shape optimization. IEEE Transactions on Evolutionary Computation 4(2) :182-187. Oberkampf WL, Trucano TG, Hirsch C, 2004, Verification, validati on, and predictive capability in computational engineering and physics. Applied Mechanics Review 57(5) :345-384. Ohtani K, 2000, Bootstrapping R2 and adjusted R2 in regression analysis. Economic Modelling 17(4) :473-483. Ong YS, Nair PB, Keane AJ, 2003, Evolutionary optimization of computationally expensive problems via surrogate modeling. AIAA Journal 41(4) :687-696. Orr MJL, 1996, Introduction to ra dial basis function networks, Center for Cognitive Science, Edinburg University, EH 9LW, Scotland, UK. url: http://www.anc.ed.ac.uk/~mjo/rbf.html (last accessed March 28, 2007). Orr MJL, 1999a, Recent advances in radial basis function networks, Technical Report Institute for Adaptive and Neural Computa tion Division for Informatics Edinburg University, EH 9LW, Scotland, UK. url: http://www.anc.ed.ac.uk/~mjo/rbf.html (last accessed March 28, 2007).

PAGE 320

320 Orr MJL, 1999b, Matlab functions fo r radial basis function networks, Technical Report, Institute for Adaptive and Neural Computa tion Division for Informatics Edinburg University, EH 9LW, Scotland, UK. url: http://www.anc.ed.ac.uk/~mjo/rbf.html (last accessed March 28, 2007). Owen AB, 1992, Orthogonal arrays for computer e xperiments, integration and visualization. Statistica Sinica 2(2) :439-452. Owen AB, 1994, Controlling correlati ons in Latin hypercube samples. Journal of the Statistical Association 89 :1517-1522. Palmer K, Tsui K. 2001, A minimu m bias Latin hypercube design. Institute of Industrial Engineers Transactions 33(9) :793-808. Papila M, Haftka RT, April 1999, Uncertainty and wing structural we ight approximations. Proceedings of the 40th AIAA/ASME/ASCE/AHS/ASC Struct ures, Structural Dynamics, and Material Conference St. Louis MO, AIAA-99-1312, pp 988-1002. Papila M, Haftka RT, 2000, Response surface appr oximations: Noise, error repair and modeling errors. AIAA Journal 38(12) :2336-2343. Papila M, Haftka RT, April 2001, Uncerta inty and response surface approximations. Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Struct ures, Structural Dynamics, and Material Conference Seattle WA, AIAA-2001-1680. Papila M, 2002, Accuracy of Response Surface Approximations for Weight Equations Based on Structural Optimization, Ph.D. Thesis The University of Florida, Gainesville FL. Papila M, Haftka RT, Watson LT, 2005, Poin twise bias erro r bounds and min-max design for response surface approximations. AIAA Journal 43(8) :1797-1807. Papila N, Shyy W, Griffin LW, Huber F, Tran K, July 2000, Preliminary design optimization for a supersonic turbine fo r rocket propulsion. Proceedings of the 36th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit Huntsville, Alab ama, AIAA-2000-3242. Papila N, 2001, Neural Networks and Pol ynomial Response Surface Approximation Techniques for Supersonic Turbine Design Optimization, Ph.D. Thesis The University of Florida, Gainesville FL. Papila N, Shyy W, Griffin LW, Dorney DJ, January 2001, Shape optimization of supersonic turbines using response surface and neural network methods. Proceedings of the 39th Annual Aerospace Scien ces Meeting and Exhibit Reno NV, AIAA-2001-1065. Papila N, Shyy W, Griffin LW, Dorney DJ, 2002, Shape optimization of supersonic turbines using global approximation methods. Journal of Propulsion and Power 18(3) :509-518. Patankar SV, 1980, Numerical Heat Trans fer and Fluid Flow Hemisphere: Washington, D.C.

PAGE 321

321 Perrone M, Cooper L. 1993, When networks di sagree: Ensemble methods for hybrid neural networks. In: Mammone RJ. editor. Artificial Neural Networks for Speech and Vision London: Chapman & Hall, pp 126-142. Poggio T, Smale S, 2003, The mathema tics of learning: D ealing with data. Notices of the American Mathematical Society 50 :537-544. Preston A, Colonius T, Brennen CE, 2001, Towards efficien t computation of heat and mass transfer effects in the continuum model for bubbly cavitating flows. Proceedings of the 4th International Symposium on Cavitation Pasadena CA. Qu X, Haftka RT, Venkataraman S, Johnson TF 2003, Deterministic and reliability based optimization of composite laminates for cryogenic environments. AIAA Journal 41(10) :2029-2036. Qu X, Venter G, Haftka RT, 2004, New form ulation of minimum-bias central composite experimental design a nd Gauss quadrature. Structural and Multidisciplinary Optimization 28(4) :231-242. Queipo NV, Haftka RT, Shyy W, Goel T, Vaidyanathan R, Tucker PK, 2005, Surrogate-based analysis and optimization. Progress in Aerospace Sciences 41(1) :1-28. Rachid FBF, 2003, A thermodynamically consistent model for cavitating flows of compressible fluids, International Journal of Non-linear Mechanics 38 :1007-1018. Rai MM, Madavan NK, 2000, Aerodynami c design using neural networks. AIAA Journal 38(1):173-182. (also presented at the 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization Conference St. Louis MO, AIAA-19984928). Rai MM, Madavan NK, 2001, Application of arti ficial neural networ ks to the design of turbomachinery airfoils. Journal of Propulsion and Power 17(1) :176-183 (also presented at the 36th Annual Aerospace Sciences Meeting and Exhibit Reno NV, Jan 12-15, 1998, AIAA-98-1003). Rais-Rohani M, Singh MN, September 2002, Efficien t response surface approach for reliability estimation of composite structures. Proceedings of the 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization Conference Atlanta GA, AIAA-2002-5604. Rao CR, 1947, Factorial experi ments derivable from combinatorial arrangement of arrays. Supplement to the Journal of the Royal Statistical Society 9(1) :128-139. Rao CR, 2002, Linear Statistical Infe rence and Its Applications Second Edition, John Wiley and Sons, New York. Rapposelli E, Agostino LD, 2003, A barotropic cavitation model with thermodynamic effects, Proceedings of the Fifth International Symposium on Cavitation Osaka, Japan.

PAGE 322

322 Reboud JL, Sauvage-Boutar E, Desclaux J, 1990, Pa rtial cavitation model for cryogenic fluids. ASME Cavitation and Multiphase Flow Forum Toronto, Canada. Redhe M, Forsberg J, Jansson T, Marklund PO Nilsson L, 2002a, Using the response surface methodology and the D-optimality criterion in crashworthiness related problems: An analysis of the surface approximation error versus the number of function evaluations. Structural and Multidisciplinary Optimization 24(3) :185-194. Redhe M, Eng L, Nilsson L, September 2002b, Using space mapping and surrogate models to optimize vehicle crashworthiness design. Proceedings of the 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization Conference Atlanta GA, AIAA-20025536. Rikards R, Auzins J, September 2002, Response surface method in design optimization of carbon/epoxy stiffener shells. Proceedings of the 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization Conference Atlanta GA, AIAA-2002-5654. Roache PJ, Ghia K, White F, 1986, Editorial po licy statement on the control of numerical accuracy. ASME Journal of Fluids Engineering 108(1) :2. Roache PJ, 1998, Verification and Validation in Com putational Science and Engineering Hermosa Publishers, Al buquerque, New Mexico. Rogallo RS, Moin P, 1984, Numerical simulation of turbulent flows. Annual Review of Fluid Mechanics 16 :99-137. Rowell LF, Braun RD, Olds JR, Unal R, 1999, Mu ltidisciplinary conceptual design optimization of space transport systems. Journal of Aircraft 36(1) :218-226. Ruggeri RS, Moore RD, 1969, Method of prediction of pump cavitation performance for various liquids, liquid temperatures and rotation speeds. NASA Technical Note NASA TN D-5292 Sacks J, Ylvisaker D, 1984, Some model robust designs in regression. The Annals of Statistics 12 :1324-1348. Sacks J, Schiller S, Welch W, 1989, Designs for computer experiments. Technometrics 31 :4147. Sacks J, Welch WJ, Mitchell TJ, Wynn HP, 1993, De sign and analysis of computer experiments. Statistical Science 4 :409-435. Saltelli A, Tarantola S, Chan K, 1999, A qua ntitative model-independent method for global sensitivity analysis of model output. Technometrics 41(1) :39-56. Samad A, Kim K-Y, Goel T, Haftka RT, Shyy W, July 2006, Shape optimization of turbomachinery blade using multiple surrogate models. Proceedings of the Symposium of Advances in Numerical Modeling of Aerodynamics and Hydrodynamics in Turbomachinery Miami FL, FEDSM 2006-98368.

PAGE 323

323 Sanchez E, Pintos S, Queipo NV, July 2006, To ward and optimal ensemble of kernel-based approximations with engi neering applications. Proceedings of the IEEE World Conference on Computational Intelligence Vancouver BC, Canada, Paper ID 1265. Sasaki D, Obayashi S, Sawada K, Himeno R, September 2000, Multi-objective aerodynamic optimization of supersonic wings using Navier-Stokes equations, Proceedings of the European Congress on Computa tional Methods in Applied Sciences and Engineering (ECCOMAS), Barcelona Spain. Sasaki D, Morikawa M, Ob ayashi S, Nakahashi K, March 2001, Aerodynamic shape optimization of supersonic wings by adaptive range multi-objective genetic algorithms, Proceedings of the 1st International Conference on Evolutionary Multi-Criterion Optimization Zurich, pp 639-652. Sasena M, Papalambros P, Goovaerts P, Sept ember 2000, Metamodeling sampling criteria in a global optimization framework. Proceedings of the 8th AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Optimization Conference Long Beach CA, AIAA-20004921. Sen P, Yang J-B, 1998, Multiple Criteria Decision Support in Engineering Design SpringerVerlag: London. Senocak I, Shyy W, 2002, A pressure-based met hod for turbulent cavitating flow computations. Journal of Computational Physics 176 :363-383. Senocak I, Shyy W, 2004a, Interfacial dynamics-b ased modeling of turbulent cavitating flows, Part-1: Model development a nd steady-state computations. International Journal for Numerical Methods in Fluids 44(9) :975-995. Senocak I, Shyy W, 2004b, Interfacial dynamics-b ased modeling of turbulent cavitating flows, Part-2: Time-dependant computations. International Journal fo r Numerical Methods in Fluids 44(9) :997-1016. Shyy W, 1994, Computational Modeling for Flui d Flow and Interfacial Transport Elsevier: Amsterdam, The Netherlands (Revised print 1997). Shyy W, Thakur SS, 1994, A contro lled variation scheme in a seque ntial solver for recirculating flows, Part-I: Theory and Formulation. Numerical Heat Transfer B-Fundamentals 25(3) :245-272. Shyy W, Thakur SS, Ouyang H, Liu J, Blosch E, 1997, Computational Tec hniques for Complex Transport Phenomenon Cambridge University Press, United Kingdom. Shyy W, Tucker PK, Vaidyanathan R, 2001a, Re sponse surface and neural network techniques for rocket engine injector optimization. Journal of Propulsion and Power 17(2) :391-401.

PAGE 324

324 Shyy W, Papila N, Vaidyanathan R, Tuck er PK, 2001b, Global design optimization for aerodynamics and rocket propulsion components. Progress in Aerospace Sciences 37 :59118. Simpson TW, Peplinski JD, Koch PN, Allen JK, 2001a, Meta-models for computer based engineering design: Surv ey and recommendations. Engineering with Computers 17(2) :129-150. Simpson TW, Mauery TM, Korte JJ, Mistree F, 2001b, Kriging models for global approximation in simulation-based multidisciplinary design optimization. AIAA Journal 39(12) :22332241. Singhal AK, Vaidya N, Leonard AD, June 1997, Multi-dimensional simulation of cavitating flows using a PDF model for phase change. ASME Fluids Engineering Divison Summer Meeting Vancouver, Canada FEDSM97-3272. Singhal AK, Athavale MM, Li H, Jiang Y, 2002, Mathematical basis and validation of the full cavitation model. Journal of Fluids Engineering 124(3) :617-624. Snieder R, 1998, The role of nonlinearity in inverse problems. Inverse Problems 14(3) :387404. Sobieszczanski-Sobieski J, Haftka RT, 1997, Multidisciplinary aerospace design optimization: Survey of recent developments. Structural Optimization 14 :1-23. Sobol IM, 1993, Sensitivity analysis for nonlinear mathematical models. Mathematical Modeling and Computational Experiment 1(4) :407-414. Sondak DL, Dorney DJ, July 2003, General equation set solver for compressible and incompressible turbomachinery flows, Proceedings of the 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit Huntsville AL, AIAA-2003-4420. Stahl HA, Stepanoff AJ, 1956, Thermodynamic asp ects of cavitation in centrifugal pumps, Transactions of ASME 78 :1691. Stander N, Roux W, Giger M, Redhe M, Fedorova N, Haarhoff J, September 2004, A comparison of meta-modeling technique s for crashworthiness optimization. Proceedings of the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference Albany NY, AIAA-2004-4489. Steffen CJ Jr., January 2002a, Response surf ace modeling of combined-cycle propulsion components using computational fluid dynamics. Proceedings of the 40th AIAA Aerospace Sciences Meeting and Exhibit Reno NV, AIAA-2002-0542. Steffen CJ Jr., Bond RB, Edwards JR, April 2002b, Three dimensional CFD analysis of the GTX combustor. Proceedings of the 38th Combustion, Airbreathi ng Propulsion, Propulsion Systems Hazards, and Modeling and Simulation Subcommittees Meeting (sponsored by JANNAF), Destin FL.

PAGE 325

325 Stein M, 1987, Large sample properties of simulations using Latin hypercube sampling. Technometrics 29 :143-151. Stepanoff AJ, 1993, Centrifugal and Axial Flow Pumps 2nd Edition, Krieger Publishing Company, Malabar FL. Steuer RE, 1986, Multiple Criteria Optimization: Theo ry, Computation, and Application Wiley: New York. Tang B, 1993, Orthogonal array-based Latin hypercubes. Journal of the American Statistical Association 88 :1392-1397. Tannehill JC, Anderson DA, Pletcher RH, 1997, Computational Fluid Mechanics and Heat Transfer Taylor and Francis. Tenorio L, 2001, Statistical regul arization of inverse problems. SIAM Review 43 :347-366. Thakur SS, Wright JF, Shyy W, 2002, STREAM: A computati onal fluid dynamics and heat transfer Navier-Stokes solver. Streamline Numerics Inc. and Computational ThermoFluids Laboratory, Department of Mechan ical and Aerospace E ngineering Technical Report, Gainesville FL. Tikhonov AN, Arsenin VY, 1977, Solutions to Ill-posed Problems Wiley, New York. Tokumasu T, Kamijo K, Matsumoto Y, 2002, A numerical study of th ermodynamic effects of sheet cavitation. Proceedings of the ASME FEDSM Montreal, Canada. Tokumasu T, Sekino Y, Kamijo K, 2003, A new modeling of sheet cavitation considering the thermodynamic effects. Proceedings of the 5th International Sym posium on Cavitation Osaka, Japan. Ueberhuber CW, 1997, Numerical Computation 2: Methods, Software and Analysis SpringerVerlag: Berlin, pp 71. Umakant J, Sudhakar K, Mujumdar PM, Panneerselvam S, September 2004, Configuration design of air-breathing hypersonic vehicle using surrogate models. Proceedings of 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference Albany NY, AIAA-2004-4543. Utturkar Y, 2005, Computational Modeling of Thermodynamic Effects in Cryogenic Cavitation, Ph.D. Thesis The University of Florida, Gainesville FL. Utturkar Y, Wu J, Wang G, Shyy W, 2005a, R ecent progress in modeling of cryogenic cavitation for liquid rocket propulsion. Progress in Aerospace Sciences 41(7) :558-608. Utturkar Y, Thakur SS, Shyy W, January 2005b, Computational modeling of thermodynamic effects in cryogenic cavitation. Proceedings of 43rd AIAA Aerospace Sciences Meeting and Exhibit Reno NV, AIAA-2005-1286.

PAGE 326

326 Vaidyanathan R, Papila N, S hyy W, Tucker PK, Griffin LW, Ha ftka RT, Fitz-Coy N, September 2000, Neural network and response surface methodology for rocket engine component optimization. Proceedings of 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization Conference Long Beach CA, AIAA-20004880. Vaidyanathan R, Senocak I, Wu J, Shyy W, 200 3, Sensitivity evaluation of a transport-based turbulent cavitation model. Journal of Fluids Engineering 125(3) :447-458. Vaidyanathan R, 2004, Investigation of Na vier-Stokes Code Ve rification and Design Optimization, Ph.D. Thesis The University of Florida, Gainesville FL. Vaidyanathan R, Tucker PK, Papila N, Shyy W, 2004a, CFD based design optimization for a single element rocket injector. Journal of Propulsion and Power 20(4) :705-717 (also presented in 41st Annual Aerospace Sciences Meeting and Exhibit Reno NV, January 2003, AIAA-2003-296). Vaidyanathan R, Goel T, Shyy W, Haftka RT, Queipo NV, Tucker PK, July 2004b, Global sensitivity and trade-off analyses for multi-objective liquid rocket injector design. Proceedings of 40th AIAA/ASME/SAE/ASEE Joint Pr opulsion Conference and Exhibit Ft. Lauderdale, FL, AIAA-2004-4007. Vapnik VN, 1998, Statistical Learning Theory Wiley and Sons, New York. Venkateswaran S, Lindau JW, Kunz RF, Merkle C, 2002, Computation of multiphase mixture flows with compressibility effects. Journal of Computational Physics 180(1) :54-77. Venter G, Haftka RT, April 1997, Minimum-bias based experimental design for constructing response surface in structural optimization. Proceedings of the 38th AIAA/ASME/ASCE/AHS/ASC Structures, Stru ctural Dynamics and Materials Conference Kissimmee FL, Part 2: 1225-1238, AIAA-1997-1053. Ventikos Y, Tzabiras G, 2000, A numerical methods for simulation of steady and unsteady cavitating flows. Computers and Fluids 29(1) :63-88. Versteeg HK, Malalasekera W, 1995, An Introduction to Computa tional Fluid Dynamics: The Finite Volume Method Pearson Education Limited, England. Vittal S, Hajela P, September 2002, Confidence in tervals for reliability estimated using response surface methods. Proceedings of the 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization Conference Atlanta GA, AIAA-2002-5475. Wahba G, 1983, Bayesian confidence interval for the cross-validated smoothing spline. Journal of the Royal Statistical Society B 45 :133-150. Wang TS, Chen YS, July 1990, A united Navier-S tokes flowfield and performance analysis of liquid rocket engines, Proceedings of the 26th AIAA/SAE/ASME/ASEE Joint Propulsion Conference Orlando FL, AIAA 90-2494.

PAGE 327

327 Wang G, Senocak I, Shyy W, Ikohagi T, Cao S, 2001, Dynamics of attach ed turbulent cavitating flows. Progress in Aerospace Sciences 37(6) :551-581. Welch WJ, 1983, A mean squared error crit erion for the design of experiments. Biometrika 70(1) :205-213. Wikipedia, 2007, http://en.wikipedia.org/wiki/Expander_cycle last accessed March 8, 2007. Williams B, Santner T, Notz W, 2000, Sequentia l design of computer experiments to minimize integrated response functions. Statistica Sinica 10 :1133-1152. Williams CKI, Rasmussen CE, 1996, Gaussian processes for regression. Advances in Neural Information Processing Systems 8 Eds. Touretzky DS, Mozer MC, Hasselmo ME, MIT Press, pp 514-520. Wilson B, Capperelleri D, Simpson TW, Frecker M, 2001, Efficient Pareto frontier exploration using surrogate approximations. Optimization and Engineering 2(1) :31-50. Wu J, 2005, Modeling of Turbulent Cavitati on Dynamics in Fluid Machinery Flows, Ph.D. Thesis The University of Florida, Gainesville FL. Wu J, Senocak I, Wang G, Wu Y, Shyy W, 200 3a, Three-dimensional simulation of turbulent cavitating flows in a hollow-jet valve. Computer Modeling in Engineering and Sciences 4(6) :679-690. Wu J, Utturkar Y, Senocak I, Shyy W, Arak ere N, June 2003b, Impact of turbulence and compressibility modeling on three-dime nsional cavitating flow computations. Proceedings of the 33rd AIAA Fluid Dynamics Conference and Exhibit Orlando FL, AIAA-2003-4264. Wu J, Utturkar Y, Shyy W, November 2003c, As sessment of modeling stra tegies for cavitating flow around a hydrofoil. Proceedings of the Fifth International Symposium on Cavitation Osaka Japan, Cav03-OS-1-12. Wu J, Wang G, Shyy W, 2005, Time-dependent turbulent cavita ting flow computations with interfacial transport and filter-based models. International Journal of Numerical Methods in Fluids 49(7) :739-761. Wu YT, Shin Y, Sues R, Cesare M, April 2001, Safety factor based approach for probabilitybased design optimization. Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference and Exhibit Seattle WA, AIAA-2001-1522. Yakowitz S, Szidarovsky F, 1985, A comparison of Kriging with non-parametric regression. Journal of Multivariate Analysis 16 :21-53. Ye K, 1998, Orthogonal column Latin hypercube s and their application in computer experiments. Journal of the American Statistical Association 93 :1430-1439.

PAGE 328

328 Ye K, Li W, Sudjianto A, 2000, Algorithmic c onstruction of optimal symmetric Latin hypercube designs. Journal of Statistical Planning and Inference 90 :145-159. Zerpa L, Queipo NV, Pintos S, Salager J, 2005, An optimization methodology of alkalinesurfactant-polymer flooding processes using field scale numerical simulation and multiple surrogates. Journal of Petroleum Science and Engineering 47 :197-208 (also presented at SPE/DOE 14th Symposium on Improved Oil Recovery 2004, April 17-21, Tulsa OK).

PAGE 329

329 BIOGRAPHICAL SKETCH Tushar Goel was born and brought up in the c ity of Sitapur in India. He completed his bachelors education in mechani cal engineering at the Institute of Engineering and Technology at Lucknow (India) in June 1999. He joined the In dian Institute of Technology at Kanpur for his graduate education. He worked with Prof. Kalyanmoy Deb in the area of evolutionary algorithms and received his master of technology in January 2001. Immediately after completing his masters education, he took an appointment w ith John F Welch Technology Center, research and development division of General Electric, as a m echanical engineer in th e Advanced Mechanical Technologies group. He received hi s doctorate from the University of Florida at Gainesville in May 2007 under the tutelage of Prof. Ra phael Haftka and Prof. Wei Shyy. His research interests are design and optim ization methods, surrogate modeling, and computational fluid dynamics.


Permanent Link: http://ufdc.ufl.edu/UFE0019140/00001

Material Information

Title: Multiple Surrogates and Error Modeling in Optimization of Liquid Rocket Propulsion Components
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0019140:00001

Permanent Link: http://ufdc.ufl.edu/UFE0019140/00001

Material Information

Title: Multiple Surrogates and Error Modeling in Optimization of Liquid Rocket Propulsion Components
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0019140:00001


This item has the following downloads:


Full Text










MULTIPLE SURROGATES AND ERROR MODELING INT OPTIMIZATION OF LIQUID
ROCKET PROPULSION COMPONENTS

















By

TUSHAR GOEL


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2007




































O 2007 Tushar Goel



































To my parents Sushil and Ramesh, sister Manj ari, and brother Arun.









ACKNOWLEDGMENTS

This dissertation could not be completed without enormous help from my teachers, family,

and friends. While I feel that words would never be sufficient to adequately reflect their

contributions, I give a try. I am incredibly grateful to my advisors Prof. Raphael Haftka and Prof.

Wei Shyy for their continuous encouragement, very generous support, patience, and peerless

guidance. Both Prof. Shyy and Prof. Haftka provided me numerous opportunities to develop and

to hone my research and personal skills. I am amazed by their never-ending enthusiasm and

depth of knowledge, and feel extremely fortunate to have been taught by them.

I would like to especially thank my advisory committee members, Prof. Nam-Ho Kim,

Prof. Jacob N. Chung, and Prof. Andre I. Khuri, for their willingness to serve on my committee,

for evaluating my dissertation, and for offering constructive criticism that has helped improved

this work. I particularly thank Dr Kim, for many discussions during our weekly group meetings

and afterwards.

I feel deeply indebted to Prof. Nestor V. Queipo for a very fruitful collaboration. Not only

did he significantly contribute to my work but also he was very supportive and helpful during the

entire course of my graduate studies. I express my sincere gratitude to Dr Layne T Watson and

Dr Daniel J Dorney for their collaboration and help in my research. I thank Dr Siddharth Thakur,

who provided a huge assistance with the STREAM code and suggestions related to numerical

aspects in my research work. I also thank Prof. Peretz P. Friedmann and Prof. Kwang-Yong Kim

for the opportunities to test some of our ideas. I thank my collaborators Dr Raj Vaidyanathan, Ms

Yolanda Mack, Dr Melih Papila, Dr Yogen Utturkar, Dr Jiongyang Wu, Mr Abdus Samad, Mr

Bryan Glaz, and Dr Li Liu. I learnt a lot from you and I feel sincerely indebted for your help,

both personally and academically.









I thank the staff of Mechanical Engineering department, particularly Jan, Pam, and David,

for their help with administrative and technical support. I also am thankful to the staff at

International Center, library, CIRCA, and ETD for their help with this thesis and other

administrative details. I sincerely acknowledge the financial support provided by NASA

Constellation University Institute Program (CUIP).

I duly thank my colleagues in the Structural and Multi-disciplinary Optimization Group

and Computational Thermo-fluids Laboratory for their assistance and many fruitful discussions

about all the worldly issues related to academics and beyond. I am highly obliged to Prof.

Kalyanmoy Deb and Prof. Prashant Kumar at IIT Kanpur, who gave me very sage advice at

different times in my life. They indeed have played a big role in shaping my career.

I also thank my colleagues Emre, Eric, Pat, Nick, and Amor, who made my stay at the

University of Michigan a memorable one. I am grateful to have true friends in Ashish, Jaco,

Erdem, Murali, Siva, Ashwin, Girish, Saurabh, Ved, Priyank, Satish, Tandon, Sudhir, Kale,

Dragos, Christian, Victor, Ben, and Palani for lending me a shoulder, when I had a bad day and

for sharing with me the happy moments. These memories will remain etched for life.

Finally, but not at all the least, I must say that I would never have completed this work, had

it not been the unconditional love, appreciation, and understanding of my family. Despite the fact

that we missed each other very much, they always motivated me to take one more step forward

throughout my life and rej oiced all my achievements. To you, I dedicate this dissertation!











TABLE OF CONTENTS


page

ACKNOWLEDGMENTS .............. ...............4.....


LIST OF TABLES ............ ...... ._ ._ ...............12...

LIST OF FIGURES .............. ...............16....


LIST OF ABBREVIATIONS ........._.._ ..... .___ ...............20....

AB S TRAC T ......_ ................. ............_........2


CHAPTER

1 INTRODUCTION AND SCOPE ................ ...............31................


Space Propulsion Sy stems ................. ............. ...............3.. 1....
Design Requirements of Propulsions Systems ................. ...............32........... ...
System Identification and Optimization: Case Studies .............. .... .......... ..............3
Sensitivity Evaluation and Model Validation for a Cryogenic Cavitation Model ..........34
Shape Optimization of Diffuser Vanes ................. ...............34........... ...
Surrogate M odeling .............. ...............35....
Issues with Surrogate Modeling .............. ...............37....
Sampling Strategies ................. ...............37.................
Type of Surrogate M odel ................... ............. ...............38......
Estimation of Errors in Surrogate Predictions ................. ...............38...............
Scope of Current Research .............. ...............39....

2 ELEMENTS OF SURROGATE MODELING ................. ...............43........... ...


Steps in Surrogate M odeling .............. ...............44....
Design of Experiments ................... .. ......... ...............45......
Numerical Simulations at Selected Locations ................. ...............45........... ...
Construction of Surrogate Model .............. ...............45....
M odel Validation............... ......... ....................4
Mathematical Formulation of Surrogate Modeling Problem............... ...............46
Design of Experiments .............. ...............48....
Factorial Designs ................. ...............49.................
Central Composite Designs ............... ........ ....... .... ........5
Variance Optimal DOEs for Polynomial Response Surface Approximations ................51
Latin Hypercube Sampling .............. ...............51....
Orthogonal Arrays .................. ............. .....................5
Optimal LHS, OA-based LHS, Optimal OA-based LHS ................. .......................53
Construction of Surrogate Model .............. .... ...............54.
Polynomial Response Surface Approximation .............. ...............54....
Kriging M odeling ............. ...............56.....












Radial Basis Functions ............. ...............58.....
Kernel-based Regression ................. ...............61.__._......
Model Selection and Validation .............. ...............62....

Split Sam ple .............. ...............62....
Cross Validation ............. ...............63.....
Bootstrapping .............. ...............64....


3 PITFALLS OF USING A SINGLE CRITERION FOR SELECTING
EXPERIMENTAL DE SIGNS ........._._ .......__. ...............72...


Introducti on ........._..... ......_.. .. .... ...............72
Error Measures for Experimental Designs .............. ...............76....
Test Problems and Results ............ ... .... ..._.. ...............80...

Comparison of Different Experimental Designs ......._. .......... .. ........._._.....81
Space filling characteristics of D-optimal and LHS designs................. ...............8
Tradeoffs among various experimental designs ........._...... ..... ...._._..................82
Extreme example of risks in single criterion based design: Min-max RMS bias
C C D .............. .............. ....... ... ..... .......8
Strategies to Address Multiple Criteria for Experimental Designs .............. .... ........._...89
Combination of model-based D-optimality criterion with geometry based LHS
criterion ................... ............ ...... .. ............. ...............9
Multiple experimental designs combined with pointwise error-based fi1tering.......92
Concluding Remarks .............. ...............94....

4 ENSEMBLE OF SURROGATES ................. ...............105...............


Conceptual Framework................ .. ... ...............10
Identification of Region of Large Uncertainty ................. ........._._ ...._._ .....10
Weighted Average Surrogate Model Concept............... ...............107
Non-parametric surrogate fi1ter .............. ...............109....
Best PRESS for exclusive assignments ........___..........___ ............... 109 ....
Parametric surrogate fi1ter .............. ..... .. ......... ..........10
Test Problems, Numerical Procedure, and Prediction Metrics ......_._ ........... ........ .......112
Test Problems ................ ...............112................
Branin-Hoo function ................. ...............112......... ......
Camelb ack functi on ........._.._ ........... ...............112...
Goldstein-Price function ................. ...............112......... ......
Hartman functions ................. ......... ...............112......
Radial turbine design for space launch ................. ...............113..............
Numerical Procedure ................. ...............114................
Prediction M etrics ................. ...............115......... ......
Correlation coefficient ................. ...............115................
RMS error............... ...............116.
M aximum error ................. ...............116......... ......
Results and Discussion ................... ... .. ...... ...............117.....
Identification of Zones of High Uncertainty ................. ...............117..............
Robust Approximation via Ensemble of Surrogates .......... ................ ...............119












Correlations ............ ............ ...............120......
RM S errors ................. ................. 12......... 1....
Maximum absolute errors............... ..... ...... .........2

Studying the role of generalized cross-validation errors ................. ................ ..123
Effect of sampling density................ ...............12
Sensitivity analysis of PWS parameters ................. ...............126........... ...
Conclusions............... ..............12


5 ACCURACY OF ERROR ESTIMATES FOR SURROGATE APPROXIMATION OF
NOISE-FREE FUNCTIONS ................. ...............144......... ......


Introducti on ................. ...............144................
Error Estimation Measures ................. ............. ... ..........4
Error Measures for Polynomial Response Surface Approximation .............. ................146
Estimated standard error ................. ...............148................
Root mean square bias error............... ...............148.
Error Measures for Kriging .............. .... ............. ...............149.....
Model Independent Error Estimation Models ................ ...............................15
Generalized cross-validation error ............. ...............151....
Standard deviation of responses ................ ................. .................. ..152
Ensemble of Error Estimation Measures ................. ...............154........... ...
Averaging of multiple error measures ................. ...............154........... ...
Identification of best error measure .............. ...............154....
Simultaneous application of multiple error measures ................ .....................155
Global Prediction Metrics............... ...............155
Root mean square error .............. ... ............. ...............155.....
Correlation between predicted and actual errors ................. .........................156
Maximum absolute error ................. ...............157...............
Test Problems and Testing Procedure .............. ...............157....
Test Problem s ................. ...............157......... ......
Branin-Hoo function .............. ...............158....
Camelb ack functi on ................. ...............158...............
Goldstein-Price function .............. ...............158....
Hartman functions ................ ...............158................
Radial turbine design problem .............. ...............159....
Cantilever beam design problem .............. ...............159....
Testing Procedure ................. ...............160................
Design of experiments .............. .....................160
Test points .............. ...............161....
Surrogate construction................ ............16
Error estim ation................ ..............16
Results: Accuracy of Error Estimates ................. ...............162........... ...
Global Error Measures .............. ...............163....
Pointwise Error Measures ................. ...............165...............
Root mean square errors............... ........... ..........16
Correlations between actual and predicted errors .............. ....................16
Maximum absolute errors............... ...............169












Ensemble of Multiple Error Estimators ............ .....___ ...............170.
Averaging of Errors ............... ...... ..._ .... ... ..............17
Identification of Suitable Error Estimator for Kriging ............_ ..... .. ..............171
Detection of High Error Regions using Multiple Errors Estimators ................... ..........173
Conclusions............... .... ...........17
Global Error Estimators ............_ ......___ ...............175...
Pointwise Error Estimation Models ............__......_ __ ...............176.
Simultaneous Application of Multiple Error Measures. ....._____ ...... ...___...........177

6 CRYOGENIC CAVITATION MODEL VALIDATION AND SENSITIVITY
EVALUATION ................. ...............199......... ......


Introducti on ................. ... ......... .. .. ....... .. ........ .............9
Cavitating Flows: Significance and Previous Computational Efforts ................... ........199
Influence of Thermal Environment on Cavitation Modeling ................. ................ ..201
Experimental and Numerical Modeling of Cryogenic Cavitation ............... ............._...203
Surrogate Modeling Framework ................. ...............204................
Scope and Organization................. ...... ........0
Governing Equations and Numerical Approach ................. ...............206........... ...
Transport-based Cavitation Model .............. ...............207....
Thermodynamic Effects .............. ...............208....
Speed of Sound Model .............. ...............210....
Turbulence M odel ................. ...............211..............
Numerical Approach .............. ...............212....
Results and Discussion ............... .......... .. ........ .............1
Test Geometry, Boundary Conditions, and Performance Indicators.............................21
Surrogates-based Global Sensitivity Assessment and Calibration ............... .... ........._..214
Global Sensitivity Assessment ................. ...............215.......... ....
Surrogate construction..................... .. ... ........1
Main and interaction effects of different variables .............. ....................21
Validation of global sensitivity analysis ................ ............ ........_.........218
Calibration of Cryogenic Cavitation Model .............. ...............219....
Surrogate modeling of responses ................. ....._.. ...._.. ......_.._......220
Multi-objective optimization............... .............22
Optimization outcome for hydrogen .............. ...............222....
Validation of the calibrated cavitation model ........._.._.. ...._... ........_.......222
Investigation of Thermal Effects and Boundary Conditions .............. ....................23
Influence of Thermo-sensitive Material Properties ........._.._.. ....._.._ ........._......223
Impact of Boundary Conditions .............. ...............225....
Conclusions............... .. .........................2 6
Influence of Turbulence Modeling on Predictions ....._._._ ......._.. .. ...._._.........243

7 IMPROVING HYDRODYNAMIC PERFORMANCE OF DIFFUSER VIA SHAPE
OPTIMIZ ATION ................. ...............245................

Introducti on ................. ...............245................
Problem Description ................ ...............247................












Vane Shape Definition .............. ....... .... .... ..................24
Mesh Generation, Boundary Conditions, and Numerical Simulation .........................250
Surrogate-Based Design and Optimization ............ ......___....._ ...........25
Surrogate M odeling ............ ..... ._ ...............251...
Global Sensitivity Assessment .............. ...............255....
Optimization of Diffuser Vane Performance ............_...... .__ ........._........257
Design Space Refinement Dimensionality Reduction .............. .....................5
Final Optimization............... ..... ..........26
Analysis of Optimal Diffuser Vane Shape .............. ...............260....
Flow Structure .............. ...............260....
Vane Loadings ................. ...............261................
Empirical Considerations .............. ...............261....
Summary and Conclusions .............. ...............262....

8 SUMMARY AND FUTURE WORK .............. ...............277....


Pitfalls of Using a Single Criterion for Experimental Designs ................. .....................277
Summary and Learnings ................. ...............277................
Future W ork ................. ...............278................
Ensemble of Surrogates .............. ...............278....
Summary and Learnings ................. ...............278................
Future W ork. .............. .... .... ........ ... ......... ............27
Accuracy of Error Estimates for Noise-free Functions ................. ................ ........ .279
Summary and Learnings ................. ...............279................
Future W ork ................. .... ............ .. ........... ..... ..........28
System Identification of Cryogenic Cavitation Model ................ ................. ..........280
Summary and Learnings ....__. ................. ........_.._.........28
Future W ork. ...._.._.................. ... ......_.._..........2 1
Shape Optimization of Diffuser Vanes ....__. ................. ........._.._ ....... 28
Summary and Learnings ....__. ................. ........_.._.........28
Future W ork. ...._.._................. ......._ _. ..........28

APPENDIX


A THEORETICAL MODELS FOR ESTIMATING POINTWISE BIAS ERRORS............. .283


Data-Independent Error Measure s ....__. ................. ........__. ........ 28
Data-Independent Bias Error Bounds ....__. ................. ............... 286 ....
Data-Independent RMS Bias Error .............. ...............286....
Data-Dependent Error Measures .............. ...............287....
Bias Error Bound Formulation .............. ...............288....
Root Mean Square Bias Error Formulation .................. ...............289..............
Determining the Distribution of Coefficient Vector (3 .............. .....................8

Analytical Expression for Pointwise Bias Error Bound ...._ .............. ..... ..........291
Analytical Estimate of Root Mean Square Bias Error. ......____ ..... .. .............292

B APPLICATIONS OF DATA-INDEPENDENT RMS BIAS ERROR MEASURES ..........294











Construction of Experimental Designs ............_..._ .. ........_._ ... .. ........ .............9
Why Min-max RMS Bias Designs Place Points near Center for Four-dimensional
Space? ............ ........ ... ............29
Verification of Experimental Designs ................. ......... ...............296 ....
Comparison of Experimental Designs ................. .......... ...............297 ....
RMS Bias Error Estimates for Trigonometric Example............... ...............298

C GLOBAL SENSITIVITY ANALYSIS ................. ...............304...............

D LACK-OF-FIT TEST WITH NON-REPLICATE DATA FOR POLYNOMIAL
RESPONSE SURFACE APPROXIMATION .............. ...............307....

LI ST OF REFERENCE S ................. ...............3.. 10......... ...

BIOGRAPHICAL SKETCH .............. ...............329....










LIST OF TABLES


Table page

2-1. Summary of main characteristics of different DOEs. ................ ............. ........ .......70

2-2. Examples of kernel functions and related estimation schemes. ................ ......................70

2-3. Summary of main characteristics of different surrogate models............._ ........._._......71

3-1. D-optimal design (25 points, 4-dimensional space) obtained using JMP. ..........................102

3-2. LHS designs (25 points, 4-dimensional space) obtained using MATLAB. .......................102

3-3. Comparison of RMS bias CCD, FCCD, D-optimal, and LHS designs for 4-dimensional
space (all designs have 25 points). ............. ...............102....

3-4. Prediction performance of different 25-point experimental designs in approximation of
example functions F;, Fz, and F3 in four-dimensional spaces. ............. ....................10

3-5. Min-max RMS bias central composite designs for 2-5 dimensional spaces and
corresponding design metrics. ............ ...............103.....

3-6. Mean and coefficient of variation (based on 100 instances) of different error metrics for
various experimental designs in four-dimensional space (30 points). ................... ..........104

3-7. Reduction in errors by considering multiple experimental designs and picking one
experimental design using appropriate criterion (filtering). ............. ......................0

4-1. Parameters used in Hartman function with three variables. ............. ....................13

4-2. Parameters used in Hartman function with six variables. ............. ......................3

4-3. Mean, coefficient of variation (COV), and median of different analytical functions. .........137

4-4. Range of variables for radial turbine design problem. ...........__.....___ ...............137

4-5. Numerical setup for the test problems. ........._.._... ...............138._...._...

4-6. Median, 1st, and 3rd quartile of the maximum standard deviation and actual errors in
predictions of different surrogates at the location corresponding to maximum
standard deviation over 1000 DOEs for different test problems. ........._.... ........._.....13 8

4-7. Median, 1st, and 3rd quartile of the minimum standard deviation and actual errors in
predictions of different surrogates at the location corresponding to minimum
standard deviation over 1000 DOEs for different test problems. ........._... ................1 39











4-8. Median, 1st, and 3rd quartile of the maximum standard deviation and maximum actual
errors in predictions of different surrogates over 1000 DOEs for different test
problems............... ...............13

4-9. Effect of design of experiment: Number of cases when an individual surrogate model
yielded the least PRESS error. ............. ...............140....

4-10. Opportunities of improvement via PWS: Number of points when individual surrogates
yield errors of opposite signs. ................ ......... ........ ......... ...............140

4-11i. Mean and coefficient of variation of correlation coefficient between actual and
predicted response for different surrogate models ................. .....__. ........._.._.. .140

4-12. Mean and coefficient of variation of RMS errors in design space for different surrogate
m odels. .............. ...............141....

4-13. Mean and coefficient of variation of maximum absolute error in design space. ................141

4-14. Mean and coefficient of variation of the ratio of RMS error and PRESS over 1000
DOEs ................. ...............142................

4-15. The impact of sampling density in approximation of Branin-Hoo function. ...................142

4-16. The impact of sampling density in approximation of Camelback function. ...................... 143

4-17. Effect of parameters in parametric surrogate filter used for PWS. ............ ...................143

5-1. Summary of different error measures used in this study. ............ ...... ............... 187

5-2. Parameters used in Hartman function with three variables. ............ ...... ...............18

5-3. Parameters used in Hartman function with six variables. ............ ......................8

5-4. Range of variables for radial turbine design problem. ......................__ ...............188

5-5. Ranges of variables for cantilever beam design problem. ............. ......................8

5-6. Numerical setup for different test problems. ............. ...............188....

5-7. Mean and coefficient of variation of normalized actual RMS error in the entire design
space. ............. ...............189....

5-8. Mean and coefficient of variation of ratio of global error measures and corresponding
actual RMS error in design space. ............ ...............190.....

5-9. Mean and COV of ratio of root mean squared predicted and actual errors for different
test problem s. ............ ...............191.....











5-10. Mean and COV of correlations between actual and predicted errors for different test
problem s. ............. ...............192....

5-11i. Mean and COV of ratio of maximum predicted and actual errors for different test
problems............... ...............19

5-12. Mean and COV of ratio of root mean square average error and actual RMS errors for
different test problems. ............ ...............194.....

5-13. Comparison of performance of individual error measures and GCV chosen error
measure for kriging. ............. ...............194....

5-14. Number of cases out of 1000 for which error estimators failed to detect high error
regions. ............. ...............195....

5-15. Number of cases out of 1000 for which error estimators failed to detect maximum
error regions. ............. ...............196....

5-16. Number of cases out of 1000 for which different error estimators wrongly marked low
error regions as high error. ............. ...............197....

5-17. Number of cases out of 1000 for which different error estimators wrongly marked low
error regions as the maximum error region. ............. ...............198....

5-18. High level summary of performance of different pointwise error estimators. ................... 198

6-1. Summary of a few relevant numerical studies on cryogenic cavitation ............... ...............239

6-2. Ranges of variables for global sensitivity analyses. ............ ...............240.....

6-3. Performance indicators and corresponding weights in surrogate approximations of
prediction metrics P,f and T,f ............ ...............240

6-4. Performance indicators and corresponding weights in surrogate approximations of
prediction metrics P,f and T,f in model-parameter space. ............. .....................24


6-5. Predicted and actual P,f and T,f at best-compromise model parameter for liquid N2
(Case 290C). ............ ...............241.....

6-6. Description of flow cases chosen for the validation of the calibrated cryogenic
cavitation model ................. ...............242....._._. .....

6-7. Model parameters in Launder-Spalding and non-equilibrium k -E turbulence models.....243

7-1. Design variables and corresponding ranges. ............ ...............273.....

7-2. Summary of pressure ratio on data points and performance metrics for different
surrogate models fitted to Set A. ............. ...............273....










7-3. Range of data, quality indicators for different surrogate models, and weights associated
with the components of PWS for Set B and Set C data, respectively. .................. ...........274

7-4. Optimal design variables and pressure ratio obtained using different surrogates
constructed using Set C data. ............. ...............274....

7-5. Comparison of actual and predicted pressure ratio of optimal designs obtained from
multiple surrogate models (Set C). ............ ...............275.....

7-6. Modified ranges of design variables and fixed parameters in refined design space. ...........275

7-7. Range of data, summary of performance indicators, and weighted associated with
different surrogate models in the refined design space. ............. ......................7

7-8. Design variables and pressure ratio at the optimal designs predicted by different
surrogates. ............ ...............276.....

7-9. Actual and empirical ratios of gaps between adj acent diffuser vanes. ..........._..._ ..............276

B-1. Design variables and maximum RMS bias errors for min-max RMS bias central
composite designs in N,=2-5 dimensional spaces. ............. ...............303....

B-2. Comparison of different experimental designs for two dimensions. ............ ..................303

B-3. Comparison of actual and predicted RMS bias errors for min-max RMS bias central
composite experimental designs in four-dimensional space. ............. .....................30










LIST OF FIGURES


Figure page

1-1. Schematic of liquid fuel rocket propulsion system. ............. ...............41.....

1-2. Classifieation of propulsion systems according to power cycles. ................ ................ ..42

2-1. Key stages of the surrogate-based modeling approach. ............. ...............66.....

2-2. Anatomy of surrogate modeling: model estimation + model appraisal. ............. .................66

2-3. A surrogate modeling scheme provides the expected value of the prediction and the
uncertainty associated with that prediction ................. ...............67........... ...

2-4. Alternative loss functions for the construction of surrogate models. ............. ...................67

2-5. A two-level full factorial design of experiment for three variables. ............ ....................68

2-6. A central composite design for three-dimensional design space. ............. .....................6

2-7. A representative Latin hypercube sampling design with Ns = 6, N~ = 2 for uniformly
distributed variables in the unit square. ............. ...............69.....

2-8. LHS designs with significant differences in terms of uniformity. ............ ....................69

3-1. Boxplots of radius of the largest unoccupied sphere inside the design space [-1, 1]N". ........97

3-2. Illustration of the largest spherical empty space inside the 3D design space [-1, 1]3 (20
points). ............ ...............97.....

3-3. Tradeoffs between different error metrics. ............ ...............98.....

3-4. Comparison of 100 D-optimal, LHS, and combination (D-optimality + LHS)
experimental designs in four-dimensional space (30 points) using different metrics. .....99

3-5. Simultaneous use of multiple experimental designs concept, where one out of three
experimental designs is selected using appropriate criterion (filtering). ........................100

4-1. Boxplots of weights for 1000 DOE instances (Camelback function). ............ .................128

4-2. Contour plots of two variable test functions. ............. ...............129....

4-3. Boxplots of function values of different analytical functions. ................ .....................130

4-4. Contour plots of errors and standard deviation of predictions considering PRS, KRG,
and RBNN surrogate models for Branin-Hoo function. ............ .....................13










4-5. Standard deviation of responses and actual errors in prediction of different surrogates at
corresponding locations (b oxplots of 1000 DOEs using B ranin-Hoo functi on)..............13 1

4-6. Correlations between actual and predicted response for different test problems. ...............132

4-7. Normal distribution approximation of the sample mean correlation coefficient data
obtained using 1000 bootstrap samples (kriging, Branin-Hoo function). .....................133

4-8. RMS errors in design space for different surrogate models. ............ .....................13

4-9. Maximum absolute error in design space for different surrogate models. ..........................135

4-10. Boxplots of ratio of RMS error and PRESS over 1000 DOEs for different problems. .....136

5-1. Contour plots of two variable analytical functions. ............. ...............178....

5-2. Cantilever beam subj ected to horizontal and vertical random loads. .........._... ........._.....178

5-3. Ratio of global error measures and relevant actual RMS error. ............. .....................18

5-4. Ratio of root mean square values of pointwise predicted and actual errors for different
problems, as denoted by predicted error measure. ............. ..... ............... 18

5-5. Correlation between actual and predicted error measures for different problems. .............184

5-6. Ratio of maximum predicted and actual absolute errors in design space for different
problem s. ............. ...............186....

6-1. Variation of physical properties for liquid nitrogen and liquid hydrogen with
tem perature. ............ ...............228.....

6-2. Experimental setup and computational geometries. ............ ...............229.....

6-3. Sensitivity indices of main effects using multiple surrogates of prediction metric
(liquid N2, Case 290C). ............ ...............230.....

6-4. Influence of different variables on performance metrics quantified using sensitivity
indices of main and total effects (liquid N2, Case 290C). ................ ......................23 1

6-5. Validation of global sensitivity analysis results for main effects of different variables
(liquid N2, Case 290C). ............ ...............232.....

6-6. Surface pressure and temperature predictions using the model parameters for liquid N2
that minimized P,,, and Td,, respectively (Case 290C). ............. ....................23


6-7. Location of points ( C,, ) and corresponding responses used for calibration of the
cryogenic cavitation model (liquid N2, Case 290C). ............. ..... ............... 23











6-8. Pareto optimal front and corresponding optimal points for liquid N2 (Case 290C) using
different surrogates. ............ ...............233.....

6-9. Surface pressure and temperature predictions on benchmark test cases using the model
parameters corresponding to original and best-compromise values for different
fluids. ........... ...............234.....

6-10. Surface pressure and temperature predictions using the original parameters and best-
compromise parameters for a variety of geometries and operating conditions. .............236

6-11. Surface pressure and temperature profie on 2D hydrofoil for Case 290C where the
cavitation is controlled by, (1) temperature-dependent vapor pressure, and (2) zero
latent heat, and hence isothermal flow field. ............ ...............237.....

6-12. Impact of different boundary conditions on surface pressure and temperature profie on
2D hydrofoil (Case 290C, liquid N2) and predictions on first computational point
next to boundary. ............ ...............238.....

6-13. Influence of turbulence modeling on surface pressure and temperature predictions in
cryogenic cavitating conditions. ............ ...............244.....

7-1. A representative expander cycle used in the upper stage engine. ............ .....................26

7-2. Schematic of a pump. ............. ...............264....

7-3. Meanline pump flow path. ................ ............ ........ ......... ........ .........264

7-4. Baseline diffuser vane shape and time-averaged flow. ............ ...............265.....

7-5. Definition of the geometry of the diffuser vane. ............ ...............265.....

7-6. Parametric Bezier curve............... ...............265.

7-7. A combination of H- and O-grids to analyze diffuser vane. ............ .....................26

7-8. Surrogate based design and optimization procedure. ............ ...............266.....

7-9. Surrogate modeling............... ...............26

7-10. Sensitivity indices of main effect using various surrogate models. ............ ..................267

7-11. Sensitivity indices of main and total effects of different variables using PWS. ...............268

7-12. Actual partial variance of different design variables. ............. ...............268....

7-13. Baseline and optimal diffuser vane shape obtained using different surrogate models. .....269

7-14. Comparison of instantaneous and time-averaged flow Hields of intermediate optimal
(PRS) and baseline designs. ............. ...............270....










7-15. Instantaneous and time-averaged pressure for the final optimal diffuser vane shape. ......271

7-16. Pressure loadings on different vanes. ............ ...............271.....

7-17. Gaps between adjacent vanes. ............ ...............272.....

B-1. Two-dimensional illustration of central composite experimental design constructed
using two parameters al and at. ............. ...............301....

B-2. Contours of scaled predicted RMS bias error and actual RMS error when assumed true
model to compute bias error was quintic while the true model was trigonometric. .......301

Figure B-3. Contours of scaled predicted RMS bias error and actual RMS error when
different distributions of p(2) were specified. ............ ...............302.....










LIST OF ABBREVIATIONS

Alias matrix

Constants in Hartman functions


Estimated coefficients associated with ith basis function

Vector of estimated coefficients of basis functions

Covariance matrix in kriging

Bounds on coefficient vectors

Cavitation model parameters

Constants in Hartman functions


Specific heat at constant pressure

Turbulence model parameters


D-efficiency


Expected value of random variable x

Average of surrogate models


Error associated with i~th SUTTOgate model


Approximation error at design point x

Bias error at design point x


Bias error bound at design point x


Data-independent bias error bound at design point x


Root mean square bias error at design point x


A

a
b

b






c` C
Cdest, >prod




CP





Der

E(x)


Eavg

E,


e(x)


eb (x)

ebeb X)


e (x)


e "(x)










ees(x)




f(x)




h

h(x)

h

I

K

k

L




viz vi

N
DOE

Ne

Nlhs

NRBF


N,

Ns

N


Ntest


Standard error at design point x

Function of design variables and decomposed functions


Vector of basis functions in polynomial response surface model

Vapor mass fraction

Sensible enthalpy

Radial basis function

Vector of radial basis functions

Identity matrix

Thermal conductivity

Turbulent kinetic energy

Loss function, Latent heat

Moment matrix

Cavitation source terms

Number of design of experiments

Number of eigenvectors

Number of Latin hypercube samples

Number of radial basis functions

Number of symbols for orthogonal arrays

Number of sampled data points

Number of surrogate models

Number of test points










N,,



Np

Np


P





p .


p


R

R


ad

r



r(x)





sresp

T

T ~/


t


u,v,w,ui,u ,uk,U


Number of variables


Number of basis functions in approximation model


Number of basis functions missing from the approximation model


Lz norm of the difference between predicted and benchmark
experimental pressure data

Turbulence production term

Location of mid-point on the lower side of diffuser vane

Pressure

Constants in Hartman functions


Correlation matrix

Adjusted coefficient of multiple determination

Strength of orthogonal array

Radius of largest unoccupied sphere

Vector of correlation between prediction and data points (kriging)

Sensitivity indices, main, interaction, and total effects


Standard deviation of responses


Temperature

Lz norm of the difference between predicted and benchmark
experimental temperature data

Time, magnitude of tangents of Bezier curves

Velocity components









V Null eigenvector

V, V, V; Variance and its components (partial variances), volume


w; Weight associated with ith SUTTOgate model

X Gramian design matrix

x Vector of design variables

xi, x ,xk Space variables (coordinates)

y Vector of responses

y, y(x) Function or response value at a design point x

9, f(x) Predicted response at a design point x

Z(x) Systematic departure term in kriging

Z Subset of design variables for global sensitivity analysis

a Volume fraction, Thermal diffusivity, Parameter in weighted
average model

a ,a2 Parameters used to define vertex and axial point locations in a
central composite design

P Vector of coefficients of true basis functions

P* Estimated coefficient vector in kriging

/7 Coefficient associated with a basis function in polynomial response
surface approximation, coefficient of thermal expansion,
parameter in weighted average model

7 Constant used to estimate root mean square bias error

3 Kronecker-delta


E, E(x) Turbulent dissipation, error in surrogate model

r(x) True function or response









B Probability density function, variable defining diffuser vane shape

0 Vector of parameters in the Gaussian correlation function

/Z Regularization parameter

pu Mean of responses at sampled points, dynamic viscosity

Weights used in numerical integration

p Density

2 Variance of noise, estimated process variance in kriging

~a Adjusted root mean square error

Degree of correlation, flow variable

1 Vector of ones

Sub scripts

c Cavity, candidate true polynomial

dopt D-optimal design

i, j, kIndices

krg Kriging

I Liquid phase

lhs Latin hypercube sampling design

m Mixture of liquid and vapor

max Maximum of the quantity

min Minimum of the quantity

prs Polynomial response surface

pws PRESS-based weighted average surrogate










rbnn Radial basis neural network

RM~S Root mean square value

t Turbulent

wta Weighted average surrogate

E Solution with least deviation from the data

v Vapor phase

oo Reference conditions

Superscripts/Overhead Symbols

I Data independent error measure

R Reynolds stress

(i) ith design point

(-i) All points except ith design point

(1) Lower bound

T Transpose

(u) Upper bound

(1) Terms in the response surface model

(2) Terms missing from the response surface model

^ Predicted value

Average of responses in surrogate models


Non-dimensional Numbers

CFL

Pr

Re


Courant, Freidricks and Levy number

Prandtl number

Reynolds number









Cavitation number


Acronyms

ADS

AIAA

ASME

ASP

CCD

CFD

COV

CV

DOE

ED

EGO

ESE

FCCD

GCV

GMSE

GSA

KRG

IGV

LHS

LH2

LOX


All possible data sets

American Institute of Aeronautics and Astronautics

American Society of Mechanical Engineers

Alkaline- surfactant-p olymer

Central composite design

Computational fluid dynamics

Coefficient of variation

Cross validation

Design of experiment

Experimental design

Efficient global optimization

Estimated standard error

Face-centered central composite design

Generalized cross validation error

Generalized mean squared error

Global sensitivity analysis

Kriging

Inlet guide vane

Latin hypercube sampling

Liquid Hydrogen

Liquid Oxygen









MSE Mean squared error

NASA National Aeronautics and Space Administration

N-S Navier-Stokes

NIST National Institute of Standards and Technology

NPSF Non-parametric surrogate filter

OA Orthogonal array

POF Pareto optimal front

PRESS Predicted residual sum of squares

PRS Polynomial response surface

PSF Parametric surrogate filter

PWS PRESS-based weighted average surrogate

RBF Radial basis function

RBNN Radial basis neural network

RMS Root mean square

RMSBE Root mean square bias error

RMSE Root mean squared error

RP1 Refined petroleum

SoS Speed of sound

SQP Sequential quadratic programming

SS Split sample

SVR Support vector regression

Operators

E(a) Expected value of the quantity a









max(a, b) Maximum of a and b

min(a, b) Minimum of a and b

r(a, b) Correlation between vectors a and b

V(a) Variance of the quantity a

cr(a) Standard deviation of the quantity a

(a~iavg Space-averaged value of the quantity a

(a>ma Maximum of the quantity a

|a| Absolute value of the quantity a

||a|| (L2) Norm of the vector a









Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

MULTIPLE SURROGATES AND ERROR MODELING IN OPTIMIZATION OF LIQUID
ROCKET PROPULSION COMPONENTS

By

Tushar Goel

May 2007

Chair: Raphael T. Haftka
Cochair: Wei Shyy
Major: Mechanical Engineering

Design of space propulsion components is extremely complex, expensive, and involves

harsh environments. Coupling of computational fluid dynamics (CFD) and surrogate modeling to

optimize performance of space propulsion components is becoming popular due to reduction in

computational expense. However, there are uncertainties in predictions using this approach, like

empiricism in computational models and surrogate model errors. We develop methods to

estimate and to reduce such uncertainties.

We demonstrate the need to obtain experimental designs using multiple criteria by

showing that using a single-criterion may lead to high errors. We propose using an ensemble of

surrogates to reduce uncertainties in selecting the best surrogate and sampling strategy. We also

develop an averaging technique for multiple surrogates that protects against poor surrogates and

performed at par with best surrogate for many problems.

We assess the accuracy of different error estimation models, including an error estimation

model based on multiple surrogates, used to quantify prediction errors. While no single error

model performs well for all problems, we show possible advantage of combining multiple error

models.










We apply these techniques to two problems relevant to space propulsion systems. First, we

employ surrogate-based strategy to understand the role of empirical model parameters and

uncertainties in material properties in a cryogenic cavitation model, and to calibrate the model.

We also study the influence of thermal effects on predictions in cryogenic environment in detail.

Second, we use surrogate models to improve the hydrodynamic performance of a diffuser by

optimizing the shape of diffuser vanes. For both problems, we observed improvements using

multiple surrogate models.

While we have demonstrated the approach using space propulsion components, the

proposed techniques can be applied to any large-scale problem.









CHAPTER 1
INTRODUCTION AND SCOPE

Liquid rocket propulsion systems are the most popular form of space propulsion systems

for high thrust and specific impulse applications as required for space applications (Humble et

al., 1995, Chapter 5). Unlike propulsion systems used in aircraft, space propulsion systems carry

both fuel and oxidizer with the vehicle. This poses additional requirements on the selection of

suitable propellants and design of propulsion systems.

Apart from high energy density, the choice of propellants is also affected by the ease of

storage and handling, mass or volume of propellant, and nature of products of combustion.

Typical propellants used for liquid space propulsion are refined petroleum (RPl) with liquid

oxygen (LOX), hypergolic propellants (mono-methyl hydrazine with nitrogen tetroxide) and

cryogens (liquid hydrogen LH2 and LOX). Cryogens (LH2 and LOX) are most popular due to

higher power/gallon ratio and specific thrust, lower weight of LH2, and cleaner combustion

products (water). Despite difficulties in storage (due to the tendency of cryogens to return to

gaseous state unless super-cooled, the boiling point of LH2 and LOX at standard conditions is -

423o F and -298o F, respectively) and safety considerations, the rewards of using cryogens as

space propellants are significant (NASA online facts, 1991).

Space Propulsion Systems

A conceptual schematic of a typical bi-propellant space propulsion system is shown in

Figure 1-1. There are five major components of the propulsion system: fuel and oxidizer storage

tanks, fuel and oxidizer pumps, gas turbine, combustion chamber, and nozzle. Based on the type

of power cycle, different propulsion systems are classified as follows:

*Gas-Generator Cycle (Figure 1-2(A))

A small amount of fuel and oxidizer is fed to gas generator where the fuel is burnt at less
than optimal ratio to keep the temperature in turbine low. The hot gases produced in gas










generator drive the turbine to produce power required to run the fuel and oxidizer pumps.
Thrust of the engine is regulated by controlling the amount of propellants through gas
generator. This is an open cycle since the hot gas from turbines is either dumped overboard
or sent into the main nozzle downstream. This configuration is useful for moderate power
requirements but not good for the applications which require high power.

* Staged Combustion Cycle (Figure 1-2(B))

This is a closed cycle system in which an enriched mixture of fuel and oxidizer is
generated in pre-burner, and after passing through the turbine this vaporized mixture is fed
to the main combustion chamber. No fuel or oxidizer is wasted in this cycle as complete
combustion takes place in the main combustion chamber. This cycle is used for high power
applications but the engine development cost is high and components of propulsion system
are subj ected to harsh conditions.

* Expander Cycle (Figure 1-2(C))

In this closed cycle, the main combustion chamber is cooled by passing liquid fuel that in
turn gets vaporized by exchange of heat. This fuel vapor runs the turbine and is fed back to
the main combustion chamber, where complete combustion takes place. The limitation on
heat transfer to the fuel limits the power available in the turbine. This configuration is
more suitable to small/mid size engines.

* Pressure-Fed Cycle (Figure 1-2(D))

This is the simplest configuration which does not require any pump or turbine. Instead, the
fuel and the oxidizer are fed to the combustion chamber by the tank pressure. This cycle
can be applied only to relatively low-chamber-pressure applications because higher
pressure requires bulky storage tanks.

Design Requirements of Propulsions Systems

The design of an optimal propulsion system requires efficient performance of all

components. As can be seen from Figure 1-2, pumps, turbine, combustion chamber, and nozzle

are integral part of almost all space propulsion systems. The system level goal of designing a

space propulsion system is to obtain the highest possible thrust with the lowest weight (Griffin

and French, 1991), but the requirements of individual components are also governed by

corresponding operating conditions. Storage tanks are required to withstand high pressure in

cryogenic environments while keeping the weight of tank low. Nozzles impart high velocities to

high temperature combustion products to produce maximum possible thrust. The requirements









for the design of the turbine and the combustion chamber are high efficiency, compact design,

and ability to withstand high pressures and temperatures. Similarly, pumps are required to supply

the propellants to the combustion chamber at a desired flow rate and inj section pressure. Maj or

issues in the design of pumps include harsh environments, compact design requirements, and

cavitation under cryogenic conditions. Each subsystem has numerous design options for

example, number of stages, number and geometry of blades in turbines and pumps, different

configurations and geometry of inj ectors and combustion chambers etc., that relate to the

subsystem level requirements.

System Identification and Optimization: Case Studies

The design of a propulsion system is extremely complex due to the often conflicting

requirements posed on individual components and interaction of different subsystems.

Experimental design of propulsion systems is very expensive, time consuming, and involves

harsh environments. With improvements in numerical algorithms and increase in computational

power, the role of computational fluid dynamics (CFD) for design of complex systems with

various levels of complexities has grown many-fold. Computer simulation of design problems

has not only reduced the cost of developing the designs but also has reduced risks and design

cycle time. The flexibility of trying alternative design options also has increased extensively

compared to the experiments. With improvements in computer hardware and CFD algorithms,

the complexity of the simulation model is increasing in an effort to capture physical phenomenon

more accurately.

Our current efforts in the design of liquid rocket propulsion systems are focused on two

distinct classes of problems. First, we try to understand the roles of thermal effects and model

parameters used for numerical modeling of cryogenic cavitation; and second, we aim to carry out

shape optimization of diffuser vanes to maximize diffuser efficiency. While our interests in the










design of diffuser are motivated by the ongoing interests in the Moon and Mars exploration

endeavors, the study of cryogenic cavitation model validation and sensitivity analysis is

relatively more generic, but very relevant to the design of liquid rocket propulsion systems. We

discuss each problem in more detail as follows.

Sensitivity Evaluation and Model Validation for a Cryogenic Cavitation Model

Though a lot of improvements have been made over the years in the numerical modeling of

fluid physics, areas like cavitation, both in normal as well as cryogenic environment, are still

continuously developing. Complex physics, phase changes and resulting variations in material

properties, interaction of convection, viscous effects and pressure, time dependence, multiple

time scales, interaction between different phases of fluids and many fluids, temperature

dependent material properties, and turbulence make these problems difficult (Utturkar, 2005).

Numerous algorithms (Singhal et al. 1997, Merkle et al. 1998, Kunz et al. 2000, Senocak and

Shyy 2004a-b, Utturkar et al. 2005a-b) have been developed to capture the complex behavior of

cavitating flows, which have serious implications on the performance of the propulsion

components .

We study one cryogenic cavitation model in detail to assess the role of thermal boundary

conditions, thermal environment, uncertainties in material properties, and empirical model

parameters on the prediction of pressure and temperature field. Finally, we calibrate the

cryogenic cavitation model parameters and validate the outcome using several benchmark data.

Shape Optimization of Diffuser Vanes

The diffuser is a critical component in liquid rocket turbomachinery. The high velocity

fluid from turbo-pump is passed through the diffuser to partially convert kinetic energy of the

fluid into pressure. The efficiency of a diffuser is determined by its ability to induce pressure

recovery that is characterized by the ratio of outlet to inlet pressure. While exploring different









concepts for diffuser design, Dorney et al. (2006a) observed that diffusers with vanes are more

effective than vaneless diffusers. Consequently, we seek further improvements in diffuser

efficiency through shape optimization of vanes.

Since computational cost of simulations is high, we adopt a surrogate model-based

framework for optimization and sensitivity analysis that is briefly introduced in next section and

discussed in detail in a following chapter. Together, the present effort demonstrates that the same

technical framework can be equally adapted to treat hardware and computational modeling

development, with information to allow one to inspect the overall design space characteristics, to

facilitate quantitative trade-off considerations between of multiple and competing obj ectives, and

to rank order the importance of the various design variables.

Surrogate Modeling

High computational cost of simulations involved in performance evaluation of propulsion

components makes direct coupling of optimization tools and simulations infeasible for most

practical problems. To alleviate the problems associated with the optimization of such complex

and computationally expensive components, surrogate models based on a limited amount of data

have been frequently used. Surrogate models offer a low-cost alternative to evaluate a large

number of designs and are amenable to the optimization process. Surrogate models also can be

used to assess the trends in the design space as well as help identify the problems associated with

numerical simulations.

A sample of recent application of surrogate models for the design of space-propulsion

system is given as follows. Lepsch et al. (1995) used polynomial response surface

approximations to minimize the empty-weight of the dual fuel vehicles by considering

propulsion systems and vehicle design parameters. Rowell et al. (1999) and the references within

discuss the application of regression techniques for the design of single-stage-to-orbit vehicles.









Madsen et al. (2000) used polynomial response surface models for the design of diffusers. Gupta

et al. (2000) used response surface methodology to improve the lift-to-drag ratio of artificially

blunted leading edge spherical cone that is a representative geometry for reentry vehicle while

constraining the heat transfer rates. Chung and Alonso (2000) estimated boom and drag for low

boom supersonic business jet design problem via kriging.

Shyy et al. (2001a-b) employed global design optimization techniques for the design of

rocket propulsion components; Papila et al. (2000, 2001, 2002), Papila 2001, approximated the

design obj ectives for supersonic turbines using polynomial response surface approximations and

neural networks; Vaidyanathan et al. (2000, 2004a-b), Vaidyanathan (2004) modeled

performance indicators of liquid rocket inj ectors using polynomial response surface

approximations; Simpson et al. (2001b) used kriging and polynomial response surface

approximations for the design of aerospike nozzles; Steffen (2002a) developed an optimal design

of a scramj et inj ector to simultaneously improve efficiency and pressure loss characteristics

using polynomial response surfaces. In a follow-up work, Steffen et al. (2002b) used polynomial

response surface approximations to analyze the parameter space for the design of a combined

cycle propulsion components, namely, mixed-compression inlet, hydrogen fueled scramjet

combustor, a ducted-rocket nozzle.

Charania et al. (2002) used response surface methodology to reduce computational cost in

system-level uncertainty assessment for reusable launch vehicle design. Huque and Jahingir

(2002) used neural networks in optimization of integrated inlet/ej ector system of an

axisymmetric rocket-based combined cycle engine. Qu et al. (2003) applied different polynomial

response surface approximations for the structural design of hydrogen tanks; Keane (2003)

employed response surfaces for the optimal design of wing shapes; Umakant et al. (2004) used










kriging surrogate models to compute probability density functions for robust configuration

design of air-breathing hypersonic vehicle. Baker et al. (2004a-b) used response surface

methodology for system level optimization of booster and ramj et combustor while considering

multiple constraints. Levy et al. (2005) used support vector machines and neural networks to

approximate obj ectives temperature field and pressure loss while searching Pareto optimal

solutions for the design of a combustor.

Issues with Surrogate Modeling

The accuracy of surrogate models is an important factor for its effective application in

design and optimization process. Some of the issues that influence the accuracy of surrogate

models are: (1) number and location of sampled data points, (2) numerical simulations, and (3)

choice of the surrogate model. The characterization of uncertainty in predictions via different

error estimation measures is also useful for optimization algorithms such as, EGO (Jones et al.,

1998) etc. These issues are widely discussed in literature (Li and Padula 2005, Queipo et al.

2005). While a detailed review of different issues and surrogate models is provided in next

chapter, we briefly discuss the intent of current research in context of addressing the issues

related to surrogate prediction accuracy.

Sampling Strategies

Typically the amount of data used for surrogate model construction is limited by the

availability of computational resources. Many design of experiments (DOE) techniques are

developed to sample the locations for conducting simulations such that the errors in

approximation are reduced. However, these strategies mostly optimize a single criterion that

caters to an assumed source of error in approximation. The dominant sources of errors in

approximation are rarely known a priori for practical engineering problems. This renders the

selection of appropriate DOE technique a very difficult task because an unsuitable DOE may









lead to poor approximation. We demonstrate this issue of high errors in approximation due to a

single-criterion based DOE with the help of simple examples, and highlight the need to

simultaneously consider multiple criteria.

Type of Surrogate Model

The influence of the choice of surrogate model on prediction accuracy is widely explored

in literature. Many researchers have compared different surrogates for various problems and

documented their recommendations. The general conclusion is that no single surrogate model

may prove adequate for all problems, and the selection of a surrogate for a given problem is

often influenced by the past experience. However, as we demonstrate in this work, the suitability

of any surrogate model also depends on the sampling density and sampling scheme. Then, the

selection of an appropriate surrogate model becomes more complicated exercise. Here, we

present methods to exploit available information by simultaneous use of multiple surrogate

models. Specifically, we use multiple surrogate models to assess the regions of high uncertainty

in response predictions. Further we develop a weighted average surrogate model, which is

demonstrated to be more robust than the individual surrogate models for a wide variety of

problems and sampling schemes.

Estimation of Errors in Surrogate Predictions

Since surrogate models only approximate the response, there are errors in predictions. An

estimate of prediction errors is beneficial in determining the sampling locations in many

surrogate-based optimization methods such as EGO (Jones et al., 1998) and for adaptive

sampling. The effectiveness of these methods depends on the accuracy of error estimation

measures. Mostly, these error estimation measures are based on statistical assumptions. For

example, prediction variance for polynomial response surface approximation is developed

assuming that errors are exclusively due to noise that follows a normal distribution of zero mean










and cr2 Variance independent of locations. When these assumptions are not satisfied, the

accuracy of error estimation measures is questionable. We compare different error estimation

measures using a variety of test problems and give recommendations. We also explore the idea

of simultaneously using multiple error estimation measures.

While we use the presented surrogate-based framework for design and optimization of

diffuser design and cryogenic cavitation model, we employ analytical examples, which are

primarily used to test optimization algorithms, to exhibit the key concepts relating to different

issues with surrogate modeling.

Scope of Current Research

In short, the goal of present work is to develop methodologies for designing optimal

propulsion systems while addressing issues related to numerical uncertainties in surrogate

modeling. The scope of the current work can be summarized as follows:

1) To illustrate risks in using a single criterion based experimental design for approximation and

the need to consider multiple criteria.

2) To explore the use of an ensemble of surrogates to help identify regions of high uncertainties

in predictions and to possibly provide a robust prediction method.

3) To appraise different error estimation measures and to present methods to enhance the error

detection capability by combining multiple error measures.

4) To demonstrate the proposed surrogate-model based approach to liquid rocket propulsion

problems dealing with (a) cryogenic-cavitation model validation and sensitivity study to

appraise the influence of different parameters on performance, and (b) shape optimization of

diffuser vanes to maximize diffuser efficiency.









The organization of this work is as follows. We review different surrogate models and

relevant issues associated with surrogate modeling in Chapter 2. In Chapter 3, we evidence risks

in using a single criterion for constructing design of experiments. Methods to harness the

potential of multiple surrogate models are presented in Chapter 4. The performance of different

error estimation measures is compared in Chapter 5 and we propose methods to simultaneously

use multiple error measures. This surrogate-based analysis and optimization framework is

applied to two problems related to liquid rocket propulsion, model validation and sensitivity

analysis of a cryogenic cavitation model in Chapter 6, and shape optimization of diffuser vanes

in Chapter 7. We recapitulate maj or conclusions of the current work and delineate the scope of

future work in Chapter 8.

















Oxid izer


Figure 1-1. Schematic of liquid fuel rocket propulsion system.



























A man. uno B

GARpap Om~Lhm RIUp~















C 5=* "" D
Figure 1-2. Classification of propulsion systems according to power cycles. A) Gas-generator
cycle. B) Staged combustion cycle. C) Expander cycle. D) Pressure-fed cycle.









CHAPTER 2
ELEMENTS OF SURROGATE MODELING

Surrogate models are widely accepted for the design and optimization of components with

high computational or experimental cost (typically encountered in CFD simulations based

designs), as they offer a computationally less expensive way of evaluating designs. Surrogate

models are constructed using the limited data generated from the analysis of carefully selected

designs. Numerous successful applications of surrogate models for design and optimization of

aerospace systems, automotive components, electromagnetic applications and chemical

processes etc. are available in literature. A few examples are given as follows.

Kaufman et al. (1996), Balabanov et al. (1996, 1999), Papila and Haftka (1999, 2000), and

Hosder et al. (2001) constructed polynomial response surface approximations (PRS) for

structural weight based on structural optimizations of high speed civil transport. Hill and Olson

(2004) applied PRS to approximate noise models in their effort to reduce the noise in the

conceptual design of transport aircraft. Madsen et al. (2000), Papila et al. (2000, 2001), Shyy et

al. (2001a-b), Vaidyanathan et al. (2000, 2004a-b), Goel et al. (2004), and Mack et al. (2005b,

2006) used polynomial- and neural networks-based surrogate models as design evaluators for the

optimization of propulsion components including turbulent flow diffuser, supersonic turbine,

swirl coaxial inj ector element, liquid rocket inj ector, and radial turbine designs. Burman et al.

(2002), Goel et al. (2005), and Mack et al (2005a) used different surrogates to maximize the

mixing efficiency facilitated by a trapezoidal-shaped bluff body in the time dependent Navier-

Stokes flow while minimizing the resulting drag coefficient.

Knill et al. (1999), Rai and Madavan (2000, 2001) and, Madavan et al. (2001) used

surrogate models for airfoil shape optimization. Dornberger et al. (2000) used neural networks

and polynomial response surfaces for design of turbine blades. Kim et al. (2002), Keane (2003),









Jun et al. (2004) applied PRS to optimize wing designs. Ong et al. (2003) used radial basis

functions to approximate the obj ective function and constraints of an aircraft wing design.

Bramantia et al. (2001) used neural network-based models to approximate the design

obj ectives in electro-magnetic problems. Farina et al. (2001) used multiquadrics interpolation-

based response surface approximations to optimize the shape of electromagnetic components

like, C-core and magnetizer. Wilson et al. (2001) used response surface approximations and

kriging for approximating the obj ectives while designing piezomorph actuators.

Redhe et al. (2002a-b), Craig et al. (2002), and Stander et al. (2004) used PRS, kriging and

neural networks in design of vehicles for crashworthiness. Rais-Rohani and Singh (2002) used

PRS to approximate limit state functions for estimating the reliability of composite structures.

Rikards and Auzins (2002) developed PRS to model buckling and axial stiffness constraints

while minimizing the weight of composite stiffened panels. Vittal and Haj ela (2002) proposed

using PRS to estimate the statistical confidence intervals on the reliability estimates. Zerpa et al.

(2005) used kriging, radial basis functions, and PRS to optimize the cumulative oil recovery

from a heterogene ou s, multi -pha se re servoi r subj ect to an A SP (al kal ine- surfactant-p olymer)

flooding .

Steps in Surrogate Modeling

Li and Padula (2005) and Queipo et al. (2005) have given a comprehensive review of the

relevant issues in surrogate modeling. In this chapter, we discuss the key steps in the surrogate

modeling as explained with the help of Figure 2-1. Discussion of maj or issues and the most

prominent approaches followed in each step of the surrogate modeling process, briefly described

below, lays the outline of this chapter.










Design of Experiments (DOEs)

The design of experiment is the sampling plan in design variable space. Other common

names for DOEs are experimental designs or sampling strategies. The key question in this step is

how we assess the goodness of such designs, considering the number of samples is severely

limited by the computational expense of each sample. We discuss the most prominent

approaches related to DOE in a subsequent section. Later in Chapter 3, we demonstrate some

practical issues with the construction of DOEs.

Numerical Simulations at Selected Locations

Here, the computationally expensive model is executed for all the designs selected using

the DOE specified in the previous step. In context of the present work, the details of relevant

numerical simulation tools used to evaluate designs were briefly discussed in appropriate

chapters .

Construction of Surrogate Model

Two questions are of interest in this step: 1) what surrogate models) should we use (model

selection) and, 2) how do we Eind the corresponding parameters (model identification)? A formal

description of the problem of interest is discussed in next section. A framework for the

discussion and mathematical formulation of alternative surrogate-based modeling approaches is

outlined in next section and the section on construction of surrogate models.

Model Validation

The purpose of this step is to establish the predictive capabilities of the surrogate model

away from the available data (generalization error). Different schemes to estimate generalization

error for model validation are discussed in this chapter. We compare different error estimation

measures specific to a few popular surrogates in a following chapter.









Mathematical Formulation of Surrogate Modeling Problem

With reference to Figure 2-2, surrogate modeling can be seen as a non-linear inverse

problem for which one aims to determine a continuous function ( y(x)) of a set of design

variables from a limited amount of available data (y). The available data y while deterministic

in nature can represent exact evaluations of the function y(x) or noisy observations; and in

general cannot carry sufficient information to uniquely identify y(x). Thus, surrogate modeling

deals with the twin problems of: 1) constructing a model f(x) from the available data y (model

estimation), and 2) assessing the errors E attached to it (model appraisal). A general description

of the anatomy of inverse problems can be found in Snieder (1998).

Using the surrogate modeling approach the prediction of the simulation-based model

output is formulated as y(x) = f(x)+ e(x) The prediction expected value and its variance V(y)

are illustrated in Figure 2-3, with B being a probability density function.

Different model estimation and model appraisal components of the prediction have been

shown to be effective in the context of surrogate based analysis and optimization (see for

example, McDonald et al., 2000; Chung and Alonso, 2000; Simpson et al., 2001a; Jin et al.,

2001), namely polynomial response surface approximation (PRS), Gaussian radial basis

functions (GRF) (also referred as radial basis neural networks RBNN), and (ordinary) kriging

(KRG) as described by Sacks et al. (1989). Model estimation and appraisal components of these

methods are presented in a following section.

A good paradigm to illustrate how particular solutions (9) to the model estimation

problem can be obtained is provided by regularization theory (see for example, Tikhonov and

Arsenin (1977), and Morozov (1984)), which imposes additional constraints on the estimation.









More precisely, jl can be selected as the solution to the following Tikhonov regularization

problem:


minI(9),=h ~Lv y, -(x"))+AnD"'@sdx (2.1)


where S is the family of surrogate models under consideration, L(x) is a loss or cost function

used to quantify the so called empirical error (e.g., L(x) = x ), Ai is a regularization parameter,

and D"'f represents the value of the m-derivative of the proposed model at location x. Note that

D"'f represents a penalty term; for example, if m is equal to two, it penalizes high local

curvature. Hence, the first term enforces closeness to the data (goodness of fit), while the second

term addresses the smoothness of the solution with Ai (a real positive number) establishing the

tradeoff between the two. Increasing values of Ai provide smoother solutions. The purpose of the

regularization parameter Ai is, hence, to help implement Occam's razor principle (Ariew, 1976),

which favors parsimony or simplicity in model construction. A good discussion on statistical

regularization of inverse problems can be found in Tenorio (2001).

The quadratic loss function (i.e., Lz norm) is most commonly used in part because it

typically allows easy estimation of the parameters associated with the surrogate model; however,

it is very sensitive to outliers. The linear (also called Laplace) loss function takes the absolute

value of its argument (i.e., L1 norm); on the other hand, the Huber loss function is defined as

quadratic for small values of its argument and linear otherwise. The so called r -loss function has

received considerable attention in the context of the support vector regression surrogate (Vapnik,

1998; Girosi, 1998), and assigns an error equal to zero if the true and estimated values are within

an r distance. Figure 2-4 illustrates the cited loss functions.









Design of Experiments

As stated earlier, the design of experiment is the sampling plan in design variable space

and the key question in this step is how we assess the goodness of such designs. In this context,

of particular interest are sampling plans that provide a unique value (in contrast to random

values) for the input variables at each point in the input space, and are model-independent; that

is, they can be efficiently used for fitting a variety of models.

Typically, the primary interest in surrogate modeling is minimizing the error and a DOE is

selected accordingly. Two maj or components of the empirical error and corresponding

expressions for average error formulation considering quadratic loss functions are given as

follows.

* Variance: measures the extent to which the surrogate model f(x) is sensitive to particular
data set D. Each data set D corresponds to a random sample of the function of interest.
This characterizes the noise in the data,

Evar (x) = E4DS [f E4DS [~X)1]2 (2.2)

* Bias: quantifies the extent to which the surrogate model outputs (i.e., f(x)) differ from the
true values (i.e., y(x)) calculated as an average over all possible data sets D (ADS),


ELm,, (x) = CE4D iSX)]- y(x) (2.3)

In both expressions, E4DS denotes the expected value considering all possible data sets.

There is a natural tradeoff between bias and variance. A surrogate model that closely fits a

particular data set (lower bias) will tend to provide a larger variance and vice-versa. We can

decrease the variance by smoothing the surrogate model but if the idea is taken too far then the

bias error becomes significantly higher. In principle, we can reduce both bias (can choose more

complex models) and variance (each model more heavily constrained by the data) by increasing

the number of points, provided the latter increases more rapidly than the model complexity.










In practice, the number of points in the data set is severely limited (e.g., due to

computational expense) and often during the construction of surrogate model, a balance between

bias and variance errors is sought. This balance can be achieved, for example, by reducing the

bias error while imposing penalties on the model complexity (e.g., Tikhonov regularization).

With reference to most applications, where the actual model is unknown (see previous and

next sections), and data is collected from deterministic computer simulations, bias error is the

dominant source of error because the numerical noise is small, and a DOE is selected

accordingly. When response surface approximation is used, there is good theory for obtaining

minimum bias designs (Myers and Montgomery, 1995, Section 9.2) as well as some

implementations in low dimensional spaces (Qu et al., 2004). For the more general case, the bias

error can be reduced through a DOE that distributes the sample points uniformly in the design

space (Box and Draper, 1959; Sacks and Ylvisaker, 1984; as referenced in Tang, 1993).

However, fractional factorial designs replace dense full factorial designs for computationally

expensive problems. The uniformity property in designs is sought by, for example, maximizing

the minimum distances among design points (Johnson et al., 1990), or by minimizing correlation

measures among the sample data (Iman and Conover, 1982; Owen, 1994). Practical

implementations of a few most commonly used DOEs are discussed next.

Factorial Designs

Factorial designs are one of the simplest DOEs to investigate the effect of main effects and

interactions of variables on the response for box-shaped design domain. 2N -factorial design

where each design variable is allowed to take two extreme levels is often used as a screening

DOE to eliminate the unimportant variables. Qualitative and binary variables can also be used

for this DOE. A typical two-level full factorial DOE for three variables is shown in Figure 2-5.









These designs can be used to create a linear polynomial response surface approximation. For

higher order approximations, the number of levels of each variable is increased for example a

quadratic polynomial response surface approximation can be fitted by a three-level full factorial

design (3N' -designs).

Some times the number of experiments is reduced by using 2hi- (or 1/2p) fractional

factorial designs, which require the selection of p independent design generators (least influential

interactions). Typically, these designs are classified according to resolution number:

* Resolution III: Main effects are not aliased with other main effects, but are confounded
with one or more two-way interactions.

* Resolution IV: Main effects are not aliased with other main effects or two-way
interactions. Two factor interactions are confounded with other two-way interactions.

* Resolution V: Main effects and two-way interactions are not confounded with one
another.

More details can be obtained from Myers and Montgomery (1995, Chapter 4, pp. 156-

179). Factorial designs produce orthogonal designs for polynomial response surface

approximation. However, for higher dimensions (N~ > 6 ), factorial designs requires a large

number of experiments making them particularly unattractive for computationally expensive

problems.

Central Composite Designs

These include designs on 2 '~ vertices, 2N~ axial points, and N~ repetitions of central

point. The distance of axial point a is varied to generate, face-centered, spherical design, or

orthogonal design. A typical central composite design for three-variable problem is shown in

Figure 2-6. These designs reduce variance component of the error. The repetitions at the center

reduce the variance, improve stability (defined as the ratio of maximum variance to minimum

variance in the entire design space), and give an idea of magnitude of the noise, but are not









useful for the computer simulations. These designs are also not practical for higher dimension

spaces (N, > 8) as the number of simulations becomes very high. For N, > 3 when the designs

on the vertices of the design spaces are not feasible, Box-Behnken designs can be used for

quadratic polynomial response surface approximation. These are spherical designs (sampling

designs at these locations enables us to exactly determine the function value on the points equi-

distant from the center) but these designs introduce higher uncertainty near the vertices.

Variance Optimal DOEs for Polynomial Response Surface Approximations

Moment matrix M = XIX/N 1 for PRS (see next section) is very important quantity as

this affects the prediction variance, and the confidence on the coefficients, hence, used to

develop different variance optimal DOEs. D-optimal design maximizes the determinant of the

moment matrix M~to maximize the confidence in the coefficients of polynomial response surface

approximation. A-optimal design minimizes the trace of inverse of2~to minimize the sum of

variances of the coefficients. G-optimal design minimizes the maximum of prediction variance.

I-optimal design minimizes the integral of the prediction variance over the design domain. All

these DOEs require the solution of difficult optimization problem, which is solved heuristically

in higher dimensional spaces.

Latin Hypercube Sampling (LHS)

Stratified sampling ensures that all portions of a given partition are sampled. LHS (McKay

et al., 1979) is a stratified sampling approach with the restriction that each of the input variables

(xk) has all portions of its distribution represented by input values. A sample of size Ns can be

constructed by dividing the range of each input variable into N, strata of equal marginal



SAn example of matrix X is given in Equation (2.6).









probability 1/Ns and sampling once from each stratum. Let us denote this sample by


x j = 1, 2,..., Ns; k = 1, 2,.., N .The sample is made of components of each of the x 's

matched at random. Figure 2-7 illustrates a LHS design for two variables, when six designs are

selected.

While LHS represents an improvement over unrestricted stratified sampling (Stein, 1987),

it can provide sampling plans with very different performance in terms of uniformity measured

by, for example, maximum minimum-distance among design points, or by correlation among the

sample data. Figure 2-8 illustrates this shortcoming; the LHS plan in Figure 2-8(B) is

significantly better than that in Figure 2-8(A).

Orthogonal Arrays (OA)

These arrays were introduced by C. R. Rao in the late 40's (Rao, 1947), and can be defined

as follows. An OA of strength r is a matrix of N, rows and N, columns with elements taken

from a set of N, symbols, such that in any Ns x r submatrices each of the (N, ), possible rows

occur the same number 7 (index) of times. The number of rows ( Ns) and columns ( N, ) in the

OA definition represents the number of samples and variables or factors under consideration,

respectively. The N, symbols are related to the levels defined for the variables of interest, and

the strength r is an indication of how many effects can be accounted for (to be discussed later in

this section) with values typically between two and four for real-life applications. Such an array

is denoted by OA( Ns, N,, N r ). Note that, by definition, a LHS is an OA of strength one, OA


(Ns, N,, N ,1). There are two limitations on the use of OA for DOE:

*Lack of flexibility: Given desired values for the number of rows, columns, levels, and
strength, the OA may not exist. For a list of available orthogonal arrays, theory and
applications, see, for example, Owen (1992), Hedayat et al. (1999), and references therein.










*Point replicates: OA designs proj ected onto the subspace spanned by the effective factors
(most influential design variables) can result in replication of points. This is undesirable
for deterministic computer experiments where the bias of the proposed model is the main
concern.

Optimal LHS, OA-based LHS, Optimal OA-based LHS

Different approaches have been proposed to overcome the potential lack of uniformity of

LHS. Among those, most of them adjust the original LHS by optimizing a spread measure (e.g.,

minimum distance or correlation) of the sample points. The resulting designs have been shown to

be relatively insensitive to the optimal design criterion (Ye et al., 2000). Examples of this

strategy can be found in the works of Iman and Conover (1982), Johnson et al. (1990), and Owen

(1994). Tang (1993) and Ye (1998) presented the construction of strength r OA-based LHS

which stratify each r-dimensional margin, and showed that they offer a substantial improvement

over standard LHS. Another strategy optimizes a spread measure of the sample points, but

restricts the search of LHS designs, which are orthogonal arrays, resulting in so called optimal

OA-based LHS (Leary et al., 2003). Adaptive DOE, in which model appraisal information is

used to place additional samples, have also been proposed (Jones et al., 1998, Sasena et al., 2000,

Williams et al., 2000).

A summary of main characteristics and limitations of different DOE techniques is listed in

Table 2-1. If feasible, two sets of DOE are generated, one (so called training data set) for the

construction of the surrogate (next section), and second for assessing its quality (validation as

discussed in a later section). Given the choice of surrogate, the DOE can be optimized to suit a

particular surrogate. This has been done extensively for minimizing variance in polynomial RSA

(e.g., D- and A- optimal designs, Myers and Montgomery, 1995, Chapter 8) and to some extent

for minimizing bias (e.g., Qu et al., 2004). However, for non-polynomial models, the cost of the

optimization of a surrogate-specific DOE is usually prohibitive, and so is rarely attempted.









Construction of Surrogate Model

There are both parametric (e.g., polynomial response surface approximation, kriging) and

non-parametric (e.g., proj ection-pursuit regression, radial basis functions) alternatives for

constructing surrogate models. The parametric approaches presume the global functional form of

the relationship between the response variable and the design variables is known, while the non-

parametric ones use different types of simple, local models in different regions of the data to

build up an overall model. This section discusses the estimation and appraisal components of the

prediction of a sample of both parametric and non-parametric approaches.

Specifically, the model estimation and appraisal components corresponding to polynomial

response surface approximation (PRS), kriging (KRG), and radial basis functions (RBF)

surrogate models are discussed next, followed by a discussion of a more general non-parametric

approach called kernel-ba~sed regression. Throughout this section a square loss function is

assumed unless otherwise specified, and given the stochastic nature of the surrogates, the

available data is considered a sample of a population.

Polynomial Response Surface Approximation (PRS)

The regression analysis is a methodology to study the quantitative relation between a

function of interest y and Npl basis functions f,, using Ns sample values of the response y, a


for a set of basis functions ~f z) (Draper and Smith, 1998, Section 5.1). Monomials are the most

preferred basis functions by practitioners. For each observation i, a linear equation is formulated:


y, (f (x)) = ff '/~) + EI; E(ez) = 0, V'(ez) = cr2, (2.4)










where the errors E, are considered independents with expected value equal to zero and variance

02 ,and 13 represents the quantitative relation between basis functions. The set of equations

specified in Equation (2.4) can be expressed in matrix form as:

y = XI) + E; E(E) = 0, V(E) = a 2I, (2.5)

where Xis a Ns x Np matrix of basis functions, also known as Gramian design matrix, with the

design variable values for sampled points. A Gramian design matrix for a quadratic polynomial

in two variables (N, = 2; Npl = 6) is shown in Equation (2.6)


1xx ') x,(1)2 X1(1) (1) (2)
1 2)
1 (2) (2) 1(2)2 X1(2) (2) (2)

X = 1 ,22 12 2 22(2.6)
1 x" "1 x(N,)2 1(N) (Ns) (Ns)2






J(x) = [ b f, (x), (2.7)


where bj is the estimated value of the coefficient associated with the jth basis function fi (x).

Then, the error in approximation at a design point x is given as e(x) = y(x) (x) The

coefficient vector b can be obtained by minimizing a loss function L, defined as


L = [ez ,(2.8)
I=1

where el is the error at ith data point, p is the order of loss function, and N, is the number of

sampled design points. A quadratic loss function (p = 2), that minimizes the variance of the error









in approximation, is mostly used because we can estimate coefficient vector b using an

analytical expression, as follows,

b =(XTX) 1X y. (2.9)

The estimated parameters b (by least squares) are unbiased estimates of |3 that minimize

variance. At a new set of basis function vector f for design point P, the predicted response and

the variance of the estimation are given as


f(x)= bl f,(x;), an~d V (f(x))= cr2f T(XI'X)-'f). (2.10)


Kriging Modeling (KRG)

Kriging is named after the pioneering work of D.G. Krige (a South African mining

engineer), and was formally developed by Matheron (1963). More recently, Sacks et al. (1989,

1993), and Jones et al. (1998) made it well-known in the context of the modeling, and

optimization of deterministic functions, respectively. The kriging method in its basic formulation

estimates the value of a function (response) at some unsampled location as the sum of two

components: the linear model (e.g., polynomial trend), and a systematic departure representing

low (large scale) and high frequency (small scale) variation components, respectively.

The systematic departure component represents the fluctuations around the trend, with the

basic assumption being that these are correlated, and the correlation is a function of distance

between the locations under consideration. More precisely, it is represented by a zero mean,

second-order, stationary process (mean and variance constant with a correlation depending on a

distance) as described by a correlation model.

Hence, these models (ordinary kriging) suggest estimating deterministic functions as:

y(x) = pu + E(x), E(E) = 0, cov(E(x(I), E(x(")) f 0 Vi, j, (2.11)









where pu is the mean of the response at sampled design points, and E is the error with zero

expected value, and with a correlation structure that is a function of a generalized distance

between the sample data points. A possible correlation structure (Sacks et al., 1989) is given by:


cvex) x ) = a2 O 1 -f k x ) (2.12)


where N_ denotes the number of dimensions in the set of design variables x; a identifies the

standard deviation of the response at sampled design points, and, A is a parameter, which is a

measure of the degree of correlation among the data along the kth direction. Specifically, the

parameters pu, E, and # are estimated using a set of N, samples (x, y), such that a likelihood

function is maximized (Sacks et al., 1989). Given a probability distribution and the

corresponding parameters, the likelihood function is a measure of the probability of the sample

data being drawn from it. The model estimates at unsampled points is:

E~fx)) p r"R-(f -p),(2.13)

where ^` above the letters denotes estimates, r identifies the correlation vector between the set of

prediction points and the points used to construct the model, R is the correlation matrix among

the Ns sample points, and 1 denotes an Ns -vector of ones. On the other hand, the estimation

variance at unsampled design points is given by:

1R (2r)4
V(y(x))= a2 rTR'r+ (.4
1T R 1


Gaussian processes (Williams and Rasmussen, 1996, Gibbs, 1997), another well-known

approach to surrogate modeling, can be shown to provide identical expressions for the prediction

and prediction variance to those provided by kriging, under the stronger assumption that the









available data (model responses) is a sample of a multivariate normal distribution (Rao, 2002,

Section 4a).

Radial Basis Functions (RBF)

Radial basis functions have been developed for the interpolation of scattered multivariate

data. The method uses linear combinations of NRB radially symmetric functions hr (x), based on

Euclidean distance or other such metric, to approximate response functions as,

NRBF
y(x)= w~lh,(x)+4, (2.15)


where w represents the coefficients of the linear combinations, h~ (x) the radial basis functions,


and E, independent errors with variance cr2

The flexibility of the model, that is its ability to fit many different functions, derives from

the freedom to choose different values for the weights. Radial basis functions are a special class

of functions with their main feature being that their response decreases (or increases)

monotonically with distance from a central point. The center, the distance scale, and the precise

shape of the radial function are parameters of the model.

A typical radial function is the Gaussian, which in the case of a scalar input, is expressed

as,


h, (x)= exp .) (2.16)


The parameters are its center c and its radius 3. Note that the response of the Gaussian RBF

decreases monotonically with the distance from the center, giving a significant response only in

the center neighborhood.









Given a set of N, input/output pairs (sample data), a radial basis function model can be

expressed in matrix form as,

f = Hw, (2.17)

where H is a matrix given by,



hdx (X) hdx (g -- h xB (l))
H = z (2.18)




Similar to polynomial response surface approximation method, by solving Equation (2. 17), the

optimal weights (in the least squares sense) can be found to be,

w = A- H'y, (2.19)

where A' is a matrix given by,

A- = (H H)- (2.20)

The error variance estimate can be shown to be given by,

-2 yTr2y
cr (2.21)
tr ( .)

where P,- is a proj section matrix,

, = I- HA- H (2.22)

The RBF model estimate for a new set input values is given by,

J(x) = hl v~, (2.23)

where h is a column vector with the radial basis functions evaluations,










h, (x)
h = h (x)(2.24)



On the other hand, the prediction variance is the variance of the estimated model f(x) plus the

error variance and is given by:


PVy(x))= V(h~ri)+ V(g)= (h7(HrH) h+1) y Pr(225
Ns NRB

Radial basis function is also known as radial basis neural networks (RBNN) as described

by Orr (1996, 1999a-b). The MATLAB implementation of radial basis functions or RBNN

(function 'newrb'), used in this study, is described as follows. Radial basis neural networks are

two-layer networks consisting of a radial-basis function and a linear output layer. The output of

each neuron is given by


f =radbujas w-xx0.832 prad (2.26)

radba~s(x) = exp(-x2), (2.27)

where w is the weight vector associated with each neuron, x is the input design vector, 'spread'

is a user defined value that controls the radius of influence for each neuron where the radius is

half the value of parameter 'spread'. Specifically, the radius of influence is the distance at which

the influence reaches a certain small value. If 'spread' is too small, the prediction will be poor in

regions that are not near the position of a neuron; and if 'spread' is too large, the sensitivity of

the neurons will be small. Neurons are added to the network one by one until a specified mean

square error goal 'goal' is reached. If the error goal is set to zero, neurons will be added until the

network exactly predicts the input data. However, this can lead to over-fitting of the data, which

may result in poor prediction between data points. On the other hand, if error goal is large, the









network will be under-trained and predictions even on data points will be poor. For this reason,

an error goal is judiciously selected to prevent overfitting while keeping the overall prediction

accuracy high.

Kernel-based Regression

The basic idea of RBF can be generalized to consider alternative loss functions and basis

functions in a scheme known as kernel-based regression. With reference to Equation (2.1), it can

be shown that independent of the form of the loss function L, the solution of the variational

problem has the form (the Representer Theorem: see Girosi, 1998; Poggio and Smale, 2003):


j)(x) = az G(x, x '') >+b, (2.28)


where G(x, x ')) is a (symmetric) kernel function that determines the smoothness properties of

the estimation scheme. Table 2-2 shows the kernel functions of selected estimation schemes with

the kernel parameters being estimated by model selection approaches (see next section for

details).

If the loss function L is quadratic, the unknown coefficients in Equation (2.28) can be

obtained by solving the linear system,

(NsAI~+G)a = y, (2.29)

where l is the identity matrix, and G a square positive definite matrix with elements

Ggl = G;((xm, xij) Note that, the linear system i s well posed, since (NsAIll+ G) i s strictly

positive and well conditioned for large values of NsA If loss function L is non-quadratic, the

solution of the variational problem still has the form of Equation (2.28) but the coefficients a~

are found by solving a quadratic programming problem in what is known as 'support vector

regression' (Vapnik, 1998).










Maj or characteristics of different surrogate models are summarized in Table 2-3.

Comparative studies have shown that depending on the problem under consideration, a particular

modeling scheme (e.g., polynomial response surface approximation, kriging, radial basis

functions) may outperform the others and in general, it is not known a priori which one should

be selected. See for example, the works of Friedman and Stuetzle (1981), Yakowitz &

Szidarovsky (1985), Laslett (1994), Giunta and Watson (1998), Simpson et al. (2001a-b), Jin et

al. (2001). Considering, plausible alternative surrogate models can reasonably fit the available

data, and the cost of constructing surrogates is small compared to the cost of the simulations,

using multiple surrogates may offer advantages compared to the use of a single surrogate.

Recently, multiple surrogate-based analysis and optimization approaches have been suggested by

Zerpa et al. (2005) and Goel et al. (2006b) based on the model averaging ideas of Perrone and

Cooper (1993), and Bishop (1995). The multiple surrogate-based analysis approach is based on

the use of weighted average models, which can be shown to reduce the prediction variance with

respect to that of the individual surrogates. The idea of multiple surrogate-based approximations

is discussed in Chapter 4.

Model Selection and Validation

Generalization error estimates assess the quality of the surrogates for prediction and can be

used for model selection among alternative models; and establish the adequacy of surrogate

models for use in analysis and optimization studies (validation). This section discusses the most

prominent approaches in the context of surrogate modeling.

Split Sample (SS)

In this scheme, the sample data is divided into training and test sets. The former is used for

constructing the surrogate while the latter, if properly selected, allows computing an unbiased

estimate of the generalization error. Its main disadvantages are, that the generalization error









estimate can exhibit a high variance (it may depend heavily on which points end up in the

training and test sets), and that it limits the amount of data available for constructing the

surrogates.

Cross Validation (CV)

It is an improvement on the split sample scheme that allows the use of the most, if not all,

of the available data for constructing the surrogates. In general, the data is divided into k subsets

(k-fold cross-validation) of approximately equal size. A surrogate model is constructed k times,

each time leaving out one of the subsets from training, and using the omitted subset to compute

the error measure of interest. The generalization error estimate is computed using the k error

measures obtained (e.g., average). If k equals the sample size, this approach is called leave-one-

out cross-validation (known also as PRESS in the polynomial response surface approximation

terminology). Equation (2.30) represents a leave-one-out calculation when the generalization

error is described by the mean square error (GMSE).


GM~SE = k (-7)2,(230


where g(-') represents the prediction at x ') using the surrogate constructed from all sample

points except (x ') y, ). Analytical expressions are available for leave-one-out case for the

GMSE without actually performing the repeated construction of the surrogates for both

polynomial response surface approximation (Myers and Montgomery, 1995, Section 2.7) and

kriging (Martin and Simpson, 2005).

The advantage of cross-validation is that it provides nearly unbiased estimate of the

generalization error, and the corresponding variance is reduced (when compared to split-sample)

considering that every point gets to be in a test set once, and in a training set k-1 times









(regardless of how the data is divided); the variance of the estimation though may still be

unacceptably high in particular for small data sets. The disadvantage is that it requires the

construction of k surrogate models; this is alleviated by the increasing availability of surrogate

modeling tools. A modified version of the CV approach called GCV-generalized cross

validation, which is invariant under orthogonal transformations of the data (unlike CV) is also

available (Golub et al., 1979).

If the Tikhonov regularization approach for regression is adopted, the regularization

parameter ii can be identified using one or more of the following alternative approaches: CV-

cross validation (leave-one-out estimates), GCV (smoothed version of CV), or the L-curve

(explained below). While CV and GCV can be computed very efficiently (Wahba, 1983;

Hutchinson and de Hoog, 1985), they may lead to very small values of ii even for large samples

(e.g., very flat GCV function). The L-curve (Hansen, 1992) is claimed to be more robust and

have the same good properties of GCV. The L-curve is a plot of the residual norm (first term)

versus the norm nII)m jH of the solution for different values of the regularization parameter and

displays the compromise in the minimization of these two quantities. The best regularization

parameter is associated with a characteristic L-shaped "corner" of the graph.

Bootstrapping

This approach has been shown to work better than cross-validation in many cases (Efron,

1983). In its simplest form, instead of splitting the data into subsets, subsamples of the data are

considered. Each subsample is a random sample with replacement from the full sample, that is, it

treats the data set as a population from which samples can be drawn. There are different variants

of this approach (Hall, 1986; Efron and Tibshirani, 1993; Hesterberg et al., 2005) that can be










used for model identification as well as for identifying confidence intervals for surrogate model

outputs. However, this may require considering several dozens or even hundreds of subsamples.

For example, in the case of polynomial response surface approximation (given a model),

regression parameters can be estimated for each of the subsamples and a probability distribution

(and then confidence intervals) for the parameters can be identified. Once the parameter

distributions are estimated, confidence intervals on model outputs of interest (e.g., mean) can

also be obtained.

Bootstrapping has been shown to be effective in the context of neural network modeling;

recently, its performance in the context of model identification in regression analysis is also

being explored (Ohtani, 2000, Kleijnen and Deflandre 2004).










|I Design of Experiments |


Numerical Simulations at
Selected Locations


Model Validation

Figure 2-1. Key stages of the surrogate-based modeling approach.


Simulation- Analysis problem
based model y

Appraisal Data yi
problem


If necessary


Construction of Surrogate
Models (Model Selection
and Identification)


Estimation problem


Figure 2-2. Anatomy of surrogate modeling: model estimation + model appraisal. The former
provides an estimate of function while the latter forecasts the associated error.


-I














I B


V


B(E(y), V(y))


E(y)


Figure 2-3. A surrogate modeling scheme provides the expected value of the prediction E(y)
(solid line) and the uncertainty associated with that prediction, illustrated here using a
probability density function 8 .


--


C | D
Figure 2-4. Alternative loss functions for the construction of surrogate models. A) Quadratic. B)
Laplace. C) Huber. D) r -loss function.





Figure 2-5. A two-level full factorial design of experiment for three variables.


Figure 2-6. A central composite design for three-dimensional design space.


























e


-Y.


b


b


b


b


b


Figure 2-7. A representative Latin hypercube sampling design with Ns = 6, N~ = 2 for uniformly

distributed variables in the unit square.


Figure 2-8. LHS designs with significant differences in terms of uniformity (Leary et al., 2003).
A) Random LHS. B) Correlation optimized LHS. C) OA-based LHS. (Figure
reprinted with kind permission of Taylor and Francis group, Leary et al., 2003, Figure










Table 2-1. Summary of main characteristics of different DOEs.


Limitations





Table 2-2. Examples of kernel functions and related estimation schemes.
Kernel function Estimation scheme

Gr(x, x,)= (1 + xxl)d Polynomial of degree d (PRD)

G~xx-)=x-X x, Linear splines (LSP)

G~x, xl ex -i- x 2 Gaussian radial basis function (GRF)


Main features
Used to investigate main effect and
interaction of variables for box-shaped
domains, gives orthogonal designs, caters
to noise
Cater to noise, applicable for box-shaped
domains, repetition of points on center to
improve stability
Cater to noise, Applicable to irregular
domains too

- Maximize confidence in coefficients
- Minimize the sum of variances of
coefficients
- Minimize the maximum of prediction
variance
- Minimize the integral of prediction
variance over design domain
Caters to bias error, stratified sampling
method, good for high number of variables


Name
Factorial
designs


CCD


Variance
optimal designs

D-optimal
A -optimal

G-optimal

I-optimal

Latin hypercube
sampling (LHS)


Orthogonal
arrays (OA)


OA-based LHS


Irregular domains not good
for N, > 6


Irregular domains, not good for
N, > 8, repetition of points not
useful for simulations
High computational cost, not
good when noise is low









Not good when noise is
significant, occasional poor
DOE due to random
components
Limited number of orthogonal
arrays, difficult to create OAs


Limited OAs, may leave large
holes in design space


Box-shaped domains, the moment matrix is
diagonal for monomial basis functions so
coefficients of approximation are
uncorrelated
Combine OA and LHS designs to improve
distribution of points











Table 2-3. Summary of main characteristics of different surrogate models.
Name Main features
Polynomial response Global parametric approach, good for slow varying functions, easy to
surface approximation construct, good to handle noise, not very good for simulations based
data
Kriging Global parametric approach, handles smooth and fast varying
functions, computationally expensive for large amount of data
Radial basis function Local, non-parametric approach, computationally expensive, good for
fast varying functions
Kernel-based Global, non-parametric approach, uses different loss functions,
functions relatively new approach









CHAPTER 3
PITFALLS OF USING A SINGLE CRITERION FOR SELECTING EXPERIMENTAL
DESIGNS

Introduction

Polynomial response surface (PRS) approximations are widely adopted for solving

optimization problems with high computational or experimental cost as they offer a

computationally less expensive way of evaluating designs. It is important to ensure the accuracy

of PRSs before using them for design and optimization. The accuracy of a PRS, constructed

using a limited number of simulations, is primarily affected by two factors: (1) noise in the data;

and (2) inadequacy of the fitting model (called modeling error or bias error). In experiments,

noise may appear due to measurement errors and other experimental errors. Numerical noise in

computer simulations is usually small, but it can be high for ill-conditioned problems, or if there

are some unconverged solutions such as those encountered in computational fluid dynamics or

structural optimization. The true model representing the data is rarely known, and due to limited

data available, usually a simple model is fitted to the data. For simulation-based PRS,

modeling/bias error due to an inadequate model is mainly responsible for the error in prediction.

In design of experiments techniques, sampling of the points in design space seeks to reduce

the effect of noise and reduce bias errors simultaneously. However, these obj ectives (noise and

bias error) often conflict. For example, noise rejection criteria, such as D-optimality, usually

produce designs with more points near the boundary, whereas the bias error criteria tend to

distribute points more evenly in design space. Thus, the problem of selecting an experimental

design (also commonly known as design of experiment or DOE) is a multi-obj ective problem

with conflicting obj ectives (noise and bias error). The solution to this problem would be a Pareto

optimal front of experimental designs that yields different tradeoffs between noise and bias

errors. Seeking the optimal experimental designs considering only one criterion, though popular,










may yield minor improvements in the selected criterion with significant deterioration in other

criteria.

In the past, the maj ority of the work related to the construction of experimental designs is

done by considering only one design obj ective. When noise is the dominant source of error, there

are a number of experimental designs that minimize the effect of variance (noise) on the

resulting approximation, for example, the D-optimal design, that minimizes the variance

associated with the estimates of coefficients of the response surface model. Traditional variance-

based designs minimize the effect of noise and attempt to obtain uniformity (ratio of maximum

to minimum error in design space) over the design space, but they do not address bias errors.

Classical minimum bias designs consider only space-averaged or integrated error measures

(Myers and Montegomery, 1995, pp. 208-279) in experimental designs. The bias component of

the averaged or integrated mean squared error is minimized to obtain so-called minimum bias

designs. The fundamentals of minimizing integrated mean squared error and its components can

be found in Myers and Montgomery (1995, Chapter 9), and Khuri and Cornell (1996, Chapter 6).

Venter and Haftka (1997) developed an algorithm implementing a minimum-bias criterion for

irregularly shaped design spaces where no closed form solution exists for experimental design.

They compared minimum-bias and D-optimal experimental designs for two problems with two

and three variables. The minimum-bias experimental design was found to be more accurate than

D-optimal for average error but not for maximum error. Qu et al. (2004) implemented Gaussian

quadrature-based minimum bias design and presented minimum bias central composite designs

for up to six variables.

There is some work done on developing experimental designs by minimizing the

integrated mean squared error accounting for both variance and bias errors. Box and Draper










(1963) minimized integrated mean squared errors averaged over the design space by combining

average weighted variance and average bias errors. Draper and Lawrence (1965) minimized the

integrated mean square error to account for model inadequacies. Kupper and Meydrech (1973)

specified bounds on the coefficients associated with the assumed true function to minimize

integrated mean squared error. Welch (1983) used a linear combination of variance and bias

errors to minimize mean squared error. Montepiedra and Fedorov (1997) investigated

experimental designs minimizing the bias component of the integrated mean square error subject

to a constraint on the variance component or vice-versa. Fedorov et al. (1999) later studied

design of experiments via weighted regression prioritizing regions where the approximation is

needed to predict the response. Their approach considered both variance and bias components of

the estimation error.

Bias error averaged over the design space has been studied extensively, but there is a

relatively small amount of work to account for pointwise variation of bias errors because of

inherent difficulties. An approach for estimating bounds on bias errors in PRS by a pointwise

decomposition of the mean squared error into variance and the square of bias was developed by

Papila and Haftka (2001). They used the bounds to obtain experimental designs (EDs) that

minimize the maximum absolute bias error. Papila et al. (2005) extended the approach to account

for the data and proposed data-dependent bounds. They assumed that the true model is a higher

degree polynomial than the approximating polynomial, and that it satisfies the given data

exactly. Goel et al. (2006a) generalized this bias error bounds estimation method to account for

inconsistencies between the assumed true model and actual data. They demonstrated that the

bounds can be used to develop adaptive experimental designs to reduce the effect of bias errors

in the region of interest. Recently, Goel et al. (2006c) presented a method to estimate pointwise









root mean square (RMS) bias errors in approximation prior to the generation of data. They

applied this method to construct experimental designs that minimize maximum RMS bias error

(min-max RMS bias designs).

Since minimum-bias designs do not achieve uniformity, designs that distribute points

uniformly in design space (space filling designs like Latin hypercube sampling) are popular even

though these designs have no claim to optimality. Since Latin hypercube sampling (LHS)

designs can create poor designs, as illustrated by Leary et al. (2003), different criteria like,

maximization of minimum distance between points, or minimization of correlation between

points are used to improve its performance. We will demonstrate in this chapter that even

optimized LHS designs can occasionally leave large holes in design space, which may lead to

poor predictions. Thus, there is a need to consider multiple criteria. Some previous efforts of

considering multiple criteria are as follows. In an effort to account for variance, Tang (1993) and

Ye (1998) presented orthogonal array based LHS designs that were shown to be better than the

conventional LHS designs. Leary et al. (2003) presented strategies to find optimal orthogonal

array based LHS designs in a more efficient manner. Palmer and Tsui (2001) generated

minimum-bias Latin hypercube experimental designs for sampling from deterministic

simulations by minimizing integrated squared bias error. Combination of face-centered cubic

design and LHS designs is quite widely used (Goel et al., 2006d).

The primary obj ective of this work is to demonstrate the risks associated with using a

single criterion to construct experimental designs. Firstly, we compare LHS and D-optimal

designs, and demonstrate that both these designs can leave large unsampled regions in design

space that may potentially yield high errors. In addition, we illustrate the need to consider

multiple criteria to construct experimental designs, as single-criterion based designs may









represent extreme tradeoffs among different criteria. Min-max RMS bias design, which yields a

small reduction in the maximum bias error at the cost of a huge increase in the maximum

variance, is used as an example. While the above issue of tradeoff among multiple criteria

requires significant future research effort, we explore several strategies for the simultaneous use

of multiple criteria to guard against selecting experimental designs that are optimal according to

one criterion but may yield very poor performance on other criteria. In this context, we firstly

discuss which criteria can be simultaneously used meaningfully; secondly, we explore how to

combine different criteria. We show that complimentary criteria may cater to competing needs of

experimental designs. Next, we demonstrate improvements by combining a geometry-based

criterion LHS and a model-based D-optimality criterion, to obtain experimental designs. We also

show that poor experimental designs can be filtered out by creating multiple experimental

designs and selecting one of them using an appropriate error-based (pointwise) criterion. Finally,

we combine the above mentioned strategies to construct experimental designs.

The chapter is organized as follows: Different error measures used in this study are

summarized in the next section. Following that, we show maj or results of this study. We

illustrate the issues associated with single criterion-based experimental designs, and show a few

strategies to accommodate multiple criteria. We close the chapter by recapitulating maj or

findings.

Error Measures for Experimental Designs

Let the true response r(x) at a design point x be represented by a polynomial f "(x)P,

where f(x) is the vector of basis functions and p is the vector of coefficients. The vector f(x)

has two components: f a)(x) is the vector of basis functions used in the PRS or fitting model, and

f (2)(X) iS the vector of additional basis functions that are missing in the linear regression model









(assuming that the true model is a polynomial). Similarly, the coefficient vector P can be written

as a combination of vectors P'' and p'2 that represent the true coefficients associated with the

basis function vectors f '(x) and f (2x), respectively. Precisely,


qx) =c f *(x: = "' f- ~ ~ -(f cz](x)~()" p. + f il' *()" p (3.1)

Assuming normally distributed noise E with zero mean and variance G2 (N(0, a2)), the

observed response y(x) at a design point x is given as

y(x) = r(x)+ E. (3.2)

The predicted response f(x) at a design point x is given as a linear combination of

approximating basis functions vector f"')(x) with corresponding estimated coefficient vector b :

J(x) = (f '(x))T b. (3.3)

The estimated coefficient vector b is evaluated using the data (y) for Ns design points as

(Myers and Montgomery, 1995, Chapter 2):

b = (X() X(1 )-1 X() y, (3.4)

where X'' is the Gramian matrix constructed using f '(x) (refer to Appendix A).

The error at a general design point x is the difference between the true response and the

predicted response, e(x) = r(x)- f(x) When noise is dominant, estimated standard error ees (x) ,

used to appraise error, is given as (Myers and Montogomery, 1995)


ees(x)Ji~i= j\ar[f~)] of(x)(X X ) f (x) (3.5)

where 0,2 is the estimated variance of the noise, and as is the standard error in approximation.









When bias error is dominant, the root mean square of bias error e("'"(x) at design point x

can be obtained as (Goel et al. 2006c, and Appendix A)

e"(;;sa)= E ()) f x)f '"x) E(pp f x)A' "(x (3.6)

where Ep(g(x)) is the expected value of g(x) with respect to a, and A is the alias matrix

A = (X ')X ')1 X!'!TX '. However, the true model may not be known, and Equation (3.6) is

only satisfied approximately for the assumed true model.

Prior to generation of data, all components of the coefficient vector p (2) are assumed to


have a uniform distribution between -7 and 7 (7 is a constant) such that E,; (b" ) = Y I,

where l is an NP2 x NP2 identity matrix. Substituting this in Equation (3.6), the pointwise RMS

bias error (Goel et al. 2006c, and Appendix A) is


e(""(x) () ff11 wr'" (x)A 7~I f '(x) -' A'f'(x)]
(3.7)
= df (x)-f (x)A f (x)-A'f "f(x)

Since 7 has a uniform scaling effect, prior to generation of data it can be taken as unity for

the sake of simplicity. It is clear from Equation (3.7) that the RMS bias error at a design point x

can be obtained from the location of data points (defines alias matrix A) used to fit the response

surface, the form (f"') and f 2) of the assumed tr-ue function (which is a higher order polynomial

than the approximating polynomial), and the constant 7. Goel et al. (2006c) demonstrated with

examples that this error prediction model gives good estimates of actual error fields both when

the true function is polynomial and when it is non-polynomial. Two representative examples of a









polynomial true function and a non-polynomial true function trigonometricc function with

multiple frequencies) are presented in Appendix B, respectively.

Many criteria used to construct/compare different EDs are presented in the literature. A

few commonly used error metrics as well as new bias error metrics are listed below.

*Maximum estimated standard error in design space (Equation (3.5) with "a = 1)

(eema ) max e, (x)). (3.8)

*Space-averaged estimated standard error (Equation (3.5) with "a = 1)

(e le s(x)dx. (3.9)
es avg vol (V) ,

*Maximum absolute bias error bound (Papila et al., 2005) in design space

(eI )max = mx e (x)), (3.10)

where c(2) = 1 and


(e)a ,2)X-A/ x 2 (3.11)


*Maximum RMS bias error in design space (Equation (3.7) with y = 1)

(ebr~ms) = max (erm cx) (3.12)

*Space-averaged RMS bias error (Equation (3.7) with y = 1)

(e re) er:" (x)dx. (3.13)


This criterion is the same as space-averaged bias error.

Among all the above criteria, the standard error based criteria are the most commonly used.

For all test problems, a design space coded as an N,,-dimensional cube V = [-1,1]"' is used and









bias errors are computed following the common assumption that the true function and the

response surface model are cubic and quadratic polynomials, respectively.

Besides the above error-metric based criteria, the following criteria are also frequently

used.

* D-efficiency (Myers and Montgomery, 1995, pp. 393)


D, = max|m M) /a; |M = XmrXN sy (3.14)


Here, max |M|I in Equation (3.14) is taken as the maximum of all experimental designs.

This criterion is primarily used to construct D-optimal designs. A high value of D-

efficiency is desired to minimize the variance of the estimated coefficients b.

* Radius of the largest unoccupied sphere (rmax)

We approximate the radius of the largest sphere that can be placed in the design space

such that there are no experimental design points inside this sphere. A large value of rmax

indicates large holes in the design space and hence a potentially poor experimental

design. This criterion is not used to construct experimental designs, but this allows us to

measure the space-filling capability of any experimental design.

Test Problems and Results

This section is divided into two parts. In the first subsection, we compare widely used

experimental designs, like LHS designs, D-optimal designs, central composite designs and their

minimum bias error counterparts. We show that different designs offer tradeoffs among multiple

criteria and experimental designs based on a single error criterion may be susceptible to high

errors on other criteria. In the second subsection, we discuss a few possible strategies to

simultaneously accommodate multiple criteria. Specifically, we present two strategies, (1)









combination of a geometry-based criterion (LHS) with a model-based criterion (D-optimality),

and (2) simultaneous use of multiple experimental designs combined with pointwise error

estimates as a filtering criterion to seek protection against poor designs.

Comparison of Different Experimental Designs

Space filling characteristics of D-optimal and LHS designs

Popular experimental designs, like LHS designs that cater to bias errors by evenly

distributing points in design space or numerically obtained D-optimal designs that reduce the

effect of noise by placing the design points as far apart as possible, can occasionally leave large

holes in the design space due to the random nature of the design (D-optimal) or due to

convergence to local optimized LHS designs. This may lead to poor approximation. Firstly, we

demonstrate that for D-optimal and optimized LHS designs, a large portion of design space may

be left unsampled even for moderate dimensional spaces. For demonstration, we consider two- to

four-dimensional spaces V' = [-1, 1]1.. The number of points in each experimental design is

twice the number of coefficients in the corresponding quadratic polynomial, that is, 12 points for

two dimensions, 20 points for three dimensions, and 30 points for four-dimensional design

spaces. We also create experimental designs with 40 points in four-dimensional design space.

We generate 100 designs in each group to alleviate the effect of randomness.

D-optimal designs were obtained using the MATLAB routine 'candexch' such that

duplicate points are not allowed (duplicate points are not useful to approximate data from

deterministic functions or numerical simulations). We supplied a grid of points (grid includes

corner/face points and points sampled at a grid spacing randomly selected between 0.15 and 0.30

units) and allocated a maximum of 50000 iterations to find a D-optimal design. LHS designs









were constructed using the MATLAB routine 'lhsdesign' with the 'maximin' criterion that

maximizes the minimum distance between points. We allocated 40 iterations for optimization.

For each experimental design, we estimated the radius (rmax of the largest unsampled

sphere that fits inside the design space and summarized the results with the help ofboxplots in

Figure 3-1. The box encompasses the 25th to 75th percentiles and the horizontal line within the

box shows the median value. The notches near the median represent the 95% confidence interval

of the median value. It is obvious from Figure 3-1 that rmax increases with the dimensionality of

the problem, i.e., the distribution of points in high dimensional spaces tends to be sparse. As

expected, an increase in the density of points reduced rmax (compare four-dimensional space with

30- and 40- points). The reduction in rmax was more pronounced for D-optimal designs than LHS

designs. LHS designs had a less sparse distribution compared to D-optimal designs, however, the

median rmax of approximately 0.75 units in four-dimensional space for LHS designs indicated

that a very large region in the design space remained unsampled and data points are quite far

from each other.

The sparse distribution of points in the design space is illustrated with the help of a three-

dimensional example with 20 points in Figure 3-2, where the largest unsampled sphere is shown.

For both D-optimal and LHS designs, the large size of the sphere clearly demonstrates the

presence of large gaps in the design space that makes the surrogate predictions susceptible to

errors. This problem is expected to become more severe in high dimensional spaces. The results

indicate that a single criterion (D-optimality for D-optimal designs, and max-min distance for

LHS designs) based experimental design may lead to poor performance on other criteria.

Tradeoffs among various experimental designs

Next, we illustrate tradeoffs among different experimental designs by comparing min-max

RMS bias design (refer to Appendix B), face-centered cubic design (FCCD), D-optimal design










(obtained using JMP, Table 3-1), and LHS design (generated using MATLAB routine

'lhsdesign' with 'maximin' criterion, and allocating 1000 iterations to get a design, Table 3-2) in

four-dimensional space. Note that all experimental designs, except FCCD, optimize a single

criterion, i.e., D-optimal designs optimize D-efficiency, LHS designs maximize the minimum

distance between points, and min-max RMS bias designs minimize the influence of bias errors.

On the other hand, FCCD is an intuitive design obtained by placing the points on the faces and

vertices.

The designs were tested using a uniform 114 grid in the design space V = [-1,1I]4 and

different metrics are documented in Table 3-3. We observed that no single design (used in the

generic sense, meaning a class of designs) outperformed other designs on 'all' criteria. The D-

optimal and the face-centered cubic design had high D-efficiency; the min-max RMS bias design

and the LHS design had low D-efficiency. The min-max RMS bias design performed well on

bias error based criteria but caused a significant deterioration in standard error based criteria, due

to the peculiar placement of axial points near the center. While the D-optimal design was a good

experimental design according to standard error based criteria, large holes in the design space

(rmax = 1) led to poor performance on bias error based criteria. Since LHS designs neglect

boundaries, they resulted in very high maximum standard error and bias errors. However, LHS

designs yielded the least space-averaged bias error estimate. The FCCD design, which does not

optimize any criterion, performed reasonably on all the metrics. However, we note that FCCD

designs in high dimensional spaces are not practical due to the high ratio of the number of

simulations to the number of polynomial coefficients.

We used polynomial examples to illustrate the risks in using experimental designs

constructed with a single criterion. Firstly, we considered a simple quadratic function F,(x)









(Equation (3.15) in four-dimensional space) with normally distributed noise e (zero mean and

unit variance),

F, (x) = r(x) + E,
77(x) = 10(1+ xl + x3 (, X2~(.5

We construct a quadratic polynomial response surface approximation using the data at 25

points sampled with different experimental designs (min-max RMS bias design; FCCD; D-

optimal, Table 3-1; and LHS, Table 3-2) and compute actual absolute error in approximation at a

uniform grid of 1 14 pOints in the design space V = [-1,1I]4 The accuracy measures based on the


data(, thatL are most3 commonI~lly used, are theC adjuse 3coefcin ofCIC~CI deeriato RCCIIIIII 'andth

standard error normalized by the range of the function (RMSE) in Table 3-4. Very high values of

normalized maximum, root mean square, and average absolute errors (normalized by the range

of the data for the respective experimental design) in Table 3-4 indicate that the min-max RMS

bias design (also referred to as RMS bias CCD) is indeed a poor choice of experimental design

whenI theC error is due to nisel, thoVughl all approximlation ac~curac~y melasures (Rd sadr

error) suggested otherwise. That is, the high errors come with no warning from the fitting

process! High values of the ratio of space-averaged, root mean square, or maximum actual errors

to sandrd rrorindcat th risks,+ associated,,,, wi,:th relying, on measures such as R determine


the accuracy of approximations (We pursue this issue in detail in a Chapter 5). Among other

experimental designs, LHS design had a high normalized maximum error near the corners, where

no data is sampled. FCCD and D-optimal designs performed reasonably, with FCCD being the

best design.

Secondly, we illustrate errors due to large holes in design space observed in the previous

section. A simple function that is likely to produce large errors would be a cosine with maximum









at the center of the sphere. However, to ensure a reasonable approximation of the true function

with polynomials, we used a truncated Maclaurin series expansion of a translated radial cosine

function cos(k Ix -xfhs l),namely


F(x)=20j 1r pr l; r=k xx d,l (3.16)


where Xed is a fixed point in design space, and k is a constant. We considered two instances of

Equation (3.16) by specifying the center of the largest unoccupied sphere associated with LHS

design (Table 3-2) and D-optimal design (Table 3-1) as the fixed point.

F2 X)= 20 1~-r r %);r = k2X-X hsl ,X hs = [-0. 168, -0. 168, -0. 141, 0. 167], (.7


F3~X) = 20 1- %lir r ) k3X-Xot, X optx = [0.0, 0.0, 0.0, 0.0], (.8

The constants k2 and k3 WeTO Obtained by maximizing the normalized maximum actual

absolute error in approximation of F2 X) USing LHS experimental design and approximation of

F3 X) USing D-optimal experimental design, respectively, subj ect to a reasonable approximation

(determined by the condition, R~ > 0.90) of the two functions by all considered experimental

designs (FCCD, D-optimal, LHS, and RMS bias CCD). As earlier, the errors were normalized by

dividing the actual absolute errors by the range of data values used to construct the experimental

design. Subsequently, the optimal value of the constants were k2 = 1.13 and k3 = 1.18 .

We used quadratic polynomials to approximate F(x) and errors are evaluated at a uniform

grid of 114 pOints in the design space V = [-1,1]4 The quality of fit, maximum, root mean

square, and average actual absolute errors in approximation for each experimental design are

summarized in Table 3-4. We observed that despite a good quality of fit (high R~ and low









normalized standard error), the normalized maximum actual absolute errors were high for all

experimental designs. In particular, the approximations constructed using data sampled at the D-

optimal and the LHS designs performed very poorly. This means that the accuracy metrics,

though widely trusted, can mislead the actual performance of the experimental design. The high

maximum error in approximations using the LHS designs occurred at one of the corners that was

not sampled (thus extrapolation error), however, we note that LHS designs yielded the smallest

normalized space-averaged and root mean square error in approximation. On the other hand, the

maximum error in approximations using D-optimal designs appeared at a test point closest to the

center x, in the case of F,(x), and near x ig in the case of F,(x). Besides, high normalized

space-averaged errors indicated poor approximation of the true function F(x). The other two

experimental designs, FCCD and RMS bias CCD, performed reasonably on maximal errors. The

relatively poor performance of RMS bias CCD for average and RMS errors is explained by

recalling that the experimental design was constructed by assuming the true function to be a

cubic polynomial, whereas F(x) was a quartic polynomial.

An important characteristic for all experimental designs is the ratio of space-averaged, root

mean square, or maximum actual absolute error to estimated standard error. When this ratio is

large, the errors are unexpected and therefore, potentially damaging. The FCCD design provided

a reasonable value of the ratio of actual to estimated standard errors, however, RMS bias design

performed very poorly as the actual errors were much higher than the standard estimated error.

This means that the estimated standard error is misleading about the actual magnitude of error

that cannot be detected in an engineering example where we do not have the luxury of using a

large number of data points to test the accuracy of approximation. Similarly, for all functions, the

ratio of maximum actual absolute error to standard error for LHS designs (29-52) was much









higher than for D-optimal designs (about 9). The surprise element is also evident by the excellent

values of R~ of 0.99 and 1.00 compared to 0.90 for the D-optimnal design.

The results presented here clearly suggest that different experimental designs were non-

dominated with respect to each other and offered multiple (sometimes extreme) tradeoffs, and

that it might be dangerous to use a single criterion based experimental design without thorough

knowledge of the problem (which is rarely the case in practice).

Extreme example of risks in single criterion based design: Min-max RMS bias CCD

A more pathological case, demonstrating the risks in developing experimental designs

using a single criterion, was encountered for moderate dimensional cases while developing the

central composite design counterpart of the minimum bias design, i.e., minimizing the maximum

RMS bias error. The performance of the min-max RMS bias designs constructed using two

parameters (refer to Appendix B) for two- to five-dimensional spaces on different metrics is

given in Table 3-5. For two- and three-dimensional spaces, the axial points (given by at) were

located at the face and the vertex points (given by al) were placed slightly inwards to minimize

the maximum RMS bias errors. The RMS bias designs performed very reasonably on all the

error metrics. A surprising result was obtained for optimal designs for four- and five-dimensional

spaces: while the parameter corresponding to vertex points (al) was at its upper limit (1.0), the

parameter corresponding to the location of axial points (az) hit the corresponding lower bound

(0.1). This meant that to minimize maximum RMS bias error, the points should be placed near

the center. The estimated standard error was expectedly very high for this design. Contrasting

face-centered cubic design for four-dimensional cases with three- and four-dimensional min-max

RMS bias designs (Table 3-5) isolated the effect of dimensionality and the change in

experimental design (location of axial points) on different error metrics. The increase in bias









errors (bounds and RMS error) was attributed to increase in dimensionality (small variation in

bias errors with different experimental designs in four-dimensional design space), and the

increase in standard error for min-max RMS bias design was the outcome of the change in

experimental design (the location of axial points given by a2). This unexpected result for four-

and higher dimensional cases is supported by theoretical reasoning (Appendix B), and very

strong agreement between the predicted and the actual RMS bias errors for the min-max RMS

bias design and the face-centered central composite design (Appendix B).

To further illustrate the severity of the risks in using a single criterion, we show the

tradeoffs among the maximum errors (RMS bias error, estimated standard error, and bias error

bound) for a four-dimnensional design space [-1,I]4, Obtained by varying the location of the axial

points (a2) fTOm near the center (a2=0.1i, min-max RMS bias design) to the face of the design

space (a2=1.0, central composite design), while keeping the vertex locations (al=1.0) fixed. The

tradeoff between maximum RMS bias error and maximum estimated standard error is shown in

Figure 3-3(A), and the tradeoff between maximum RMS bias error and maximum bias error

bound is shown in Figure 3-3(B). Moving the axial point away from the center reduced the

maximum bias error bound and the maximum estimated standard error but increased the

maximum RMS bias error. The relatively small variation in maximum RMS bias error compared

to the variation in maximum estimated standard error and maximum bias error bound

demonstrated the low sensitivity of maximum RMS bias error with respect to the location of

axial points (a2), and explains the success of the popular central composite designs (a2=1.0) in

handling problems with bias errors. While we noted that each design on the curves in Figure 3-3

corresponds to a non-dominated (tradeoff) point, a small increase in maximum RMS bias error

permits a large reduction in maximum estimated standard error, or in other words, the









minimization with respect to a single criterion (here maximum RMS bias error) may lead to

small gains at a cost of significant loss with respect to other criteria. Tradeoff between maximum

bias error bound and maximum RMS bias error also reflected similar results, though the

gradients were relatively small.

The most important implication of the results presented in this section is that it may not be

wise to select experimental designs based on a single criterion. Instead, tradeoff between

different metrics should be explored to find a reasonable experimental design. While detailed

exploration of this issue requires significantly more research, our initial attempts to

simultaneously accommodate multiple criteria are illustrated next.

Strategies to Address Multiple Criteria for Experimental Designs

As discussed in the previous section, the experimental designs optimized using a single

criterion may perform poorly on other criteria. While a bad experimental design can be identified

by visual inspection in low dimensional spaces, we need additional measures to filter out bad

designs in high dimensional spaces (Goel et al., 2006e). We explored several strategies to

simultaneously accommodate multiple criteria in an attempt to avoid poor experimental designs.

In this context, we discuss two issues:

* Which criteria are meaningful for different experimental designs? and

* How can we combine different criteria?

Since the experimental designs are constructed to minimize the influence of bias error and

noise, a sensible choice of suitable criteria for any experimental design should seek balance

among the two sources of errors, i.e., bias and noise. Consequently, if we select an experimental

design that primarily caters to one source of error, for example, noise, the secondary criterion

should be introduced to address the other source of error, bias error in this case, and vice-versa.

We elaborate on this idea in a following subsection.









Once we have identified criteria to construct experimental designs, we seek ways to

combine different criteria. Taking inspiration from multi-obj ective optimization problems, we

can accommodate multiple criteria according to several methods, for example,

* Optimize the experimental design to minimize a composite function that represents the
weighted sum of criteria,

* Optimize the experimental design to minimize the primary criterion while satisfying
constraints on the secondary criteria, and

* Solve a multi-obj ective optimization problem to identify different tradeoffs and then select
a design that suits the requirements the most.

Here, we show two ways to avoid poor experimental designs using a four-dimensional

example. Firstly, we present a method to combine the model-based D-optimality criterion that

caters to noise with the geometry-based LHS criterion that distributes points evenly in design

space and reduces space-averaged bias errors. Secondly, we demonstrate that selecting one out of

several experimental designs according to an appropriate pointwise error-based criterion reduces

the risk of obtaining poor experimental designs. Further, we show that the coupling of multiple

criteria and multiple experimental designs may be effective to avoid poor designs.

Combination of model-based D-optimality criterion with geometry based LHS criterion

We used an example of constructing an experimental design for a four-dimensional

problem with 30 points (response surface model and assumed true model were quadratic and

cubic polynomials, respectively). Three sets of experimental designs were generated as follows.

The first set comprised 100 LHS experimental designs generated using the MATLAB routine

'lhsdesign' with the 'maximin' criterion (a maximum of 40 iterations were assigned to find an

experimental design). The second set comprised 100 D-optimal experimental designs generated

using the MATLAB routine 'candexch' with a maximum of 40 iterations for optimization. A

grid of points, (with grid spacing randomly selected between 0. 15 and 0.30) including face and









corner points, was used to search for the D-optimal experimental designs. The third set of

(combination) experimental designs was obtained by combining D-optimal (model-based

criterion) and LHS designs (geometry-based criterion). We selected 30 design points from a 650

point LHS design ('lhsdesign' with 'maximin' criterion and a maximum of 100 'iterations' for

optimization) using the D-optimality criterion ('candexch' with a maximum of 50000 iterations

for optimization). For each design, we computed the radius rmax of the largest unsampled sphere,

D-efficiency, maximum and space-averaged RMS bias and estimated standard error using a

uniform 114" grid in the design space [-1,I]4

We show the tradeoff among different criteria for D-optimal, LHS, and combination

designs in Figure 3-4. As can be seen from Figure 3-4(A), the D-optimal designs were the best

and LHS designs were the worst with respect to the maximum estimated standard and RMS bias

error. Compared to the LHS designs, the combination designs significantly reduced the

maximum estimated standard error with marginal improvement on the maximum RMS bias error

criterion (Figure 3-4(A)), and improved D-efficiency without sacrificing rmax (Figure 3-4(D)).

The advantages of using combination designs were more obvious in Figure 3-4(B), where we

compared space-averaged bias and estimated standard errors. We see that D-optimal designs

performed well on space-averaged estimated standard errors but yielded high space-averaged

RMS bias errors. On the other hand, the LHS designs had low space-averaged RMS bias errors

but high space-averaged estimated standard errors. The combination designs simultaneously

yielded low space-averaged RMS bias and estimated standard errors. This result was expected

because the Latin hypercube sampling criterion allows relatively uniform distribution of the


2 The average number of points in the uniform grid used to generate D-optimal designs was 1300. So to provide a
fair comparison while keeping the computational cost low, we obtain 650 points using LHS and use this set of points
to develop combination experimental designs.










points by constraining the location of points that are used to generate combination designs using

the D-optimality criterion. Similarly, we observed that unlike D-optimal designs, combination

experimental designs performed very well on the space-averaged RMS bias error and the r;;2a

criterion (refer to Figure 3-4(C)), and the performance was comparable to that of the LHS

designs.

Mean and coefficient of variation (COV) of different metrics for the three sets of

experimental designs are tabulated in Table 3-6. D-optimal designs outperformed LHS designs in

terms of the ratio of maximum to average error (stability), D-efficiency, maximum RMS bias

error, and maximum estimated standard error. Also, for most metrics, the variation in results due

to sampling (COV) was the least among the three. As seen before, LHS designs performed the

best on space-averaged RMS bias errors. The designs obtained by combining two criteria (D-

optimality and LHS), were substantially closer to the best of the two except for (ens")m Thus,

they reduced the risk of large errors. Furthermore, the variation with samples (COV) is also

reduced. The results suggested that though different experimental designs were non-dominated

(tradeoffs) with respect to each other, simultaneously considering multiple criteria by combining

the model-based D-optimality criterion and the geometry-based LHS criterion may be effective

in producing more robust experimental designs with a reasonable tradeoff between bias errors

and noise.

Multiple experimental designs combined with pointwise error-based filtering

Next, we demonstrate the potential of using multiple experimental designs to reduce the

risk of finding poor experimental designs. The main motivation is that the cost of generating

experimental designs is not high so we can construct two or three experimental designs using

LHS, or D-optimality, or a combination of the two criteria, and pick the best according to an










appropriate criterion. To illustrate the improvements by using three EDs over a single ED, each

of the two criteria-maximum RMS bias error and maximum estimated standard error-were used

to select the best (least error) of the three EDs. For illustration, 100 such experiments were

conducted with LHS designs, D-optimal designs, and the combination of LHS and D-optimal

designs (as described above).

Actual magnitudes of maximum RMS bias error and maximum estimated standard error

for all 300 designs and the 100 designs obtained after filtering using min-max RMS bias or

maximum estimated standard error criteria are plotted in Figure 3-5 for three sets of (100)

experimental designs. As is evident by the shortening of the upper tail and the size of the

boxplots in Figure 3-5, both the min-max RMS bias and maximum estimated standard error

criteria helped eliminate poor experimental designs for all three sets. Filtering had more impact

on the maximum error estimates than the space-averaged error estimates. The numerical

quantifieation of improvements in actual magnitude of maximum and space-averaged error based

on 100 experiments is summarized in Table 3-7. We observed that the pointwise error-based

(min-max RMS bias or estimated standard error) Eiltering significantly reduced the mean and

COV of maximum errors. We also noted improvements in the individual experimental designs

using multiple criteria. LHS designs were most significantly improved by picking the best of

three based on estimated maximum standard error. D-optimal designs combined with the min-

max RMS bias error based filtering criterion helped eliminate poor designs according to the

RMS bias error criterion. It can be concluded from this exercise that potentially poor designs can

be filtered out by considering a small number of experimental designs with an appropriate (min-

max RMS bias or maximum estimated standard) error criterion. The filtering criterion should be

complimentary to the criterion used for construction of the experimental design, i.e., if a group of









EDs are constructed using a variance based criterion, then the selection of an ED from the group

should be based on bias error criterion, and vice-versa.

Results presented in this section indicate that use of multiple criteria (LHS and D-

optimality) and multiple EDs help reduce maximum and space-averaged bias and estimated

standard errors. Implementing the above findings, we can obtain experimental designs with

reasonable tradeoff between bias error and noise in three steps as follows:

* Generate a large number of LHS experimental design points,

* Select a D-optimal subset within the LHS design (combine model-based and geometry-
based criteria),

* Repeat first two steps three times and select the design that is the best according to one of
the min-max RMS bias or maximum estimated standard error criteria (filtering using
pointwise error-based criterion).

Concluding Remarks

In this chapter, we demonstrated the risks of using a single criterion to construct

experimental designs. We showed that constructing experimental designs by combining multiple

(model, geometry, and error based) criteria and/or using multiple experimental designs reduces

the risk of using a poor experimental design.

For four-dimensional space, comparison of computed LHS and D-optimal designs, that

involve random components and may yield poor approximation due to random components or

convergence to local optima, revealed that the D-optimal designs were better for maximum

errors, and LHS designs were better for space-averaged RMS bias errors. Both designs were

susceptible to leaving large spheres in design space unsampled. A comparison of popular

experimental designs (face-centered cubic design, min-max RMS bias design, D-optimal design,

and LHS design) revealed the non-dominated (tradeoff among different criteria) nature of

different designs. The min-max RMS bias design, obtained by placing the axial points close to









the center, performed the best in reducing maximum RMS bias error, but was the worst design

for estimated standard error metrics and D-efficiency. LHS design gave the best performance in

terms of space-averaged bias errors. However, face-centered cubic design that is an intuitive

design yielded a reasonable tradeoff between bias error and noise reduction on all metrics. The

same conclusions were supported by approximation of three example polynomials that

highlighted the susceptibility of different experimental designs to the nature of the problem,

despite the fact that the accuracy metrics suggested a very good fit for each example. We

concluded that different experimental designs, constructed using one error criterion, do not

perform the best on all criteria. Instead, they offer tradeoffs.

In moderate dimensional spaces these single criterion-based designs can often lead to

extreme tradeoffs, particularly by using the maximum RMS bias error measure as a design

criterion, such that small gains in the desired criterion are achieved at the cost of significant

deterioration of performance in other criteria. A tradeoff study, conducted to study the variation

of different error metrics with the location of axial points in central-composite designs,

illustrated the perils of using a single criterion to construct experimental designs and emphasized

the need to consider multiple criteria to tradeoff bias error and noise reduction.

To address the risk of using a poor experimental design by considering a single criterion,

we explored a few strategies to accommodate multiple criteria. We demonstrated that the

experimental design obtained by combining two criteria, the D-optimality criterion with LHS

design, offered a reasonable tradeoff between space-averaged RMS bias and estimated standard

error, and space-fi11ing criteria. Specifically, combination designs significantly improved the

poor experimental designs. We showed that the risk of getting a poor experimental design could

be further reduced by choosing one out of three experimental designs using a pointwise error-









based criterion, e.g., min-max RMS bias or maximum estimated standard error criterion. The

combination of D-optimal designs and min-max RMS bias error was particularly helpful in

reducing bias errors. Finally, we adopted selection of experimental designs by combining the D-

optimality criterion with LHS design and selecting one out of three such combination designs to

cater to both bias error and noise reduction. However, since these results are based on a limited

number of examples, we note the need of future research to address the issues related to

accommodating multiple criteria while constructing experimental designs.


















A 2D1 02 D30 4-02-2 3-2 D3 D4
Fiue -. oplt (aedo 10deins frais rna)ofte agetuncupe shr
inside~~~~~~ th einsae[1 ]'(hr ,i tenme fvrals.xai hw
th dmesinliy f h dsin pcean cresonin umerofp int inth
expeimetaldesgn.Smalerr~n i desredto voi lage nsaple re ion. D-...
opia dein are selected- usn MALA rotn 'cnech w peiyardo
pint wihgi pcn ewe .5ad030 n aiu f500ieain



Figure for Boxptimizatidon) LHS designs are gnratd usin MATLAB routine 'lgs uocpshsesign


ponswith gi pcn ewe .1 n .0 n a maximum of 4000 iterations frotmzto.A -pia ein.B H

designs.


A -1-1 -1 B
Figure 3-2. Illustration of the largest spherical empty space inside the 3D design space [-1, 1]3
(20 points). A) D-optimal designs. B) LHS designs.












1.186, , ,


a


Axial point~
'nearceiffer~


1.175

1.175




1.16

1.155


0 10 20 30 40 50 60 70 80 2 4( 86 68 7 7.2
A max(e ~ ~maxceb. B

Figure 3-3. Tradeoffs between different error metrics. A) Maximum estimated standard error
(es)max and maximum RMS bias error (e nn)max B) Maximum bias error bound

(eb)ma and maximum RMS bias error (e nn)max for four-dimensional space. 25 points
were used to construct central composite experimental designs with vertex location
fixed at al= 1.0.


Axial point on
thei~e au2








21














0.95

0.9





0.75


0.65



0.5
0.5


* ------------------












o


* D-optimal
* LHS
SCormbination









**


=




On


0 1 2 34
max(ee,(x))


5 6


0.6 0.7 0.8 0.9 1 1.1
avg(e (x))


1.2 1.3
B


4


* *


**

* q* ** *
** I.


*D-optimnal
*LHS
u combination


0O


0.7 D-optimal
LHS
165 Combination





0965 0.75 0 8 0.85 0 9 0.9 1


0 a 0 5 0 6 0; ne 0 80
D-efliciency


Figure 3-4. Comparison of 100 D-optimal, LHS, and combination (D-optimality + LHS)
experimental designs in four-dimensional space (30 points) using different metrics.

rmax: radius of the largest unsampled sphere, e(""(x): RMS bias error, e,s(x): estimated

standard error, (.)max: maximum of the quantity inside parenthesis, (.)avg: Spatial
average of the quantity inside parenthesis. All metrics, except D-efficiency (Dyf), are

desired to be low. A) (ees)max vs. (e ""s)max. B) (ees)civg VS. (e tas)civg C) rmax VS. (e tas)civg

D) D~f vs. rmax


r D-optimal
LHS
o Combins~ion


0.95

0 9


0.851 "

0.8







62 0 3













7

6.5


5.5


4.5



3.

2.5-
mBE mSE
A AIILHS designs

2.2t ~


1.1


1










aBE aSE
AII LHS designs

1.1C


aBE aSE
F.srering using SE

1 1t 1


5I

mBE mSE
Filtering using SE


5
mBE mSE
Filtering using BE


I I
aBE aSE
F.Isrerig ursing BE

1.11 1


2.2 ~ 2.2C


0g~ 8


0 7C


1.4) .


1.41


0.7)


01.8


0.8)


0.8)


0 o5~
aBE aSE aBE aSE
Filtering using BE Filtering using SE


0.6t ,
mBE mSE


0.6~ 0.6C 05~ _
mBE mSE mBE mSE aBE aSE


B AII Deoptimal designs Filtering using BE


Filtering using SE AII D-optimnal destgns


mBE mSE
Al P1D-optimal designs


mBE mSE
Filtering using BE


mBE mSE
Filtering using SE


aBE aSE aBE aSE
All D-optimaldesigns Filtering using BE


aBE aSE
Filtering using SE


Figure 3-5. Simultaneous use of multiple experimental designs concept, where one out of three
experimental designs is selected using appropriate criterion (filtering). Boxplots show
maximum and space-averaged RMS bias, and standard estimated errors in four-
dimensional design space; considering all 300 designs, 100 designs filtered using
min-max RMS bias error as a criterion and 100 designs filtered using min max
estimated standard error as a criterion (mBE denotes maximum RMS bias error, mSE