
Facilitating Bayesian Identification of Elastic Constants through Dimensionality Reduction and Response Surface Methodology

Permanent Link: http://ufdc.ufl.edu/UFE0025155/00001

Material Information

Title: Facilitating Bayesian Identification of Elastic Constants through Dimensionality Reduction and Response Surface Methodology
Physical Description: 1 online resource (233 p.)
Language: english
Creator: Gogu, Christian
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2009

Subjects

Subjects / Keywords: Mechanical and Aerospace Engineering -- Dissertations, Academic -- UF
Genre: Aerospace Engineering thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: The Bayesian method is a powerful approach to identification since it makes it possible to account for the uncertainties present in the problem and to estimate the uncertainties in the identified properties. Due to computational cost, previous applications of the Bayesian approach to material properties identification required simplistic uncertainty models and/or made only partial use of the Bayesian capabilities. Using response surface methodology can alleviate the computational cost and allow full use of the Bayesian approach. This is a challenge, however, because both response surface approximations (RSA) and the Bayesian approach become inefficient in high dimensions. Therefore we make extensive use of dimensionality reduction methods, including nondimensionalization, global sensitivity analysis and proper orthogonal decomposition. We apply the proposed Bayesian approach to several problems. First we analyze a three bar truss problem, for which we identify three general situations where the Bayesian approach presents an advantage in terms of accuracy. We then move on to identification from natural frequencies of vibrating composite plates; the use of a nondimensional response surface approximation allows the application of the Bayesian approach at reasonable computational cost. Finally we consider the problem of identification of orthotropic elastic constants from full field measurements on a plate with a hole, where we need to make use of the proper orthogonal decomposition technique in order to reduce the dimensionality of the fields.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Christian Gogu.
Thesis: Thesis (Ph.D.)--University of Florida, 2009.
Local: Adviser: Haftka, Raphael T.
Local: Co-adviser: Sankar, Bhavani V.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2009
System ID: UFE0025155:00001





Full Text

FACILITATING BAYESIAN IDENTIFICATION OF ELASTIC CONSTANTS THROUGH DIMENSIONALITY REDUCTION AND RESPONSE SURFACE METHODOLOGY

By

CHRISTIAN GOGU

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2009

© 2009 Christian Gogu

To my parents, Ileana and Grigore

ACKNOWLEDGMENTS

First, I would like to thank Prof. Raphael Haftka and Dr. Jérôme Molimard, my main advisors during this period, for their time, availability and excellent guidance. I remain in awe of their never-ending enthusiasm and feel very fortunate to have been advised by them. I am also very grateful to my three other advisors, Dr. Rodolphe Le Riche, Prof. Alain Vautrin and Prof. Bhavani Sankar, for their advice, patience and very generous support.

I would like to especially thank Prof. Peter Ifju and his student Weiqi Yin for the collaboration on the experimental part of this work, a collaboration thanks to which I was able to have a real test case in a relatively short amount of time.

I would also like to thank my PhD committee members, Dr. Nam-Ho Kim, Prof. Christian Bes and Dr. Lawrence Winner, for their willingness to serve on my committee, for evaluating my dissertation, and for offering constructive criticism to help me improve this work.

Financial support by the NASA Constellation University Institute Program (CUIP) and the École Nationale Supérieure des Mines de Saint-Étienne grant is gratefully acknowledged.

Many thanks go also to Dr. Satish Bapanapalli for the collaboration on the ITPS at the beginning of my PhD, to Dr. Tushar Goel for his assistance with surrogates and sensitivity analysis, and to Dr. Gustavo Silva for his support with the open hole plate model.

I duly thank all my labmates and colleagues, both in Saint-Étienne and in Gainesville, for the interaction we had during all this period. Whether research or administration related, or just as good friends, you were there for me and helped me out when needed, often putting a smile back on my face.

Last but not least, I want to thank my parents, who have always supported me in everything I have done. Your love and understanding has never stopped being there for me. To you I dedicate this work.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT
ABSTRACT (FRENCH)
EXTENDED SUMMARY (FRENCH)

CHAPTER

1 INTRODUCTION: HOW DIMENSIONALITY REDUCTION AND RESPONSE SURFACE METHODOLOGY CAN BENEFIT BAYESIAN IDENTIFICATION
   Motivation and Scope
   Application Problems and Outline

2 DIMENSIONALITY REDUCTION FOR RESPONSE SURFACE APPROXIMATIONS APPLIED TO THERMAL DESIGN
   Introduction
   Methodology for Reducing the Number of Variables in an RSA
   ITPS Atmospheric Reentry Application Problem
   Finite Element Model of the Thermal Problem
   Minimum Number of Parameters for the Temperature Response Surface
   Simplifying Assumptions for the Thermal Problem
   Nondimensionalizing the Thermal Problem
   Maximum BFS Temperature RSA
   RSA Construction Computational Cost
   Applying the RSA for Comparison of Materials for the ITPS
   Summary

3 LIMITS OF BASIC LEAST SQUARES IDENTIFICATION: AN INTRODUCTION TO THE BAYESIAN APPROACH APPLIED TO ELASTIC CONSTANTS IDENTIFICATION
   Introduction
   Least Squares and Bayesian Methodologies to Parameter Identification
   Least Squares Formulations
   Bayes' Theorem
   The Bayesian Identification
   Progressing from Least Squares to Bayesian Identification
   A Three Bar Truss Didactic Example
   Description of the Three Bar Truss Example
   Sources of uncertainty
   The Least Squares Method
   The Bayesian Method
   Illustrative Example
   Least Squares and Bayesian Comparison for the Three Bar Truss Problem
   The Comparison Method
   Results for Different-Sensitivity Strains
   Least Squares Implicit Weighting According to Response Sensitivity
   Results for Different Uncertainty in the Strains
   Results for Correlation among the Responses
   Results for All Three Effects Together
   Average Performance of the Least Squares and Bayesian Approach
   Vibration Identification Problem
   Description of the Problem
   Sources of uncertainty
   The Identification Methods
   Results
   Importance of Handling Multiple Uncertainty Sources
   Summary

4 CHOOSING AN APPROPRIATE FIDELITY FOR THE APPROXIMATION OF NATURAL FREQUENCIES FOR ELASTIC CONSTANTS IDENTIFICATION
   Introduction
   Dickinson's Analytical Approximation
   Frequency Response Surface Approximation (RSA)
   Determining Nondimensional Variables for the Frequency RSA
   RSA Construction Procedure
   Frequency RSA Results
   Identification Schemes
   Sources of uncertainty affecting the identification
   Identification Using the Response Surface Approximation
   Least Squares Identification
   Bayesian Identification
   Identification Using Dickinson's Analytical Approximate Solution
   Least Squares Identification with Bounds
   Least Squares Identification without Bounds
   Bayesian Identification
   Graphical Comparison of the Identification Approaches with the Low Fidelity Approximation
   Summary

5 BAYESIAN IDENTIFICATION OF ORTHOTROPIC ELASTIC CONSTANTS ACCOUNTING FOR MEASUREMENT ERROR, MODELLING ERROR AND PARAMETER UNCERTAINTY
   Introduction
   Vibration Problem
   Bayesian Identification
   Bayesian Formulation
   Sources of Uncertainty
   Error and Uncertainty Models
   Bayesian Numerical Procedure
   Applicability and Benefits of Separable Monte Carlo Simulation to Bayesian Identification
   Bayesian Identification Results
   Uncertainty propagation through least squares identification
   Identification of the plate's homogenized parameters
   Summary

6 REDUCING THE DIMENSIONALITY OF FULL FIELDS BY THE PROPER ORTHOGONAL DECOMPOSITION METHOD
   Introduction
   Proper Orthogonal Decomposition
   Simulated Experiment
   Experiment Description
   Numerical Model
   Dimensionality Reduction Problem
   Problem Statement
   POD Implementation
   POD Results
   POD Modes
   POD Truncation
   Cross Validation for Truncation Error
   Material Properties Sensitivities Truncation Error
   POD Noise Filtering
   Summary

7 BAYESIAN IDENTIFICATION OF ORTHOTROPIC ELASTIC CONSTANTS FROM FULL FIELD MEASUREMENTS ON A PLATE WITH A HOLE
   Introduction
   Open Hole Plate Tension Test
   Bayesian Identification Problem
   Formulation
   Sources of Uncertainty
   Error Model
   Bayesian Numerical Procedure
   Response Surface Approximations of the POD coefficients
   Bayesian Identification on a Simulated Experiment
   Bayesian Identification on a Moiré Full Field Experiment
   Full Field Experiment
   Bayesian Identification on Moiré Full Field Displacements
   Summary

8 SUMMARY AND FUTURE WORK
   Summary
   Future Work

APPENDIX

A POLYNOMIAL RESPONSE SURFACE APPROXIMATIONS
   Background
   Polynomial Response Surface Approximation Modeling
   General Surrogate Modeling Framework
   Design of Experiments (DoE)
   Numerical Simulation
   Polynomial Response Surface Construction
   Response Surface Verification

B GLOBAL SENSITIVITY ANALYSIS

C PHYSICAL INTERPRETATION OF THE BAYESIAN IDENTIFICATION RESULTS WITH EITHER MEASUREMENT OR INPUT PARAMETERS UNCERTAINTIES

D MOIRE INTERFEROMETRY FULL FIELD MEASUREMENTS TECHNIQUE

E COMPARISON OF THE MOIRE INTERFEROMETRY FIELDS AND THEIR PROPER ORTHOGONAL DECOMPOSITION PROJECTION

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

2-1 Dimensions of the ITPS (see also Figure 2-1) used, among others, to establish the simplifying assumptions
2-2 Dimensional groups for the thermal problem
2-3 Lower bounds (LB) and upper bounds (UB) used for sampling in the 15 variables space
2-4 Ranges of the nondimensional design variables
3-1 Numerical values for different-sensitivity strains
3-2 Extreme case identification results for different-sensitivity strain
3-3 Numerical values for variable response uncertainty
3-4 Extreme case identification results for different response uncertainty
3-5 Numerical values for response correlation
3-6 Extreme case identification results for response correlation
3-7 Numerical values for the three combined effects
3-8 Extreme case identification results for the three combined effects
3-9 Average performance of the methods in the different cases
3-10 Assumed true values of the laminate elastic constants
3-11 Assumed uncertainties in the plate length, width, thickness and density (a, b, h and ρ)
3-12 Simulated experimental frequencies
3-13 Least squares and Bayesian results for a randomly simulated particular case
3-14 Average performance for the plate vibration problem with 100 repetitions
4-1 Expression of the coefficients in Dickinson's approximate formula (Eq. 4-1) for natural frequencies of a free plate
4-2 Variables involved in the vibration problem and their units
4-3 Nondimensional parameters characterizing the vibration problem
4-4 Wide bounds on the model input parameters (denoted WB)
4-5 Mean and maximum relative absolute error of the frequency RSA predictions (denoted RSAWB) compared at 250 test points
4-6 Mean and maximum relative absolute error of the analytical formula frequency predictions compared at 250 test points
4-7 Narrow bounds on the model input parameters (denoted NB)
4-8 Mean and maximum relative absolute error of the frequency RSA predictions (denoted RSANB) compared at 250 test points
4-9 Experimental frequencies from Pedersen and Frederiksen (1992)
4-10 Plate properties: length (a), width (b), thickness (h) and density (ρ)
4-11 Normal uncorrelated prior distribution of the material properties
4-12 Truncation bounds on the prior distribution of the material properties
4-13 Assumed uncertainties in the plate length, width, thickness and density (a, b, h and ρ)
4-14 LS identified properties using the frequency RSAWB
4-15 Residuals for LS identification using the frequency RSAs
4-16 Most likely point of the posterior pdf using the frequency RSANB
4-17 LS identified properties using the analytical approximate solution (bounded variables)
4-18 Residuals for LS identification using the analytical approximate solution
4-19 LS identified properties using the analytical approximate solution (unbounded variables)
4-20 Residuals for LS identification using the analytical approximate solution
4-21 Most likely point of the posterior pdf using the analytical approximate solution
5-1 Experimental frequencies from Pedersen and Frederiksen (1992)
5-2 Plate properties: length (a), width (b), thickness (h) and density (ρ)
5-3 Normal uncorrelated prior distribution for the glass/epoxy composite material
5-4 Truncation bounds on the prior distribution of the material properties
5-5 Absolute difference between the frequencies obtained with thin plate theory and thick plate theory for the mean values of the a priori material properties
5-6 Assumed uncertainties in the plate length, width, thickness and density (a, b, h and ρ); normal distributions are used
5-7 Least squares identified properties
5-8 Mean values and coefficient of variation of the identified posterior distribution
5-9 Variance-covariance matrix (symmetric) of the identified posterior distribution
5-10 Correlation matrix (symmetric) of the identified posterior distribution
5-11 Mean values and coefficient of variation obtained by uncertainty propagation through the least squares identification
5-11 Least squares and Bayesian results for ply elastic constants identification with normally distributed measurement error
5-12 Least squares and Bayesian results for homogenized bending elastic constants identification with normally distributed measurement error
5-13 Least squares and Bayesian results for bending rigidities identification with normally distributed measurement error
6-1 Bounds on the input parameters of interest (for a graphite/epoxy composite material)
6-2 Input parameters for snapshots 1 and 199
6-3 Error norm truncation criterion
6-4 Cross validation CVRMS truncation error criterion
6-5 Input parameters to the finite element simulation for the sensitivity study and the simulated experiment
6-6 Truncation errors for the sensitivities to the elastic constants
6-7 Difference (in absolute value) between the finite element field and the projection of the noisy field onto the first 5 POD basis vectors
7-1 Normal uncorrelated prior distribution of the material properties for a graphite/epoxy composite material
7-2 Truncation bounds on the prior distribution of the material properties
7-3 Error measures for RSA of the U-field POD
7-4 Error measures for RSA of the V-field POD
7-5 Material properties used for the simulated experiments
7-6 Mean values and coefficient of variation of the identified posterior distribution for the simulated experiment
7-7 Correlation matrix (symmetric) of the identified posterior distribution for the simulated experiment
7-8 Manufacturer's specifications and properties found by Noh (2004) based on a four point bending test
7-9 Normal uncorrelated prior distribution of the material properties
7-10 Truncation bounds on the prior distribution of the material properties
7-11 Mean values and coefficient of variation of the identified posterior distribution from the Moiré interferometry experiment
7-12 Correlation matrix (symmetric) of the identified posterior distribution from the Moiré interferometry experiment
7-12 Mean values and coefficient of variation of the identified posterior distribution from the Moiré interferometry experiment using a prior based on Noh's (2004) values
E-1 Average values of the experimental fields and of the components of the fields that were filtered out by POD projection

LIST OF FIGURES

2-1 Corrugated core sandwich panel depicting the thermal boundary conditions and the geometric parameters
2-2 Incident heat flux (solid line) and convection (dash dot line) profile on the TFS surface as a function of reentry time
2-3 1D heat transfer model representation using homogenization (not to scale)
2-4 Simplified thermal problem for dimensional analysis
2-5 Summary of the dimensionality reduction procedure for the ITPS problem
2-6 Maximum BFS temperature two-variable RSA
2-7 Absolute error of the response surface estimates compared to FE predictions
2-8 Thermal comparison of materials suitable for the web using the contour plot of the maximum BFS temperature RSA
3-1 Three bar truss problem
3-2 Likelihood value using the distribution of C if E were 5 GPa
3-3 Likelihood function of E given measure C
3-4 Illustration of least squares solution z* = Ax* and partial solutions z*i for m = 2, n = 1
3-5 Illustration of least squares solution z* = Ax* and virtual measurements z*i for two different slopes A
3-6 Graphical representation of the least squares and Bayesian results for the three bar truss example for the different sensitivity case (i.e. different strain magnitude)
3-7 Graphical representation of the least squares and Bayesian results for the three bar truss example for the different uncertainty case
3-8 Posterior distribution of {Ex, Gxy} found with the Bayesian approach
3-9 Identified posterior pdf for: A) Only Gaussian measurement noise (case i.) B) Only model input parameters uncertainty (case ii.) C) Both Gaussian measurement noise and model input parameters uncertainty (case iii.)
4-1 Illustration of the procedure for sampling points in the nondimensional space
4-2 Two dimensional representation in a three point plane of: A) The posterior pdf. B) The likelihood function
4-3 Two dimensional representation in a three point plane of the least squares objective function
5-1 Identified posterior joint distribution plots
5-2 Convergence of the: A) mean value. B) standard deviation. C) correlation coefficient of the identified distribution
6-1 Specimen geometry
6-2 Finite element mesh
6-3 Displacement and strain maps for snapshot 1
6-4 First six POD modes for U displacement fields
6-5 First six POD modes for V displacement fields
6-6 First five strain equivalent POD modes for εx (first column), εy (second column) and εxy (third column)
6-7 Displacements truncation error in snapshot 1 using 3, 4 and 5 modes
6-8 Displacements truncation error in snapshot 199 using 3, 4 and 5 modes
6-9 Strain equivalent truncation error in snapshot 1 using 3, 4 and 5 modes for εx (first column), εy (second column) and εxy (third column)
6-10 Strain equivalent truncation error in snapshot 199 using 3, 4 and 5 modes for εx (first column), εy (second column) and εxy (third column)
6-11 Cross validation error (CVRMS) as a function of truncation order
6-12 Noisy U and V fields of the simulated experiment
7-1 Specimen geometry
7-2 Flow chart of the cost (in green) and dimensionality reduction (in red) procedure used for the likelihood function calculation
7-3 Specimen with the Moiré diffraction grating (1200 lines/mm)
7-4 Experimental setup for the open hole tension test
7-5 Fringe patterns in the: A) U direction. B) V direction for a force of 700 N
7-6 Displacement fields obtained from the fringe patterns (no filtering was used at all) in: A) The U direction. B) The V direction
A-1 One dimensional polynomial response surface approximation
A-2 Steps in surrogate modelling
A-3 Latin hypercube sampling with sampling size Ns = 6
C-1 Identified posterior pdf for: A) Only Gaussian measurement noise (case i.) B) Only model input parameters uncertainty (case ii.) C) Both Gaussian measurement noise and model input parameters uncertainty (case iii.)
D-1 Schematic of a Moiré interferometry setup
E-1 Difference between the displacement fields obtained from the Moiré fringe patterns and their POD projection for the: A) U field. B) V field
E-2 Strain equivalent difference maps between the fields obtained from the Moiré fringe patterns and their POD projection for A) εx. B) εy. C) εxy

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

FACILITATING BAYESIAN IDENTIFICATION OF ELASTIC CONSTANTS THROUGH DIMENSIONALITY REDUCTION AND RESPONSE SURFACE METHODOLOGY

By

Christian Gogu

December 2009

Chair: Raphael T. Haftka
Cochair: Jerome Molimard
Major: Aerospace Engineering

The Bayesian method is a powerful approach to identification since it makes it possible to account for the uncertainties present in the problem and to estimate the uncertainties in the identified properties. Due to computational cost, previous applications of the Bayesian approach to material properties identification required simplistic uncertainty models and/or made only partial use of the Bayesian capabilities.

Using response surface methodology can alleviate the computational cost and allow full use of the Bayesian approach. This is a challenge, however, because both response surface approximations (RSA) and the Bayesian approach become inefficient in high dimensions. Therefore we make extensive use of dimensionality reduction methods, including nondimensionalization, global sensitivity analysis and proper orthogonal decomposition.

Dimensionality reduction of RSA is also important in optimization, and this is demonstrated first on a problem of material selection for an integrated thermal protection system, where we reduced the number of variables required in an RSA from fifteen to only two (Chapter 2).

We then introduce the Bayesian identification approach, first on a simple three bar truss problem, then on a plate vibration problem. On these problems we find three general situations in which the Bayesian approach has a significant advantage over least squares identification: large differences in the response sensitivities to the material properties, large differences in the response uncertainties, and their correlation (Chapter 3).

We then move to the identification problem of orthotropic elastic constants from the natural frequencies of free plates. To maintain reasonable computational cost, response surface approximations of the natural frequencies are constructed, aided by nondimensionalization. We show that the fidelity of the approximations is essential for accurate identification (Chapter 4).

We then apply the Bayesian approach to identify the posterior probability distributions of the orthotropic elastic constants (Chapter 5) from the vibration test. Some of the properties could only be identified with high uncertainty, which partly illustrates the difficulties of accurately identifying the ply elastic constants from frequency measurements on a multi-ply, multi-orientation structure.

The final two chapters look at identifying the four orthotropic elastic constants from full field displacement measurements taken on a tensile test on a plate with a hole. To make the Bayesian approach tractable, the proper orthogonal decomposition method is used to reduce the dimensionality of the fields (Chapter 6).

Finally we present the results of the Bayesian identification for the open hole tension test, first on a simulated experiment, then on a Moiré interferometry experiment that we carried out (Chapter 7). As for the vibration based identification, we find that the different properties are identified with different uncertainties, which we are able to quantify.
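As a concrete illustration of the approach summarized above, the following is a minimal sketch of grid-based Bayesian identification of two elastic constants from two frequency measurements, with a cheap response surface standing in for a finite element model. Everything in it (the quadratic surrogate `frequency_rsa`, the "measured" frequencies, the prior and the noise levels) is invented for the example and is not the dissertation's actual model or data.

```python
"""Minimal sketch of Bayesian identification of two elastic constants."""
import numpy as np

# Hypothetical response surface standing in for an expensive finite element
# vibration model: returns two "natural frequencies" (Hz) from two elastic
# constants (GPa). Coefficients are made up for illustration.
def frequency_rsa(e1, e2):
    return np.array([50.0 + 0.30 * e1 + 0.05 * e2,
                     120.0 + 0.10 * e1 + 0.60 * e2])

f_measured = np.array([93.0, 185.0])      # synthetic "measurements"
sigma_noise = np.array([1.0, 2.0])        # assumed measurement std dev (Hz)

# Gaussian prior on (E1, E2), e.g. from handbook values.
prior_mean = np.array([130.0, 95.0])
prior_std = np.array([20.0, 15.0])

# Evaluate prior x likelihood on a grid (cheap because the surrogate is cheap).
e1_grid = np.linspace(80.0, 180.0, 201)
e2_grid = np.linspace(50.0, 140.0, 201)
E1, E2 = np.meshgrid(e1_grid, e2_grid, indexing="ij")

post = np.zeros_like(E1)
for i in range(E1.shape[0]):
    for j in range(E1.shape[1]):
        f_model = frequency_rsa(E1[i, j], E2[i, j])
        loglike = -0.5 * np.sum(((f_measured - f_model) / sigma_noise) ** 2)
        logprior = -0.5 * np.sum(((np.array([E1[i, j], E2[i, j]]) - prior_mean)
                                  / prior_std) ** 2)
        post[i, j] = np.exp(loglike + logprior)

post /= post.sum()                         # normalize (Bayes' theorem)

# Posterior summaries: mean and standard deviation of each constant.
mean_e1 = np.sum(post * E1)
mean_e2 = np.sum(post * E2)
std_e1 = np.sqrt(np.sum(post * (E1 - mean_e1) ** 2))
std_e2 = np.sqrt(np.sum(post * (E2 - mean_e2) ** 2))
print(f"E1 = {mean_e1:.1f} +/- {std_e1:.1f} GPa")
print(f"E2 = {mean_e2:.1f} +/- {std_e2:.1f} GPa")
```

On realistic problems the grid would be replaced by Monte Carlo or Markov chain sampling, but the cost per posterior evaluation is driven by the same prior-times-likelihood structure, which is why an inexpensive surrogate of the physical model matters so much.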

ABSTRACT (FRENCH)

FACILITATING BAYESIAN IDENTIFICATION OF ELASTIC PROPERTIES THROUGH DIMENSIONALITY REDUCTION AND THE RESPONSE SURFACE METHOD

The Bayesian identification method is an approach that can take into account the different sources of uncertainty present in the problem and makes it possible to estimate the uncertainty with which the parameters are identified. However, because of computational cost, previous applications of the Bayesian approach to the identification of material properties either required simplistic uncertainty models or did not use the Bayesian method to its full potential.

Using the response surface method overcomes the computational cost issues and allows the full potential of the Bayesian approach to be used. It still leaves, however, the problem of the inefficiency of response surfaces, as well as of the Bayesian method, in high dimensions. For this purpose we use dimensionality reduction methods such as nondimensionalization, global sensitivity analysis and proper orthogonal decomposition.

Dimensionality reduction approaches applied to response surfaces are also important in optimization. This is why Chapter 2 illustrates this use on a material selection problem for an integrated thermal protection system, where the approach reduced the number of variables required for the response surface from 15 to only 2.

We then introduce the Bayesian identification approach, first on a relatively simple three bar truss problem, then on a plate vibration problem. On these two problems we find three general situations in which the Bayesian approach presents an advantage over classical least squares identification: variable sensitivity, uncertainty or correlation of the responses (Chapter 3).

We then move on to the identification of the orthotropic elastic properties of the composite material from natural frequency measurements. To keep the computational time reasonable, response surface approximations of the natural frequencies are constructed using nondimensional parameters. We show in Chapter 4 that the fidelity of the natural frequency response surfaces is essential for the identification problem we consider.

In Chapter 5 we apply the Bayesian approach in order to identify the posterior probability density of the orthotropic elastic properties. Some of the properties were identified with higher uncertainty, illustrating the difficulties of identifying all four properties at the same time from a multi-ply, multi-orientation laminated structure.

Finally, in the last two chapters we address the identification of the four orthotropic elastic properties from displacement field measurements on a plate with a hole loaded in tension. In order to apply the Bayesian approach to this problem we reduce the dimensionality of the fields by applying the proper orthogonal decomposition method (Chapter 6).

In Chapter 7 we present the results of the Bayesian identification applied to the plate with a hole, first on a simulated experiment and then on a Moiré interferometry experiment that we carried out. As for the identification from natural frequencies, we find that the different properties are identified with varying uncertainty, and we quantify this uncertainty.

EXTENDED SUMMARY (FRENCH)

A large number of experimental techniques are available for determining the mechanical properties of materials. For elastic properties, these techniques range from simple uniaxial tests (ASTM D 3039) to techniques based on displacement field measurements on more complex composite structures (Molimard et al. 2005), each technique having its advantages and drawbacks.

Identification is the term associated with obtaining the best possible estimate of the parameters of a model (here the material properties) from a given set of experimental measurements. The most common methods are based on determining the parameters of a numerical model that give the best agreement, usually in terms of least squares error, between the measurements and the model predictions (Lawson et al. 1974, Bjorck 1996). Other methods have also been proposed in the field of elastic properties identification, such as the virtual fields method (Grediac et al. 1989, 1998b), the constitutive relation error method (Geymonat et al. 2002, 2003) and the equilibrium gap method (Amiot et al. 2007).

Regardless of the identification method used, two major challenges remain:
- handling, within the identification, the sources of uncertainty inherent to the experiment and to the modeling of the experiment;
- estimating and reducing the uncertainty with which the properties are identified.

The most commonly used methods rarely address these two aspects exhaustively: they do not handle statistical information and provide only a numerical value for the parameters, without further statistical information. Different sources of uncertainty can nevertheless affect the results, so ignoring them can bias the identification. Moreover, a numerical value alone is of more limited use, because from a single test the different material properties are usually estimated with different levels of confidence. Typically for composite materials, the shear modulus and the Poisson's ratio are estimated much less accurately from most tests that seek to determine all four elastic properties at the same time. Consequently, estimating the uncertainty with which the different material properties are determined can be as important as determining their most likely value.

Several statistical identification approaches exist that can handle, to various degrees, statistical information. These methods include weighted least squares (Bjorck 1996, Tarantola 2004) and the maximum likelihood method (Norden 1972). Among the most general methods that can account for uncertainties in the experiment and the modeling, and that can also provide an estimate of the uncertainty in the identified properties, is the Bayesian method. This approach is based on a very simple formula introduced more than two centuries ago by Thomas Bayes (1793). However, despite the simplicity of the formula, it can still pose computational cost problems today: when arbitrary statistical distributions must be used within arbitrary models, Monte Carlo simulation (Fishmann 1996) is most often required.

The objective of this thesis is to show how, by combining dimensionality reduction methods and response surface approximations, the Bayesian approach can be used to its full potential at a reasonable computational cost.

Because of computational cost problems, previous applications of the Bayesian identification approach either used simplistic uncertainty models (additive Gaussian noise only) or did not use the Bayesian method to its full potential (not estimating the uncertainties with which the properties were identified). Sol (1986) was among the first to apply the Bayesian method in the field of identification of the elastic properties of composite materials. Several authors followed (Papazoglou et al. 1996, Lai and Ip 1996, Hua et al. 2000, Marwala and Sibisi 2005, Daghia et al. 2007). However, all of these studies considered additive Gaussian noise on the measurements as the only source of uncertainty in the problem. This made it possible to treat the Bayesian approach in a purely analytical way, thus avoiding computational cost problems. The drawback, however, is that such a noise assumption is often not close enough to reality: in addition to the uncertainty on the measurements there is often uncertainty on the model and its other input parameters, and these uncertainties do not necessarily lead to additive Gaussian noise on the measured response. Furthermore, among the studies mentioned, only that of Lai and Ip (1996) gave the standard deviation with which the properties were identified; yet when there are correlations between the identified properties, a complete description of the probability density of the identified properties is needed.

Monte Carlo simulation is necessary to handle non-Gaussian uncertainties with an arbitrary model. To get an order of magnitude of the challenge in terms of computational cost, we use the example of identifying the probability distribution of the four orthotropic elastic properties of a composite from 10 natural vibration frequencies of the laminated plate. Applied to this problem, a Bayesian approach based on Monte Carlo simulation would typically involve 2 billion natural frequency evaluations. It is then obvious that using numerical solutions (such as finite elements) of the vibration problem is far too costly.
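The order of magnitude quoted above can be reproduced with a rough, hypothetical cost accounting; the grid density and Monte Carlo sample size below are illustrative choices, not the exact numbers used in the dissertation.

```python
# Rough cost accounting for a "brute force" Bayesian identification that
# evaluates the posterior of 4 elastic constants on a grid and estimates
# the likelihood at each grid point by Monte Carlo propagation of the
# other input uncertainties. All numbers below are illustrative choices.
points_per_dim = 20          # grid resolution per elastic constant
n_properties = 4             # e.g. E1, E2, nu12, G12
mc_samples = 12_500          # Monte Carlo samples per likelihood estimate

grid_points = points_per_dim ** n_properties            # 160,000
model_evaluations = grid_points * mc_samples             # 2.0e9
print(f"{model_evaluations:.1e} frequency evaluations")  # ~2e9

# At even 1 s per finite element modal analysis this is decades of CPU
# time, which is why a cheap response surface approximation is essential.
seconds = model_evaluations * 1.0
print(f"~{seconds / 3.15e7:.0f} years at 1 s per evaluation")
```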

The computational cost issue is further exacerbated if, instead of 10 measurements (the 10 natural frequencies), we have hundreds of thousands of measurements, as is the case for displacement field measurements in which each pixel represents a measurement point. In addition to the previous problem of the number of evaluations required, a new problem arises in terms of the dimension of the probability distributions involved in the Bayesian method: handling joint probability distributions in dimensions of several hundred thousand is currently out of reach.

The goal of this work is to develop a Bayesian approach that can account for non-Gaussian uncertainty distributions and use the Bayesian method to its full potential while keeping the computational time reasonable. The two points described above must be addressed for this purpose:
1. drastically reduce the computational time of a function evaluation;
2. drastically reduce the dimensionality of the joint probability densities involved in the problem.

To address the first point we chose to use the response surface methodology (Myers and Montgomery 2002), in order to obtain an approximation of the finite element solution that is cheap to evaluate. One of the challenges is that response surfaces lose accuracy as the dimension increases. This is why we consider several dimensionality reduction methods that can be applied to the response surface methodology: nondimensionalization and global sensitivity analysis (Sobol 1993).
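The following is a minimal sketch of what a polynomial response surface approximation involves: sample an "expensive" model at a small design of experiments, fit a low-order polynomial by least squares, and check the fit at independent test points. The toy model and the sample sizes are invented for illustration.

```python
# Minimal sketch of a polynomial response surface approximation (RSA):
# fit a quadratic polynomial to samples of an "expensive" model, then use
# the cheap polynomial everywhere else. Toy model, illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x1, x2):
    # Stand-in for a finite element response (e.g., a natural frequency).
    return 100.0 + 8.0 * x1 - 3.0 * x2 + 1.5 * x1 * x2 - 0.8 * x2 ** 2

# Design of experiments: random samples in the (normalized) design domain.
n_samples = 30
X = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
y = expensive_model(X[:, 0], X[:, 1])

# Quadratic basis: 1, x1, x2, x1^2, x1*x2, x2^2
def basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1*x2, x2**2])

coeffs, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# Verify the approximation on independent test points.
X_test = rng.uniform(-1.0, 1.0, size=(200, 2))
y_true = expensive_model(X_test[:, 0], X_test[:, 1])
y_rsa = basis(X_test) @ coeffs
rmse = np.sqrt(np.mean((y_true - y_rsa) ** 2))
print(f"RSA test RMSE: {rmse:.2e}")  # ~0 here because the toy model is quadratic
```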

PAGE 24

24 dizaine). Cependant cette rduction de dimensionnalit nenlve en rien le requis du point 1 et nous devons toujours utiliser des surfaces de r ponse pour approximer les rsultats couteux du code numrique par lments finis. Lapplication finale est lid entification bayesienne des pr oprits lastiques orthotropes dun composite partir de mesures de champs de dplacements sur une plaque troue en traction. Du point de vue mcanique cet essai prsente plusieurs avantages. Les champs htrognes permettent en effet didentifier toutes les quatr e proprits lastiques al ors que seulement une ou deux proprits sont typiquement dtermines pa rtir des essais normaliss. Lhtrognit du champ permet galement davoir de linformati on sur des variabilits spatiales de proprits matriaux lintrieur de la plaque lamine. Le ssai sur la plaque troue a rcemment t utilis pour lidentification dans le cadre dune approc he moindres carrs (Silv a 2009) et les rsultats obtenus ont t trs encourageants. Du point de vue de la mthode bayessienne, lidentification partir de lessai sur plaque troue est extrmement difficile car il prsente s imultanment les deux dfis mentionns dans les points un et deux ci-dessus. Lidentif ication fait, en effet, interven ir dune part un code lments finis couteux en temps de calcul qui doit tre remplac par une approximation trs fidele mais peu couteuse. Dautre part les mesures de champ contiennent des milliers de points de mesure ncessitant une rduction de dimensionnalit. Avan t de relever les dfis de cette application finale nous avons valu les diff rentes techniques envisages su r une srie de problmes moins complexes. La rduction de dimensionnalit pour la constr uction des surfaces de rponse est un aspect galement trs important dans le contexte des problmes doptimisation. Dans le Chapitre 2 nous illustrons lutilisation des surfaces de rpons e ensemble avec des mthodes de rduction de

PAGE 25

25 dimensionnalit sur un problme de slecti on de matriaux optimaux pour un systme de protection thermique intgr. Un systme de protection ther mique intgr pour vhicules spatiaux a pour but dintgrer le s fonctions de rsistance mcanique structurales et de protection thermique lors de la rentre atmosphrique Nous nous intressons dans ce chapitre essentiellement la fonction de protection thermique qui ncessitait une approximation par surface de rponse pour la slection de matriaux optimaux. Nous montrons dans ce chapitre comment une rduction trs significati ve dans le nombre de variables ncessaires pour la surface de rponse peut tre obtenue en utilisant les techniques danalyse de sensitivit globa le (Sobol 1993) et dadimensiona lisation. Nous proposons dabord une approche gnrale pour la rduction de di mensionnalit puis nous lappliquons au problme du systme de protection thermique intgr, pour lequel une surface de rponse de la temprature maximale atteinte par la face intrieure est requ ise. Le modle lments finis utilis pour le calcul de cette temprature met en jeu 15 para mtres physiques dintrt pour la conception. Ladimensionalisation du problme en combinaison avec une analyse de sensitivit globale a nanmoins permis de montrer que la temp rature ne dpend que de deux paramtres adimensionnels, reprsentant ains i une rduction de dimensionnalit de 15 2 seulement. Ces deux paramtres ont t utiliss pour construire la surface de rponse et la vrifier avec des simulations lments finis. Nous trouvons que la majeure partie de lerre ur dans la surface de rponse vient du fait que les paramtres adimensionne ls manquent de rendre compte dune petite partie de la dpendance de la temprature envers les 15 variables initiales. Cette diffrence a nanmoins t trouve ngligeable pour le but de la slection de matriaux optimaux pour le systme de protection thermique intgr, sl ection qui est illustre la fin du chapitre.

PAGE 26

26 Dans les chapitres suivants nous nous int ressons lidentification bayesienne en considrant de problmes de plus en plus comp lexes. Nous commenons dans le Chapitre 3 par un problme de treillis trois ba rres avec deux objectifs en tte : dune part le problme permet dintroduire lapproche ba yesienne didentification de manir e didactique et dautre part nous cherchons identifier sur ce pr oblme trois cas gnraux ou la mthode bayesienne prsente systmatiquement un avantage par rapport liden tification classique par moindres carrs. Nous avons choisi danalyser dans un premier temps le problme assez simple du treillis, car il permet disoler les diffrents effets qui affectent lidentification. Nous nous intressons ainsi au cas o les diffrentes rponses mesures ont des sensitivit s diffrentes au paramtre identifier, au cas o les rponses mesures ont des incertitudes diff rentes et finalement au cas o les rponses mesures sont corrles entre-el les. Nous montrons alors que da ns ces trois cas lidentification bayesienne est systmatiquement plus prcise que lidentificati on par moindres carrs classique. Dans le pire des cas, lorsque les trois effets sadditionne nt nous trouvons que la mthode bayesienne peut mener un paramtr e identifi plus de dix fois pl us proche de la vraie valeur que la mthode classique des moindres carrs. Aprs le treillis trois barres nous nous intressons un problme plus complexe, lidentification des proprits lastiques orthotrope s dune plaque composite partir de mesures de frquences propre. Nous utilisons dabord une exprience simule sur une plaque simplement appuye pour analyser sur ce cas linfluence des trois effets mi s en vidence sur le problme du treillis. Nous trouvons nouveau que lidentification bayesienne est systmatiquement plus prcise que lidentification moi ndres carrs classique, mme si la diffrence entre les deux est moins importante que pour le treillis.

PAGE 27

27 Dans les Chapitre 4 et 5 nous utilisons de s mesures relles de frquences propres, effectues par Pedersen et Fr ederiksen (1992) sur des plaque s en fibre de verre/poxy aux conditions aux limites libres. Pour ce type de conditions aux limites il nexiste pas de solution analytique exacte pour les frquences propres. Nous analysons donc dans le Chapitre 4 quel type dapproximation nous pouvons utiliser pour mainte nir les couts des calculs raisonnables. Nous nous intressons dune part une solution anal ytique approximative dveloppe par Dickinson (1978) et dautre part des approximations pa r surface de rponse adimensionnelles dun code lments finis. Tandis que la solution analytique approximative a une erreur de lordre de 5%, nous montrons que nous pouvons obtenir des surfaces de rponse trs prcises en utilisant des variables adimensionnelles, avec une erreur infe rieure 0.1%. Nous comparons ensuite les rsultats didentifications effectues avec ces deux types dapproximations. Ceci confirme que lerreur dans la solution analyt ique approximative est trop grande pour quelle puisse tre utilise pour lidentification. Nous choisissons donc d utiliser les approximations par surfaces de rponses adimensionnelles pour lidentification baye sienne qui est effectue dans le Chapitre 5. Les rsultats de cette identification prsentent plusieurs intrts. Dune part nous avons pu tenir compte de plusieurs sources dincertitude da ns le problme : des incertitudes de mesure, des incertitudes de modlisation et des incertitudes su r divers paramtres dentre du modle. Dautre part nous obtenons une di stribution de probabilit pour les proprits identifies. Cette distribution permet une caractrisation complte de s incertitudes avec lesque ls les proprits sont dtermines. Nous trouvons que les diffrentes proprits sont identif ies avec un degr de confiance trs diffrent puisque le module longitudinal est identifi le plus prcisment avec un coefficient de variation (COV) de 3%, tandis que le coefficient de Poisson est identifi avec la plus grande incertitude avec un COV de 12%. N ous trouvons de plus quil y a une corrlation

PAGE 28

28 significative entre plusieurs propr its, corrlation qui est rareme nt identifie par dautres mthodes. Dans les Chapitres 6 et 7 nous passons ensu ite au problme final didentification des proprits lastiques orthotropes partir de mesures de champs de dplacements. Par rapports au problme de vibration, la difficult supplmentair e rside dans le grand nombre de points de mesures ( chaque pixel) rsultant des mesures de champ. Nous dtaillons dans le Chapitre 6 lapproche que nous utilisons pour rduire la dimensionnalit des champs. Lapproche est base sur la dcomposition orthogonale propre (Berkooz et al. 1993). Cette technique est galement connue sous le nom dexpansi on de Karhunen Loeve ou danalys e de composante principale selon son domaine dutilisation. Elle permet d exprimer dans une base modale de faible dimension (moins dune dizaine ici) tout champ dans un certain domaine. Nous montrons que pour le domaine qui nous intresse pour lidentifi cation les champs peuvent tre exprims trs prcisment (erreur maximale de moins de 0.01% sur les dplacements) avec seulement 4 modes. Lapproche prsente galement lintrt de filtrer les bruits de mesure dans les champs. Dans le Chapitre 7 nous prsentons lidentif ication bayesienne partir dun essai de traction sur plaque troue. Lid entification est ralis e dabord sur une exprience simule pour vrifier les rsultats obtenus. Nous utilisons ensu ite de vritables donnes exprimentales venant dun essai de Moir interfromtrique que nous avons ralis. La technique du Moir interfromtrique prsente lavantage doffrir une excellent rsolution spatiale et un bon rapport signal sur bruit. La distributi on de probabilit identifie mont re, comme sur le problme de vibration, que les proprits sont identifies avec des ince rtitudes diffrentes et sont partiellement corrles entre elle s, lavantage de lapproche bayesienne tant alors de pouvoir quantifier ces incertit udes et corrlations.

PAGE 29

29 Enfin dans le Chapitre 8 nous concluons ce trav ail et prsentons quel ques pistes possibles pour de travaux futurs, notamment lutilisati on de lindpendance statistique des sources dincertitude pour rduire encore les temps de calcul, la combin aison de plusieurs expriences pour identifier un jeu de proprits matriaux et comment utiliser au mieux toutes les informations statistiques dans un c ontexte de conception fiabiliste.

PAGE 30

CHAPTER 1
INTRODUCTION: HOW DIMENSIONALITY REDUCTION AND RESPONSE SURFACE METHODOLOGY CAN BENEFIT BAYESIAN IDENTIFICATION

Motivation and Scope

The best possible knowledge of material properties has always been a goal to strive for in the physics and engineering community. In mechanical design, more accurate knowledge of the material constants has direct implications on performance, efficiency and cost. In aerospace structures for example, more accurately known mechanical properties allow for reduced safety margins, which lead to lighter structures, thus improving the vehicle's efficiency and operational cost (Acar 2006).

To determine a given mechanical property, a multitude of experimental techniques can be used. For determining elastic constants, for example, techniques range from simple tensile tests on standardized specimens (ASTM D 3039) to full field displacement measurements on more complex composite structures (Molimard et al. 2005), with each technique having its advantages and drawbacks.

Identification is the term associated with moving from a given experimental measurement to the best possible estimate of the model parameters (material properties here). The most common methods are based on finding the parameters of a numerical model that provide the best match, usually in terms of least squares error, between the measurements and the predictions of the model (Lawson et al. 1974, Bjorck 1996). When the numerical model is finite element based, the method is often referred to as finite element model updating. In the domain of elastic constants identification from full field displacement measurements other methods have been introduced as well, such as the virtual fields method (Grediac et al. 1989, 1998b), the constitutive equation gap method (Geymonat et al. 2002, 2003) or the equilibrium gap method (Amiot et al. 2007).

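In symbols (an illustrative, generic statement; the notation is not necessarily the one used in the later chapters), least squares identification selects the property vector E that minimizes the squared mismatch between the n measured responses and the corresponding model predictions:

    E_identified = argmin_E J(E),    J(E) = sum over i = 1..n of [ y_i_meas - y_i_model(E) ]^2

Weighted variants divide each term by an estimate of the variance of the corresponding measurement.
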
Independently of the chosen method, however, two major remaining challenges in identification are:
- handling the inherent uncertainties in the experiment and in the modeling of the experiment
- estimating and reducing the uncertainty in the identified properties

The most commonly used methods rarely account for these two points, however, since they do not handle statistical information and usually provide only a numerical estimate for the parameters. Uncertainties can nevertheless affect this estimate, so not accounting for them can bias the results. Furthermore, a numerical estimate alone for a material property is of questionable usefulness. Indeed it is often observed that, from a single test, different material properties are not obtained with the same confidence. Typically for composite materials the shear modulus and the Poisson's ratio are known with significantly less accuracy from most tests that seek to determine all four elastic constants simultaneously. Accordingly, estimating the uncertainty with which a property is determined can be as important as estimating its mean or most likely value.

Multiple identification frameworks exist that can handle, to a variable degree, information of a statistical nature. These include weighted least squares (Bjorck 1996, Tarantola 2004) or maximum likelihood methods (Norden 1972). Among the most general methods that can both account for uncertainties in measurements and simulation and provide an estimate of the uncertainties in the identified properties is the Bayesian updating method. This formulation is based on a very simple formula introduced almost two and a half centuries ago by Thomas Bayes (1763). Yet, in spite of the simplicity of the formula, it can lead even today to major computational cost issues when applied to identification. Indeed, like many other statistical methods, it requires statistical sampling and Monte Carlo simulation (Fishman 1996) when applicability to any type of probability distribution as well as to any model formulation is required.

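For reference, Bayes' rule in an identification setting can be written as follows (an illustrative formulation; the uncertainty models and notation actually used are introduced in the later chapters):

    p(E | y_meas) = p(y_meas | E) p_prior(E) / Integral of [ p(y_meas | E') p_prior(E') dE' ]

where E is the vector of properties to be identified, p_prior(E) their prior probability density, p(y_meas | E) the likelihood of the measured responses and p(E | y_meas) the sought posterior density. When the likelihood cannot be written analytically, which is the case as soon as non-Gaussian uncertainties are propagated through an arbitrary model, it has to be estimated by Monte Carlo sampling, which is the source of the computational cost discussed next.
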
The objective of the present dissertation is to demonstrate that, by combining multiple dimensionality reduction techniques and response surface methodology, Bayesian identification can be performed to its full capabilities at reasonable computational cost.

Due to computational cost issues, previous applications of the Bayesian approach to identification employed either simplistic uncertainty models (additive Gaussian noise only) and/or made only partial use of the Bayesian capabilities (did not estimate the uncertainty on the identified properties). In the domain of identification of elastic constants of composite materials, Sol (1986) was probably the first to use a basic Bayesian approach. Others have followed since (Papazoglou et al. 1996, Lai and Ip 1996, Hua et al. 2000, Marwala and Sibisi 2005, Daghia et al. 2007); however, all these studies considered exclusively additive Gaussian noise on the measurements. This assumption allowed handling probability density functions in a purely analytical way, thus avoiding the need for expensive statistical sampling and Monte Carlo simulation. The downside is that such a noise assumption is often not very realistic. Indeed, apart from measurement noise there are also uncertainties in the model and its input parameters. These uncertainties propagate to the measured quantity in different ways, and additive Gaussian noise might not accurately account for this. Furthermore, even under the Gaussian noise assumption, only Lai and Ip (1996) estimated the standard deviation in the identified properties, which, as discussed earlier, can be of key interest to a designer. Moreover, if there is strong correlation in the properties, then providing only standard deviation estimates may not be sufficient, so a complete characterization of the probability density function would be more appropriate.

In order for the Bayesian approach to be able to handle not only Gaussian but any type of uncertainty, both on measurements and model, Monte Carlo simulation needs to be used.

To get an idea of the computational cost challenge involved, let us consider a problem where we seek to identify four material properties (the orthotropic elastic constants, for example) and their uncertainties from ten simultaneously measured quantities (the ten natural frequencies of a plate, for example). Applying the Bayesian method involving Monte Carlo simulation to this problem would typically imply 2 billion frequency evaluations (see Chapter 5 for the detailed count leading to this number). Numerical solutions (such as finite elements) that are usually used in identification are by far too expensive to be utilized for such a high number of evaluations.

The problem is exacerbated even further if, instead of 10 measured quantities, we have 100,000, as is typical for full field displacement techniques where each pixel of the image is a measurement on its own. Not only does the number of required evaluations increase, but even if the cost of these evaluations were decreased, it would still not be possible to directly apply the Bayesian approach. Indeed, applying the Bayesian method directly to 100,000 measured quantities would mean handling a 100,000-dimensional joint probability density function. This is by far outside the realm of what is computationally imaginable today.

The goal of the present work is then to make the Bayesian identification of elastic constants computationally tractable even in its most general form and even when used to its full capabilities. The two previously mentioned issues need to be dealt with to achieve this objective:
1. Drastically reduce the computational cost required for each function evaluation.
2. Drastically reduce the dimensionality of the joint probability density functions involved.

To address the first point we chose to use response surface methodology (Myers and Montgomery 2002) to approximate the expensive response of numerical simulations. The difficulty here is that response surface approximations (RSA) can be relatively inaccurate in higher dimensions (more than 10).

Indeed, they suffer from the curse of dimensionality, meaning that for a given accuracy the number of simulations required to construct the approximation increases exponentially with the dimensionality of the variables space. To achieve our goal we investigate various dimensionality reduction methods that can be applied to response surface approximations, including nondimensionalization and global sensitivity analysis (Sobol 1993).

To address the second point we chose to use the proper orthogonal decomposition technique (Berkooz et al. 1993), also known as Karhunen-Loève expansion or principal component analysis depending on the application field. Proper orthogonal decomposition (POD) allows the full field pattern of the displacements to be expressed in a reduced dimension basis (typically a few dozen dimensions). Note however that POD does not remove the requirement of point 1, since the finite element simulations used to obtain a strain field would still be expensive. This means that response surface methodology will be used in combination with POD, by constructing RSA of the basis coefficients of the POD full field decomposition.

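As an illustration of this combination, the sketch below builds a POD basis from a set of precomputed displacement fields by singular value decomposition and then fits a simple polynomial response surface to each POD coefficient as a function of the model parameters. It is a minimal, self-contained sketch on synthetic toy data; the actual basis construction and the RSA used in this work are described in Chapters 6 and 7.

# Minimal sketch: POD basis of displacement fields + polynomial RSA of the
# POD coefficients. The "fields" below are synthetic placeholders; in the
# dissertation they come from finite element simulations.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_snapshots, n_params = 10_000, 50, 4

params = rng.uniform(0.5, 1.5, size=(n_snapshots, n_params))    # e.g. elastic constants
fields = params @ rng.normal(size=(n_params, n_pixels)) \
         + 0.01 * rng.normal(size=(n_snapshots, n_pixels))      # toy displacement fields

# POD: left singular vectors of the snapshot matrix give the modes
U, s, Vt = np.linalg.svd(fields.T, full_matrices=False)         # U: n_pixels x n_snapshots
k = 4                                                           # number of retained modes
modes = U[:, :k]                                                # POD basis
coeffs = fields @ modes                                         # n_snapshots x k coefficients

# Linear-in-parameters RSA of each POD coefficient (higher order terms could be added)
X = np.hstack([np.ones((n_snapshots, 1)), params])
beta, *_ = np.linalg.lstsq(X, coeffs, rcond=None)               # least squares fit

def predict_field(p):
    """Approximate the full field for a new parameter vector p."""
    c = np.hstack([1.0, p]) @ beta                              # predicted POD coefficients
    return modes @ c                                            # back to pixel space

err = np.linalg.norm(predict_field(params[0]) - fields[0]) / np.linalg.norm(fields[0])
print(f"relative reconstruction error on a training snapshot: {err:.3e}")
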
Application Problems and Outline

The ultimate application to which we want to apply the Bayesian approach is the identification of the orthotropic elastic constants of a composite material from full field displacement measurements from a tensile test on a plate with a hole. From a mechanical point of view this application has multiple advantages. The heterogeneous strain field indeed allows identifying all four constants from this single experiment, unlike traditional standard tests, which identify only one or two properties at a time. It can also provide information on the variability of the properties from point to point in the plate. The experiment on the plate with a hole has been recently used for identification within a classical least squares framework (Silva 2009) and led to very promising results as to the validity of the technique.

From a Bayesian identification point of view this final application is, however, extremely challenging. Both issues associated with Bayesian identification mentioned in points one and two above are simultaneously present. First, the application involves an expensive finite element model for calculating the strain field. This requires cost reduction to be able to obtain the required number of function evaluations. Second, the full field technique produces tens of thousands of measurement points. This requires dimensionality reduction to achieve computationally tractable joint probability density functions.

Before taking up the challenge on this final application we investigate various dimensionality reduction methods on a series of less demanding problems. Dimensionality reduction in the context of response surface approximations is very relevant to many optimization problems as well. Indeed, RSA are often used in optimization on complex problems, and reducing their dimensionality can be beneficial at two levels: first for RSA construction cost reasons, and second because optimization algorithms are less efficient in high dimensions.

In Chapter 2 we demonstrate efficient use of response surface methodology and dimensionality reduction techniques first on a problem of optimal material selection for an integrated thermal protection system (ITPS). On this problem we investigate how physical reasoning, nondimensionalization and global sensitivity analysis can reduce the dimensionality of the problem.

We then move to Bayesian identification and gradually consider problems of increasing complexity. In Chapter 3 we look at a simple three bar truss problem, with two objectives in mind. First, the example allows us to introduce the Bayesian approach to identification didactically; second, it aims, in combination with a simple vibration based identification problem, at identifying general situations in which the Bayesian method has a significant advantage in terms of accuracy over classical identification techniques such as least squares.

Next we move in Chapters 4 and 5 to the Bayesian identification of the four orthotropic elastic constants from natural frequencies of a free plate. This is a problem where we use actual experimental measurements and which provides the challenge of point 1 at full scale. Finite element simulations are indeed too expensive to be used here, so a drastic computational cost reduction is required. We first investigate in Chapter 4 the applicability of analytical approximate formulas for the natural frequencies and compare them with nondimensional response surface approximations constructed from finite element simulations. Bayesian identification of the joint probability density function of the four orthotropic elastic constants is then carried out in Chapter 5.

In Chapters 6 and 7 we tackle the final problem of identification from full field measurements on a plate with a hole, which presents both challenges at full scale. We first investigate in Chapter 6 the dimensionality reduction of the full fields by the proper orthogonal decomposition method. Then in Chapter 7 we apply the Bayesian identification approach, first on a simulated experiment, then on a Moiré interferometry experiment that we carried out.

Finally, in Chapter 8 we provide concluding remarks and possible future work items.

CHAPTER 2
DIMENSIONALITY REDUCTION FOR RESPONSE SURFACE APPROXIMATIONS APPLIED TO THERMAL DESIGN

Introduction

Dimensional analysis is a concept several hundred years old, going as far back as Galilei (1638). This concept has its roots in a very simple idea: the solution to a physical problem has to be independent of the units used. This means that the equations modeling a problem can always be written in a nondimensional form. In the process, nondimensional parameters are constructed, which, when done appropriately, are the minimum number of variables required to formulate the problem. These basic concepts turned out to be very powerful, and throughout the past century dimensional analysis has been extremely successful for solving scientific and engineering problems and for presenting results in a compact form. The first theoretical foundations of dimensional analysis were set by Vaschy (1892) and Buckingham (1914) at the end of the 19th century. Since then and up to the 1960s nondimensional solutions have been a major form of transmitting knowledge among scientists and engineers, often in the form of graphs in nondimensionalized variables. Then, the advent of widely available numerical simulation software and hardware made it easier to obtain solutions to physical problems without going through nondimensionalization. This led to reduced interest in dimensional analysis, except for reduced scale modeling and in areas where nondimensional parameters have a strong physical interpretation and allow us to differentiate between regimes of different numerical solution techniques (Mach number, Reynolds number, etc.).

With the increase in computational power, numerical simulation techniques such as finite element analysis (FEA) became feasible not only for single engineering design analyses but also in design optimization, often in conjunction with the use of surrogate models, also known as response surface approximations (RSA).

Surrogate models seek to approximate the response of a process based on the knowledge of the response at a limited number of points (points called the design of experiments). A typical surrogate model is the polynomial response surface, which fits a polynomial to a small number of simulations of the process so as to minimize the least squares error between the response surface prediction and the actual simulated values. For a more in-depth description of polynomial response surface approximations the reader can refer to Appendix A. Other types of surrogate models include kriging (Matheron 1963, Sacks et al. 1989), radial basis neural networks (Orr 1996, 1999) and support vector regression (Vapnik 1998), among others.

Surrogate models, however, suffer from the curse of dimensionality, i.e., the number of experiments needed for a surrogate of a given accuracy increases exponentially with the number of dimensions of the problem. This issue can generally be attacked in two ways: one is by reducing the cost of a single analysis, thus allowing a large number of analyses to be run for the RSA construction; the other is to reduce the number of design variables, thus reducing the dimensionality of the problem. Many techniques, often referred to as model reduction, have been developed to deal with these issues, including static and dynamic condensation, modal coordinate reduction, the Ritz vector method, component mode synthesis, proper orthogonal decomposition and balanced realization reduction. An excellent overview of these techniques can be found in the book by Qu (2004).

A simple, yet relatively little used, way of reducing the dimensionality of surrogates or response surface approximations is to apply dimensional analysis to the equations of the physical problem that the finite element (FE) model describes. Kaufman et al. (1996), Vignaux and Scott (1999) and Lacey and Steele (2006) show that better accuracy of the RSA can be obtained by using nondimensional variables. This is mainly because, for the same number of numerical simulations, the generally much fewer nondimensional variables allow a fit with a higher order polynomial. Vignaux and Scott (1999) illustrated such a method using statistical data from a survey, while Lacey and Steele (2006) applied the method to several engineering case studies including an FE based example.

Venter et al. (1998) illustrated how dimensional analysis can be used to reduce the number of variables of an RSA constructed from FE analyses modeling a mechanical problem of a plate with an abrupt change in thickness. The dimensional analysis was done directly on the governing equations and the boundary conditions that the FEA solved, reducing the number of variables from nine to seven.

Dimensional analysis can be used to reduce the number of variables in any FE based model. Indeed FEA models an underlying set of explicit equations (ordinary or partial differential equations, boundary conditions and initial conditions). These equations, whether coming from mechanical, thermal, fluid or other problems, can be nondimensionalized in a systematic way using the Vaschy-Buckingham theorem, also known as the Pi theorem (Vaschy 1892, Buckingham 1914). Systematic nondimensionalization techniques are also described in Rieutord (1985), Sonin (1997) and Szirtes (1997).

Although dimensional analysis is a natural tool to reduce the number of variables through which a problem has to be expressed, an even higher reduction can be obtained if it is combined with other analytical and numerical techniques. The aim of this chapter is to show that, through a combination of physical reasoning, dimensional analysis and global sensitivity analysis, a drastic reduction in the number of variables needed for an RSA is possible.

The basic idea is the following: even after nondimensionalization, it is still possible to end up with nondimensional parameters that have only a marginal influence on the quantity of interest for the design problem considered. Identifying and removing these parameters can further reduce the total number of variables. This can be done at two moments. Before nondimensionalization, physical reasoning can allow formulating a set of assumptions that simplify the equations of the problem. After nondimensionalization, a global sensitivity analysis, e.g. Sobol (1993), can be used to fix any remaining parameters with negligible effects.

In the next section we present the general methodology for reducing the number of variables in a response surface approximation. In the rest of the chapter we apply the method to solve a transient thermal problem of spacecraft atmospheric reentry wherein the maximum temperature attained is critical. First we describe the thermal problem of atmospheric reentry and the corresponding FE model used in the analysis. Dimensional analysis on a simplified problem, in conjunction with global sensitivity analysis, is used next to reduce the number of variables. The RSA is constructed using the accurate FE model, and the ability of the RSA to account for all the variables of interest to the problem is tested. We then discuss advantages of the procedure in terms of computational cost. Finally we give a brief overview of how the RSA was used to carry out a material comparison and selection for the design and optimization of an integrated thermal protection system (ITPS). We close the chapter by presenting concluding remarks.

Methodology for Reducing the Number of Variables in an RSA

We consider the general problem in which we are interested in the response Y of a finite element (FE) problem denoted by S. The response Y potentially depends on s parameters of interest, denoted w_s = {w_1, ..., w_s}. We consider the case where a response surface approximation (RSA) of Y is needed. If s is high (>10), then it can be beneficial to seek to construct the RSA in a lower dimension space. Indeed an RSA in a lower dimension space reduces the computational cost (number of simulations required) for a fixed accuracy, or improves the accuracy for a fixed computational cost.

A low dimension is also preferable especially if the RSA is later used for optimization. In order to construct the RSA as a function of a small number of parameters we use the following procedure, involving three major steps.

(i.) Using preliminary physical reasoning we can often determine that only r out of the s initial parameters (r <= s) significantly affect the response Y. Indeed in many engineering problems it is known, based on empirical, theoretical or numerical evidence, that some parameters have little effect on the response for the particular problem considered. Different choices for the numerical model or the use of homogenization can also simplify the problem. The simplified problem involving only w_r = {w_1, ..., w_r} is denoted by S*.

Sometimes a designer might not have enough domain expertise to formulate all the simplifying assumptions through physical reasoning. If little or nothing is known in advance that can help simplify the problem, this step can then be aided by a global sensitivity analysis (GSA) as described by Sobol (1993). GSA is a variance based technique, quantifying the part of the variance of the response explained by each parameter, thus determining the parameters that have negligible effects. However, the GSA can only be carried out if the computational cost does not become prohibitive. If nothing works it is always possible to go directly to step (ii.). The aim of step (i.), when successful, is to define the simplified problem S*, which will facilitate the next step, the nondimensionalization.

(ii.) In this step we further reduce the number of variables by determining the nondimensional parameters characterizing the problem. The dimensional problem S* can indeed be transformed into an equivalent nondimensional problem using the Vaschy-Buckingham theorem (Vaschy 1892, Buckingham 1914).

Systematic nondimensionalization techniques are provided in Rieutord (1985) and Sonin (2001). We can then express the nondimensional response of the problem as a function of the nondimensional parameters pi_q = {pi_1, ..., pi_q}. According to the Vaschy-Buckingham theorem, q <= r. Note that the nondimensional problem is equivalent to S*, so no additional approximation is involved in this step. However, since its response solves a problem equivalent to S*, it will only provide an approximate solution to the initial problem S.

(iii.) Out of the q nondimensional parameters that we have determined in step (ii.), not all will necessarily have a significant influence on the response. To determine and fix parameters with negligible influence we carry out in this step a global sensitivity analysis (GSA) (cf. Sobol 1993). After such parameters have been fixed we can write the nondimensional response approximately as a function of pi_f = {pi_1, ..., pi_f}, with f <= q.

At the conclusion of the process we have f <= q <= r <= s. The case with equality everywhere, while theoretically possible, is extremely unlikely for an actual engineering problem, and hopefully after these three steps f is significantly smaller than s. At this point we have determined that the nondimensional response of the simplified problem depends approximately on the parameters pi_f only. However our final aim is to construct a response surface approximation of Y, the actual response, and not of the response of the approximate problem. Accordingly, we chose to construct an RSA Ŷ of Y, but as a function of the reduced number of nondimensional parameters pi_f. That is, even though we made simplifying assumptions and a GSA to determine pi_f, we construct the RSA as a function of these f parameters but using simulations of Y coming from the initial, nonsimplified FE model of S. This allows part of the error induced by constructing the RSA as a function of pi_f instead of w_s to be compensated by fitting to the actual nonsimplified FE simulations of Y.

The sampling for the RSA simulations is done in the pi_f space. The RSA Ŷ = f(pi_f) is then constructed, and the quality of its fit can be analyzed using classical techniques (the prediction sum of squares (PRESS) error, for example; Allen 1971, Myers and Montgomery 2002). Note however that these analyses mainly assess the quality of the fit in the reduced nondimensional variables pi_f, but not in the initial variables w_s. To remedy this, an additional validation step can be carried out. A number of additional points are sampled in the initial, high-dimensional w_s space. The FE response Y is calculated at these points and compared to the prediction of the reduced nondimensional RSA Ŷ to make sure the accuracy of the RSA is acceptable.

In the rest of the chapter we show how we applied this procedure to a transient thermal problem of spacecraft atmospheric reentry, for which a response surface approximation of the maximum temperature was required. Note that the application problem presented is a 1D heat transfer problem. However, the general method described can be applied to 2D or 3D finite element problems as well. Steps (i.) and (iii.) are not affected much by moving from 1D to 3D models, other than perhaps through increased computational cost. Nondimensionalizing the governing equations of the problem in step (ii.) may be slightly more complex. However, while for 1D problems nondimensionalization is simple enough to often be applied by hand, there are systematic nondimensionalization techniques (e.g. Rieutord 1985, Sonin 2001) that can be applied to any governing equations and boundary conditions.

A final note concerns the application of the nondimensional RSA in a design optimization framework. Since the RSA is in terms of nondimensional parameters, these could be chosen as variables for the optimization algorithm. This is however often a bad choice, since it is often difficult to move from a point (the optimum, for example) in the nondimensional variables space to the corresponding design point in the physical, dimensional variables space.

A better choice in this case is to do the optimization in terms of the dimensional variables. A typical function evaluation step in the optimization routine would then look as follows: take the dimensional variables point at which the response is required, calculate the corresponding nondimensional variables for this point, then calculate the response at this point using the nondimensional RSA. While this may leave a large number of design variables, that is usually affordable because surrogate-based function evaluations are inexpensive.

ITPS Atmospheric Reentry Application Problem

An Integrated Thermal Protection System (ITPS) is a proposed spacecraft system that differs from traditional TPS in that it provides not only thermal insulation to the vehicle during atmospheric reentry but at the same time carries structural loads. Thus the thermal protection function is integrated with the structural function of the spacecraft. Our study involves an ITPS based on a corrugated core sandwich panel construction. The design of such an ITPS involves both thermal and structural constraints. In the present study we focus on the thermal constraint, represented by the maximum temperature of the bottom face sheet (BFS) of the ITPS panel. The combined thermo-structural approach is presented in Bapanapalli et al. (2006) and Gogu et al. (2007a). A response surface approximation (RSA) of the maximum BFS temperature was needed in order to reduce computational time. The RSA is used in order to carry out material selection for the ITPS panel.

In order to calculate the maximum BFS temperature we constructed a finite element (FE) model using the commercial FE software Abaqus. The corrugated core sandwich panel design as well as the thermal problem of atmospheric reentry is shown in Figure 2-1. The ITPS panel is subject to an incident heat flux assumed to vary as shown in Figure 2-2. This heat flux is typical of a reusable launch vehicle (RLV).

Figure 2-1. Corrugated core sandwich panel depicting the thermal boundary conditions and the geometric parameters (face sheet thicknesses d_T and d_B, web thickness d_W, core thickness d_C, panel thickness L, unit cell length p and corrugation angle θ; the TFS receives the incident heat flux with radiation and convection, the core is filled with Saffil foam, and the BFS is perfectly insulated).

Radiation is also modeled on the top face sheet (TFS), while the BFS is assumed perfectly insulated, which is a worst case assumption, since if heat could be removed from the BFS the maximum temperature would decrease, becoming less critical. The core of the sandwich panel is assumed to be filled with Saffil foam insulation, while we explore different materials for the three main sections, top face sheet (TFS), bottom face sheet (BFS) and web (cf. Figure 2-1), in order to determine the combinations of materials that result in low BFS temperatures.

Figure 2-2. Incident heat flux (solid line, W/cm²) and convection coefficient (dash-dot line, W/(m²·K)) profiles on the TFS surface as a function of reentry time (s). Stage 1: heat influx and radiation; stage 2: no heat influx, but radiation and convection; the analysis ends at t = 4500 s.

Finite Element Model of the Thermal Problem

The FE thermal problem is modeled as a one dimensional heat transfer analysis, as shown in Figure 2-3. The core of the sandwich panel has been homogenized using the rule of mixtures formulae given below.

ρ_C = (ρ_W V_W + ρ_S V_S) / V_C = [ρ_W d_W + ρ_S (p sin θ − d_W)] / (p sin θ)    (2-1)

C_C = (ρ_W C_W V_W + ρ_S C_S V_S) / (ρ_C V_C) = [ρ_W C_W d_W + ρ_S C_S (p sin θ − d_W)] / [ρ_W d_W + ρ_S (p sin θ − d_W)]    (2-2)

k_C = (k_W A_W + k_S A_S) / A_C = [k_W d_W + k_S (p sin θ − d_W)] / (p sin θ)    (2-3)

where ρ stands for density, C for specific heat, k for conductivity, d for thickness, θ for the corrugation angle and p for the length of a unit cell (cf. Figure 2-1). Symbol A stands for the area of the cross-section through which the heat flows and V for the volume of each section. The subscripts C, S and W stand for the homogenized core, the Saffil foam and the structural web sections, respectively.

Figure 2-3. 1D heat transfer model representation using homogenization (not to scale): incident heat flux Q_i(t) with radiation and convection at x = 0 (TFS side), then the TFS, homogenized core and BFS material properties through the thickness, with an insulated boundary at x = L (BFS side).

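A direct transcription of Equations 2-1 to 2-3 is sketched below; the property values in the example call are placeholders chosen only to exercise the formulas, not data from the dissertation.

# Rule-of-mixtures homogenization of the corrugated core, Eqs 2-1 to 2-3.
import math

def homogenize_core(rho_w, c_w, k_w, rho_s, c_s, k_s, d_w, p, theta_deg):
    """Return (rho_C, C_C, k_C) of the homogenized core."""
    s = p * math.sin(math.radians(theta_deg))   # p*sin(theta), unit cell height basis
    rho_c = (rho_w * d_w + rho_s * (s - d_w)) / s                        # Eq 2-1
    c_c = (rho_w * c_w * d_w + rho_s * c_s * (s - d_w)) / (rho_c * s)    # Eq 2-2
    k_c = (k_w * d_w + k_s * (s - d_w)) / s                              # Eq 2-3
    return rho_c, c_c, k_c

# illustrative placeholder values (web and Saffil properties, unit cell of Table 2-1)
print(homogenize_core(rho_w=2500.0, c_w=500.0, k_w=2.0,
                      rho_s=50.0, c_s=1000.0, k_s=0.1,
                      d_w=0.0031, p=0.117, theta_deg=87.0))
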
It has been shown by Bapanapalli et al. (2006) that a one dimensional FE model can accurately predict the temperature at the bottom face sheet of the sandwich panel. The maximum difference in the BFS temperature prediction between the 1D model and a 2D model is typically less than 8 K. For this preliminary design phase of the ITPS, this difference is acceptable. Radiation, convection and the incident heat flux (as shown in Figures 2-1 and 2-2) were modeled in the Abaqus 1D model using four steps (three for stage one of Figure 2-2 and one for stage two). Fifty-four three-node heat transfer link elements were used in the transient analyses.

For this one-dimensional thermal model the governing equations and boundary conditions are as follows:

Heat conduction equation:
∂/∂x [ k(x,T) ∂T(x,t)/∂x ] = ρ(x,T) C(x,T) ∂T(x,t)/∂t    (2-4)

Initial condition:
T(x, t=0) = T_i    (2-5)

Boundary conditions:
Q(x=0, t) = −k ∂T(x,t)/∂x |_(x=0) = Q_i(t) − ε T⁴(0,t) − h(t) T(0,t)    (2-6)

Q(x=L, t) = 0    (2-7)

where ρ is the density, k the thermal conductivity and C the specific heat of the ITPS panel. T_i is the initial temperature of the panel before atmospheric reentry and ε the emissivity of the TFS, while Q_i(t) is the heat influx and h(t) the convection coefficient at the TFS, which vary with reentry time as shown in Figure 2-2. Most of the material properties are temperature dependent and, due to the different materials in the different ITPS sections, most material properties also depend on the position x. The temperature and spatial dependency make nondimensionalization of the previous equations cumbersome. Furthermore, these dependencies increase the number of nondimensional parameters needed, which is contrary to our goal. Accordingly, the thermal problem is studied in the next section under several assumptions that allow easier nondimensionalization of the equations as well as a reduction in the number of variables.

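The sketch below is an illustrative explicit finite-difference solution of the 1D problem of Equations 2-4 to 2-7 with constant properties; the dissertation instead uses an Abaqus FE model with temperature and position dependent properties, and all numerical values below are placeholder assumptions.

# Explicit finite-difference sketch of the 1D transient problem (Eqs 2-4 to 2-7),
# with constant properties. Illustrative only.
import numpy as np

def max_bfs_temperature(k, rho_c, L, emis, t_end, q_in, h_conv,
                        T_init=300.0, nx=55, dt=0.05):
    """Return the maximum bottom-face-sheet temperature T(x=L, t)."""
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]
    T = np.full(nx, T_init)
    alpha = k / rho_c                                  # diffusivity, k/(rho*C)
    assert dt < 0.5 * dx**2 / alpha, "explicit scheme stability limit"
    t, T_bfs_max = 0.0, T_init
    while t < t_end:
        Tn = T.copy()
        # interior nodes: Eq 2-4 with constant properties
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2*Tn[1:-1] + Tn[:-2])
        # top face (x = 0): energy balance on a half cell, Eq 2-6
        q0 = q_in(t) - emis * Tn[0]**4 - h_conv(t) * Tn[0]
        T[0] = Tn[0] + dt / (rho_c * dx/2) * (q0 + k * (Tn[1] - Tn[0]) / dx)
        # bottom face (x = L): insulated, Eq 2-7
        T[-1] = Tn[-1] + dt / (rho_c * dx/2) * (k * (Tn[-2] - Tn[-1]) / dx)
        t += dt
        T_bfs_max = max(T_bfs_max, T[-1])
    return T_bfs_max

# placeholder reentry profiles, loosely shaped like Figure 2-2 (not the actual data)
q_profile = lambda t: 4.0e4 if t < 2175.0 else 0.0     # W/m^2
h_profile = lambda t: 0.0 if t < 2175.0 else 6.5       # W/(m^2 K)

print(max_bfs_temperature(k=0.2, rho_c=8.0e4, L=0.12, emis=0.8 * 5.67e-8,
                          t_end=4500.0, q_in=q_profile, h_conv=h_profile))
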
Minimum Number of Parameters for the Temperature Response Surface

Simplifying Assumptions for the Thermal Problem

Our goal for the ITPS study is to determine which materials are the best for use in the ITPS panel, based on the maximum BFS temperature. Considering that the expected range of this temperature when the materials are varied is about 250 K, an approximation of the temperature with an accuracy of the order of 12.5 K (5%) is considered acceptable for the purpose of material selection.

The thermal model presented in the previous section involves 13 material parameters (the specific heats C_i, conductivities k_i and densities ρ_i of the TFS, BFS, web and Saffil, as well as the emissivity ε of the TFS), of which most are temperature dependent. Some of these parameters are considered fixed, including ε as well as all the foam parameters (Saffil has been determined in previous studies by Blosser et al. (2004) and Poteet et al. (2004) to be the best suited foam for use in similar thermal protection systems). Note that the emissivity ε is defined as the relative emissivity times the Stefan-Boltzmann constant. The relative emissivity of the TFS depends more on surface treatments than on the nature of the TFS material (thus a typical value for this kind of application of 0.8 was used, cf. Poteet et al. (2004) and Myers et al. (2000)). Fixing these parameters leaves 9 material variables. Describing the temperature dependency of the material properties would increase this number further.

On top of the 9 material parameters, we also have 6 geometric design variables (cf. Figure 2-1) we use to find the optimal geometry for each material combination. In total we have 15 variables of interest for the maximum BFS temperature determination.

In order to reduce the number of design variables, the equations were studied under several simplifying assumptions that removed parameters that have a negligible role on the maximum BFS temperature.

These assumptions have been established and checked on a Nextel (TFS) Zirconia (Web) Aluminum (BFS) ITPS material combination having the dimensions given in Table 2-1. The assumptions are:

1. The three thermal properties of the TFS (C_T, k_T and ρ_T) have negligible impact on the maximum BFS temperature, mainly due to the small thickness of the TFS (about 2.2 mm compared to a total ITPS thickness of about 120 mm). This assumption allowed removing C_T, k_T, ρ_T and d_T from the relevant parameters influencing the BFS temperature.

2. The temperature is approximately constant through the BFS, because the BFS thickness is small (typically 5 mm compared to a total ITPS thickness of 120 mm) and its conductivity is about one order of magnitude higher than that of the homogenized core. This assumption allowed removing k_B and simplifying the boundary condition at the BFS.

3. The temperature dependence of the material properties has been simplified as follows. In the FE model temperature dependence has been included for all materials, but the largest dependence was for the Saffil foam. Hence in the simplified problem, TFS, web and BFS materials were assigned constant properties provided by the CES Selector (Granta Design 2005) material database. For Saffil the material properties were assigned values at a representative temperature, chosen so as to minimize the difference between the maximum BFS temperature when using the constant values and the one when using temperature dependent values, for an ITPS design with the dimensions given in Table 2-1 and a Nextel (TFS) Zirconia (Web) Aluminum (BFS) material combination. The effects of varying the materials were then found to be small enough to use this constant value for the range of materials we consider (detailed results of the numerical tests are provided in the following sections).

These assumptions reduced the number of relevant material parameters from 15 to 10 and also simplified the problem so that it can be easily nondimensionalized, as will be shown next.

Table 2-1. Dimensions of the ITPS (see also Figure 2-1) used, among others, to establish the simplifying assumptions. These dimensions were optimal for an Inconel (TFS) Ti-6Al-4V (Web) Al (BFS) ITPS (cf. Bapanapalli et al. 2006).
Parameter   d_T (mm)   d_B (mm)   d_W (mm)   θ (deg)   L (mm)   p (mm)
Value       2.1        5.3        3.1        87        120      117

Nondimensionalizing the Thermal Problem

Under the previous simplifying assumptions the thermal problem is equivalent to the one shown in Figure 2-4, and its equations can be rewritten as follows.

Figure 2-4. Simplified thermal problem for dimensional analysis: homogenized core (ρ_C, C_C, k_C) between x = 0, where the incident flux Q_i and the radiation and convection losses are applied, and x = d_C, where the heat flux Q_out enters the BFS.

Heat conduction equation:
k_C ∂²T(x,t)/∂x² = ρ_C C_C ∂T(x,t)/∂t,   for 0 ≤ t ≤ t_end    (2-8)

Initial condition:
T(x, t=0) = T_i    (2-9)

Boundary conditions:
Q_out = −k_C ∂T(x,t)/∂x |_(x=d_C) = ρ_B C_B d_B ∂T(x,t)/∂t |_(x=d_C)    (2-10)

Q_in = −k_C ∂T(x,t)/∂x |_(x=0) = Q_i(t) − ε T⁴(0,t) − h(t) T(0,t)    (2-11)

where d_C and d_B are the thicknesses of the homogenized core and of the BFS, respectively; t_end is the duration of the heat influx; ρ_C, C_C and k_C are the density, specific heat and conductivity of the homogenized core; ρ_B and C_B those of the BFS.

In order to nondimensionalize these equations, we use the Vaschy-Buckingham or Pi theorem (Vaschy 1892, Buckingham 1914), which also provides the minimum number of nondimensional variables. The theorem states that we have to count the total number of variables and the corresponding number of dimensional groups. The variables and the groups are listed in Table 2-2.

Table 2-2. Dimensional groups for the thermal problem.
Variable   T    T_i   x    d_C   t    t_end
Unit       K    K     m    m     s    s
Variable   k_C        ρ_C C_C       ρ_B C_B d_B   ε            Q_i     h
Unit       W/(m·K)    W·s/(m³·K)    W·s/(m²·K)    W/(m²·K⁴)    W/m²    W/(m²·K)

We have a total of 12 variables in 4 independent dimensional groups, namely length, time, temperature and power (m, s, K, W). From the Vaschy-Buckingham theorem we know that we can have a minimum of 12 − 4 = 8 nondimensional variables, which are provided in Equations 2-12 to 2-19.

T* = T / T_i    (2-12)

x* = x / d_C    (2-13)

t* = t / t_end    (2-14)

α = k_C t_end / (ρ_C C_C d_C²)    (2-15)

ν = ρ_B C_B d_B / (ρ_C C_C d_C)    (2-16)

β = ε d_C T_i³ / k_C    (2-17)

γ(t*) = d_C Q_i(t) / (k_C T_i)    (2-18)

Bi(t*) = h(t) d_C / k_C    (2-19)

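The 12 − 4 = 8 count can be checked mechanically: build the matrix of unit exponents of the 12 variables of Table 2-2 over the base dimensions (m, s, K, W) and compute its rank. The sketch below does exactly that; it is only a verification of the counting argument, not part of the original derivation.

# Pi-theorem bookkeeping: rank of the dimension matrix of the 12 variables
# of Table 2-2 over the base units (m, s, K, W).
import numpy as np

#                 m   s   K   W
dim = np.array([[ 0,  0,  1,  0],   # T
                [ 0,  0,  1,  0],   # T_i
                [ 1,  0,  0,  0],   # x
                [ 1,  0,  0,  0],   # d_C
                [ 0,  1,  0,  0],   # t
                [ 0,  1,  0,  0],   # t_end
                [-1,  0, -1,  1],   # k_C            W/(m K)
                [-3,  1, -1,  1],   # rho_C C_C      W s/(m^3 K)
                [-2,  1, -1,  1],   # rho_B C_B d_B  W s/(m^2 K)
                [-2,  0, -4,  1],   # epsilon        W/(m^2 K^4)
                [-2,  0,  0,  1],   # Q_i            W/m^2
                [-2,  0, -1,  1]])  # h              W/(m^2 K)

rank = np.linalg.matrix_rank(dim)
print(f"{dim.shape[0]} variables, rank {rank} -> {dim.shape[0] - rank} nondimensional groups")
# expected: 12 variables, rank 4 -> 8 nondimensional groups
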
In terms of these nondimensional variables the simplified thermal problem can be written in the following nondimensional form:

Heat conduction equation:
α ∂²T*(x*,t*)/∂x*² = ∂T*(x*,t*)/∂t*,   for 0 ≤ t* ≤ 1    (2-20)

Initial condition:
T*(x*, t*=0) = 1    (2-21)

Boundary conditions:
−α ∂T*/∂x* |_(x*=1) = ν ∂T*/∂t* |_(x*=1)    (2-22)

−∂T*/∂x* |_(x*=0) = γ(t*) − β T*⁴(0,t*) − Bi(t*) T*(0,t*)    (2-23)

The complete set of nondimensional variables needed for the problem is given in Equations 2-12 through 2-19. Note that the nondimensional formulation of the equations was carried out only to determine a reduced number of nondimensional parameters as a function of which the response surface approximation could be expressed. The nondimensional formulation is not used for solving the problem, nor do the finite element simulations use this formulation directly.

The nondimensional temperature T* can be expressed as a function of the nondimensional distance x* and the nondimensional time t*, as well as of five other nondimensional parameters. Since the maximum BFS temperature occurs at a fixed location and we are not interested in the time at which this maximum occurs, the nondimensional distance and the nondimensional time are not needed for the maximum BFS temperature RSA.

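As a quick check of Equation 2-20 (a worked substitution added here for clarity), insert T = T_i T*, x = d_C x* and t = t_end t* into Equation 2-8:

    k_C (T_i / d_C²) ∂²T*/∂x*² = ρ_C C_C (T_i / t_end) ∂T*/∂t*,

and divide both sides by ρ_C C_C T_i / t_end to obtain

    [k_C t_end / (ρ_C C_C d_C²)] ∂²T*/∂x*² = α ∂²T*/∂x*² = ∂T*/∂t*.

The boundary conditions 2-22 and 2-23 follow in the same way from Equations 2-10 and 2-11.
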
The physical interpretation of the remaining five nondimensional parameters in Equations 2-15 to 2-19 is the following. α, the Fourier number or a nondimensional thermal diffusivity, is the ratio of the rate of heat conduction to the rate of heat storage (thermal energy storage) of the homogenized core. ν is the ratio of the heat capacity of the BFS to the heat capacity of the homogenized core, and β is the ratio between the rate of radiation and the rate of heat conduction. γ is the ratio of the incident heat flux to the rate of heat conduction, or can be seen as a nondimensional heat flux. Finally Bi, the Biot number, is the ratio of the rate of convection to the rate of heat conduction.

We can note that the three nondimensional parameters β, γ and Bi are all proportional to d_C/k_C, while all the other quantities defining β, γ and Bi are fixed in our study. Indeed we are only interested in varying the materials and the geometry, but the initial temperature T_i, the emissivity ε, the incident heat flux profile Q_i(t) and the convection film coefficient profile h(t) are all fixed in the present study (cf. Figure 2-2 for the profiles of Q_i(t) and h(t) used). This means that for our purpose we can consider only one of these three nondimensional parameters: β, for example.

Summing up, simplifying assumptions together with dimensional analysis allowed us to determine that we can construct a response surface approximation of the maximum BFS temperature as a function of the three parameters α, ν and β. An initial 3rd degree polynomial response surface in these three parameters was constructed from 40 finite element simulations using Latin hypercube sampling and used for a global sensitivity analysis, according to Sobol's approach (Sobol 1993). We found that α and ν account for 35.7% and 64.1% of the model variance, respectively, while β accounts for only 0.06% of the model variance. The 0.06% corresponds to 9.3 K, which is within the 12.5 K accuracy requirement we set ourselves for the material selection. Note that the GSA was made on an approximation of the response. This is reasonable since in the next section we check the validity of the approach in more detail.

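The sketch below illustrates the pick-freeze (Saltelli-type) Monte Carlo estimator of first-order Sobol indices on which such a GSA relies. The three-parameter response used here is a stand-in analytical function, not the actual response surface of the maximum BFS temperature, and the third parameter range is an assumed placeholder.

# First-order Sobol sensitivity indices by the pick-freeze Monte Carlo estimator.
import numpy as np

def first_order_sobol(f, bounds, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds).T
    A = lo + (hi - lo) * rng.random((n, d))
    B = lo + (hi - lo) * rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # freeze all columns except i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var     # Saltelli (2010) estimator
    return S

# stand-in response in (alpha, nu, beta); the real study used the 3-parameter PRS
def toy_response(X):
    a, n_, b = X[:, 0], X[:, 1], X[:, 2]
    return 400.0 + 120.0 * n_ - 150.0 * a + 40.0 * a * n_ + 0.5 * b

print(first_order_sobol(toy_response, bounds=[(0.1, 0.5), (0.6, 2.4), (0.5, 2.0)]))
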
From a physical point of view, the fact that β has a negligible role can be explained as follows. β is proportional to d_C/k_C, which is also present in α. That means that if we want to change β while keeping α constant, we need to also modify t_end (t_end being the only variable in α that appears neither in β nor in ν). If we increase β by decreasing k_C, we need to also increase t_end by a certain amount to keep α constant. Decreasing k_C has the effect of lowering the BFS temperature, while increasing t_end has the effect of making it higher. From the global sensitivity analysis it turns out that these two effects cancel out, which explains why β has very little impact. Note that both d_C and k_C are relevant to the problem and neither of them could be neglected. Parameter β, which is proportional to d_C/k_C, turns out however to have negligible impact based on the GSA results. This is because both d_C and k_C appear in the remaining two nondimensional parameters α and ν.

A summary of the procedure used to reduce the number of variables needed for constructing the RSA from 15 to 2 is given in Figure 2-5.

The reduction obtained is higher than what could have been achieved by applying any single technique. The use of only simplifying assumptions based on physical reasoning allows a reduction from 15 to 10 variables. Global sensitivity analysis (GSA) alone on the initial problem, which can be seen as an initial screening of the variables, would have allowed a reduction from 15 to 9 variables: such a GSA showed that the parameters d_T, k_T, ρ_T, C_T, k_B and ε have negligible effects. Applying the GSA on the original problem is indeed almost equivalent to the simplifying assumptions we had arrived at based on the physical understanding of the problem, which is not surprising. Depending on the complexity of the temperature dependency of the material properties, dimensional analysis alone could have achieved a maximum reduction of 15 to 8 (this reduction corresponds to the case where no temperature dependency is considered). So we can see that none of the three techniques alone (simplifying assumptions, nondimensionalization, GSA) could have achieved the reduction from 15 to 2 obtained by combining all three.

Figure 2-5. Summary of the dimensionality reduction procedure for the ITPS problem.

Maximum BFS Temperature RSA

A response surface approximation (RSA) in the two nondimensional parameters α and ν was now constructed. We chose to construct a 3rd degree polynomial response surface (PRS) in α and ν. For additional details on polynomial response surface modeling and construction the reader can refer to Appendix A. The two variables α and ν account for the thermal material properties (C_i, k_i, ρ_i) as well as for the geometric parameters of the ITPS panel (d_i, p, L, θ), as shown in Equations 2-24 and 2-25. These equations were obtained by substituting the expressions of ρ_C, C_C and k_C from Equations 2-1 to 2-3 back into Equations 2-15 and 2-16 defining the nondimensional parameters α and ν.

α = [k_W d_W + k_S (p sin θ − d_W)] t_end / { (L − 0.5 d_T − 0.5 d_B)² [ρ_W C_W d_W + ρ_S C_S (p sin θ − d_W)] }    (2-24)

ν = ρ_B C_B d_B p sin θ / { (L − 0.5 d_T − 0.5 d_B) [ρ_W C_W d_W + ρ_S C_S (p sin θ − d_W)] }    (2-25)

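Equations 2-24 and 2-25 translate directly into a small helper that maps a point of the physical design space to the (α, ν) plane. The Saffil foam properties and t_end in the example call are placeholder values used only for illustration.

# Nondimensional parameters alpha and nu from the physical variables, Eqs 2-24 and 2-25.
import math

def alpha_nu(k_w, rho_w, c_w, k_s, rho_s, c_s, rho_b, c_b,
             d_t, d_b, d_w, theta_deg, L, p, t_end):
    s = p * math.sin(math.radians(theta_deg))
    d_c = L - 0.5 * d_t - 0.5 * d_b                      # homogenized core thickness
    cap = rho_w * c_w * d_w + rho_s * c_s * (s - d_w)    # core heat capacity term
    alpha = (k_w * d_w + k_s * (s - d_w)) * t_end / (d_c**2 * cap)   # Eq 2-24
    nu = rho_b * c_b * d_b * s / (d_c * cap)                         # Eq 2-25
    return alpha, nu

# placeholder inputs within the Table 2-3 bounds (Saffil values assumed separately)
print(alpha_nu(k_w=2.0, rho_w=2500.0, c_w=500.0, k_s=0.1, rho_s=50.0, c_s=1000.0,
               rho_b=2700.0, c_b=900.0, d_t=0.0021, d_b=0.0053, d_w=0.0031,
               theta_deg=87.0, L=0.120, p=0.117, t_end=2175.0))
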
Note that a given (α, ν) point can be obtained by a multitude of combinations of the 15 individual variables (the 9 material properties and 6 geometric parameters discussed earlier). If the simplifying assumptions we made were exact, then all the different parameter combinations that lead to a same (α, ν) couple would have the same maximum BFS temperature. If, on the other hand, the effect of the simplifying assumptions is large, then the difference in BFS temperature between different combinations corresponding to the same (α, ν) couple would be large as well. Indeed, even if two points have the same (α, ν) values they might not have the same values of the other parameters that we ignored through simplifying assumptions or global sensitivity analysis.

The design of experiments (DoE) for constructing the RSA involved sampling 855 Latin hypercube (LHS) points in the initial 15-dimensional space with the bounds provided in Table 2-3 (see also Appendix A for the basic response surface methodology).

Table 2-3. Lower bounds (LB) and upper bounds (UB) used for sampling in the 15-variable space. All units are SI.
Variable   LB       UB
k_W        2        7
ρ_W        2,500    6,000
C_W        500      950
k_B        2        50
ρ_B        1,550    3,000
C_B        900      1,820
k_T        3        50
ρ_T        2,000    5,000
C_T        500      1,700
d_T        0.001    0.0037
d_B        0.0046   0.007
d_W        0.00224  0.00416
θ          80       90
L          0.102    0.150
p          0.099    0.150

Variables α and ν were calculated for each of these points, and a subset of 20 out of the 855 points was selected according to the D-optimality criterion in the (α, ν) space within the bounds given in Table 2-4. Finite element (FE) analyses, using the model described earlier and involving none of the simplifying assumptions made for nondimensionalization, were run at these 20 points and the 3rd degree PRS was constructed in (α, ν). This RSA is represented in Figure 2-6.

57 representation of the results. This graphical repr esentation possibility will be later used for the material selection. The RMS error in the approximation was 1.09 K, the cross-validation PRESS error 1.96 K and the R2 was 0.99989 (the range of the RSA is a bout 250 K). These are satisfactory error measures for our application, but they poorly a ccount for the total error involved in using this RSA. Indeed part of the total error is due to th e fact that the FE results, obtained without the simplifying assumptions, will not be the same for different combinations of the dimensional parameters corresponding to a same (, ) couple. This error is poorly accounted for with only 20 points. Instead, to check this error we randomly sampled 100 out of the 855 LHS points in the 15-dimensional space. We calculated the maximu m BFS temperature at these 100 points using FE analyses that included temper ature dependent core material ma terial properties and compared to the two-dimensional variables RSA predicti ons. We obtained the following errors: the RMS error was 2.74 K, the mean of the absolute erro r was 2.1 K and the standard deviation of the absolute error 1.70 K. These errors are well within the 12.5 K accuracy requirement we set ourselves for the material selection. Note that an alternative for the error estim ation would have been to construct the RSA directly using the 20 + 100 poi nts and look only at the corres ponding PRESS error, which leads to similar results. Table 2-4. Ranges of the nondi mensional design variables Variable Range 0.1-0.5 0.6-2.4 If we wanted to know how mu ch of the error is due to each of the techniques used (simplifying assumptions, dimensional analysis, global sensitivity analysis (GSA)) we can note

PAGE 58

58 the following. Dimensional analysis alone ne ver involves any error since the nondimensional equations are strictly equivalent to the initial on es. GSA turned out in our case to give very good Figure 2-6. Maximum BFS temp erature two-variable RSA. results. Indeed we found that 99.94% of the va riance of the response could be explained by two variables. Since we showed that three out of th e five variables are equi valent to each other and account for this very small part of the variance, the error of going from fi ve to two variables is likely to be very small in our case. This means th at most of the error in the RSA is explained by the simplifying assumptions. To gain more insight of where the ma ximum errors occur and which simplifying assumptions have the most impact, anti-optimi zation (Elishakoff 1991, Lee et al. 1994) of the error in the RSA was carri ed out. The anti-optim ization process looks to find the places (i.e. materials and geometries) with the highest error in the RSA, and by looking at the designs corresponding to the anti-opt imum we can understand what causes these errors. Antioptimization was carried out in Gogu et al. (2007a). It showed that the RSA has poor accuracy when the geometry is far away from the one for which the represen tative temperature of assumption 3.) was established. For these unusua l geometries the representative temperature shifts due to temperature dependence of the co re; this shift is not accounted for by the RSA TMax BFS (K) BBB CCCdC dC 2 Cend CCCkt dC

PAGE 59

59 which explains the poor accuracy for these geomet ries. To further improve the accuracy of the RSA for a large range of geomet ries we would have to add no ndimensional parameters that account for the temperature dependence. However, for the geometry for which we will use the RSA in the next section, the maximum absolute error among eight test points corresponding to actual material combinations was 7.6 K with a mean of 1.87 K (see Figure 2-7). The figure shows the absolute error of the response surface estimates compared to FE predictions for eight different material combinations (format for the material combination names: Web material BFS material). The TFS material is alumino-silicate Nextel 720 composite laminate except in the reference all-titanium design. The maximum B FS temperature from the RSA is superposed as a contour plot, with the bottom (330 K) and th e top (540 K) contour li nes being labeled. Figure 2-7. Absolute error of the response surface estimates compared to FE predictions. For varying geometries as well as varying ma terials the anti-optimization carried out by Gogu et al. (2007a) showed that the worst case error is 9.05 K. Note that we compare the RSA

PAGE 60

60 predictions to FE results which also have limited accuracy. A convergence study on the finite element model showed that the discretization erro r for the BFS temperature is less than 0.007 K. Accordingly, considering that the errors in the RSA are of the order of 10 K we can assume the finite element simulations are exact, meaning that all the error is considered to come from the use of the dimension reduction method. These erro rs are within the 12.5 K requirement we set ourselves for the material selection. RSA Construction Computational Cost The possibility of graphical representation of the two nondimensional variables RSA is of great benefit in our case. Most of the problems however cannot be reduced to onl y two or three nondimensional variables. In these other ca ses, constructing the RSA in nondimensional variables can still benefit computational cost. In our case we used 40 FE simulations for the global sensitivity analysis in 3 dimensional space, 20 simulations for constructing the 3rd degree PRS in (, ) and 108 simulations to verify the accuracy of the RSA. In total we used 168 simulations. Constructing the maximum BFS temperature RSA in the 15 initial variables leads to following results. A linear PRS in the 15 variable s required 32 FE simulations and led to an RSA with an RMS error of 9.16 K, a PRESS error of 12.9 K and an R2 of 0.969 (for r ecall the range of the RSA is about 250 K). A 2nd degree PRS in the 15 variables required 272 FE simulations and led to an RSA with an RMS error of 1.23 K, a PRESS error of 1.78 K and an R2 of 0.99989. We can note that constructing the 3rd degree PRS in the two nondi mensional variables had an overall computational cost about 40% lower than constructing a 2nd degree PRS in the initial 15 variables, while the error was maintained at an acceptable level for our application. Note also that in most problems a 2nd degree PRS is the minimum usable, linear PRS being very rarely acceptable. Often 3rd degree PRS are even required to achieve acceptable error

PAGE 61

61 measures. For 3rd degree PRS the computati onal cost difference between using all the variables or using the reduced number of nondimensional variables can become very significant. For the thermal problem for example, a 3rd degree PRS in the 15 dimensions would have required 1,632 experiments. Applying the RSA for Comparison of Materials for the ITPS The graphical representation possibility of the two-dimensional RSA was used next for comparison of alternate materials for the ITPS sec tions. For this part the dimensions of the ITPS are once again fixed to the values in Table 2-1. We used the CES Selector (Granta Design 2005) material database software. Constraints on properties such as maximum service temp erature, fracture toughne ss and Youngs modulus were imposed during the search in the database in order to avoid unreasonable materials. In order to compare the web materials, the BFS material was fixed to Aluminum alloy 2024 and the potential web materi als were plotted in the (, ) plane with the RSA of the maximum BFS temperature superposed as a contour plot as shown in Figur e 2-8. Note that in this figure numerous materials are grouped under generic names (denoted by an asterisk) such as stainless steels, nickel chromium alloys or cobalt superalloys. Figure 2-8 shows that materials such as alumino-silicate/Nextel 720 composites or Zirconia ceramics provide a signif icant reduction in the maximum BFS temperature compared to metals such as Ti alloys or Nickel Chromium a lloys, which were considered in previous designs (cf. Bapanapalli et al. 2006). Since we s eek materials leading to a low maximum BFS temperature these materials were selected fo r further study as good potential candidates for the web of the ITPS panel. The same material selection procedure was applied for BFS materials. The complete results are given in Gogu et al. (2007a). In summa ry, the material selection process based on the

PAGE 62

62 RSA constructed here identified a small number of good potential material candidates for the different sections of the ITPS. These materials were used in an optimi zation routine developed Figure 2-8. Thermal comparison of materials suitab le for the web using th e contour plot of the maximum BFS temperature RSA. for the ITPS. The optimization procedure used is the one presented in Bapanapalli et al. (2006), which seeks to minimize the mass of an ITPS pane l by finding the optimal geometry parameters for a given material combinati on. Applying it to the different material combinations found through this material selection process allowed us to obtain bo th the best suited material combination and the optimal corresponding geometry for the ITPS panel.

PAGE 63

63 Summary The present chapter illustra tes the use of combined physical reasoning, dimensional analysis and global sensitivity analysis to si gnificantly reduce the number of variables in response surface approximations (RSA) used for a material selection study for an integrated thermal protection system. Nondimensionalizing the exact e quations involved in the fini te element analysis, while reducing the numbers of variables, can be relativ ely cumbersome and lead to a still relatively high number of nondimensional parameters. Some of these parameters might only have marginal influence on the quantity of interest. In this case the process can be aided by simplifying assumptions and a global sensi tivity analysis that can help further reduce the number of nondimensional parameters by keeping only those th at control most of the variation of the quantity of interest. It is importa nt to note that removing variable s that have small impact on the problem can have relatively small detrimental e ffects on the accuracy of the RSA, since the RSA is fitted to the finite element simulations obt ained without simplifying as sumptions thus allowing for error compensation. The presented approach is general and can be applied to any finite element based response surface construction. It was used here with succe ss on a transient thermal heat transfer problem for an integrated thermal protection system (ITPS ). Dimensional analysis in combination with several simplifying assumptions and a global sens itivity analysis allowed reducing the number of parameters of the response surface approximati on of the maximum temperature from 15 to only 2 while maintaining reasonable accuracy of the RSA. The basic idea of constructing response surface approximati ons in terms of nondimensional variables to reduce computational cost or improve accuracy will be used again in the next chapters.

PAGE 64

64 CHAPTER 3 LIMITS OF BASIC LEAST SQUARES IDEN TIFICATION AN INTRODUCTION TO THE BAYESIAN APPROACH APPLIED TO ELASTIC CONSTANTS IDENTIFICATION Introduction Current design of aerospace structures tends to increasingly consider uncertainty in material properties. Variability in strength, for ex ample, is now used to define the A-basis or Bbasis design allowable (Rust et al. 1989) to co mply with Federal Avia tion Administration (FAA) regulations (FAR Part 25.613). Uncertainty in elastic constants due to measurement and modeling errors is also be ginning to be considered. In this context, elastic constants identificati on from standard tests can doubly benefit from statistical methods. These can not only improve the accuracy with which the materials properties are identified but also provide an estimate of the uncertainty with which the properties are known. Currently, a very widespread method for el astic constants identif ication is based on minimizing the least squares error between the ex perimental data and the model predictions. In spite of the existence of advan ced least squares formulations, th at take into acc ount statistical information, (e.g. books by Lawson and Hanson ( 1974), Bjorck (1996), Tarantola (2004)), the simplest formulations of the least sq uares method, based on minimizing the L2 norm of the misfit (Bjorck 1996), are still extensively used today (e.g. in the domain of identification of mechanical constitutive laws Frederiksen (1997), Le Magorou et al. (2002), Lauwagie et al. (2003, 2004), Maletta and Pagnotta (2004), Molimard et al. ( 2005), Genovese et al. (2005) The basic, nonstatistical least squares formulations used in th ese studies have the advantage of great simplicity and most often work well leading to reasonably accurate results. In some cases however, using statistical identifica tion frameworks may lead to further improvements in accuracy.

PAGE 65

65 The objective of the present chap ter is twofold. First, it se eks to introduce the Bayesian identification approach on a di dactic three bar truss exampl e problem. Second, it seeks to identify and illustrate such situations where the advanced, statistical identification methods lead to significantly better results compared to the basic least squares approach. As an advanced method we chose the Bayesian approach, which we find is among the most general statistical approaches while its formulation remains relative ly simple. Isenberg (1 979) detailed a Bayesian framework for parameter estimation, which later wa s applied by others in pa rticular to frequency or modal identification, i.e. identifying material properties from vibrati on test data (Sol 1986, Alvin 1997, Beck and Katafygiotis 1998, Elseif i and Irvine 2002, Dascotte 2004, Marwala and Sibisi 2005, Daghia et al 2007). The Bayesian method has also been proposed in the context of model validation and verification (Rebba et al. 2006a, 2006b), domain which, similarly to identification, is concerned with the agreemen t between model predictions and experimental data. In the next section we provide a brief theore tical overview of the methods used. Then we introduce a three bar truss illustrative example for the Bayesian identification procedure. In the following section we compare the Bayesian and the least squares approach on the three bar truss, identifying situations differing significantly in th e results. We then provide a comparison of the two methods for the more complex problem of identifying elastic constants from natural frequencies of a composite plate. We finally illust rate the advantage of a Bayesian procedure that can handle not only measurement error. Some concluding remarks close the chapter.

PAGE 66

66 Least Squares and Bayesian Methodologi es to Parameter Identification Least Squares Formulations The most common least squares approach to parameter identification is based on finding the parameters that minimize the squared erro r between experimental observations and model predictions. Let x be a vector of n material properties that we seek to identify and y0 a vector of m experimental observations. We assume we have a mathematical model relating x to predictions of y such that y(x) is the vector of m model predictions. Note that y(x) is usually a function of parameters other than material propertie s as well (geometry, loads, etc). The simplest least squares formula tion minimizes the objective function J(x) defined in Equation 3-1 (Bjorck 1996, chapte r 1). The material properties x corresponding to the minimum are the identified properties. 2 0 111 ()()()() 22m T ii iJyy00xyxyyxyx (3-1) A variation of this basic formulation is to normalize each component of the residue vector by the corresponding experimental observation. This is a partly weighted least squares formulation which has its own limitations as will be shown in the next section. A more general least squares formulation is based on minimizing the following objective function: 11 ()()()() 2TJC00x y x y x y x y (3-2) where C( x ) is the variance-covariance matrix of y ( x ) thus taking statistical information into account. The variancecovariance structure for the response y needs to be assumed (based on

PAGE 67

67 empirical evidence) or calc ulated using a model. If C( x ) is calculated from a model this can significantly increases the comput ational cost of the method. Due to the challenge of constructing a realistic variance-covariance matrix C( x ) most least squares identification applications revert to th e use the formulation of Equation 3-1, which is equivalent to assuming that C( x ) is equal to the identity matrix, thus ignoring poten tial statistical information. The formulation of Equation 3-1 or its normalized version have been recently used for example in the domain of elastic constants id entification from full field strain measurements (Mauvoisin et al. 1994, Lecompte et al. 2005 an d 2007, Molimard et al. 2005, Silva 2009) or from vibration data (Sol 1986, Pedersen and Fred eriksen 1992, Ayorinde and Yu 1999, Araujo et al. 2000, Rikards et al. 2001 and 2003, Cugnoni 2004, Shi et al. 2005). These approaches provided reasonable results in these studies, however more advanced statistical methods (whether advanced least squares or Bayesian) ma y sometimes lead to more accurate results. In the present work we seek situa tions where the advantage of usi ng statistical methods would be the greatest. Bayes Theorem The Bayesian identification approach is base d on the application of Bayes rule which gives the probability of an event A knowing that an event B occurred as shown in Equation 3-3. Often P(A) is called the prior probability of A, Pprior(A) to mark the distinc tion to the probability of A knowing B which is also called th e posterior probability of A (/)() (/) () PBAPA PAB PB (3-3) Assuming now that A and B are random variables having continuous probability distributions (denoted by ), Bayes rule can be extended to Equation 3-4.

PAGE 68

68 / /()() () ()BAaA ABb Bba a b (3-4) The function /()BAab is called the likeli hood function, denoted l(a) while ()Aa is usually called the prior distributi on of A. The denominator can also be written in a different way using the law of total probabil ities for continuous di stributions, which leads to Equation 3-5. / / /()() () ()()BAaA ABb BAaAba a bada (3-5) It is worth noting that the integral in the denominator represents just a normalizing constant, denoted by K from now on and which rarely n eeds to be calculated explicitly. The Bayesian Identification The Bayesian identification approach is a st atistical approach. Unlike the least squares identification method it will not provide a singl e value for the identified parameter but a probability distribution. The unknown material properties have a single value, but it is unknown, and their possible values are represented by the random variable X (with x being an instance of X ). Its probability density function (pdf) is denoted X( x ) In the Bayesian identification we seek to identify the distribution of the material properties X given a vector of measurements y0. The vector of measurements is assumed to be contaminat ed by a vector of unknown measurement errors emeas, stemming from the random variable meas. The true, but unknown response is then given by 0 measmeas trueyye. Letting meas trueY be the random variable of the tr ue response value (it is a random variable because meas true y is unkown), we can then write: 0 measmeas trueYy (3-6)

PAGE 69

69 For the identification we compare the measurem ents to a models predictions (e.g. finite element model) yp0( x ) that is a function of the material properties x and other input parameters p (e.g. geometry, loading) taken at their values p0. Because of modeling error, and because the input parameters may not be accurately known, the random variable of the true response is given as: ()() modelmodel truep0Yxyx (3-7) where model is the random variable of the error due to modeling uncertainty and uncertainty in the other input parameters. Bayesian identification i nvolves calculating for given material properties xfixed the probability that the true response is the same wh ether derived from the measurement or from the model, i.e. the probability that meas trueY=model trueY, which can also be written: 0modelmeas truetrueDYY (3-8) This probability is called likelihood of xfixed given the measurements. For all possible xfixed the likelihood becomes a function of x denoted as l( x ) and defined as the pdf of D given that X = x (denoted D / X = x ) taken at zero: /()DXxl 0 x (3-9) It is important to note that the pdf of D given X = x is a distribution in terms of the random variable D and that x can be seen as a parameter of this distribution. That is, for a certain material properties value x we need to determine what the distribution of D is. The likelihood then, which is a function of x will be this pdf of D for the given x taken at D =0. An alternative formulation of the likelihood function can be achieved by introducing the random variable Y of the measurement prediction for a given x : ()() modelmeas p0Yxyx (3-10)

PAGE 70

70 Equation 3-8 can then be written as 0Yy and the likelihood function of x given that the measurement prediction Y should be equal to the actual measurement y0 is then: /()YXxl0xy (3-11) For each x the likelihood value is th e pdf of the measurement prediction given the material properties, Y / X = x taken at the point of th e actually measured response y0. In the rest of the paper we will use this second formulation involving the random variable of the measurement prediction, which is the most common formulation used in Bayesian identification literature. Note though that traditionally most studies call Y simply the random variable of the measurements. We preferred calling it measurement prediction, since it is constructed for any x (not just the true value of the properties) and because it can include modeling uncertainty model. Briefly, note that the advantage of the first fo rmulation is that it allows to take advantage of the independence of the random variables meas trueY and model trueY in order to reduce the computational cost of Monte Carlo simula tions, which may be required to propagate uncertainties through the model. This aspect will be briefly discussed in Chapter 5. A particular point of interest in the Bayesian approach is th at it can not only account for the measurements through the likelihood functio n but it can also easily account for prior knowledge on the material properties using the prior distribution prior Xx, i.e. the pdf of X prior to observing y0. Such prior knowledge can come from manufa cturers specificat ions or previous tests. Based on the likelihood function and the pr ior distribution, Ba yes rule then gives 0/ XYyx, the pdf of X given that the measurement prediction Y should be equal to y0, by the product of the like lihood function by the prior distribut ion as shown in Equation 3-12.

PAGE 71

71 0/ /11 ()priorprior XYXxX XYyl KK 0xxxyx (3-12) where K denotes a normalizing constant, which is rarely required to be calculated in practice. The identified pdf, 0/ XYyx, can be characterized by mean and most likely values as well as variance-covariance matrix, thus providi ng uncertainty measures in the identified properties, which are usually no t obtained in studies applying th e basic least squares approach. For additional details on the th eory of Bayesian analysis a nd identification the reader can refer to Berger (1985) and Kaipio and Somersalo (2005). Progressing from Least Squares to Bayesian Identification Compared to other statistical approaches to id entification, we chose the Bayesian approach because it has a simple formulation yet it can in corporate prior knowledge and provide statistical information about the identified pa rameters (it provides a distributi on instead of a single value). We provide here a quick overv iew of different identification formulations from the simplest to the most general and the progress made at each step. a. Basic least squares: *argmin()xJ xx where 1 ()()() 2TJ 00x y x yy x y (3-13) b. Generalized least squares: *argmin()C xJ xx where 11 ()()()() 2T CJC00 x yxyxyxy (3-14) c. Maximum likelihood: *argmax()xL xx where /()()YXxL0x y (3-15) d. Bayesian identification: 0/ /1prior YXxX XYyK 0xyx (3-16)

PAGE 72

72 Compared to the basic least squares formula tion the generalized formulation appropriately handles normalization as well as statistical information present in the variance covariance matrix C( x ) These points will be illustrated in de tails in the rest of the chapter. Moving from the generalized least squares formulation to maximum likelihood has the advantage of being able to handl e distributions other than Gaussi an. Also the likelihood function provides an easily obtainable estimate of the uncertainty in the identified parameters. Finally moving from maximum likelihood to Bayesian identification we can now incorporate prior knowledge on the parameters to be identified in form of the prior distribution. Like the maximum likelihood, the Bayesian a pproach can handle non-Gaussian distributions. Moreover, because the Bayesian method identifi es a probability density function it has the advantage over the previous identification formul ations of immediately providing an estimate of the uncertainty (variance-covariance matrix) in the identified parameters. In this chapter we compare the basic least squares identification (a) to the Bayesian identification (d) using simulated experiments. We made the choice of simulated experiments since these allow more flexibility in isolating diffe rent factors that affect the identification. In Chapter 5 we will provide least squares and Bayesian identification results using actual experimental measurements. The identification comparison will illustrate how the two methods handle normalization of the measurements and statistical information a bout the measurements, such as coefficient of variation or correlation of the measurements. It also illustrates the advantage of obtaining the uncertainty in the identified parameters with th e Bayesian approach. Note however that we do not investigate here the advant age of using prior information in the Bayesian approach.

PAGE 73

73 Accordingly, we will choose wide prior distributi ons (i.e. we assume we have only vague prior information). A Three Bar Truss Didactic Example Description of the Three Bar Truss Example The first example is a simple material property identification problem for which the application of both identificati on approaches is straight forw ard, avoiding complexity which could cloud the comparison. Our aim is to illustra te when and why the basi c least squa res and the Bayesian approaches lead to significantly different identified parameters. The truss is subject to a horizontal force p and a vertical force r as shown in Figure 3-1. All three bars are assumed to have the same Young modulus E of 10 GPa, which is unknown and which we want to identify from strain measur ements on two or three of the bars. The cross sectional areas of the bars are known exactly: AA is the cross sectional area of bars A and C and AB the cross sectional area of bar B Figure 3-1. Three bar truss problem. From static analysis we find th e relationships for the strains in the bars given in Equations 3-17 to 3-19. 1 4 3A BA Arp EAA A (3-17)

PAGE 74

74 1 4 3C BA Arp EAA A (3-18) 14 4B B Ar E AA (3-19) Sources of uncertainty We consider that the magnitudes of the forces p and r are uncertain, meaning that due to measurement error the measured values of the for ces might not be their true value. We assume the measured values of both forces are nor mally distributed with a mean value of 104 N for p and 105 N for r and a standard deviation of 500 N for both. Note that we consider these to be the only sources of uncertainty in the problem. In reality there might also be uncertainties in the strain measurements and cross section measurements. We made this simplifying assumption however, so that the problem is not clouded by too many variables, which would make it di fficult to draw the underlying pr ocesses affecting the results. This is in line with the objective of the chapter which is to identify general situations where the Bayesian and basic least squares identificati ons significantly differ in results. Assuming uncertainties only on the forces was a reasonable compromise between simp licity and flexibility for identifying such situations. The Least Squares Method The basic least squares formulation (see formula tion a. in the previous section) finds the parameters that minimize the sum of the square s of the errors between experimental strain observations and model predictions. Assuming we m easure the strains in bars A and B, the least squares objective function (of E quation 3-1) can be written as: 221 ()()() 2measuremeasure AABBJEEE (3-20)

PAGE 75

75 Note that even though the loads p and r are uncertain we have to provide a single nominal value for each. The most natural candidates are the means of the distributions of p and r Note also that in this simple case it was possible to find the minimum analytically. The Bayesian Method Applying Equation 3-12 to the th ree bar truss problem we can write the distribution of the Young modulus of the bars given that we measured measure A andmeasure Bin bars A and B respectively as shown in Equation 3-21. ,1 ,()measuremeasure AB AABBmeasuremeasureprior ABE E EEE K (3-21) The right hand side of this equation is com posed, apart from the normalization constant K of two quantities. The first is the likelihood function of E given the measurements and the other is the prior probability distribution of E Here we assume that the prior knowledge is in the form of a truncated normal distribution with mean value 9.5 GPa and st andard deviation 1.5 GPa. We truncate the distribution at 8 GPa and 11 GPa, meaning that we consider impossible for the properties to lie outsid e these bounds. As explained earlier, th is is a wide prior, centered relatively far away from the true value of 10 GPa to avoid significantl y biasing our comparison in favor of the Bayesian identification procedur e. A few additional details on the impact of the prior will be given in the following sections. The other right hand side term in Equa tion 3-21 is the likelihood function of E given the measurements measure A andmeasure B. The likelihood function measures the probability of getting the test result for a given value of the modulus, and consequently, it provides an estimate of the likelihood of different modul us values given the test results. As we vary E successively from to we calculate the joint probability density function (pdf) of the strains for that E at the

PAGE 76

76 measured strain point {measure AA, measure BB}, that is ,,ABmeasuremeasure AB E. For a given E we have a probability distribution for the st rains, due to the uncertainty in the loads p and r which propagates to the strains (we use Monte Carlo simulation to generate 100,000 samples of the loads p and r and propagate these to strains using Equations 3-17 to 3-19). The strains obtained through the Monte Carlo simulati on are fitted with a joint normal pdf using mean values and variance co-variance matrix. This pdf is then evaluated at the point {measure AA,measure BB}. This procedure is repeated for a series of E values ranging from to thus constructing point by point the posterior pdf of th e left hand side of E quation 3-21. In practice infinity is truncated by reasona ble bounds for the property consid ered. In our case these are the truncation bounds of the prior distribution. Note that the entire Bayesian identification for the three ba r truss example took about 10s on a Pentium Core 2 Duo 2.00 GHz PC. This com putational cost can increase significantly as m the number of experimental values and n the number of parameters to be identified, increase. The major part of the computati onal cost is the propagation of uncertainties through the model. Note that the three bar truss model involved only analytical ex pressions. If more complex models were used (e.g. finite elements) the computati onal cost would increase significantly or even become prohibitive without the use approxima tions (e.g. response su rface approximations (Myers and Montgomery 2002)). To further redu ce computational cost one can use advanced simulation techniques. A common advanced technique used in Bayesian methods is Markov Chain Monte Carlo simulation (Tarantola 2004 ). The combination of the aforementioned methods usually allows to bring the computationa l cost down to reasonable levels for many reallife problems, but still significantl y higher than that of the basic least squares method. This is

PAGE 77

77 however the price to pay for using statistical in formation in input and obt aining the statistics of the identified parameters with arbitrary (not necessarily Gaussian) probability distributions. Illustrative Example To illustrate the Bayesian procedure used, and in particular the construction of the likelihood function we will consider a simple case where all quanti ties are one-dimensional ( m = n =1, with m the number of experimental observations and n the number of material properties). We consider the thr ee bar truss where we have a singl e property to be identified, the Young modulus E and a single measurement, the strain in bar C. Of course there is little interest in ap plying the Bayesian procedure to such a onedimensional case but we use it here only for illu strative purposes, the ba sic principle remaining the same when extending it to multi-dimensional cases (several properties to be identified and several measurement points). Applied to the present one-dimensional case Equation 3-12 of the Bayesian formulation can be written as shown in Equation 3-22. 1 ()measure C CCmeasureprior ECE EEE K (3-22) That is, the distribution of E given that in bar C we measured measure Cis equal to a normalization constant time s the likelihood function of E given that we measured measure C times the prior distribution of E The prior distribution used here is the same wide distribution described in the pr evious subsection. Next we describe in more detail the likeli hood function and its construction. This function measures the probability of getting the test result measure C for a given value of the modulus, and consequently, it provides an estimate of the likel ihood of different modulus values given the test

PAGE 78

78 results. Seen as a function of E we can construct it point by point. That is we fix an E successively from to and calculate fixed Cmeasure C EE. As mentioned earlier, infinity is truncated to reasonable bounds for the propertie s to be identified. For example let us fix E to 5 GPa ( Efixed=5 GPa). We substitute this E back into Equation 3-18 and we propagate the uncertainties in the loads p and r in the same formula. For doing this we use 100,000 samples by Monte Carlo simulation. Thus we obtain the distribution function of C if E was 5 GPa and given the uncertainties in p and r : 5CEGPaC In the general case we obtain this distribution from the Monte Carlo simulations through the empirical histogram. In this particular case we even know that the strain distribution will be normal so we just calculated the mean and standard deviation and used the corres ponding normal distribution. We can note that if E were 5 GPa, the strain in bar C would be much higher (in absolute value) than if E was 10 GPa, that is 5CEGPaC is centered around -5.27 m compared to the value of -2.87 m which is the actual (measured) st rain in the bar with the true E of 10 GPa and the true values of loads. This is reflected in Figure 3-2. Looking back at the construction of the lik elihood function, we need to take the distribution 5CEGPaC at the point measure CC, that is we need to compute 5Cmeasure EGPaC. From Figure 3-2 we can see that this point is located in the tail of the distribution of 5CEGPaC so the corresponding value of 5Cmeasure EGPaC will be relatively low. That means the likelihood that we measure 2.87measure Cm if E was 5 GPa is relatively low.

PAGE 79

79 Figure 3-2. Likelihood value using the distribution of C if E were 5 GPa. Now let us assume that we fix E = 9.18 GPa and repeat the same procedure. In this case we will obtain a distribution similar to that in Figure 3-2 except that it will be centered in -2.87 m This means that the point 9.18Cmeasure EGPaC is the highest point on the distribution, so the likelihood that we measure 2.87measure Cm is the highest if E was 9.18 GPa. Note that if we assume that there is also a uniformly distributed measurement error (noise) which clouds the true value of the truss response ,truemeasuremeasure CCCCC then the calculation of the likelih ood point would involve the integral over the uncertainty range. For example instead of calculati ng a likelihood point by using 9.18Cmeasure EGPaCone would use /9.18()measure CC measure C CCEGPaCCd instead. Calculating this integral is equivalent to saying that the likelihood of measuring measure C is equal to the probability that the simulated strain C fall inside the measurement uncertainty bounds. Measuremen t errors other than uniformly distributed -5.27 C mesure = -2.87 C (m ) E fixed at 5 GPa 5C E GPaC 5Cmeasure EGPaC

PAGE 80

80 would be taken into account di rectly during the Monte Carlo si mulation leading to the strain distribution. After seeing how to compute 2 points of the likelihood function, one for 5 GPa and one for 9.18 GPa, we can in the same way compute fixed Cmeasure C EE where Efixed describes successively points between and with a given infinitesimally small step. In practice we varied E between 8 and 11 GPa (with a step of 5 MP a) based on the prior distribution truncation bounds. If we do this we obtain ()Cmeasure EClE the likelihood function of E given measure C, which is a function of E and is plotted in Figure 3-3 in our case. Note that the point obtained when we fixed E = 5 GPa had such a low value that it do es not even figure on this plot. This makes sense since measuring 2.87measure Cm if E were 5 GPa is extremely unlikely. As previously mentioned the most likely value of E given only the measurement in bar C is E = 9.18 GPa. Had we used a deterministic analysis using the nominal values of the loads and 2.87measure Cm we would substitute these values back into Equation 3-18 and solve for E, obtaining E = 9.18 GPa. It is logical then that we find that 9.18 GPa is the most likely value for the Young modulus since we assume d that the nominal values for the loads are the most likely ones. However, we considered here an unlucky case where the actual values of the loads fell relatively far away from the most likely values (w e considered the true loads to be two standard deviations (2 ) away from their mean values), which e xplains why 9.18 GPa is far from 10 GPa, the actual Young modulus of the bars.

PAGE 81

81 Figure 3-3. Likelihood function of E given measure C. Note that even though the like lihood function is centered in 9.18 GPa there is a small but non negligible probability that the Young modulus is around 10 GP a (cf. Figure 3-3). The true value of 10 GPa is actually about 2 away from the mean value of 9.18 GPa, which seems to make sense as well since the true values of the loads were 2 away from their mean. At this point we can finalize the Bayesian pr ocedure. We have the wide prior distribution of E we defined in the previous subsection, we have just calculated th e likelihood function of E given measure C(Figure 3-3), all that remains is to appl y the Bayesian formulation of Equation 3-22 that is multiply the two functions and normalize th e resulting distribution. This is the probability distribution of the identified Young s modulus given that we measuredmeasure C. On a final note recall that this one-dimensi onal example was used only for illustration purposes. In our main problems discussed in the following sections we have multiple strains, respectively frequencies for th e vibration problem. The basi c procedure though remains the same. E (GPa) 9.18 () C measure EClE

PAGE 82

82 If we have more than one parameter to id entify we have to define a joint probability distribution for the prior. This is illustrated on the vibration problem. The construction of the likelihood function would also be more time cons uming since instead of describing an interval point by point we would have to describe a surface. If we have multiple measurement points (for ex ample strains in each of the bars) we would have to consider the joint probability distribut ion of the measurements when constructing the likelihood function. On our problem if we have the strains in the three bars for example, Figure 3-2 would have to be replaced by the three-dimensional joint st rain distribution which would have to be evaluated at the measurement point ,,measuremeasuremeasure ABC. Least Squares and Bayesian Comparison for the Three Bar Truss Problem The Comparison Method The results of both the least squares and Bayesian approaches depend on the actual, but unknown, values of the loads p and r in an actual experiment. We compare the two methods in two different ways. In the next six subsections we use extreme cases where the actual values of p and r are two standard deviations away from their mean values: ptrue=pm+2p and rtrue=rm-2m. Then in the following subsection we consider 1000 repetitions of the identification processes where the true values of p and r are obtained by Monte Carlo simulations from their distributions. This second case provides the average performance of each method measured by the dispersion (e.g., standard deviation) in th e modulus estimate as the loads are varied. For all the cases we compare the modulus obtai ned from the least squares approach to the most likely value from the Bayesian probability distribution. The differences between the two methods are likely to be influenced by three factor s (i) differences in the sensitivity of strains to Young modulus; (ii) differences in the uncertainty of the measured strains; and (iii) correlation

PAGE 83

83 between measured strains. For investigating poi nts (i) and (ii) we use the three bars truss example with strain measurements on two of the bars while for point (iii) we use strain measurements on all three bars. Note that the prior distribution of which th e Bayesian identifica tion can benefit, can potentially bias the comparison in favor of the Bayesian approach. This is why we chose a wide prior distribution, which, as will be illustrated at the end of the extreme cases comparison, has a negligible effect on the current identification results. Results for Different-Sensitivity Strains In this investigation case we create a situation where the strain in bar A is more sensitive to Youngs modulus variations than th e strain in bar B. Since all strains are inversely proportional to Youngs modulus the sensitivity of the strain is proportional to the st rains magnitude. A high strain (in absolute value) in one of the bars means that the strain in that bar has a high sensitivity to E and vice versa. This equivalence is used in the next sections where we provide the magnitude of the strains as a measure of their sensitivity to E. Note, however, that when a measured quantity has a more complex depende nce on the parameter to be estimated, the sensitivity may no longer be si mply proportional to the magnitude. To create a high strain in bar A which is abou t three times higher than that in bar B (hence three times more sensitive) we use the values in Table 3-1. The same re lative uncertainty in the loads is applied (2.5%), which propagates to about the same relative uncertainty in A and B. Table 3-1. Numerical values for different-sensitivity strains. Input parameters Strains Parameter AA (m2) AB (m2) p (N) r (N) A (mm/m) B (mm/m) (Mean) value 2.10-4 1.10-2 104 105 3.13 0.995 Standard deviation 250 2500 0.0725 0.0248 Measured strains* 3.26 0.945 obtained from Eq. (3-17) and (3-19) with E=10 GPa, p=pmean+2p and r=rmean-2r

PAGE 84

84 Table 3-2. Extreme case identification results fo r different-sensitivity strain (true value of modulus is 10GPa). From measure A alone From measure B alone Least squares Bayesian E (GPa) 9.59 10.52 9.67 9.97 (most likely value) 0.174 (standard deviation) The results of the two identification procedur es are presented in Table 3-2. We also provide the Young modulus that would be obtained with each of the measurements alone by inverting Equations 3-17 and 3-19. The relativel y poor results of the least squares method are because it implicitly assigns more weight to measure A. As we will prove in the next subsection the basic least squares formulation implicitly assi gns more weight to qua ntities that have high sensitivity with respect to the parameter to be identified. Normalizing the strain residue with respect to the measured strains would seem to be an obvious solution to this problem. However, while it so lves indeed the variable sensitivity issue it can create another problem. Both measurement and calculation errors in small strains are often of similar absolute magnitude as the errors in the large strains. So small strains may have very large relative errors. Without normalization, these sma ll and erroneous strains will have only a small effect on the identified property, while normaliza tion will increase their influence. Lets assume, for example, that uncertainties in loads prop agate to an uncertainty in strains of 0.1 m in each of the bars. If one of the strains is 10 m then the uncertainty repres ents only 1%, however if the other strain is 0.5 m then the uncertainty represents 20%. If we normalize these two strains we take the risk of assigning the same weight to a measurement that can be 20% off as to a measurement that can be only 1% off. This means that it is risky to perform normalization without taking into account the uncertainties in the measurements. This brings us to the advantage of taking into account the

PAGE 85

85 uncertainty in the measured (and calculated) results, which is discussed hereafter. Note that if someone does not want to use the Bayesian appr oach for taking into account uncertainties, the simplest method which appropriately handles both normalization and uncertainty is generalized least squares (see the subsection above entitled least squares formulations). Least Squares Implicit Weighting A ccording to Response Sensitivity We provide here the proof of the statement made in the previous subsection: the basic least squares formulation implicitly assigns more weight to quantities which have high sensitivity with respect to the parameter to be identified. For simplicity we consider only linear models of the quantity to be identified: Axb y using the notations introduced at the beginning of the chapter. Note that on the three bar truss example we have a linear model after the change of variable x=1/E. The least squares formulation can be written in this case as shown in Equation 3-23: *argmin()x x Jx where 1 () 2TJxAxAx 00b y b y (3-23) To eliminate the constant b, we perform the change of variables: z=y-b, so that z(x)=Ax and z0=y0-b. Equation 3-23 can then be rewritten as follows. *argmin()x x Jx where 1 () 2TJxAxAx 00 z z (3-24) Since J is convex in x the solution satisfies *()0Jx which gives the normal equations: TT A AxA 0z (3-25) and ATA is usually invertible when m>n (with m the number of experimental observations and n the number of material properties). Solving this equation leads to x*.The normal equations can also be written as: *0TAAx 0z (3-26)

PAGE 86

86 which shows that in the space of the measurements z the residual vector *Ax 0 z is perpendicular to the model surf ace whose directing vectors are Ai. This also means that the solution x* to the basic least squares formulation of Equation 3-23 is obtained by projecting the experimental measurement point z0 onto the model surface z =Ax This is illustrated in Figure 3-4 when m=2 n=1 In order to find how different measurements are weighted, we obtain solutions based on a single measurement as well (this is possible since n =1). The solution for measurement i is 2 *01 argmin 2i ii x x Axz for i = 1,, m (3-27) where subscript i represents the ith line of the respective matrix/vector. Accordingly x*i is the parameter found when only the measurement 0iz was used and z*i is the corresponding value on the surface of the response: z*i = A x*i. We call x*i partial solutions and z*i the corresponding virtual measurements which are represented in Figure 3-4. Figure 3-4. Illustration of least squares solution z* = Ax* and partial solutions z*i for m =2, n =1. We can now investigate how the least squares solution z* depends on the virtual measurements z*i and thus, on the partial solutions x*i. If z* is closer to z*i this can be seen as the

PAGE 87

87 least squares procedure implicitly assigning more weight to z*i. Figure 3-5 illustrates two cases with different slopes. In the case where z1 is more sensitive to E variations than z2 (low slope in the ( z1, z2) plane) then the basic least squares formulation assigns more weight to z*1 and inversely. We can deduce that which partial so lution is assigned more weight depends on the sensitivity of the response to the parameter to be identified. Mathematically this can be quantified by: 0i x y which is also equal to 0i x z Deriving the normal e quation 3-25 we obtain: 00 1 0TTT i ix AAAA z (3-28) From Equation 3-20 we can deduce that the sens itivity of the identified parameters to the ith experiment depends on the ith line of A. Figure 3-5. Illustration of least squares solution z* = Ax* and virtual measurements z*i for two different slopes A. In one case the basi c least squares formulation assigns more weight to z*1 in the other to in the other to z*2.

PAGE 88

88 After this demonstration, it is interesting to analyze how the least squares and the Bayesian approaches are represented graphically for the thr ee bar truss example. We illustrate in Figure 36 the case of different strain sensitivity to E which translates into different strain magnitudes (studied in the previous subsection) The blue line is the model surface Ax y b In our case x=1/E and b = 0 so that z = y = { A, B}T. The red circle is the experimental measurement and its orthogonal projection on the mode l surface (red cross) is the le ast squares identified modulus, E =9.67 GPa. The center of the ellipses, *9.97BayesEGPa is the Bayesian identified modulus (most likely point). The ellipses represen t the joint strain distribution function ,/9.97,ABAB EGPa If we vary E this is the distribution leadi ng to the highest likelihood of the experimental measurement, meaning that if we translate the dist ribution along the model surface, this is the distribution for which the oute r ellipse gets the closes t to the experimental point. Note that this explanation does not take in to account the impact of the prior which we have shown to be negligible in our case, be cause we have chosen a wide prior. Figure 3-6 also helps understanding why the ba sic least squares formulation leads to poor results in the case of different sensitivity to E (i.e. different strain magnitude). Due to the different sensitivity to E the model surface (blue line) has a low slope. The uncertainties in the loads propagate to the same relative uncertainties in the two strains, but different absolute uncertainties due to the different strain sensitivitie s, meaning that the joint strain distribution will be elliptical instead of circul ar. The low slope in combination with the elliptical distribution leads the orthogonal projection of the measurement to be relatively far away from the maximum likelihood point as illustrated in Figure 3-6.

PAGE 89

89 Figure 3-6. Graphical representation of the leas t squares and Bayesian results for the three bar truss example for the different sensitivity ca se (i.e. different strain magnitude). The red circle is the experimental measurement, its orthogonal project ion (red cross) the least squares identified modulus while the center of the ellipses is the Bayesian identified modulus. The ellipses represent th e contour plot of the strain distribution *,/,ABBayesAB EE Results for Different Uncertainty in the Strains To create different uncertainty in the two strains we used the values given in Table 3-3. To isolate this effect from the previous one, the calculated strains have about the same magnitude, thus the same sensitivity to E We chose 5% uncertainty in p and 0.5% in r which result in A having about 7 times higher uncertainty than B. The results of the two identification procedures are presented in Table 3-4 for the ex treme case where the actual values of p and r are two standard deviations away from the mean values Again the Bayesian approach is much more accurate than the least squares approach.

PAGE 90

90 Table 3-3. Numerical values for variable response uncertainty. Input parameters Strains Parameter AA (m2) AB (m2) p (N) r (N) A (mm/m) B (mm/m) (Mean) value 7.85 10-4 1 10-2 104 105 0.980 0.980 Standard deviation 500 500 0.0368 0.0049 Measured strains* 1.05 0.970 obtained from Eq. (3-17) and (3-19) with E =10 GPa, p = pmean+2 p and r = rmean-2 r Table 3-4. Extreme case identification results fo r different response uncertainty (true value of E is 10 GPa). From measure A alone From measure B alone Least squares Bayesian E (GPa) 9.32 10.10 9.69 10.08 (most likely value) 0.058 (standard deviation) Since the two strains have about the same sensitivity to E (due to same magnitude), least squares assigns about the same weight to each, so the identified E is at about the average between the E found with each measurement alone. The uncertainty information is taken into account by the Bayesian method through the li kelihood function, which may be viewed as assigning more weight to the measurement ha ving low uncertainty. This explains why the Bayesian identified modulus is much closer to the one found using measure Balone. Similar to the graphical representation provide d in the previous subsection for different response sensitivity, we provide here an interpretati on of the three bar truss results in the case of different response uncertainty. Figure 3-7 illustra tes the present case of different uncertainty in the loads which propagates to different uncertainty in the strains. We can note that here we have same sensitivity of the strains to the Young modulus, which can be seen by the 45 slope of the model surface. Note also that the elliptical shape of the joint strain distributions is due this time not to different strain sensitivity as is in the pr evious subsection but to different uncertainties in the loads, which propagates to a higher uncertainty in A than in B.

PAGE 91

91 In addition to the joint strain distribution centered in the Bayesian identified modulus (ellipses in full line) we also provide what would be this joint strain distribution if the modulus was the one found by least squares. This show s that the experimental measurement has the maximum likelihood on the strain distributi on with the Bayesian modulus. Indeed the measurement point is the closest possible to the outer most ellipse of all possible distributions translated along the model surface line. In particular the strain distribution centered in the least squares modulus has a significantly lower likelihood. Figure 3-7. Graphical representation of the leas t squares and Bayesian results for the three bar truss example for the different uncertainty case. The red circle is the experimental measurement and its orthogonal projection (red cross) the least squares identified modulus. The ellipses are the contou r plot of the strain distribution *,/,ABiAB EE where Ei is the least square s identified modulus *LS E (dashed line) or the Bayesian identified modulus BayesE (full line).

PAGE 92

92 Results for Correlation among the Responses To show the effect of correlation we need three strain measurements with two strongly correlated but not correlated to the third. For this purpose we used the parameter values in Table 3-5. Note that, in order to isolate this effect fr om the previous one, we have the same relative uncertainty in p and in r which propagates to about the same relative uncertainty in all three strains. We could not, however, find values that lead to the same amplitude for all three strain measurements. Therefore, the least squares ap proach will pay less attention to the small measure B. The correlation between A and C is -0.985 while the correlatio n between the other two couples is 0.086, meaning that only A and C are highly correlated. Table 3-5. Numerical values for response correlation. Input parameters Strains Parameter AA (m2) AB (m2)pm (N)rm (N) A (mm/m) B (mm/m) C (mm/m) (Mean) value 2 10-4 1 10-2 104 105 3.13 0.995 -2.64 Standard deviation 250 2500 0.0725 0.0248 0.0725 Measured strains* 3.27 0.945 -2.79 obtained from Eq. (3-17) to (3-19) with E =10 GPa, p = pmean+2 p and r = rmean-2 r Table 3-6. Extreme case identification result s for response correlation (true value of E =10 GPa). Frommeasure A Frommeasure B Frommeasure C Least squares Bayesian E (GPa) 9.59 10.52 9.43 9.58 9.96 (most likely value) 0.196 (standard deviation) Comparison between Table 3-2 and Table 3-6 sh ows that the least squares method is more affected by adding a correlated measurement (+ 0. 9% difference) than the Bayesian approach (+ 0.1% difference). The explanation is that le ast squares treats all three measurements as independent and due to the small magnitude of measure B it mainly averages measure A andmeasure C.


The Bayesian approach may be viewed as averaging the highly correlated measures ε_A and ε_C first, then treating this average as a single experiment and combining it with the uncorrelated one.

Results for All Three Effects Together

In this last case we combine all three effects, which is what may sometimes happen in a real situation. For this purpose we used the numerical values given in Table 3-7. We have different magnitudes of the strains, different uncertainty in the loads and correlation among the strains: the correlation between ε_A and ε_C is -0.999, between ε_A and ε_B 0.014 and between ε_B and ε_C 0.002.

Table 3-7. Numerical values for the three combined effects.
                           Input parameters                              Strains
Parameter             A_A (m²)   A_B (m²)   p_m (N)   r_m (N)   ε_A (mm/m)   ε_B (mm/m)   ε_C (mm/m)
(Mean) value          2×10⁻⁴     1×10⁻²     10⁴       10⁵       3.13         0.995        -2.64
Standard deviation                          500       500       0.144        0.005        0.144
Measured strains*                                               3.42         0.985        -2.93
* Obtained from Eq. (3-17) to (3-19) with E = 10 GPa, p = p_mean + 2σ_p and r = r_mean − 2σ_r.

Table 3-8. Extreme case identification results for the three combined effects (true value of E is 10 GPa).
          From measure ε_A   From measure ε_B   From measure ε_C   Least squares   Bayesian
E (GPa)   9.16               10.10              9.00               9.14            10.08 (most likely value)
                                                                                   0.058 (standard deviation)

We can see in Table 3-8 that in this case the error of the least squares approach is exacerbated: all effects act together and against the least squares method. The Bayesian method, on the other hand, relies almost exclusively on ε_B, which has by far the lowest uncertainty, leading it much closer to the true Young's modulus.


Additional details on how the Bayesian approach correctly handles normalization, uncertainty and correlation of the measurements are provided next. Since the loads have normal distributions and the problem is linear in the loads, the strain distribution given E is Gaussian as well. Using again the general notation introduced at the beginning of this chapter we can write:

\pi_{Y/X=x}(y) = \frac{1}{(2\pi)^{m/2} \, |C_y(x)|^{1/2}} \exp\!\left( -\frac{1}{2} \left( y - m_y(x) \right)^T C_y(x)^{-1} \left( y - m_y(x) \right) \right)     (3-29)

where m_y(x) and C_y(x) are the mean of the response and the variance-covariance matrix of the response for the given x. The term (y − m_y(x))^T C_y(x)^{-1} (y − m_y(x)) in this expression is the one that scales and de-correlates the measurements according to the magnitude, uncertainty and correlation provided by C_y. This can be further understood after a change of coordinates. First note that since C_y is a variance-covariance matrix it is symmetric and positive definite, so it can be decomposed as C_y = B D^2 B^T, where D = diag(d_1, ..., d_m) and B = [b_1 ... b_m] is the matrix of eigenvectors of C_y (i.e. C_y b_i = d_i^2 b_i). Then we can write C_y^{-1} = (B D^2 B^T)^{-1} = (D^{-1} B^T)^T (D^{-1} B^T), which leads to Equation 3-30.

\left( y - m_y \right)^T C_y^{-1} \left( y - m_y \right) = \left( D^{-1} B^T (y - m_y) \right)^T \left( D^{-1} B^T (y - m_y) \right)     (3-30)

Substituting this expression back into Equation 3-29 for the Gaussian distribution of the response y for a given x illustrates how, for any given measurements y0, the measurements are de-correlated (by the multiplication by B^T, which implements a change of reference into the basis of the eigenvectors) and scaled (by the multiplication by D^{-1}, which contains the variance terms in the eigenvector directions and accounts for both response sensitivity and uncertainty).
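A minimal numerical sketch of Equations 3-29 and 3-30 (Python with NumPy; illustrative only, with made-up covariance values): it evaluates the Gaussian likelihood of a measurement vector and verifies that the quadratic form equals the squared norm of the de-correlated, scaled residual D^{-1} B^T (y − m_y).

```python
import numpy as np

def gaussian_likelihood(y, m_y, C_y):
    """Multivariate normal pdf of Eq. 3-29 evaluated at the measurements y."""
    r = y - m_y
    m = len(y)
    quad = r @ np.linalg.solve(C_y, r)            # (y - m_y)^T C_y^{-1} (y - m_y)
    norm = (2.0 * np.pi) ** (m / 2) * np.sqrt(np.linalg.det(C_y))
    return float(np.exp(-0.5 * quad) / norm)

# Illustrative 2x2 covariance: two responses with different uncertainties and
# strong negative correlation (numbers chosen only for the demonstration).
C_y = np.array([[0.0368**2,              -0.9 * 0.0368 * 0.0049],
                [-0.9 * 0.0368 * 0.0049,  0.0049**2]])
m_y = np.array([0.980, 0.980])
y0  = np.array([1.05, 0.970])

# Eigendecomposition C_y = B D^2 B^T: B^T de-correlates, D^{-1} scales (Eq. 3-30).
d2, B = np.linalg.eigh(C_y)
z = np.diag(1.0 / np.sqrt(d2)) @ B.T @ (y0 - m_y)  # de-correlated, scaled residual

quad_direct = (y0 - m_y) @ np.linalg.solve(C_y, y0 - m_y)
print(np.isclose(quad_direct, z @ z))              # True: Eq. 3-30 holds
print(gaussian_likelihood(y0, m_y, C_y))
```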


This means that each time the Bayesian procedure evaluates π_{Y/X=x}(y) at the experimental measurement point y = y0, the measurements are implicitly scaled and de-correlated, which provides an alternative explanation of why the Bayesian approach correctly handles different response magnitudes, response uncertainties and correlation.

At this point we briefly discuss the influence of the prior distribution on the Bayesian identification results. All the previous results were obtained for a truncated, normally distributed wide prior with mean value 9.5 GPa and a large standard deviation of 1.5 GPa. If we change the standard deviation of the prior to 0.75 GPa, keeping the same mean and truncation bounds, we obtain for this extreme case a most likely value of 10.07 GPa (10.08 previously, see Table 3-8), a small improvement due to the narrower prior. If we change the mean of the prior distribution to 10.5 GPa, keeping the standard deviation and the truncation bounds the same, we obtain a most likely value of 10.09 GPa (compared to 10.08). So even though we changed the prior significantly, it had very small effects on the Bayesian identification results. Of course a much narrower, more accurate prior distribution would have improved the accuracy considerably. It is thus worth noting that one of the advantages of the Bayesian approach is the ease with which it can incorporate prior knowledge, even though investigating the influence of the prior was not among our primary objectives in this work.

As a final note we mention again that the advanced least squares approach involving the variance-covariance matrix would also handle this three bar truss problem appropriately, by de-correlating and scaling the measurements similarly to Equation 3-30. In fact, if we neglect the influence of the prior, the Bayesian and the advanced least squares approaches would lead to the same results on this problem.


This might not be the case, however, on more complex problems, such as the vibration problem that we treat in the next sections, where the uncertainties no longer propagate to Gaussian uncertainties on the measurements. In such problems the Bayesian approach based on Monte Carlo simulation appropriately handles non-Gaussian uncertainties as well.

Average Performance of the Least Squares and Bayesian Approach

To complement the results obtained for the extreme case we repeat the previous procedure 1000 times, for random values of the loads obtained by Monte Carlo simulation instead of the extreme values. The numerical values for the cross sections, the loads and their uncertainty are those given earlier for each case. The quality of the methods is measured mainly by the standard deviation of E as the loads are varied (see Table 3-9). For the individual cases the standard deviations of the Bayesian approach are lower by a fraction (different sensitivity case) up to a factor of 3.5 (different uncertainty). The difference is even more striking for the three effects combined, since, on average, the E found by the Bayesian approach is almost 10 times closer to the true value.

Table 3-9. Average performance of the methods in the different cases (note that the differences in the mean value mainly reflect the limited sample size of 1000 cases). The numerical values for the cross sections, the loads and their uncertainties are those given in the previous sections.
                                        Mean of E (GPa)   Standard deviation of E (GPa)
Different sensitivity   Least squares   10.01             0.200
                        Bayesian        9.99              0.167
Different uncertainty   Least squares   10.00             0.178
                        Bayesian        9.99              0.051
Correlation             Least squares   10.01             0.221
                        Bayesian        9.99              0.167
All three together      Least squares   10.04             0.447
                        Bayesian        9.99              0.050


Vibration Identification Problem

Description of the Problem

In this section we explore how the two methods compare on a more realistic problem: identifying the elastic properties of a composite plate from its natural frequencies. This problem also illustrates the results in a case where the uncertainties on the response are no longer Gaussian. Since we are only interested here in comparing the two identification methods, we simulate experiments from the analytical expressions available for simply supported plates and from the error models explained hereafter. This also eases the computational cost, which would be a major issue if more realistic boundary conditions requiring more complex numerical models (finite elements, for example) were used.

We consider a [0, 90]s simply supported graphite/epoxy composite laminate of dimensions a = 200 mm, b = 250 mm and total thickness h = 3 mm. We assume that the true elastic constants of the laminate are those given in Table 3-10. Note that our aim here is to identify these properties of the laminate and not those of an individual ply. For the sake of simplicity and easy graphical representation, we identify only two properties, assuming that νxy is known and that Ex = Ey. This leaves Ex and Gxy to be identified. Note however that the procedure described remains the same if all four properties were identified.

Table 3-10. Assumed true values of the laminate elastic constants.
Parameter    Ex (GPa)   Ey (GPa)   Gxy (GPa)   νxy
True value   57.6       57.6       4.26        0.05

The simulated experiment consists of measuring the first nine natural frequencies of the plate. The measured frequencies are generated using thin plate theory and by simulating various sources of uncertainty, such as measurement errors, which are detailed in the next section. We use thin plate theory to obtain the frequencies in terms of the density ρ and the bending rigidities Dij, which leads to Equation 3-31.


The bending rigidities Dij are functions of the in-plane properties of an individual ply (longitudinal and transverse Young's moduli E1 and E2, Poisson's ratio ν12 and shear modulus G12), of the thickness and of the composite layup orientation. For a detailed procedure for calculating the rigidities of a composite laminate from the ply properties the reader can refer to Gurdal et al. (1998).

f_{kl} = \frac{\pi}{2} \sqrt{ \frac{ D_{11} \left( \frac{k}{a} \right)^{4} + 2 \left( D_{12} + 2 D_{66} \right) \left( \frac{k}{a} \right)^{2} \left( \frac{l}{b} \right)^{2} + D_{22} \left( \frac{l}{b} \right)^{4} }{ \rho h } }     (3-31)

Sources of uncertainty

For this problem we considered uncertainty on the input parameters to the model, measurement uncertainty, which was assumed uniformly distributed, and finally modeling error.

In terms of the model input parameters, denoted p, we assumed that there are uncertainties in the plate dimensions and density. These uncertainties, assumed normally distributed and uncorrelated, are given in Table 3-11. Note that the resulting uncertainty on the response (the frequencies here) can be expected to be normal only if the response is linear in the dimensions, which it clearly is not (see Equation 3-31).

Table 3-11. Assumed uncertainties in the plate length, width, thickness and density (a, b, h and ρ). Normal distributions are considered.
Parameter            a (mm)   b (mm)   h (mm)   ρ (kg/m³)
Mean value           200      250      3        1536
Standard deviation   0.5      0.5      0.01     7.67

We assume a uniformly distributed measurement error in an interval that widens for the higher natural frequencies, mainly because the higher vibration modes have smaller amplitudes than the lower modes. To this we add a systematic modeling error, which is zero for the fundamental frequency and increases linearly. This error may account for the difference between thin and thick plate theory, since higher modes have shorter wavelengths.


The measured frequencies can then be simulated using the following equation:

f_{kl}^{measure} = f_{kl}^{resp}\!\left( \{E_x, G_{xy}\}^{true}, p^{true} \right) + u_{kl}     (3-32)

where f^resp is obtained from Equation 3-31 and is a function of {Ex, Gxy}true, the true values of the material properties, and of ptrue, the true values of the other input parameters a, b, h and ρ. The variable ukl is a random variable uniformly distributed in the interval [akl, bkl], where

a_{kl} = f_{11}^{resp} \left[ a_{lb} + \left( a_{ub} - a_{lb} \right) \frac{k + l - 2}{k_{max} + l_{max} - 2} \right], \qquad b_{kl} = f_{11}^{resp} \left[ b_{lb} + \left( b_{ub} - b_{lb} \right) \frac{k + l - 2}{k_{max} + l_{max} - 2} \right]     (3-33)

For nine frequencies, kmax = lmax = 3. We chose alb = -2.5×10⁻³, aub = -4×10⁻², blb = 2.5×10⁻³ and bub = -2×10⁻². With these numerical values the error on the lowest natural frequency f11 is uniformly distributed within the bounds [-0.0025 f11, 0.0025 f11]. The error on the highest (ninth) measured natural frequency is uniformly distributed within the bounds [-0.04 f11, -0.02 f11]. In between, the bounds of the error vary linearly with respect to k and l, which is assumed to model the increase in the measurement error for the higher frequencies. The fact that the center of the interval is not zero for all frequencies, but decreases with the frequencies, is assumed to model the error between thin and thick plate theory. The random variable ukl thus aggregates the modeling and measurement uncertainties εmodel and εmeas of Eq. 3-10 into a single quantity.
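A minimal Python sketch of this simulated experiment (illustrative only: the flexural rigidities Dij are passed in directly as placeholder values rather than computed from the ply properties and layup):

```python
import numpy as np

def f_resp(k, l, D11, D12, D22, D66, a, b, rho, h):
    """Thin-plate natural frequency (Hz) of a simply supported orthotropic plate, Eq. 3-31."""
    num = (D11 * (k / a)**4
           + 2.0 * (D12 + 2.0 * D66) * (k / a)**2 * (l / b)**2
           + D22 * (l / b)**4)
    return 0.5 * np.pi * np.sqrt(num / (rho * h))

def simulate_measurement(D, geom, rng, k_max=3, l_max=3,
                         a_lb=-2.5e-3, a_ub=-4e-2, b_lb=2.5e-3, b_ub=-2e-2):
    """Simulated measured frequencies, Eqs. 3-32 and 3-33 (uniform error scaled by f11)."""
    a, b, rho, h = geom
    f11 = f_resp(1, 1, *D, a, b, rho, h)
    f_meas = {}
    for k in range(1, k_max + 1):
        for l in range(1, l_max + 1):
            s = (k + l - 2) / (k_max + l_max - 2)    # linear variation with k and l
            lo = f11 * (a_lb + (a_ub - a_lb) * s)    # a_kl
            hi = f11 * (b_lb + (b_ub - b_lb) * s)    # b_kl
            u = rng.uniform(lo, hi)                  # u_kl
            f_meas[(k, l)] = f_resp(k, l, *D, a, b, rho, h) + u
    return f_meas

# Placeholder rigidities D11, D12, D22, D66 (N*m) and the nominal geometry of Table 3-11.
rng = np.random.default_rng(0)
D = (20.0, 1.5, 20.0, 2.0)               # placeholder values, not the actual plate rigidities
geom = (0.200, 0.250, 1536.0, 0.003)     # a (m), b (m), rho (kg/m^3), h (m)
print(simulate_measurement(D, geom, rng))
```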


The Identification Methods

The least squares method minimizes the objective function shown in Equation 3-34, where f_kl^resp(Ex, Gxy) is the response calculated from Equation 3-31 using the mean values of a, b, h and ρ:

J\!\left( E_x, G_{xy} \right) = \sum_{k,l = 1..3} \left( f_{kl}^{resp}(E_x, G_{xy}) - f_{kl}^{measure} \right)^2     (3-34)

For the experimental measurements we assume that we know the average of the systematic error, (akl + bkl)/2, for which we correct each experimental frequency f_kl^measure. Note that this is a significant assumption in favor of least squares. In reality, the systematic error might be less well known, which would then create a bias in the identified properties. The Bayesian approach, on the other hand, can handle even vague information on the systematic error, as we show in the next paragraphs.

The Bayesian approach can be written as shown in Equation 3-35.

\pi\!\left( E_x, G_{xy} \,/\, f_{11} = f_{11}^{measure}, ..., f_{33} = f_{33}^{measure} \right) = \frac{1}{K} \, \pi\!\left( f_{11}^{measure}, ..., f_{33}^{measure} \,/\, E_x, G_{xy} \right) \, \pi^{prior}\!\left( E_x, G_{xy} \right)     (3-35)

A major difference compared to the three bar truss is that we now also have measurement error, which we can take into account in the Bayesian approach. We assume that we know there is some numerical noise and that thin plate theory over-predicts the natural frequencies, but that we do not know the exact amounts. For the Bayesian identification we use an error model accounting for the various sources of uncertainty described in the previous subsection:

f_{kl} = f_{kl}^{resp}\!\left( E_x, G_{xy}, p \right) + u_{kl}     (3-36)

where f^resp is the model response of Equation 3-31, {Ex, Gxy} are the values of the material properties considered at each identification step, and p are simulated values of the other input parameters a, b, h and ρ. The variable u is a random variable uniformly distributed in the interval [akl, bkl], where akl and bkl are obtained from Equation 3-33 with alb = -5×10⁻³, aub = -5×10⁻², blb = 5×10⁻³ and bub = -1×10⁻². Note that these error bounds are significantly wider than the ones used for simulating the actual experiment (see the previous subsection), reflecting the fact that we have only vague knowledge of the error model and tend to be conservative.


Another major difference with the three bar truss formulation is that we now work with the joint probability distribution of Ex and Gxy instead of the one-dimensional distribution of E, and with the nine-dimensional joint distribution of the frequency measurements instead of the two- or three-dimensional strain distribution. The higher dimensions significantly increase the computational cost of the Bayesian identification. Furthermore, the model here is no longer linear in the material properties x (see the notations introduced at the beginning of the chapter).

Note also that the joint frequencies distribution is of unknown shape because the frequency response is non-linear in the uncertain input parameters of Table 3-11. This means that we cannot use the empirical mean and variance-covariance matrix to fit the frequencies distribution with a joint normal pdf. Instead, given that we assumed a uniformly distributed error, we can directly calculate the likelihood function by integrating the pdf of the simulated frequencies between the uncertainty bounds, as shown in Equation 3-37.

\pi\!\left( \mathbf{f} = \mathbf{f}^{measure} \,/\, E_x, G_{xy} \right) = \frac{1}{K} \int_{\mathbf{f}^{measure} - \mathbf{b}}^{\mathbf{f}^{measure} - \mathbf{a}} \pi_{\mathbf{f}^{input\_MC}}\!\left( \mathbf{f} \right) \, d\mathbf{f}     (3-37)

where K is a normalizing constant, f is the nine-dimensional random variable of the frequency measurement prediction, fmeasure is the nine-dimensional vector of the measured frequencies, and a and b are the nine-dimensional vectors of the measurement uncertainty bounds akl and bkl. Finally, finput_MC = f_kl^resp(Ex, Gxy, p) is the nine-dimensional random variable of the frequencies due only to the uncertainty on the input parameters p, obtained by Monte Carlo simulation of the frequency expression of Equation 3-31 with the input uncertainties of Table 3-11. Equation 3-37 is equivalent to saying that the likelihood of measuring the frequencies fmeasure is equal to the probability that the simulated frequencies finput_MC fall inside the measurement uncertainty bounds.


The integral in Equation 3-37 is evaluated by counting how many of the simulated frequency vectors finput_MC satisfy fmeasure − b ≤ finput_MC ≤ fmeasure − a, out of the total number of simulated frequencies. We used 50,000 Monte Carlo simulations. This calculation approach can also be seen as implicitly using the independence of the measurement uncertainty and of the uncertainty due to the input parameters, which reduces the cost by avoiding the calculation of the full histogram.

The prior distribution for the Bayesian identification was assumed to be a truncated, uncorrelated bi-normal distribution with mean 57 GPa and standard deviation 10 GPa for Ex, and 4.2 GPa and 1.5 GPa, respectively, for Gxy. This is again a wide distribution, chosen to avoid giving the Bayesian method a significant advantage through the prior. The distribution was truncated for Ex at 55.6 GPa and 58.4 GPa and for Gxy at 3.78 GPa and 4.62 GPa. The truncation bounds were chosen around the mean value of the prior and iteratively reduced so that they eventually cover about five standard deviations of the posterior pdf.
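A minimal sketch of the counting estimator of Equation 3-37 (Python; illustrative, with made-up numbers): for one candidate {Ex, Gxy}, the likelihood is estimated as the fraction of the input-uncertainty Monte Carlo samples whose predicted frequency vector falls inside the measurement uncertainty box.

```python
import numpy as np

def likelihood_by_counting(f_measure, a_bounds, b_bounds, f_input_mc):
    """Monte Carlo estimate of the integral in Eq. 3-37 (up to the constant 1/K).

    f_measure  : (n,)   measured frequencies
    a_bounds   : (n,)   lower bounds a_kl of the uniform error u_kl
    b_bounds   : (n,)   upper bounds b_kl of the uniform error u_kl
    f_input_mc : (N, n) frequencies simulated for one candidate {Ex, Gxy},
                        varying only the uncertain inputs p = (a, b, h, rho)
    """
    lower = f_measure - b_bounds        # f_input must lie in [f_meas - b, f_meas - a]
    upper = f_measure - a_bounds
    inside = np.all((f_input_mc >= lower) & (f_input_mc <= upper), axis=1)
    return inside.mean()                # fraction of samples inside the box

# Tiny synthetic illustration with made-up numbers (3 "frequencies", 50,000 samples).
rng = np.random.default_rng(1)
f_true = np.array([268.0, 613.0, 864.0])
f_input_mc = f_true * (1.0 + 0.004 * rng.standard_normal((50_000, 3)))
f_measure = f_true * 1.002
a_bounds = -0.01 * f_true               # placeholder uniform-error bounds
b_bounds = 0.01 * f_true
print(likelihood_by_counting(f_measure, a_bounds, b_bounds, f_input_mc))
```

Evaluating this fraction over a grid of candidate {Ex, Gxy} values and multiplying by the prior then yields the posterior pdf of Equation 3-35.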


Results

To illustrate the benefits of the Bayesian method we first present the results for a particular simulation where we randomly simulated a single experiment (generated according to the procedure explained previously, including uncertainty in model input parameters, model error and measurement uncertainty). The simulated frequencies that we assume we measure are given in Table 3-12. When repeating the simulation of measured frequencies a few times we found that the frequencies of Table 3-12 lead to rather large differences between the Bayesian and the least squares approach. This case can therefore be considered a rather extreme one, which is acceptable since we also provide below the average performance over 100 repetitions of the simulated experimental frequencies.

Table 3-12. Simulated experimental frequencies.
Frequency    f11     f12     f13      f21     f22      f23      f31      f32      f33
Value (Hz)   267.9   612.9   1266.3   863.6   1070.5   1594.4   1892.0   2032.5   2409.1

The identification results for this case are presented in Table 3-13. The Bayesian approach obtained the probability distribution shown in Figure 3-8. The maximum of the distribution is at Ex = 57.9 GPa, Gxy = 4.30 GPa.

Table 3-13. Least squares and Bayesian results for a randomly simulated particular case.
                              Ex (GPa)   Gxy (GPa)
True values                   57.6       4.26
Least squares values          56.9       4.66
Bayesian most likely values   57.9       4.30

Both approaches found an Ex very close to the true value (within about 1%). However, the Gxy found by least squares is more than 8% off the true value, while the one found by the Bayesian method is only 0.9% off. Furthermore, the Bayesian approach provides additional information in the form of the distribution shown in Figure 3-8 (in particular, information on the standard deviation of the identified properties). Note that the distribution along Gxy is much wider than the one along Ex (note the different scales in Figure 3-8). This means that the confidence in the most likely value of Gxy is much poorer than in that of Ex, which reflects the well known fact that Gxy is harder to identify accurately than Ex from a vibration test.

The average performance over 100 repetitions of the identification with randomly simulated experiments is given in Table 3-14. The two methods are comparable for Ex, but the Bayesian approach is about 1.9 times more accurate for Gxy.

Table 3-14. Average performance for the plate vibration problem with 100 repetitions.
                Mean value (GPa)          Standard deviation (GPa)
Least squares   Ex = 57.5 ; Gxy = 4.26    for Ex: 0.65 (1.13%) ; for Gxy: 0.15 (3.63%)
Bayesian        Ex = 57.5 ; Gxy = 4.26    for Ex: 0.50 (0.88%) ; for Gxy: 0.083 (1.96%)


Figure 3-8. Posterior distribution of {Ex, Gxy} found with the Bayesian approach.

Importance of Handling Multiple Uncertainty Sources

Previous studies on Bayesian identification of elastic constants from natural frequencies (Sol 1986, Marwala and Sibisi 2005, Daghia et al. 2007) considered that the only uncertainty in the problem is measurement error, which was assumed to be normally distributed. In the present work we seek to build a procedure that can also handle uncertainty on other input parameters of the model, such as the plate dimensions or the plate density in the current example, and also, if necessary, handle non-Gaussian measurement error. Accounting only for Gaussian measurement error allows a computationally cheap analytical treatment, while accounting for the other uncertainties makes the problem substantially more costly. A question that arises in this case is the following: is the more complex procedure for accounting for the other uncertainties worth it? In other words, will accounting for the other uncertainties in the problem make a substantial difference to the identified probability density function?

To answer this question we treat the present identification problem for three cases:
i. Only Gaussian measurement noise on the natural frequencies. We assumed that the noise has a normal distribution with zero mean and a standard deviation of 0.5% of the measured natural frequencies.


ii. Only uncertainty on the input parameters to the vibration model: plate length a, width b, thickness h and density ρ. The uncertainties assumed are those provided in Table 3-11.
iii. Both Gaussian measurement noise and uncertainty on the model input parameters. The uncertainty models are those of the previous two cases, respectively.

Case i., with Gaussian measurement noise only, is the most common in Bayesian identification studies. The identified posterior pdfs obtained with the Bayesian approach are provided in Figure 3-9 for the three cases.

Figure 3-9. Identified posterior pdf for: A) Only Gaussian measurement noise (case i.). B) Only model input parameter uncertainty (case ii.). C) Both Gaussian measurement noise and model input parameter uncertainty (case iii.).


Figure 3-9 shows that the identified pdfs differ substantially when only measurement noise is assumed and when only uncertainty on the model input parameters is assumed. The third case, where both uncertainties are assumed, is as expected a mix of the two previous ones. It is important to note that the pdf obtained with the combination of the two uncertainties differs substantially from the one with only Gaussian measurement noise, thus answering the previous question and illustrating the interest of considering uncertainty in the model input parameters as well. Considering only measurement noise would have led, on this problem, to substantially underestimating the uncertainty in the identified properties.

To confirm the findings of this section we sought a basic physical interpretation based on a simplified model. This interpretation is provided in Appendix C.

As a final note it is worth recalling that the identified probability distribution depends essentially on the physics of the problem on one hand and on the uncertainties considered in the problem on the other. The results of the present section suggest that the identified probability distribution is quite sensitive to the assumed input uncertainties. This means that if the method is to be applied to actual identifications, accurate input uncertainty quantification would be required in order to obtain results that provide quantitative rather than mainly qualitative information. Accurate input uncertainty quantification would without a doubt require a significant investment of time and resources in actual experiments. If such resources are not available, the identification approach should be applied with conservative uncertainty estimates.

Summary

The present chapter first introduced the Bayesian approach to identification with a relatively simple, didactic three bar truss example. The chapter is however more than just an introduction to Bayesian identification.


We compared, on two examples, two approaches to parameter identification: a basic least squares approach and a statistical approach, the Bayesian method, with the aim of identifying situations where the two approaches lead to significantly different results. Using the three bar truss example we identified the following conditions under which the basic least squares method, which is non-statistical, is systematically outperformed by the Bayesian statistical approach: (i) different sensitivity of the response components to the parameters to be identified; (ii) different uncertainty in the measurements; and (iii) high correlation among the measurements. The ratio of the accuracies of the two approaches depends on the specific problem, but for the truss problem we illustrated that it can reach a factor of ten when all the effects add up.

We then considered the identification of elastic constants from the natural frequencies of a simply supported plate. Using simulated experiments affected by uncertainty in input parameters, measurement noise and model error, we compared the two identification approaches. We found that the Bayesian approach was more accurate, especially for identifying the shear modulus, which is typically a harder property of composite materials to identify. Finally we showed the advantage of a Bayesian procedure that can handle not only measurement error but also uncertainty on other input parameters.

In the next two chapters we continue to analyze the identification of orthotropic elastic constants from the natural frequencies of vibrating plates. We move now to an actual identification using real measured natural frequencies. Actual measurements of plate natural frequencies are usually done on freely hanging plates, since these boundary conditions can be best approached with a numerical model. For these boundary conditions there are no exact analytical solutions, so numerical models such as finite elements need to be used. The major issue then in applying a Bayesian approach is computational cost. The vibration problem in the present chapter already involved about 340 million function evaluations for the Bayesian approach.


It is thus impossible to use finite element solutions directly. In the next chapter we investigate two options for reducing the cost: approximate analytical solutions to the vibration problem and response surface methodology. Then, in the following chapter, we apply the Bayesian approach to the identification problem using the reduced cost solution retained. Note that while we have seen in the last part of this chapter that the identified distribution can be quite sensitive to the assumed uncertainties, we choose, in accordance with the objective of this work, to concentrate in the next chapters on making the Bayesian approach computationally tractable for the more complex problems considered. The results we obtain would still provide novel, plausible information, but a more accurate input uncertainty quantification study would be required in order to fully confirm some of the findings.


CHAPTER 4
CHOOSING AN APPROPRIATE FIDELITY FOR THE APPROXIMATION OF NATURAL FREQUENCIES FOR ELASTIC CONSTANTS IDENTIFICATION

Introduction

Plate vibration has been frequently used for identifying the elastic constants of a plate (Leissa 1987, Mottershead and Friswell 1993), especially composite laminates. The identification is usually done with free-hanging plates (attached through strings only) in order to avoid difficult-to-model boundary conditions. Bayesian statistical identification approaches have the advantage of handling different sources of uncertainty in the identification procedure, as was shown in the previous chapter. They can also provide confidence intervals and correlation information on the identified properties. However, the Bayesian method can require Monte Carlo simulation, which implies a large number of vibration calculations for many combinations of the uncertain parameters such as geometry, material parameters and measurement errors. Numerical solutions for the natural frequencies of free plate vibration are too slow to be used in such a context. Accordingly there is a need for simple approximate analytical formulas that can be evaluated very quickly.

A simple, closed-form approximate analytical solution for the vibration problem of orthotropic plates with free boundary conditions was proposed by Dickinson (1978). This solution is applicable to wide ranges of geometries and materials, but its accuracy might not be sufficient for identification purposes. The aim of the present chapter is twofold. First we seek to develop a procedure for a high fidelity, analytical, approximate formula for the natural frequencies of free orthotropic plates based on response surface (RS) methodology. To achieve the desired fidelity the response surface method is combined with dimensional analysis. Our second goal is to compare the effect of the approximation fidelity on the identification results. For this purpose we compare the results of a least squares and a Bayesian identification approach using the high fidelity RS approximation and the low fidelity closed-form frequency approximation.


This will allow us to choose, between these two approximations, the most appropriate one for the present identification. The identification analyses are done using experimental data obtained by Pedersen and Frederiksen (1992). Note that while we use Bayesian identification in this chapter, we do not calculate or analyze the variance-covariance matrix of the identified properties; this will be done with a more complex model in Chapter 5. We also do not compare the least squares and the Bayesian identification in the present chapter, but use them together to determine the effect of the approximation fidelity on the identification results. This helps us determine an appropriate fidelity frequency approximation for use in the detailed Bayesian identification of Chapter 5.

In the first section we give a quick overview of the approximate analytical solution developed by Dickinson. In the second section we apply dimensional analysis to determine the variables of the response surface approximations (RSA) that lead to the best accuracy. Then we construct the design of experiments for the RSAs and compare their fidelity to finite element analyses and to that of the analytical solution by Dickinson. We then introduce the least squares and Bayesian identification schemes for this problem. We present the identification results using the high and the low fidelity approximations. Finally we provide concluding remarks.

Dickinson's Analytical Approximation

The only simple approximate analytical formula for the free vibration of orthotropic plates the authors could find was by Dickinson (1978). He applied characteristic beam functions in Rayleigh's method to obtain an approximate formula for the flexural vibration of specially orthotropic plates. The formula for free boundary conditions on all four edges is provided in Equation 4-1.


f_{mn} = \frac{\pi}{2} \sqrt{ \frac{ G_m^4 \frac{D_{11}}{a^4} + G_n^4 \frac{D_{22}}{b^4} + \frac{2}{a^2 b^2} \left( H_m H_n D_{12} + 2 J_m J_n D_{66} \right) }{ \rho h } }     (4-1)

where fmn is the natural frequency of the mode with wave numbers m and n; ρ is the density of the plate; a, b and h its length, width and thickness respectively; and Dij the plate flexural rigidities (for detailed expressions of the Dij refer to Gurdal et al. (1998)). Gi, Hi and Ji are constants, depending only on the mode numbers m and n, whose expressions are given in Table 4-1.

Note that determining the wave numbers of an experimentally or numerically obtained mode is not always straightforward. In general the mode number m (respectively n) can be obtained by adding one to the number of nodal lines perpendicular to the x (respectively y) edge (i.e. it is the number of half wavelengths in each direction). There are however exceptions, notably for low mode numbers. For a detailed study of the modes and associated mode numbers of free plates refer to Waller (1939 and 1949).

Table 4-1. Expression of the coefficients in Dickinson's approximate formula (Eq. 4-1) for the natural frequencies of a free plate.
Mode index i   Gi          Hi                                     Ji
1              0           0                                      0
2              0           0                                      1.216
3              1.506       1.248                                  5.017
i (i > 3)      i - 3/2     (i - 3/2)^2 [1 - 2/((i - 3/2)\pi)]     (i - 3/2)^2 [1 + 6/((i - 3/2)\pi)]

Dickinson's simple analytical expression is computationally inexpensive, thus a priori suitable for use in statistical methods that require its repeated evaluation a large number of times. However, the fidelity of the approximation must also be acceptable for the identification problem considered. Typically, Dickinson's approximation was reported to be within 5% of the exact numerical solution (Blevins 1979). It is not clear whether this accuracy is sufficient when used for identifying accurate elastic constants from vibration experiments.
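A small Python sketch of Equation 4-1 with the Table 4-1 coefficients as reconstructed above (illustrative only; the rigidities Dij are supplied directly as placeholder values):

```python
import math

def dickinson_coeffs(i):
    """G_i, H_i, J_i of Table 4-1 for a free-free characteristic beam function."""
    table = {1: (0.0, 0.0, 0.0),
             2: (0.0, 0.0, 1.216),
             3: (1.506, 1.248, 5.017)}
    if i in table:
        return table[i]
    g = i - 1.5
    return g, g**2 * (1.0 - 2.0 / (g * math.pi)), g**2 * (1.0 + 6.0 / (g * math.pi))

def f_dickinson(m, n, D11, D12, D22, D66, a, b, rho, h):
    """Approximate natural frequency (Hz) of a fully free orthotropic plate, Eq. 4-1."""
    Gm, Hm, Jm = dickinson_coeffs(m)
    Gn, Hn, Jn = dickinson_coeffs(n)
    num = (Gm**4 * D11 / a**4
           + Gn**4 * D22 / b**4
           + 2.0 / (a**2 * b**2) * (Hm * Hn * D12 + 2.0 * Jm * Jn * D66))
    return 0.5 * math.pi * math.sqrt(num / (rho * h))

# Placeholder rigidities (N*m) and the plate geometry/density of Table 4-10, for illustration only.
print(f_dickinson(2, 2, 120.0, 10.0, 80.0, 15.0, 0.209, 0.192, 2120.0, 0.00259))
```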


Therefore, in the next sections we also develop more accurate response surface approximations of the natural frequencies.

Frequency Response Surface Approximation (RSA)

Determining Nondimensional Variables for the Frequency RSA

For the present elastic constants identification problem, we propose to construct, based on finite element simulations, polynomial response surface (PRS) approximations of the natural frequencies of the plate in terms of the parameters that may have some uncertainty in their values: a, b, h, ρ as well as the four Dij that involve the elastic constants we seek to identify. We could directly construct a polynomial response surface as a function of these individual model parameters. However, as already mentioned in the second chapter, the accuracy of the RSA is generally improved and the number of required simulations reduced if the number of variables is reduced by using the nondimensional parameters characterizing the problem (cf. also Kaufman et al. (1996), Vignaux and Scott (1999), Lacey and Steele (2006), Gogu et al. (2007b)). To find these parameters we nondimensionalize the equations describing the vibration of a symmetric, specially orthotropic laminate.

Governing equation:

D_{11} \frac{\partial^4 w}{\partial x^4} + 2 \left( D_{12} + 2 D_{66} \right) \frac{\partial^4 w}{\partial x^2 \partial y^2} + D_{22} \frac{\partial^4 w}{\partial y^4} + \rho h \frac{\partial^2 w}{\partial t^2} = 0

where w is the out-of-plane displacement.

Boundary conditions:
On edge x = 0 and x = a (denoted x = 0; a):

M_x \big|_{x=0;a} = -\left( D_{11} \frac{\partial^2 w}{\partial x^2} + D_{12} \frac{\partial^2 w}{\partial y^2} \right)_{x=0;a} = 0

\left( Q_x + \frac{\partial M_{xy}}{\partial y} \right)_{x=0;a} = -\left( D_{11} \frac{\partial^3 w}{\partial x^3} + \left( D_{12} + 4 D_{66} \right) \frac{\partial^3 w}{\partial x \partial y^2} \right)_{x=0;a} = 0


On edge y = 0 and y = b (denoted y = 0; b):

M_y \big|_{y=0;b} = -\left( D_{12} \frac{\partial^2 w}{\partial x^2} + D_{22} \frac{\partial^2 w}{\partial y^2} \right)_{y=0;b} = 0

\left( Q_y + \frac{\partial M_{xy}}{\partial x} \right)_{y=0;b} = -\left( D_{22} \frac{\partial^3 w}{\partial y^3} + \left( D_{12} + 4 D_{66} \right) \frac{\partial^3 w}{\partial x^2 \partial y} \right)_{y=0;b} = 0

This vibration problem involves 11 variables, to which we add the natural frequency fmn that we seek, so a total of 12 variables for the problem of determining the plate's natural frequencies (see Table 4-2).

Table 4-2. Variables involved in the vibration problem and their units.
Variable   fmn   w   x   y   a   b   t   ρh      D11        D12        D22        D66
Unit       1/s   m   m   m   m   m   s   kg/m²   kg·m²/s²   kg·m²/s²   kg·m²/s²   kg·m²/s²

These 12 variables involve 3 dimension groups (m, kg, s). According to the Vaschy-Buckingham theorem (Vaschy 1892, Buckingham 1914) we know that the problem can be described by a minimum of 12 − 3 = 9 nondimensional groups. Defining the characteristic time constant ϖ = (ρ h a⁴ / D11)^{1/2}, the 9 nondimensional groups can be expressed as in Table 4-3.

Table 4-3. Nondimensional parameters characterizing the vibration problem.
w* = w/h    t* = t/ϖ    ξ = x/a    η = y/b    γ = a/b    ψmn = ϖ fmn    Δ12 = D12/D11    Δ22 = D22/D11    Δ66 = D66/D11

As a function of these nondimensional variables the vibration problem can be written as follows:


Governing equation:

\frac{\partial^4 w^*}{\partial \xi^4} + 2 \left( \Delta_{12} + 2 \Delta_{66} \right) \gamma^2 \frac{\partial^4 w^*}{\partial \xi^2 \partial \eta^2} + \Delta_{22} \gamma^4 \frac{\partial^4 w^*}{\partial \eta^4} + \frac{\partial^2 w^*}{\partial t^{*2}} = 0

Boundary conditions:
On edge ξ = 0 and ξ = 1 (denoted ξ = 0;1):

\left( \frac{\partial^2 w^*}{\partial \xi^2} + \Delta_{12} \gamma^2 \frac{\partial^2 w^*}{\partial \eta^2} \right)_{\xi = 0;1} = 0

\left( \frac{\partial^3 w^*}{\partial \xi^3} + \left( \Delta_{12} + 4 \Delta_{66} \right) \gamma^2 \frac{\partial^3 w^*}{\partial \xi \partial \eta^2} \right)_{\xi = 0;1} = 0

On edge η = 0 and η = 1 (denoted η = 0;1):

\left( \Delta_{12} \frac{\partial^2 w^*}{\partial \xi^2} + \Delta_{22} \gamma^2 \frac{\partial^2 w^*}{\partial \eta^2} \right)_{\eta = 0;1} = 0

\left( \Delta_{22} \gamma^3 \frac{\partial^3 w^*}{\partial \eta^3} + \left( \Delta_{12} + 4 \Delta_{66} \right) \gamma \frac{\partial^3 w^*}{\partial \xi^2 \partial \eta} \right)_{\eta = 0;1} = 0

For finding an RSA of the nondimensional natural frequency ψmn we are not interested in the vibration mode shapes, so we need neither the nondimensional out-of-plane displacement w*, nor the nondimensional time t*, nor the nondimensional coordinates ξ and η. This means that the nondimensional natural frequency ψmn can be expressed as a function of only four nondimensional parameters: ψmn = ψmn(Δ12, Δ22, Δ66, γ).

Note that rewriting the analytical approximation of Equation 4-1 in its nondimensional form leads to a polynomial function of the nondimensional parameters:

\psi_{mn}^2 = \frac{\pi^2}{4} \left[ G_m^4 + 2 \left( H_m H_n \Delta_{12} + 2 J_m J_n \Delta_{66} \right) \gamma^2 + G_n^4 \Delta_{22} \gamma^4 \right]     (4-2)

Equation 4-2 is a cubic polynomial in Δ12, Δ22, Δ66 and γ². We therefore expressed the squared nondimensional frequency as a cubic polynomial response surface (PRS) in terms of these four variables. Such a PRS has 31 additional polynomial terms beyond those in Equation 4-2, which can potentially increase the fidelity of the response surface approximation.
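A short Python sketch (illustrative; the Dij are again taken as given placeholder inputs) of the mapping from the dimensional plate parameters to the nondimensional groups of Table 4-3, and of the back-transformation of a nondimensional frequency ψmn to a dimensional one:

```python
import math

def nondimensional_groups(D11, D12, D22, D66, a, b, rho, h):
    """Nondimensional parameters of Table 4-3 and the characteristic time varpi."""
    varpi = math.sqrt(rho * h * a**4 / D11)   # characteristic time constant
    gamma = a / b                             # aspect ratio
    d12, d22, d66 = D12 / D11, D22 / D11, D66 / D11
    return varpi, gamma, d12, d22, d66

def dimensional_frequency(psi, varpi):
    """Back-transform a nondimensional frequency: f_mn = psi_mn / varpi (Hz)."""
    return psi / varpi

# Placeholder rigidities (N*m) and the plate geometry/density of Table 4-10.
varpi, gamma, d12, d22, d66 = nondimensional_groups(
    120.0, 10.0, 80.0, 15.0, 0.209, 0.192, 2120.0, 0.00259)
print(varpi, gamma, d12, d22, d66)
```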


RSA Construction Procedure

To fit the RSA we need to sample points in the four-dimensional space of the nondimensional parameters. For details on the methodology for polynomial response surface construction the reader can refer to Appendix A. The ranges of the sampling space depend on the application, and we selected the experiments carried out by Pedersen and Frederiksen (1992) for comparing the analytical and RS approximations.

If we sampled in the nondimensional variables directly, it would however be difficult to deduce the values of the dimensional variables needed for the FE model (E1, E2, ν12, G12, a, b, h and ρ). Accordingly we chose the following procedure to obtain the points in the nondimensional space and their corresponding dimensional parameters:
i. Sample Ni points (5000 points here) in the eight-dimensional space of the dimensional variables {E1, E2, ν12, G12, a, b, h, ρ} with uniform Latin Hypercube sampling within the bounds considered for the problem.
ii. Out of the Ni points, extract Ns points (typically 250 here) in the nondimensional space by maximizing the minimum (max-min) distance between any two points. The Matlab routines from the Surrogates ToolBox (Viana and Goel 2008) were used.

These steps ensure that the points are well distributed (space-filling) in the nondimensional space. Figure 4-1 illustrates this procedure in a two-dimensional case with Δ12 and γ only. The blue crosses are representative of the Ni points sampled in step i. The red circles are representative of the Ns points selected in step ii. Because we stopped the max-min search after 100,000 iterations (to keep the computational cost reasonable) we might not have reached the absolute maximum, but this is not required for good accuracy of the RSA.

Figure 4-1. Illustration of the procedure for sampling points in the nondimensional space.
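A sketch of the two-step sampling of steps i and ii (Python; the Latin Hypercube and the greedy max-min selection below are simple stand-ins for the Matlab Surrogates ToolBox routines used in the thesis, and the 'to_nondim' mapping is a crude placeholder for Table 4-3, which in practice goes through lamination theory to obtain the Dij):

```python
import numpy as np

def lhs(n, bounds, rng):
    """Uniform Latin Hypercube sample of n points within the given [low, high] bounds."""
    dim = bounds.shape[0]
    strata = (rng.permuted(np.tile(np.arange(n), (dim, 1)), axis=1).T
              + rng.random((n, dim))) / n
    return bounds[:, 0] + strata * (bounds[:, 1] - bounds[:, 0])

def maxmin_subset(points, n_s):
    """Greedy max-min selection of n_s space-filling points."""
    selected = [0]
    d_min = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n_s - 1):
        nxt = int(np.argmax(d_min))                      # farthest point from current set
        selected.append(nxt)
        d_min = np.minimum(d_min, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(selected)

# Step i: 5000 LHS points in {E1, E2, nu12, G12, a, b, h, rho} (wide bounds WB of
# Table 4-4 in the next section, in SI units).
rng = np.random.default_rng(0)
wb = np.array([[43e9, 80e9], [15e9, 28e9], [0.2, 0.36], [7e9, 13e9],
               [0.188, 0.230], [0.172, 0.211], [2.2e-3, 3.0e-3], [1800.0, 2450.0]])
x_dim = lhs(5000, wb, rng)

# Step ii: select 250 points that are space-filling in the nondimensional space.
# Crude proxy mapping (E2/E1, G12/E1, nu12, a/b), NOT the true Table 4-3 mapping.
to_nondim = lambda x: np.column_stack([x[:, 1] / x[:, 0], x[:, 3] / x[:, 0],
                                       x[:, 2], x[:, 4] / x[:, 5]])
x_design = x_dim[maxmin_subset(to_nondim(x_dim), 250)]
```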


Frequency RSA Results

The frequency RSAs are fitted to finite element (FE) simulations of the plate using the Abaqus commercial FE software. We used 400 thin plate elements (S8R5) to model the composite plate. The bounds on the variables given in Table 4-4 were chosen with the elastic constants identification problem in mind, based on the experiments of Pedersen and Frederiksen (1992). The plate considered was a glass-epoxy composite panel with stacking sequence [0,40,40,90,40,0,90,-40]s. In our case the RSAs are used to carry out least squares and Bayesian identification of the material properties. The bound ranges on the elastic constants encompass a fairly wide region in which we would expect to find the identified properties. The ranges on the other input parameters (geometry and plate density), which are somewhat narrower, were chosen so as to allow accounting for uncertainties in the Bayesian approach and to allow future applicability of the RSA to slightly different plates.

We decided to construct two sets of RSAs with two different sets of bounds, because we found that we can use somewhat narrower bounds for the Bayesian identification RSAs without compromising the results.


This behaviour is most likely due, as will be shown later, to the fact that the least squares identification problem is more ill-conditioned than the Bayesian problem. Table 4-4 presents the wide bounds (denoted WB) used for constructing the first RSA set, which will be used for the least squares based identification.

Table 4-4. Wide bounds on the model input parameters (denoted WB).
             E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)   a (mm)   b (mm)   h (mm)   ρ (kg/m³)
Low bound    43         15         0.2    7           188      172      2.2      1800
High bound   80         28         0.36   13          230      211      3.0      2450

We constructed a cubic polynomial response surface (PRS) approximation for each of the first ten squared nondimensional natural frequencies as a function of the nondimensional parameters determined previously. We used the procedure described in the previous section with Ns = 250 sampling points within the bounds WB. The response surface approximations fitted through these 250 points are denoted RSAWB.

To test the accuracy of the RSAs we evaluated them at an additional 250 finite element points (denoted P250), sampled using the same procedure as described in the previous section with the bounds given in Table 4-4. The results are given in Table 4-5, fi being the dimensional frequencies in order of increasing frequency value. The reader can also refer to Table 4-9 to get an idea of the order of magnitude of the different frequencies.

Table 4-5. Mean and maximum relative absolute error of the frequency RSA predictions (denoted RSAWB) compared at 250 test points.
Abs. error (%)   f1      f2      f3      f4      f5      f6      f7      f8      f9      f10
mean             0.033   0.548   0.290   0.032   0.038   0.680   0.667   0.583   1.110   0.590
max              0.175   4.197   1.695   0.140   0.195   5.219   5.610   3.680   7.834   7.498
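A minimal sketch of fitting one cubic PRS by linear least squares (Python with NumPy; illustrative, with a synthetic response standing in for the squared nondimensional frequencies computed from the 250 FE simulations):

```python
import numpy as np
from itertools import combinations_with_replacement

def cubic_features(X):
    """Monomials of total degree <= 3 in the columns of X (the 4 nondimensional variables)."""
    n, d = X.shape
    cols = [np.ones(n)]
    for deg in (1, 2, 3):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)            # 35 columns for d = 4 (4 + 31 extra terms)

def fit_cubic_prs(X_train, psi2_train):
    """Least squares fit of a cubic PRS for one squared nondimensional frequency."""
    coef, *_ = np.linalg.lstsq(cubic_features(X_train), psi2_train, rcond=None)
    return coef

def predict_cubic_prs(coef, X):
    return cubic_features(X) @ coef

# Synthetic illustration: recover a response that is itself cubic in the variables.
rng = np.random.default_rng(0)
X = rng.uniform(0.05, 1.2, size=(250, 4))                 # placeholder sampling plan
psi2 = 2.4 + 1.7 * X[:, 0] * X[:, 3] + 0.9 * X[:, 1] * X[:, 3]**2   # placeholder response
coef = fit_cubic_prs(X, psi2)
print(np.max(np.abs(predict_cubic_prs(coef, X) - psi2)))  # ~0 for this synthetic case
```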


For comparison purposes we also provide in Table 4-6 the error of the analytical frequency approximation of Equation 4-1 evaluated at these same 250 test points.

Table 4-6. Mean and maximum relative absolute error of the analytical formula frequency predictions compared at 250 test points.
Abs. error (%)   f1     f2      f3     f4     f5      f6      f7      f8      f9      f10
mean             5.98   8.50    3.47   4.28   7.22    4.68    2.77    5.72    5.75    1.22
max              6.54   16.28   8.06   9.43   23.32   21.11   18.30   10.28   12.05   10.01

The average error of the analytical approximation over the first ten frequencies was found to be 4.9%. This is consistent with previous studies (Blevins 1979), which reported the error of the analytical formula to be about 5%. The errors of the RSAs, on the other hand, are about an order of magnitude lower.

The second RSA set we construct is for the narrower bounds given in Table 4-7. These are sufficient for the Bayesian identification given the range of uncertainties that we will consider (see the next section).

Table 4-7. Narrow bounds on the model input parameters (denoted NB).
             E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)   a (mm)   b (mm)   h (mm)   ρ (kg/m³)
Low bound    52         18         0.23   8.3         202      185      2.55     2000
High bound   70         25         0.32   11          216      200      2.65     2240

Cubic PRS were again fitted for each of the first ten nondimensional natural frequencies using the procedure described in the previous section, with Ns = 200 finite element simulations. We used slightly fewer sampling points here because of the narrower bounds (denoted NB). The RSA fidelity was tested at 250 additional points sampled within the bounds of Table 4-7. The results are presented in Table 4-8.

Table 4-8. Mean and maximum relative absolute error of the frequency RSA predictions (denoted RSANB) compared at 250 test points.
Abs. error (%)   f1      f2      f3      f4      f5      f6      f7      f8      f9      f10
mean             0.004   0.004   0.003   0.005   0.004   0.003   0.003   0.005   0.006   0.005
max              0.021   0.033   0.030   0.016   0.020   0.016   0.010   0.017   0.061   0.041


For these narrower bounds the RSA fidelity achieved was excellent, the mean error over the 250 test points being smaller than 0.01% for all the frequencies. The maximum error over the 250 test points was only about 0.06%, for the 9th frequency.

We need to mention at this point that obtaining a good quality fit for all ten frequency RSAs required some careful modeling. Indeed, initially the RSAs for frequencies four to seven were very poor, both for the wide and the narrow bounds. Typical values for these frequencies are given in Table 4-9. We can see that frequencies four and five are relatively close, as are six and seven. This is because the corresponding modes are symmetric with respect to the x and y axes and the aspect ratio of the plate is close to one. For each of the 200 sampling points the dimensional parameters vary slightly, and for some of these points the two symmetric modes switch, meaning that the x-symmetric mode becomes lower in frequency than the y-symmetric mode for some points and not for others. This issue of switching modes was resolved by modeling only half of the plate and using symmetry boundary conditions when constructing the RSAs for frequencies four to seven. Using X- or Y-symmetry boundary conditions allowed us to follow the same mode as the plate parameters vary.

Identification Schemes

We use the low fidelity analytical approximate solution and the high fidelity frequency RSAs in two different material property identification schemes in order to compare the effect of the approximation error on the identified results.

The identification procedure seeks the four orthotropic ply elastic constants (E1, E2, ν12 and G12) of a glass/epoxy composite based on the first ten natural frequencies of a [0,40,40,90,40,0,90,-40]s laminate vibrating under free boundary conditions. We use the values measured by Pedersen and Frederiksen (1992) as the experimental frequencies in the identification procedure.


For convenience these measured frequencies are also given in Table 4-9, and the plate properties and dimensions in Table 4-10.

Table 4-9. Experimental frequencies from Pedersen and Frederiksen (1992).
Frequency     f1      f2      f3      f4      f5      f6      f7      f8      f9      f10
Value (Hz)    172.5   250.2   300.6   437.9   443.6   760.3   766.2   797.4   872.6   963.4
Mode (n, m)   (2,2)   (3,1)   (1,3)   (2,3)   (3,2)   (1,4)   (4,1)   (3,3)   (2,4)   (4,2)

Table 4-10. Plate properties: length (a), width (b), thickness (h) and density (ρ).
Parameter   a (mm)   b (mm)   h (mm)   ρ (kg/m³)
Value       209      192      2.59     2120

The first identification scheme is a basic least squares approach. The identified parameters correspond to the minimum of the following objective function:

J\!\left( \mathbf{E} \right) = \sum_{i=1}^{m} \left( \frac{ f_i^{num}(\mathbf{E}) - f_i^{measure} }{ f_i^{measure} } \right)^2     (4-3)

where E = {E1, E2, ν12, G12}, fi^measure is the ith experimental frequency from Table 4-9 and fi^num is a numerical frequency prediction.

The second identification scheme is a Bayesian approach. It seeks the probability density function of the material properties given the test results. This distribution can be written as:

\pi\!\left( \mathbf{E} \,/\, \mathbf{f} = \mathbf{f}^{measure} \right) = \frac{1}{K} \, \pi\!\left( \mathbf{f}^{measure} \,/\, \mathbf{E} \right) \, \pi^{prior}\!\left( \mathbf{E} \right)     (4-4)

where π denotes a probability density function (pdf), E = {E1, E2, ν12, G12} is the four-dimensional random variable of the elastic constants, f = {f1 ... f10} the ten-dimensional random variable of the frequency measurement prediction, and fmeasure = {f1^measure ... f10^measure} the vector of the ten measured natural frequencies. π^prior(E) is the pdf of E prior to the measurements, and π(fmeasure / E) is also called the likelihood function of E given the measurements fmeasure.
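A sketch of the least squares scheme of Equation 4-3 (Python with SciPy; the thesis used Matlab's lsqnonlin). The function f_num below is a crude placeholder standing in for whichever frequency approximation (the RSAs or Equation 4-1) is being assessed:

```python
import numpy as np
from scipy.optimize import least_squares

f_measure = np.array([172.5, 250.2, 300.6, 437.9, 443.6,
                      760.3, 766.2, 797.4, 872.6, 963.4])   # Table 4-9 (Hz)

def f_num(E):
    """Predicted first ten frequencies for E = [E1, E2, nu12, G12] (Pa, except nu12).
    Placeholder trend only: in practice this calls the frequency RSAs or Eq. 4-1."""
    E1, E2, nu12, G12 = E
    scale = (E1 / 61.3e9) ** 0.5
    return f_measure * scale

def residuals(E):
    """Relative residuals whose sum of squares is the objective J(E) of Eq. 4-3."""
    return (f_num(E) - f_measure) / f_measure

E0 = np.array([60e9, 21e9, 0.28, 10e9])    # starting point (the prior means of Table 4-11)
sol = least_squares(residuals, E0)         # pass bounds=... for the bounded identification
print(sol.x, 2.0 * sol.cost)               # identified E and J(E) = sum of squared residuals
```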


As prior distribution for the properties we assumed a truncated, uncorrelated normal distribution characterized by the parameters in Table 4-11. This is a wide prior distribution, corresponding to the fact that we have only vague prior information about the properties. The distribution was truncated at the bounds given in Table 4-12.

Table 4-11. Normal uncorrelated prior distribution of the material properties.
Parameter            E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
Mean value           60         21         0.28   10
Standard deviation   10         5          0.05   1.5

Table 4-12. Truncation bounds on the prior distribution of the material properties.
Parameter               E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
Low truncation bound    42         11         0.2    7
High truncation bound   78         31         0.36   13

The mean or most likely values of the posterior distribution given in Equation 4-4 are usually taken as the identified properties. We showed in Chapter 3 that for a similar vibration problem the Bayesian identification is generally more accurate than the basic least squares method. The difference between the two approaches depends, however, on the problem, and can range from negligible to significant.
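A small sketch (Python) of evaluating the truncated, uncorrelated normal prior of Tables 4-11 and 4-12, up to its normalizing constant (which is all the Bayesian identification of Equation 4-4 needs):

```python
import numpy as np

# Tables 4-11 and 4-12: means, standard deviations and truncation bounds (GPa, except nu12).
mean = np.array([60.0, 21.0, 0.28, 10.0])
std  = np.array([10.0,  5.0, 0.05,  1.5])
low  = np.array([42.0, 11.0, 0.20,  7.0])
high = np.array([78.0, 31.0, 0.36, 13.0])

def prior_pdf(E):
    """Unnormalized truncated uncorrelated normal prior pi_prior(E), E = [E1, E2, nu12, G12]."""
    E = np.asarray(E, dtype=float)
    if np.any(E < low) or np.any(E > high):
        return 0.0                      # outside the truncation bounds of Table 4-12
    z = (E - mean) / std
    return float(np.exp(-0.5 * np.sum(z**2)))

print(prior_pdf([61.3, 21.4, 0.279, 9.8]))     # literature values: well inside the prior
print(prior_pdf([71.1, 46.2, -0.402, -17.1]))  # unbounded LS optimum of Table 4-19: prior = 0
```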


Sources of uncertainty affecting the identification

As illustrated in Chapter 3, the Bayesian identification can account for different sources of uncertainty. We considered here that three sources of uncertainty are present. First, we assumed normally distributed measurement uncertainty on the natural frequencies. Second, we assumed epistemic uncertainty due to modeling error; since this uncertainty depends on the numerical model considered, its implementation is described in later sections.

Finally, we considered uncertainties on the input parameters of the vibration model. Apart from the four material properties, the thin plate model also involves four other parameters: the plate length, width and thickness (a, b and h) and the plate density ρ. These parameters are measured beforehand and are known only with a certain confidence. We assumed these uncertainties to be normally distributed as shown in Table 4-13.

Table 4-13. Assumed uncertainties in the plate length, width, thickness and density (a, b, h and ρ). Normal distributions are considered.
Parameter            a (mm)   b (mm)   h (mm)   ρ (kg/m³)
Mean value           209      192      2.59     2120
Standard deviation   0.25     0.25     0.01     10.6

Identification Using the Response Surface Approximation

As mentioned earlier, the least squares identification uses the RSAs with wide bounds (denoted RSAWB) while the Bayesian identification uses the RSAs with narrow bounds (denoted RSANB).

Least Squares Identification

The least squares (LS) optimization was carried out using Matlab's lsqnonlin routine without imposing any bounds on the variables, and led to the optimum shown in Table 4-14. Note that Pedersen and Frederiksen (1992) applied a least squares approach coupled directly to a Rayleigh-Ritz numerical code to identify the elastic constants. The properties that they found are denoted as literature values in Table 4-14.

Table 4-14. LS identified properties using the frequency RSAWB.
                                                         E1 (GPa)   E2 (GPa)   ν12     G12 (GPa)
Identified values                                        60.9       22.7       0.217   9.6
Literature values (from Pedersen and Frederiksen 1992)   61.3       21.4       0.279   9.8


For our identification the residuals between the RSA frequencies at the optimum and the experimental frequencies are given in Table 4-15. They are relatively small, and the identified values are also reasonably close to the literature values. This means that the accuracy of the RSAWB is good enough to lead to reasonable results.

Table 4-15. Residuals for LS identification using the frequency RSAs. J(E) = 1.7807×10⁻⁴.
Frequency      f1     f2      f3     f4     f5      f6     f7      f8     f9      f10
Residual (%)   0.11   -0.60   0.09   0.88   -0.25   0.30   -0.18   0.38   -0.46   -0.30

Bayesian Identification

The Bayesian model using the frequency RSAs considered the following uncertainty on the natural frequencies. An additive normal uncertainty was assumed to stem from the inaccuracies in the experimental frequency measurements. We considered a zero mean and a standard deviation varying linearly between 0.5% for the lowest frequency and 0.75% for the highest. This leads to the error model shown in Equation 4-5.

f_m = f_m^{RSA} \left( 1 + u_m \right), \quad u_m \sim N\!\left( 0, \left[ 0.0075 \, \frac{m-1}{10-1} + 0.005 \, \frac{10-m}{10-1} \right]^2 \right)     (4-5)

The likelihood function and the posterior probability density function (pdf) of the material properties were calculated using Equation 4-4. This calculation required about 130 million frequency evaluations, thus motivating the need for fast-to-evaluate analytical frequency approximations (in contrast, least squares based identification usually requires between 100 and 100,000 evaluations, depending on the conditioning of the problem). The most likely point of the posterior pdf is given in Table 4-16, together with the literature values of Pedersen and Frederiksen (1992).


Table 4-16. Most likely point of the posterior pdf using the frequency RSANB.
                                                         E1 (GPa)   E2 (GPa)   ν12     G12 (GPa)
Identified values                                        61.6       20.3       0.280   10.0
Literature values (from Pedersen and Frederiksen 1992)   61.3       21.4       0.279   9.8

The Bayesian identified values are fairly close to the literature values and also relatively close to the values identified with the RSA based least squares approach in Table 4-14. Note that the literature values of Pedersen and Frederiksen (1992) are a good comparison point, but this does not mean they are the true values. The true values are probably close, but the reference article did not provide any uncertainty measure (such as confidence intervals). The Bayesian method, on the other hand, can provide an estimated confidence interval based on the posterior pdf. We showed on the vibration example problem of Chapter 3 that the Bayesian most likely point is on average closer to the true values than the least squares estimate.

All in all, using the response surface approximations in the identification schemes leads to reasonable results in agreement with the literature values, whether the least squares or the Bayesian identification method is used. This is not surprising since the RSAs have good accuracy, allowing both methods to unfold properly.

In the next section we investigate the identification results obtained with the lower fidelity analytical approximate solution (Dickinson 1978). This could lead to more significant differences between the two identification methods.

Identification Using Dickinson's Analytical Approximate Solution

Least Squares Identification with Bounds

Using Dickinson's analytical approximate solution, the least squares (LS) optimization was first carried out while imposing bounds on the variables. We imposed on E1, E2, ν12 and G12 the bounds given in Table 4-7, which seem reasonable for the properties that we are seeking. The results of the optimization are given in Tables 4-17 and 4-18.


The norm of the residuals (i.e. the value of the objective function) is J(E) = 0.019812.

Table 4-17. LS identified properties using the analytical approximate solution (bounded variables).
                                                         E1 (GPa)   E2 (GPa)   ν12     G12 (GPa)
Identified values                                        52.0       25.0       0.298   8.3
Literature values (from Pedersen and Frederiksen 1992)   61.3       21.4       0.279   9.8

Table 4-18. Residuals for LS identification using the analytical approximate solution. Residuals norm J(E) = 0.019812.
Frequency      f1      f2     f3      f4     f5      f6      f7     f8     f9     f10
Residual (%)   -1.27   7.06   -6.84   1.64   -1.37   -2.91   0.72   1.77   6.19   -6.70

We can note that several variables hit the bounds. We could keep these results, since the bounds we imposed are quite wide and, from a physical point of view, it is quite unlikely that the true parameters lie outside them. We wanted however to also know what happens when no bounds at all are imposed; the corresponding results are provided in the next subsection.

Least Squares Identification without Bounds

The least squares optimization is carried out again without imposing any bounds. The optimum found is given in Table 4-19. The residuals between the frequencies at the optimum and the experimental frequencies are given in Table 4-20. The norm of the residuals (i.e. the value of the objective function) is J(E) = 0.019709.

Table 4-19. LS identified properties using the analytical approximate solution (unbounded variables).
                                                         E1 (GPa)   E2 (GPa)   ν12      G12 (GPa)
Identified values                                        71.1       46.2       -0.402   -17.1
Literature values (from Pedersen and Frederiksen 1992)   61.3       21.4       0.279    9.8


Table 4-20. Residuals for LS identification using the analytical approximate solution. Residuals norm J(E) = 0.019709.
Frequency      f1      f2     f3      f4     f5      f6      f7     f8     f9     f10
Residual (%)   -1.24   6.96   -6.77   1.69   -1.38   -3.00   0.79   1.48   6.25   -6.75

It is obvious from the identified results that the optimum found is not plausible. Not only are the parameters quite far from the literature values, but the Poisson's ratio and the shear modulus are negative. While a negative Poisson's ratio could be physically possible, a negative shear modulus has no physical meaning.

What is more surprising, however, is that in spite of the implausible optimum the residuals are not very large. All are of the order of a few percent, which, recall, is also the order of the accuracy of the analytical approximate solution compared to finite element analyses (see Table 4-6). It is also worth noting that the residuals and their norm remain practically unchanged compared to the bounded optimization (Table 4-18). This is a sign of the ill-conditioning of the least squares problem, due to a very flat objective function around the optimum. It indicates that the accuracy of the frequency approximation has a large effect on the identified results: while a few percent error might seem very reasonable for some applications, it can lead to extremely poor results when applied to the present identification problem.

Summing up, the least squares identification with the low fidelity analytical approximation leads to significantly worse results (whether bounded or unbounded) than the same identification using the high fidelity response surface approximations.

Bayesian Identification

The Bayesian model using the analytical approximate solution considered the following uncertainty on the natural frequencies. The uncertainty was assumed to have two sources. The first is due to the inaccuracy of the analytical approximation: the error of the formula was shown in Table 4-6 to be typically of the order of 5%, so a normally distributed uncertainty with a standard deviation of 5% was assumed.


in Table 4-6 to be typically of the order of 5%, so a normally distributed uncertainty with a standard deviation of 5% was assumed. A second, additive uncertainty was assumed to stem from the inaccuracies in the experimental measurement of the natural frequencies. As before, this uncertainty was assumed normal, with the standard deviation varying linearly between 0.5% for the lowest frequency and 0.75% for the highest. This leads to the error model of Equation 4-6:

  f_m = f_m^{RSA} (1 + u_m), \quad u_m \sim N\left(0,\ \sqrt{0.05^2 + \left(0.0075\,\frac{m-1}{10-1} + 0.005\,\frac{10-m}{10-1}\right)^2}\right)    (4-6)

(A small sampling sketch of this error model is given at the end of this subsection.)

The likelihood function and the posterior probability density function (pdf) of the material properties were calculated using Equation 4-4. The most likely point of the posterior pdf is given in Table 4-21.

Table 4-21. Most likely point of the posterior pdf using the analytical approximate solution.
  Parameter                                            E1 (GPa)   E2 (GPa)   ν12     G12 (GPa)
  Identified values                                    47.2       27.6       0.29    9.4
  Literature values (Pedersen and Frederiksen 1992)    61.3       21.4       0.279   9.8

The results of the Bayesian identification obtained with the low fidelity analytical approximation are again significantly worse than those obtained with the high fidelity response surface approximations (see Table 4-16). This again illustrates the importance of high fidelity frequency approximations for obtaining accurate identification results in this vibration based problem.

On a side note, we see that in spite of using the approximate analytical frequency solution, which led to very poor results with the least squares formulation, the Bayesian approach identified properties that are physically plausible, though still very far from the literature values.
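As announced above, here is a small sampling sketch of this error model (our own illustration, not code from the thesis). It follows our reading of Equation 4-6, in which the 5% model error and the linearly varying measurement error are treated as independent normal contributions combined in quadrature.

    import numpy as np

    rng = np.random.default_rng(0)
    m = np.arange(1, 11)                                  # mode numbers 1..10

    # Measurement-error std: 0.5% at m = 1, 0.75% at m = 10, linear in between
    sigma_meas = 0.005 * (10 - m) / 9 + 0.0075 * (m - 1) / 9
    # Combined std of u_m under our reading of Eq. 4-6 (5% model error in quadrature)
    sigma_m = np.sqrt(0.05 ** 2 + sigma_meas ** 2)

    u = rng.normal(0.0, sigma_m)                          # one realisation of u_m
    # The perturbed frequencies would then be f_m = f_m_RSA * (1 + u), with
    # f_m_RSA the frequencies of the analytical approximate solution.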


Graphical Comparison of the Identification Approaches with the Low Fidelity Approximation

To provide a better understanding of what is happening in the two identification approaches when using the low fidelity approximate analytical solution, we plot the posterior pdf and the least squares (LS) objective function in a representative plane. Note that both functions are four dimensional and thus problematic to represent graphically. To obtain a meaningful representation, we decided to plot them in the two dimensional plane defined by the following three characteristic points of the problem: the LS bounded optimum, the LS unbounded optimum and the most likely point of the posterior pdf (see Tables 4-17, 4-19 and 4-21, respectively, for the coordinates of these points). The posterior pdf as well as the likelihood function of the material properties are represented in Figure 4-2. For comparison purposes, the least squares objective function is also represented in this same plane, in Figure 4-3. Note that for representing the posterior pdf in this section we removed the truncation bounds on the prior distribution (i.e., we considered a purely Gaussian prior). The truncation would have further narrowed the posterior pdf.

Figure 4-2. Two dimensional representation in a three-point plane of: A) the posterior pdf; B) the likelihood function.


Figure 4-3. Two dimensional representation in a three-point plane of the least squares objective function.

Figure 4-2-B shows that the likelihood function might be multimodal, since the distribution has a local peak in the bottom half of the image. Note that we cannot claim with certainty that the function is multimodal, because we do not know what is happening in the other two dimensions not represented in the plot. The small local peak is relatively far away from the area of physically reasonable properties around the points "max Bayes" and "LS bounded". The global most likely point of the likelihood function is, however, much closer to this area, which is reassuring.

Figure 4-2-A shows the posterior pdf, that is, the distribution obtained by multiplying the likelihood function by the prior distribution. The prior distribution had the effect of killing the local peak and significantly narrowing down the distribution. This is somewhat unusual, because we assumed a relatively wide prior distribution, which in a typical identification problem is expected to have little impact on the results. It is due, however, to the ill-posedness of the problem, which manifested itself in the least squares results as well.

Note that in Figure 4-2-A the point denoted "max Bayes" does not perfectly coincide with the center of the distribution. This is due to the fact that only 1000 Monte Carlo simulations were


used in the Bayesian approach in order to keep the computational cost low. The effect is a relatively noisy likelihood function and posterior pdf, which slightly offset the estimate of the posterior pdf maximum but do not affect the qualitative conclusions. If the aim were to accurately identify the posterior distribution, more simulations would have to be used.

Figure 4-3 shows the objective function of the least squares identification plotted in the same three-point plane. As calculated in Tables 4-17 and 4-19, the two points "LS bounded" and "LS no bounds" have very close values of the objective function. "LS no bounds" has, however, a slightly lower objective function value, making it the minimum of the two. Of course, in reality this point is physically implausible, which reflects the ill-conditioning of the problem. By ill-conditioned we mean that the two points "LS bounded" and "LS no bounds", while being very far away from each other in the moduli space, lead to very close values of the objective function. "LS bounded" is not a local minimum though, because the LS algorithm moves continuously from the optimum "LS bounded" to the optimum "LS no bounds". The continuous path is likely to lie in the other two dimensions that are not represented graphically in Figure 4-3.

We can note that there are similarities in shape between the least squares objective function of Figure 4-3 and the likelihood function of Figure 4-2-B. This is expected, since the two are based on the same analytical approximate solution for the frequency calculations, so errors in this approximation affect both approaches. However, apart from being somewhat shifted, the major difference between the two is that while the LS objective function has its overall minimum in the lower lobe, the likelihood function has its most likely point in the upper lobe, which from a physical point of view is much more plausible. This shows that while both approaches are affected by the poor accuracy of the analytical frequency approximation, the Bayesian method handles it significantly better than the basic least squares method.


Summary

In the first part of this chapter a procedure was detailed for obtaining polynomial response surface approximations (RSA) of the natural frequencies of a vibrating orthotropic plate. The RSA achieved high accuracy, allowing them to be used in most applications that require fast function evaluations together with high fidelity, such as Monte Carlo simulation for Bayesian identification. The RSAs constructed were between one and two orders of magnitude more accurate than an existing approximate analytical formula due to Dickinson for the vibration of free orthotropic plates. To achieve such high fidelity, the RSAs were fitted to the nondimensional parameters characterizing the vibration problem.

Note that the overall procedure is applicable not only to free but to any boundary conditions, as long as the RSAs are refitted to the corresponding design of experiments in terms of the nondimensional parameters characterizing the vibration problem with the specific boundary conditions.

In the second part we showed that, for the material properties identification problem we consider, the fidelity of the frequency approximation has a significant impact on the identified material properties. The high fidelity nondimensional frequency RSAs led to reasonable results with both least squares and Bayesian identification schemes. The lower fidelity frequency approximation due to Dickinson (1978) led to unreasonable identification results. Using a least squares approach led in our case to physically implausible results. The Bayesian approach, while obtaining physically reasonable results, also performed significantly worse than the identification with high fidelity approximations.

In the next chapter we will investigate in more detail the Bayesian approach applied to the problem of orthotropic elastic constants identification from vibrating plates, using the experimental data from Pedersen and Frederiksen (1992). We already used these experiments in


the present chapter for a basic Bayesian identification, where all the uncertainties in the problem (on measurements and input parameters) were normally distributed. The main goal here was to investigate the difference between low and high fidelity approximations in identification. In the next chapter the focus will be on the additional capabilities that Bayesian identification allows. We will thus construct a more complex uncertainty model which handles systematic modelling errors as well as non-Gaussian uncertainties. We will also analyse in detail the identified probability density function, characterizing it by variances and correlation coefficients. Throughout the next chapter we will use, for the Bayesian identification, the high fidelity response surface approximation constructed here.


CHAPTER 5
BAYESIAN IDENTIFICATION OF ORTHOTROPIC ELASTIC CONSTANTS ACCOUNTING FOR MEASUREMENT ERROR, MODELLING ERROR AND PARAMETER UNCERTAINTY

Introduction

Accurate determination of the orthotropic mechanical properties of a composite material has always been a challenge to the composites community. This challenge is threefold: designing the most appropriate experiment for determining the properties sought, handling the inherent uncertainties in the experiment and in the modeling of the experiment, and finally estimating and controlling the uncertainty with which the properties are determined.

The design of an experiment for identifying the four in-plane elastic constants of an orthotropic material as accurately as possible has been a rich area of investigation (e.g. Kernevez et al. 1978, Rouger et al. 1990, Grediac and Vautrin 1990, Arafeh et al. 1995, Vautrin 2000, Le Magorou et al. 2000, 2002). Vibration testing is considered an effective experimental method for this purpose. This technique was introduced by De Wilde et al. (1984, 1986) and Deobald and Gibson (1986) in the context of determining the elastic constants of a composite. These studies involved measuring the natural frequencies of a freely hanging composite plate, frequencies which were used for identifying the four elastic constants of the laminate through model updating. Some of the advantages of vibration testing are outlined in De Wilde et al. (1984, 1986), Deobald and Gibson (1986), Grediac and Paris (1996), Grediac et al. (1998a), and Gibson (2000). These advantages include a single test of nondestructive nature and the determination of homogenized properties, as opposed to the local properties obtained with strain gauges. Note however that identification from vibration usually leads to more accurate results for laminate properties than for ply properties. In the present study we limit the discussion to vibration testing and focus on the two remaining points relative to handling uncertainty.


In identifications using a single test to determine all four elastic constants, it is often observed that the different material properties are not obtained with the same confidence. Typically the shear modulus and the Poisson's ratio are known with significantly less accuracy from a vibration test, for example. Accordingly, estimating the uncertainty with which a property is determined can be of great interest. A natural representation of this uncertainty is the joint probability density function of the properties, which provides not only estimates of the uncertainty in the properties (variances) but also estimates of the correlation between them (covariances).

Determining the uncertainty in the output values (identified properties) requires, however, some knowledge of the uncertainty in the input (measurement errors, model errors). Indeed, multiple sources of uncertainty are present which have an effect on the identification. First, there is the uncertainty in the measured frequencies. Furthermore, there are uncertainties on the input parameters of the model (e.g. dimensions and density of the plate). Finally, there is uncertainty in the ability of the model to predict the actual experiment. These uncertainties should be taken into account when identifying the material properties.

The aim of the present chapter is then to identify the probability distribution of the four orthotropic ply elastic constants of a thin composite laminate from natural frequency measurements, using a Bayesian approach which can handle all three previously discussed uncertainties: measurement uncertainty on the natural frequencies, uncertainty on the other input parameters involved, and modeling uncertainty.

In a first section we describe the vibration problem that serves for material properties identification. We then provide a detailed description of the Bayesian identification procedure.


Finally we give the Bayesian identification results. We close the chapter with concluding remarks.

Vibration Problem

The vibration problem is the one presented in the previous chapter, using data from Pedersen and Frederiksen (1992). The problem is briefly summarized again in this section for convenience. We seek the four in-plane ply elastic constants of a thin composite laminate: E1, E2, G12 and ν12. As in the previous chapter, we use the experimental measurements obtained by Pedersen and Frederiksen (1992), who measured the first ten natural frequencies of a thin glass/epoxy composite laminate with a stacking sequence of [0, -40, 40, 90, 40, 0, 90, -40]s; for convenience these are provided again in Table 5-1. The rectangular plate dimensions (length a, width b and thickness h) are given again in Table 5-2. The plate was attached by two strings, which were assumed to be modeled appropriately by free boundary conditions.

Table 5-1. Experimental frequencies from Pedersen and Frederiksen (1992).
  Frequency    f1      f2      f3      f4      f5      f6      f7      f8      f9      f10
  Value (Hz)   172.5   250.2   300.6   437.9   443.6   760.3   766.2   797.4   872.6   963.4

Table 5-2. Plate properties: length (a), width (b), thickness (h) and density (ρ).
  Parameter    a (mm)   b (mm)   h (mm)   ρ (kg/m3)
  Value        209      192      2.59     2120

Pedersen and Frederiksen (1992) had applied a basic least squares approach for identifying the four ply elastic constants E1, E2, G12 and ν12. This involved minimizing the following objective function:

  J(E) = \sum_{i=1}^{10} \left( \frac{f_i^{resp}(E) - f_i^{measure}}{f_i^{measure}} \right)^2    (5-1)


where E = {E1, E2, ν12, G12}, f_i^{measure} is the i-th experimental frequency from Table 5-1 and f_i^{resp} is the response prediction of a numerical model (a Rayleigh-Ritz method was used by Pedersen and Frederiksen 1992).

Bayesian Identification

Bayesian Formulation

The Bayesian formulation for identifying the probability distribution of the four ply elastic constants is similar to the ones introduced in the previous chapters. The main difference is that we assume a more complex error model, which is detailed in the next section. For convenience, we provide the Bayesian formulation again in the current section; readers who are familiar with the previous chapter can skip it. The Bayesian formulation can be written as follows for the present vibration problem:

  \pi_E(E \mid f = f^{measure}) = \frac{1}{K}\, \pi_f(f = f^{measure} \mid E)\, \pi_{prior}(E)    (5-2)

where E = {E1, E2, ν12, G12} is the four dimensional random variable of the elastic constants, f = {f1 ... f10} is the ten dimensional random variable of the frequency measurement predictions, and f^{measure} = {f1^{measure} ... f10^{measure}} is the vector of the ten measured natural frequencies. Equation 5-2 provides the joint probability density function (pdf) of the four elastic constants given the measurements f^{measure}. This pdf is equal to a normalizing constant 1/K, times the likelihood function of the elastic constants E given the measurements f^{measure}, times the prior distribution of the elastic constants E.

The prior distribution of E reflects the prior knowledge we have on the elastic constants. This knowledge can come from manufacturer specifications, for example. It has a regularization purpose, reducing the likelihood of values of E which are too far from reasonable values defined by the prior knowledge on the composite studied. In our case we assumed that we only have relatively vague prior knowledge, by defining a truncated


joint normal prior distribution with relatively wide standard deviations, as defined in Table 5-3. The distribution was truncated at the bounds given in Table 5-4, which were chosen in an iterative way such that eventually the posterior pdf is approximately in the center of the bounds and their range covers approximately four standard deviations of the posterior pdf.

Table 5-3. Normal uncorrelated prior distribution for the glass/epoxy composite material.
  Parameter            E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
  Mean value           60         21         0.28   10
  Standard deviation   10         5          0.05   1.5

Table 5-4. Truncation bounds on the prior distribution of the material properties.
  Parameter               E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
  Low truncation bound    53.5       18.5       0.22   9
  High truncation bound   64.5       25.2       0.34   11.5

The other term on the right hand side of Equation 5-2 is the likelihood function of the elastic constants given the measurements f^{measure}. This function measures the probability of obtaining the test results for a given value of the elastic constants E, and consequently it provides an estimate of the likelihood of different E values given the test results. Note that there is uncertainty in f, since for a given value of E we need to calculate the probability of obtaining the measurements. This uncertainty can have several causes, which are detailed next.

Sources of Uncertainty

A typical cause of uncertainty in the identification process is measurement error on the natural frequencies. Previous studies by Sol (1986), Papazoglou et al. (1996), Lai and Ip (1996), Hua et al. (2000) and Daghia et al. (2007) assumed this to be the only uncertainty and also made the assumption that it is normally distributed. These assumptions allow a purely analytical treatment of the likelihood function, which has the advantage of reduced computational cost. This might not always be the most realistic assumption though. For example, Gaussian distributions have


infinite tails, which for elastic constants are devoid of any physical meaning. To demonstrate the ability of our approach to handle any type of distribution, we assume here a uniformly distributed measurement error. Treating uniform distributions is possible because we do not use the analytical approach but instead use Monte Carlo simulation for the calculation of the likelihood function. Note that presently we do not have enough information to determine the exact error structure of the measurement uncertainty; a detailed uncertainty propagation study and test campaign would be necessary to determine it. Whatever the uncertainty structure though, our procedure can incorporate it in the Bayesian approach, unlike the analytical approaches that can consider only Gaussian measurement noise.

This numerical treatment based on Monte Carlo simulation also allows us to consider errors that do not stem from the measurements, such as the errors presented next.

Another uncertainty in the identification process is modeling error. One potential modeling error is due to the use of thin plate theory, which is more severe for higher modes due to their shorter wavelength. Other modeling errors, such as discretization errors, also increase for higher modes because of their more complex mode shapes.

Finally, yet another uncertainty in the identification process is due to uncertainty in the other input parameters of the vibration model: the dimensions of the plate and its density.

The next section develops the implementation of the error model corresponding to these uncertainties. Monte Carlo simulation is then used to propagate the uncertainty effect to the natural frequencies and finally to the likelihood function.

Error and Uncertainty Models

We chose to model the different sources of uncertainty described in the previous subsection as shown in Eq. 5-3.


  f_m = f_m^{thin\ plate}(E, p) + f_m^{thick-thin} + u_m    (5-3)

where f_m is the random variable of the frequency measurement prediction for the m-th natural frequency of the plate; f_m^{thin plate} is the frequency response obtained from a thin plate theory model, which depends on the material properties E and on the other model input parameters p, which might be known with some uncertainty; f_m^{thick-thin} is the modelling error due to the difference between thin and thick plate theory; and u_m is a uniformly distributed random variable modelling measurement error. Note that other types of modelling errors could also be considered in a similar way, such as discretization errors, errors in the numerical solving of the problem, or errors due to an imperfect model representation of the actual experiment. In our case, discretization error was found to be small compared to the other sources of error and uncertainty we considered in the problem. Also note that f_m^{thick-thin} is not considered here to be a random variable but is precisely defined as described in the next paragraphs. Other types of modelling errors, which might be less well known, could be defined as random variables though.

We chose to consider a modeling error between thin and thick plate theory because, even though we used a thin plate (see dimensions in Table 5-2), transverse shear effects can become non-negligible for higher modes. The difference between the two model predictions remains small, however, and will change little as the elastic constants change during the updating procedure. Accordingly, we evaluated this difference, which was assumed not to vary during the updating, using the mean values of the prior distribution of the four in-plane properties (see Table 5-3). For the transverse shear values we considered G13 = G12 and G23 = 0.9 G12, which is typical for such a glass/epoxy composite. Using an Abaqus model with thick plate elements we found the absolute differences given in Table 5-5.


Table 5-5. Absolute difference between the frequencies obtained with thin plate theory and thick plate theory, for the mean values of the a priori material properties (see Table 5-3) and G13 = G12, G23 = 0.9 G12.
  Frequency        f1     f2     f3     f4     f5     f6     f7     f8     f9     f10
  Difference (%)   0.31   0.03   0.04   0.29   0.29   0.10   0.07   0.42   0.32   0.23

Note that thick plate theory could have been used directly in the numerical model. However, we chose thin plate theory for the numerical model because we needed to construct response surface approximations of the frequencies in order to reduce computational cost. As shown in the previous chapter, the thin plate theory natural frequencies can be expressed with great accuracy in a relatively simple polynomial form. The more complex governing equations of thick plate theory would have required more complex response surface approximations. So we chose to use thin plate theory during the Bayesian updating procedure and to model the difference to thick plate theory by the correction term f_m^{thick-thin} defined through Table 5-5.

To define all the terms of Eq. 5-3 we need to define the bounds of the uniformly distributed random variable u_m modelling the measurement error. We consider that the bounds of the distribution increase with the mode number, which can be explained as follows:
- due to noise it is harder to numerically extract the higher frequencies;
- the higher modes, due to their higher frequencies and more complex mode shapes, may start involving effects which were considered negligible for the fundamental frequency (surrounding air effects, damping, more complex material behavior).
Accordingly, we considered that the upper/lower bound is +/- 0.5% of the measured experimental frequency for the fundamental mode (m = 1) and +/- 2% for the highest mode measured (m = 10). The bounds vary linearly with m in between. In summary, the error model is given in Equation 5-4.


  u_m \sim U(-\delta_m, +\delta_m), \quad \text{where} \quad \delta_m = \left( 0.02\,\frac{m-1}{10-1} + 0.005\,\frac{10-m}{10-1} \right) f_m^{measure}    (5-4)

Note that there is also a modeling error due to the use of response surface approximations instead of the finite element analyses. We have shown in the previous chapter, however, that this error is very small (maximum of 0.06%, see Table 4-8). Compared to the measurement errors of the order of 0.5 to 2%, this has a negligible impact in terms of total variance. We thus did all the calculations with the error model of Equation 5-4.

The final source of uncertainty in the identification process is related to the uncertainties on the other input parameters p of the thin plate numerical model (see Eq. 5-3). As in the previous chapter, we assumed these uncertainties to be normally distributed, characterized as shown in Table 5-6. The standard deviations considered are reasonable uncertainties for measuring these parameters in the present experiment.

Table 5-6. Assumed uncertainties in the plate length, width, thickness and density (a, b, h and ρ). Normal distributions are used.
  Parameter            a (mm)   b (mm)   h (mm)   ρ (kg/m3)
  Mean value           209      192      2.59     2120
  Standard deviation   0.25     0.25     0.01     10.6

Bayesian Numerical Procedure

The Bayesian identification involves evaluating the posterior distribution defined by Equation 5-2. The prior distribution on the right hand side is an analytical expression (the normal distribution characterized in Table 5-3), so the major challenge is in constructing the likelihood function. We chose to construct this function point by point: we evaluate it on a grid in the four-dimensional space of the material properties E = {E1, E2, ν12, G12}. We chose here a 16^4 grid, which is determined by computational cost considerations. The grid is bounded by the truncation bounds on the prior distribution. A convergence study is given at the end of this chapter.


At each of the grid points, E is fixed and we need to evaluate the probability density function (pdf) of the frequency measurement prediction, π(f | E = E^{fixed}), at the point f = f^{measure}. The pdf of the frequency measurement prediction for a fixed E is determined by propagating, through Monte Carlo simulation, the uncertainties in the input parameters and the error model defined in the previous subsection. This is done in two steps for each point E on the grid:
- Propagate the normal uncertainties on the input parameters (see Table 5-6) to the natural frequencies using Monte Carlo simulation. We denote f^{input_MC} the random variable of the frequencies due only to the uncertainty in the input parameters and the thick-thin plate correction term, but not to the measurement uncertainty.
- Calculate the point of the likelihood function using Equation 5-5, which accounts for the measurement uncertainty u (see Equation 5-4):

  \pi_f(f = f^{measure} \mid E = E^{fixed}) = \frac{1}{K} \int_{u_{lb}}^{u_{ub}} \pi_{f^{input\_MC}}\left(f^{measure} - u \mid E = E^{fixed}\right) du    (5-5)

where K is a normalizing constant.

The use of Equation 5-5 is possible because the measurement error is uniformly distributed within the bounds [u_lb, u_ub] = [-δ_m, +δ_m] (see Equation 5-4). Equation 5-5 is equivalent to saying that the likelihood of measuring the frequencies f^{measure} is equal to the probability that the simulated frequencies f^{input_MC} fall inside the measurement uncertainty bounds [f^{measure} - δ_m, f^{measure} + δ_m]. The integral in Equation 5-5 is evaluated by counting the number of simulated frequencies f^{input_MC} that fall within these bounds, out of the total number of simulations. We used 50,000 Monte Carlo simulations.

Since the Bayesian updating procedure involves Monte Carlo simulation, we are confronted with significant computational cost. For each of the 16^4 grid points, 50,000 Monte Carlo simulations of the first 10 natural frequencies are carried out. This means that in total we have about 2 billion frequency evaluations. In such a context a numerical model (such as finite elements) is by far too expensive for calculating the natural frequencies.
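The two-step evaluation of Equation 5-5 at a single grid point can be sketched as follows (our own Python illustration, not the thesis code; rsa_frequencies is a hypothetical stand-in for the thin-plate frequency RSA plus the thick-thin correction of Table 5-5). We read the counting step as requiring all ten simulated frequencies to fall simultaneously inside their measurement bounds; if the counting were instead done mode by mode, the indicator below would be replaced by a product of per-mode fractions.

    import numpy as np

    rng = np.random.default_rng(1)
    N_MC = 50_000
    m = np.arange(1, 11)

    f_meas = np.array([172.5, 250.2, 300.6, 437.9, 443.6,
                       760.3, 766.2, 797.4, 872.6, 963.4])
    # Half-widths of the uniform measurement error, Eq. 5-4 (0.5% to 2%)
    delta = (0.005 * (10 - m) / 9 + 0.02 * (m - 1) / 9) * f_meas

    def likelihood_at(E):
        """Unnormalised likelihood of f_meas for a fixed E (Eq. 5-5)."""
        # Step 1: propagate the normal input uncertainties of Table 5-6
        a   = rng.normal(209.0, 0.25, N_MC)      # mm
        b   = rng.normal(192.0, 0.25, N_MC)      # mm
        h   = rng.normal(2.59, 0.01, N_MC)       # mm
        rho = rng.normal(2120.0, 10.6, N_MC)     # kg/m^3
        f_mc = rsa_frequencies(E, a, b, h, rho)  # hypothetical RSA; shape (N_MC, 10)
        # Step 2: fraction of samples inside the measurement uncertainty box
        inside = np.all(np.abs(f_mc - f_meas) <= delta, axis=1)
        return inside.mean()

The posterior value at that grid point is then obtained by multiplying this likelihood estimate by the truncated normal prior of Tables 5-3 and 5-4.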


To reduce the cost we chose to construct response surface approximations of the frequencies. The response surface method reduces the cost of a frequency evaluation from about 4 s with the finite element model to about 0.3 ms with the surrogate model (on an Intel Core2 Duo 2 GHz machine). This brings the computational cost of the Bayesian procedure down to tractable levels.

Applicability and Benefits of Separable Monte Carlo Simulation to Bayesian Identification

At this point we would like to note that, in spite of the significant reduction in computational cost, the approach remains at the edge of what is usually considered reasonable, an identification taking up to a week. This could become problematic if the Bayesian procedure that we developed cannot be directly applied, for example due to a different nature of the uncertainty structure. Indeed, in our procedure we used to our advantage the fact that the measurement noise was considered uniformly distributed in order to further reduce computational cost.

In a more general case, where the measurement uncertainty is not uniformly distributed (so that Equation 5-5 cannot be used) and the total uncertainty also cannot be approximated by a joint normal distribution, the following procedure can potentially be applied instead. For each fixed point E on the grid, sample values of the uncertain input parameters a, b, h and ρ, propagate them by Monte Carlo simulation to the natural frequencies, and add a sampled value of the modeling and measurement uncertainty u. This leads to a sample of simulated frequencies. A distribution then needs to be fitted to these frequencies and evaluated at f = f^{measure}, which yields the likelihood value we seek, π(f = f^{measure} | E = E^{fixed}). To fit the sampled frequencies we can construct the empirical histogram and fit it using a kernel method for


example. The major issue with this approach, however, would be a greatly increased computational cost, leading to computation times that are no longer reasonable.

A possible way to address this issue is by using separable Monte Carlo sampling (Smarslok 2009), which can decrease the number of simulations required by several orders of magnitude. Separable Monte Carlo simulation uses to its advantage the independence of the random variables involved. To show its applicability we use the Bayesian identification formulation of Eqs. 3-8 and 3-9. The two random variables that appear in D are, on one hand, the true value of the response based on the measurement, Y^{meas}_{true}, and on the other, the true value based on the model, Y^{model}_{true}. The independence of these random variables means that the two can be sampled separately. The conditional expectation method (Ayyub and Chia 1992) then allows the likelihood function to be calculated as shown in Equation 5-6, by using N samples of Y^{model}_{true} in the pdf of Y^{meas}_{true}:

  l(x) = \int \pi_{Y^{model}_{true}}(y \mid x)\, \pi_{Y^{meas}_{true}}(y)\, dy \approx \frac{1}{N} \sum_{i=1}^{N} \pi_{Y^{meas}_{true}}\left(y^{model}_{true,i}\right)    (5-6)

Since, for the same estimation accuracy, the method requires far fewer samples of the independent variables than sampling D directly, a significant cost reduction is possible. While we do not explore this approach in the present work, we wanted to mention its potential applicability and benefits.

Bayesian Identification Results

All the results presented in this chapter were obtained using the high fidelity frequency RSA constructed in Chapter 4, with the bounds given in Table 4-7. This RSA was tested in Chapter 4 and shown to have very good accuracy (see Table 4-8) over the domain for which we use it here for Bayesian identification.


To serve as a comparison point for the Bayesian results, we first provide the least squares identified values. The least squares results are given in Table 5-7 and were obtained using the least squares formulation described at the beginning of the chapter and the high fidelity frequency RSA obtained in Chapter 4.

Table 5-7. Least squares identified properties.
  Parameter   E1 (GPa)   E2 (GPa)   ν12     G12 (GPa)
  Value       60.9       22.7       0.217   9.6

The identified posterior distribution from the Bayesian approach is represented graphically in Figure 5-1, by fixing two of the material properties at the mean values of the posterior distribution and plotting the distribution as a function of the remaining two properties.


Figure 5-1. Identified posterior joint distribution plots. Since the joint probability distribution is four-dimensional, it is plotted in 2D by fixing the two variables not plotted at their mean values. The plots are then six cuts through the distribution mean value point.

We can characterize the identified posterior distribution by its mean values and variance-covariance matrix, as shown in Tables 5-8 and 5-9. Note that previously we had used the most likely value to characterize the identified properties. We use the mean value here because our grid is relatively sparse, so the highest point of the grid might not be a good estimate of the true maximum; considering the shape of the distributions, the mean value is also a better estimate of the most likely point. Interpreting the variance-covariance matrix is not very easy, so we also provide the coefficients of variation in Table 5-8 and the correlation matrix in Table 5-10, which facilitate the interpretation of the results.

Table 5-8. Mean values and coefficients of variation of the identified posterior distribution.
  Parameter    E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
  Mean value   60.8       21.3       0.27   9.87
  COV (%)      3.05       5.46       12.2   5.96

Table 5-9. Variance-covariance matrix (symmetric) of the identified posterior distribution.
              E1 (Pa)      E2 (Pa)      ν12 (-)      G12 (Pa)
  E1 (Pa)     3.45E+18     -3.05E+17    -2.31E+07    -6.85E+17
  E2 (Pa)                  1.36E+18     -2.28E+07    -2.46E+17
  ν12 (-)                               1.10E-03     1.49E+07
  G12 (Pa)                                           3.47E+17
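As a quick consistency check on these tables, the coefficients of variation of Table 5-8 and the correlation matrix of Table 5-10 follow directly from the covariance matrix of Table 5-9 and the mean values; the snippet below (our own illustration) reproduces them to within rounding.

    import numpy as np

    mean = np.array([60.8e9, 21.3e9, 0.27, 9.87e9])        # Table 5-8, SI units
    cov = np.array([[ 3.45e18, -3.05e17, -2.31e7, -6.85e17],
                    [-3.05e17,  1.36e18, -2.28e7, -2.46e17],
                    [-2.31e7,  -2.28e7,   1.10e-3, 1.49e7 ],
                    [-6.85e17, -2.46e17,  1.49e7,  3.47e17]])   # Table 5-9

    std = np.sqrt(np.diag(cov))
    cov_percent = 100.0 * std / mean          # approx. [3.1, 5.5, 12.3, 6.0] %
    corr = cov / np.outer(std, std)           # approx. the entries of Table 5-10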


Table 5-10. Correlation matrix (symmetric) of the identified posterior distribution.
         E1     E2       ν12      G12
  E1     1      -0.141   -0.378   -0.626
  E2            1        -0.593   -0.358
  ν12                    1        0.768
  G12                             1

It is worth noting that while for the three bar truss problem the prior distribution had a negligible effect on the identification, the prior has a more significant effect here, notably in the identification of ν12. The ply's Poisson's ratio is identified with relatively high uncertainty (about 12% COV), which is no longer small compared to the prior distribution's COV of about 18%. So while the prior does affect the identification in this case, this is reasonable since the prior, even with its relatively high COV in absolute terms, represents information generally available to the experimentalist (based on fiber/matrix properties homogenization, for example). The prior allows us in this case to avoid implausible results.

A convergence study was carried out in order to estimate the error in the parameters characterizing the identified probability density function. The mean values and the variance-covariance matrix are indeed dependent on the pdf domain used to calculate them. In our case this domain is defined by the truncation bounds on the prior distribution. Since tight truncation bounds reduce the computational cost, we initially started with an 8^4 grid. This grid was then increased by two points in each dimension at each step of the convergence study. Note that the step size remained the same, that is, we did not refine the domain but enlarged it during the convergence study. The final domain used a 16^4 grid, with the domain bounds being those given in Table 5-4. The convergence results are illustrated in Figure 5-2. As far as the convergence of the mean values is concerned (Figure 5-2A), the results are relatively stable and the estimates are very close to


convergence. The highest error appears to be in the shear modulus, which has an estimated truncation error of less than 0.4%.

Figure 5-2. Convergence of the identified distribution with the number of grid points per dimension: A) mean values (E1, E2 in 10^11 Pa, ν12 dimensionless, G12 in 10^10 Pa); B) standard deviations; C) correlation coefficients.

The variance-covariance matrix, on the other hand, appears less well converged. Regarding the standard deviations, the highest error is again that of G12, which has an estimated truncation error in the standard deviation of less than 8%. The truncation error in the correlation coefficients is estimated to be less than 5%, except for the correlation coefficient between E1 and


E2, which seems poorly converged. Note, however, that these two parameters have the lowest correlation of all, so the accuracy of this coefficient is less critical.

The truncation errors could have been further reduced by continuing to enlarge the calculation domain. Since we are in a four dimensional space, however, adding points quickly becomes computationally prohibitive. In our case, moving from a 16^4 to an 18^4 grid would have required an additional week of calculations on an Intel Core2 Duo 2 GHz machine, in spite of the use of response surface approximations. Accordingly, we decided to stop at a 16^4 grid. It is important to note, however, that only the variance-covariance matrix is estimated with somewhat larger errors; the mean values are estimated very accurately. Considering that the variance-covariance matrix is rarely provided at all in identification studies, we consider it important to estimate it even if only with about 10% accuracy.

Comparing Tables 5-7 and 5-8, we can note that the least squares identified values and the mean values of the posterior distribution are relatively far apart for some properties. However, considering the uncertainties characterizing the posterior distribution, and since the least squares values also carry an uncertainty at least as high as the Bayesian values, typically higher (see Chapter 3), the results of the two methods appear mutually consistent.

The results of Table 5-8 show that the four material properties are identified with different confidence from the vibration test we used. The longitudinal Young's modulus is identified most accurately, while the Poisson's ratio is identified with high uncertainty. This trend has often been observed in the composites community, since repeated tests on the same specimen typically lead to much higher dispersion in some properties than in others (Poisson's ratio and shear modulus are typically known more poorly). However, it is rarer that the difference in uncertainty is quantified, and the Bayesian identification approach provides a natural tool for this purpose.


Note that the uncertainties depend on the experiment: a different lay-up or another experimental technique would lead to different uncertainties. Note also that the identified pdf is not a representation of the variability of the material properties; it only provides the uncertainty with which they are identified from this particular experiment.

Table 5-10 shows that there is also non-negligible correlation between the different properties. This is an important result, and we could not find any previous study giving the correlation matrix of identified orthotropic constants. Ignoring the correlation would lead to significantly overestimating the uncertainty in the identified properties. On a final note, the results of Tables 5-8 and 5-9 can provide a realistic model of experimental uncertainty, which can be used in probabilistic studies instead of simple uncorrelated uncertainty models.

Uncertainty Propagation through Least Squares Identification

At this point it is worth asking whether the uncertainty model identified for the material properties using the Bayesian method could have been obtained using other identification methods, least squares identification in particular. Indeed, even though the least squares identification provides a single value for the identified properties, it is possible to account for the uncertain input parameters by simulating them with Monte Carlo simulation and repeating the least squares identification for each set of simulated parameters.

The uncertain parameters to be propagated through the least squares identification are in our case the four model input parameters (length, width, thickness and density, see Table 5-6) as well as the measurement errors on the 10 natural frequencies, which were assumed distributed as described in Eq. 5-3. To carry out this least squares uncertainty propagation we simulated by Monte Carlo 1000 values of the 14 uncertain parameters: the four input parameters and a measurement error for each natural frequency, which was added to the experimentally measured frequencies of Table 5-1. For each of these 1000 simulations a least squares identification was carried out using the formulation of Eq. 5-1.
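A minimal sketch of this propagation loop is given below (our own illustration; rsa_frequencies is again a hypothetical stand-in for the frequency RSA of Chapter 4). Each of the 1000 repetitions perturbs the plate parameters and the measured frequencies, then re-runs the least squares fit of Eq. 5-1.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)
    m = np.arange(1, 11)
    f_meas = np.array([172.5, 250.2, 300.6, 437.9, 443.6,
                       760.3, 766.2, 797.4, 872.6, 963.4])
    delta = (0.005 * (10 - m) / 9 + 0.02 * (m - 1) / 9) * f_meas   # Eq. 5-4 bounds

    E_id = np.empty((1000, 4))
    for k in range(1000):
        # Sample the 14 uncertain quantities: plate parameters (Table 5-6) and
        # a uniform measurement error added to each measured frequency
        a, b = rng.normal(209.0, 0.25), rng.normal(192.0, 0.25)       # mm
        h, rho = rng.normal(2.59, 0.01), rng.normal(2120.0, 10.6)     # mm, kg/m^3
        f_k = f_meas + rng.uniform(-delta, delta)

        def residuals(E, f_k=f_k, a=a, b=b, h=h, rho=rho):
            return (rsa_frequencies(E, a, b, h, rho) - f_k) / f_k     # Eq. 5-1

        E_id[k] = least_squares(residuals, np.array([60e9, 21e9, 0.28, 10e9])).x

    means = E_id.mean(axis=0)
    cov_percent = 100.0 * E_id.std(axis=0) / np.abs(means)            # cf. Table 5-11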


Based on the 1000 sets of identified properties we calculated their statistics, which are given in Table 5-11.

Table 5-11. Mean values and coefficients of variation obtained by uncertainty propagation through the least squares identification.
  Parameter    E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
  Mean value   60.7       22.7       0.22   9.59
  COV (%)      2.93       8.27       21.2   8.92

Since we used the same uncertainty model as for the Bayesian identification, these results can be compared to those obtained through the Bayesian method in Table 5-8. First, we note that the mean values obtained through least squares are different from those obtained using Bayesian identification. We discussed in Chapter 3 several effects that can cause this difference, in particular different sensitivities, uncertainties or correlations among the responses. Second, we note that while the coefficient of variation (COV) of E1 is slightly underestimated using least squares, the COV of the other three properties is substantially overestimated. These differences can be explained both by the effect of the prior and by the different effects we investigated in Chapter 3, notably the effect of the different sensitivities of the response to the properties. E1 is indeed the most sensitive of the four properties, so it is identified most accurately, while the other properties are identified with higher error due to their lower sensitivity. We have seen that this effect is unwanted, since it increases the error with which the properties are identified, and we have also seen in Chapter 3 that it affects the basic least squares identification substantially more than Bayesian identification. This leads us to conclude that the Bayesian identified uncertainty model is more realistic than the one obtained through least squares uncertainty propagation.

We need to point out again that we used the basic least squares formulation here. As seen in Chapter 3, there is also a generalized least squares formulation that considers the variance-covariance matrix of the response. We expect that, in the case where we can


approximate the frequencies' uncertainty as a joint normal distribution, performing the uncertainty propagation using the generalized least squares formulation would lead to about the same results as the Bayesian identification (minus the effect of the prior). However, the generalized least squares formulation requires calculating the variance-covariance matrix of the frequencies as a function of the material properties, which is computationally expensive and would thus also require cost reduction techniques to make the approach tractable. The same dimensionality reduction, response surface and Monte Carlo simulation approach that we developed for the Bayesian identification could then be applied as well, to obtain the variance-covariance matrix for generalized least squares at reasonable cost.

Identification of the Plate's Homogenized Parameters

Up to this point we identified the elastic constants of the ply of the composite laminate. This is a more challenging problem than identifying the plate's homogenized orthotropic constants, because the ply properties affect the vibration indirectly, through the homogenized properties, which can reduce the sensitivity of the natural frequencies to some of the parameters. The advantage of identifying ply properties is, however, that they can provide both extensional and bending homogenized properties.

In order to gain a better understanding of the effect of uncertainties, we also carried out the identification of the homogenized bending elastic constants {Ex, Ey, νxy, Gxy} and of the bending rigidities {D11, D22, D12, D66} of the plate. Note that for the bending rigidities we kept the standard notation with numerical subscripts, but the rigidities are those in the x and y directions of the plate. Since this serves only as a comparative study, we reduced the computational cost of the identification to about 12 hours by considering normally distributed measurement uncertainty on the frequencies instead of a uniform distribution. We kept the same standard deviations for the


normal distribution as for the uniform one. 10,000 Monte Carlo simulations were used and the grid size was 6^4. The results of the identifications are presented in Tables 5-11 to 5-13.

Table 5-11. Least squares and Bayesian results for ply elastic constants identification with normally distributed measurement error.
  Parameter                   E1 (GPa)   E2 (GPa)   ν12     G12 (GPa)
  Least squares  Value        60.9       22.7       0.217   9.6
  Bayesian       Mean value   60.8       20.5       0.27    9.86
                 COV (%)      2.58       4.74       10.7    5.07

Table 5-12. Least squares and Bayesian results for homogenized bending elastic constants identification with normally distributed measurement error.
  Parameter                   Ex (GPa)   Ey (GPa)   νxy    Gxy (GPa)
  Least squares  Value        43.7       30.3       0.21   13.8
  Bayesian       Mean value   43.6       30.3       0.22   13.9
                 COV (%)      1.61       1.39       3.59   1.23

Table 5-13. Least squares and Bayesian results for bending rigidities identification with normally distributed measurement error.
  Parameter                   D11 (N.m)   D22 (N.m)   D66 (N.m)   D12 (N.m)
  Least squares  Value        64.0        45.3        20.8        14.6
  Bayesian       Mean value   65.2        45.3        20.2        14.0
                 COV (%)      1.49        1.49        1.20        3.85

These results show that the homogenized properties are indeed identified with lower uncertainty than the ply properties. This is mainly because the homogenized properties are relatively insensitive to variations of some ply properties (the Poisson's ratio of the ply, for example).

Summary

An implementation of the Bayesian approach for identifying the probability distribution of the orthotropic constants of a composite from natural frequency measurements of a plate was presented. The proposed approach can handle measurement as well as modeling uncertainty through the use of Monte Carlo simulation. Due to the high computational cost of the simulation,


response surface approximations of the frequencies were required. Polynomial response surfaces of the natural frequencies as a function of nondimensional parameters were used, which brought the computational cost down by a factor of about 1,000, making the procedure computationally tractable. However, in spite of the substantial cost reduction, the procedure remains at the limit of reasonable computation time.

The identified Bayesian posterior distribution was found to have different uncertainties in the different orthotropic constants as well as non-negligible correlation between them. These uncertainties and correlations were quantified, allowing the construction of an experimental uncertainty model for the particular experiment used.


CHAPTER 6
REDUCING THE DIMENSIONALITY OF FULL FIELDS BY THE PROPER ORTHOGONAL DECOMPOSITION METHOD

Introduction

Optical full field measurement methods for displacements or strains typically provide large quantities of information, since each pixel of the image represents a measurement point. This has the great advantage of allowing field heterogeneities to be measured. The sheer size of the data can, however, also pose problems in some situations.

Such a situation can arise if Bayesian identification needs to be applied to full field measurements. Due to its statistical nature, Bayesian identification handles probability density functions. In the case of full field methods we can have between thousands and millions of measurement points. Thousands-dimensional joint probability density functions for the measurements are, however, outside the realm of what can be handled numerically. A reduced dimensional representation of the measurements would thus be useful.

A possible method for achieving such a dimensionality reduction is principal components analysis (PCA). This method has its origin in statistical analysis and, depending on the field of application, it is also known as proper orthogonal decomposition (POD) or the Karhunen-Loeve expansion.

The proper orthogonal decomposition method was initially used in computational fluid dynamics to represent complex, turbulent flows (Berkooz et al. 1993). For a review of different uses of POD in computational fluid dynamics the reader can refer to Lucia et al. (2004). Other uses of POD include representing the structural response of nonlinear finite element models under stochastic loading (Schenk et al. 2005) and real-time nonlinear mechanical modeling for interactive design and manufacturing (Dulong et al. 2007). POD was also applied to a multidisciplinary optimization problem of an aircraft wing (Coelho et al. 2009). Usually cited as


Karhunen-Loeve, the POD method is also used in stochastic finite elements (Ghanem and Spanos 2003) and, more generally, to represent arbitrary random fields (Sakamoto and Ghanem 2002, Schenk and Schueller 2003).

The POD method seeks a reduced dimensional basis for the representation of the response (full fields in our case). This basis is constructed from a set of fields, called snapshots, obtained with input parameters within a certain domain. The decomposition ensures that any field within the corresponding domain can be written as a linear combination of fixed POD fields (usually called POD modes). This is similar to other types of modal decompositions (Fourier decomposition, vibration modes); the major specificity of POD is that it is sample based, leading to the error minimization properties we will see in the next section. The accuracy of the field representation increases, as expected, with the number of POD modes used. Typically we seek to represent the fields as a linear combination of fewer than a dozen POD modes while keeping reasonable accuracy in the reduced dimensional representation. It is important to note that for a full field with thousands of measurement points this represents a dimensionality reduction in the field representation from thousands to only a few dozen. The POD method can potentially achieve this goal without losing any significant amount of information.

The rest of the chapter is organized as follows. In the first section we present the theoretical basis of proper orthogonal decomposition. Next we present the simulated experiment for obtaining full fields. We then describe the dimensionality reduction problem applied to our specific case. Finally we present the proper orthogonal decomposition results.

Proper Orthogonal Decomposition

Let us consider a set of N vectors {U^i}_{i=1..N}. A vector U^i ∈ R^n, also called a snapshot, can be the vector representation of a displacement field, for example. The aim of the proper orthogonal


decomposition (POD) method is to construct an optimal, reduced dimensional basis for the representation of the snapshots. For the POD method to work, it is necessary that the snapshot vectors have zero mean. If this is not the case, the mean value needs to be subtracted from each vector.

We denote {Φ_k}_{k=1..K} the vectors of the orthogonal basis of the reduced dimensional representation of the snapshots. The POD method seeks the Φ_k that minimize the representation error:

  \min_{\Phi_1,\dots,\Phi_K} \ \frac{1}{2} \sum_{i=1}^{N} \left\| U^i - \sum_{k=1}^{K} \alpha_{i,k} \Phi_k \right\|_{L^2}^2    (6-1)

Because {Φ_k}_{k=1..K} is an orthogonal basis, the coefficient α_{i,k} is given by the orthogonal projection of the snapshot onto the basis vectors:

  \alpha_{i,k} = \langle U^i, \Phi_k \rangle    (6-2)

As a result we have the following reduced dimensional representation \tilde{U}^i of the vectors of the snapshot set:

  \tilde{U}^i = \sum_{k=1}^{K} \alpha_{i,k} \Phi_k = \sum_{k=1}^{K} \langle U^i, \Phi_k \rangle\, \Phi_k    (6-3)

The reduction in dimension is from n to K. The truncation order K needs to be selected so as to maintain a reasonably small error in the representation \tilde{U}^i of U^i. Selecting such a K is always problem specific. The POD approach provides, however, a construction method for obtaining the optimal basis vectors that minimize the error defined in Equation 6-1. This means that for a given truncation order we cannot find any other basis that better represents the snapshot subspace.


The basis {Φ_k}_{k=1..K} is constructed using the following matrix:

  X = \begin{pmatrix} U_1^1 & \cdots & U_1^N \\ \vdots & & \vdots \\ U_n^1 & \cdots & U_n^N \end{pmatrix}    (6-4)

that is, the n x N matrix whose columns are the snapshots. The vectors {Φ_k}_{k=1..K} are then obtained by the singular value decomposition of X, or equivalently by calculating the eigenvectors of the matrix X X^T. The singular value decomposition allows us to write:

  X = \Phi \Sigma V^T    (6-5)

where Φ is the matrix whose columns are the vectors Φ_k. Standard singular value decomposition routines (such as LAPACK, Anderson et al. 1999) typically provide the matrix Φ. The svd() function in Matlab also implements this routine and was used here. In the rest of the chapter we will apply the POD decomposition to full field displacement measurements.

A truncation error criterion can be obtained by the following procedure. The criterion is defined for the sum of the error norms as shown in Equation 6-6:

  \sum_{i=1}^{N} \left\| U^i - \tilde{U}^i \right\|_{L^2}^2 = (1 - \varepsilon) \sum_{i=1}^{N} \left\| U^i \right\|_{L^2}^2, \quad \text{with} \quad \varepsilon = \frac{\sum_{i=1}^{K} \lambda_i^2}{\sum_{i=1}^{N} \lambda_i^2}    (6-6)

where the λ_i are the diagonal terms of the diagonal matrix Σ. For a derivation of this criterion the reader can refer to Filomeno Coelho and Breitkopf (2009).
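The construction of Equations 6-4 to 6-6 maps directly onto a few lines of linear algebra. The sketch below is our own illustration in Python/NumPy (mirroring what Matlab's svd() provides): it builds the POD basis from a centred snapshot matrix and picks the truncation order K from the criterion of Equation 6-6. The 0.9999 threshold is an arbitrary illustrative value.

    import numpy as np

    def pod_basis(X, eps_target=0.9999):
        """POD modes of a centred n x N snapshot matrix X (Eqs. 6-4 to 6-6)."""
        Phi, lam, _ = np.linalg.svd(X, full_matrices=False)  # X = Phi * diag(lam) * V^T
        eps = np.cumsum(lam ** 2) / np.sum(lam ** 2)         # epsilon of Eq. 6-6 vs. K
        K = int(np.searchsorted(eps, eps_target)) + 1        # smallest K reaching target
        return Phi[:, :K], lam, K

    def project(u, Phi):
        """Reduced coordinates and reconstruction of a centred field u (Eqs. 6-2, 6-3)."""
        alpha = Phi.T @ u                                    # alpha_k = <u, Phi_k>
        return alpha, Phi @ alpha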


Simulated Experiment

Experiment Description

The experiment we model is a tensile test on a composite plate with a hole. The laminate is made of graphite/epoxy with a stacking sequence of [45, -45, 0]s. The plate has the dimensions given in Figure 6-1. The applied tensile force is 1200 N.

Figure 6-1. Specimen geometry. The specimen material is graphite/epoxy and the stacking sequence [45, -45, 0]s. The tensile force is 1200 N.

Numerical Model

The plate is modeled using the Abaqus finite element software. A total of 8020 S4R elements (general purpose, four nodes per element, reduced integration) were used. The simulated measurement is assumed to take place on a reference area of 20 mm x 20 mm at the center of the plate. This would be a typical area for an optical full field measurement (Moiré interferometry, for example). The finite element mesh in the area of interest is represented in Figure 6-2, with the simulated measurement area highlighted in red. Note that Figure 6-2 does not represent the entire mesh. Since the whole plate is modeled in Abaqus, there is a transition using triangular elements towards a coarser mesh at the grip edges of the plate, where the stresses are relatively uniform compared to the area around the hole.


A finite element mesh convergence study was carried out, and it was found that with the present mesh the discretization error in the area of interest was of the order of 6 x 10^-4 % of the average absolute value of the field, which was considered acceptable.

To illustrate the applicability of the dimensionality reduction method we used here finite element generated fields instead of actually measured fields, hence the term simulated measurement. Note, however, that this does not mean that only finite element fields can be represented in the reduced dimensional space. Once the POD modes are determined, measured fields can very well be projected onto these modes and expressed in the corresponding basis. This can even have some advantages, such as noise filtering, as will be shown in the last section of the chapter.

Figure 6-2. Finite element mesh. The area of the simulated measurements (20 mm x 20 mm) is highlighted.


Dimensionality Reduction Problem

Problem Statement

The general framework of the problem is, in our case, the following. We vary a certain number of model parameters, such as elastic constants or plate dimensions, and obtain each time by finite element simulation the corresponding full field. The field is described here by the displacement values at the 4569 nodes within the reference area (see the highlighted area in Figure 6-2). If such a field calculation needs to be repeated a large number of times (for statistical sampling, for example) or needs to be used within statistical frameworks, it is not practical to describe the fields by their value at each point (at the 4569 nodes here). A major reason is that if statistical methods (such as Bayesian identification) need to be used on the fields, the thousands-dimensional probability density functions required to describe the correlation between the different measurement points are far outside the realm of what statistical methods can handle with reasonable computational resources.

The problem statement can then be formulated as follows: can we find a reduced dimensional representation of the full fields for any combination of input parameters (elastic constants and plate dimensions in our case) within a certain domain? To address this problem we propose to use the proper orthogonal decomposition method described in the first section, which provides the optimal basis to represent field samples within a certain domain of input parameters.

POD Implementation

For the open hole plate identification problem we will consider, we are interested in accounting for variations of the following parameters: the ply elastic constants E1, E2, ν12, G12 and the ply thickness t (we are looking at variations of the homogenized properties here and not at spatial


variations within the plate). Accounting for variations in the elastic constants is typically of interest in material properties identification problems. We added here the ply thickness to illustrate a typical case for Bayesian identification. Bayesian identification makes it possible to identify the elastic constants while accounting for other sources of uncertainty. The ply thickness was considered here to be one such source of uncertainty that potentially affects the identification. The Bayesian identification itself will be carried out in the next chapter.

We are interested in variations of the parameters E1, E2, ν12, G12 and t within the bounds given in Table 6-1. Again these could be typical values for an identification problem.

Table 6-1. Bounds on the input parameters of interest (for a graphite/epoxy composite material).
Parameter     E1 (GPa)   E2 (GPa)   ν12     G12 (GPa)   t (mm)
Lower bound   126        7          0.189   3.5         0.12
Upper bound   234        13         0.351   6.5         0.18

We obtain the snapshots required for the POD approach by sampling 200 points within the bounds of Table 6-1. The points are obtained by Latin hypercube sampling, which consists in obtaining the 200 sample points by dividing the range of each parameter into 200 sections of equal marginal probability 1/200 and sampling once from each section. Latin hypercube sampling typically ensures that the points are reasonably well distributed in the entire space.

At each of the 200 sampled points we run the finite element analysis, which gives the corresponding horizontal and vertical displacement fields U and V respectively. Each of the 200 fields of U (and 200 of V) represents a snapshot and is stored as a column vector that will be used for the POD decomposition. The simulated measurement area (see the highlighted area in Figure 6-2) covers 4569 finite element nodes, so we obtain snapshot vectors of size 4569 x 1. The snapshot matrix X then has a size of 4569 x 200. Note that, as mentioned in the POD theory section, the snapshots need to have zero mean. In our case this was true for the U fields but not for


the V fields, so we needed to subtract the mean value of each snapshot as shown in Equation 6-7, where the bar notation denotes the mean value of the field (n is the number of measurement points and N the number of snapshots).

X = \begin{bmatrix} V_1^{(1)} - \bar{V}^{(1)} & \cdots & V_1^{(N)} - \bar{V}^{(N)} \\ \vdots & & \vdots \\ V_n^{(1)} - \bar{V}^{(1)} & \cdots & V_n^{(N)} - \bar{V}^{(N)} \end{bmatrix}    (6-7)

The POD modes of the 200 fields are then calculated using the singular value decomposition as shown in Equation 6-5. Note that there are two potential ways to do the POD decomposition: on U and V independently, or on U and V together (i.e. a single vector of size 9138 x 1). With U and V together we have, for a given truncation order, half as many degrees of freedom as with U and V independently. While for a given truncation order the error using U and V together is smaller, we found that it is more difficult in this case to construct response surface approximations (RSA) of the POD coefficients due to higher errors in the RSA. Since for the identification we will need to construct RSA, we chose to do the POD decomposition on U and V independently.

Two snapshots (snapshots 1 and 199) that led to substantially different fields are illustrated next. The input parameters of the two snapshots are provided in Table 6-2.

Table 6-2. Input parameters for snapshots 1 and 199.
Parameter      E1 (GPa)   E2 (GPa)   ν12     G12 (GPa)   t (mm)
Snapshot 1     191.2      12.67      0.231   5.160       0.1446
Snapshot 199   129.1      11.67      0.294   6.039       0.1343

The fields of snapshot 1 are represented in Figure 6-3, which gives an idea of the spatial variations of the fields and of the order of magnitude of the different fields for the parameters of interest. Snapshot 199 has displacement and strain fields that are similar to those of snapshot 1. Among the notable differences are Max(U) = 0.0148 mm and Max(V) = 0.0105 mm for snapshot 199.
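To make the snapshot-based POD construction described above more concrete, the following minimal sketch (in Python with NumPy/SciPy) builds a Latin hypercube design within the bounds of Table 6-1, assembles a zero-mean snapshot matrix and extracts the modes by singular value decomposition. The function simulate_displacement_field is a hypothetical placeholder standing in for the Abaqus run, so only the structure of the computation is meaningful.

```python
# Sketch of the snapshot POD construction: LHS design, zero-mean snapshots, SVD.
# simulate_displacement_field is a placeholder (assumption), standing in for an
# Abaqus run that returns the V displacements at the 4569 nodes of interest.
import numpy as np
from scipy.stats import qmc

lower = np.array([126.0, 7.0, 0.189, 3.5, 0.12])   # E1, E2, nu12, G12 (GPa), t (mm)
upper = np.array([234.0, 13.0, 0.351, 6.5, 0.18])
n_snapshots, n_nodes = 200, 4569

def simulate_displacement_field(params):
    """Placeholder for the finite element solver (returns a synthetic field)."""
    rng = np.random.default_rng(int(1e6 * params[0]) % 2**32)
    shape = np.sin(np.linspace(0.0, 4.0, n_nodes))
    return shape * params[4] / params[0] + 1e-4 * rng.normal(size=n_nodes)

# Latin hypercube design of experiments within the bounds of Table 6-1
design = qmc.scale(qmc.LatinHypercube(d=5, seed=0).random(n_snapshots), lower, upper)

# Snapshot matrix: one column per field, each column with its own mean removed (Eq. 6-7)
X = np.column_stack([simulate_displacement_field(p) for p in design])
X = X - X.mean(axis=0)

# POD modes (columns of Phi) and singular values by SVD
Phi, singular_values, _ = np.linalg.svd(X, full_matrices=False)

# A field U within the domain is then represented by its first K coefficients Phi[:, :K].T @ U
print("first five singular values:", singular_values[:5])
```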


POD Results

POD Modes

In total we obtained 200 POD modes. The first six are represented graphically in Figures 6-4 and 6-5. We can note that the first modes are relatively close (but not identical, even though the differences cannot be seen by the naked eye) to the typical U and V displacement fields (see Figure 6-3). Furthermore we can see that the modes have a more complicated shape with increasing mode number, as expected for a modal decomposition basis. Note also that the POD decomposition is applied twice, once for the U fields and once for the V fields, hence the different modes for the two displacement components.

In Figure 6-6 we also provide the strain equivalents of the POD modes. Note that these are not calculated using a POD decomposition of the strain fields. Instead, the displacement POD modes are differentiated to obtain the strain equivalents of the POD modes. In this study we are interested in the displacement POD decomposition, so the strain equivalent POD modes are provided here mainly to facilitate understanding and qualitative comparison. If one were to work with strain fields rather than displacement fields, then the POD decomposition should be carried out directly on the finite element strains, which would lead to more accurate results.

In order to differentiate the displacement POD modes, two additional steps were required. The displacements were interpolated on a 256 x 256 grid using cubic polynomials (Matlab griddata command). They were then differentiated using the BibImages toolbox (Molimard 2007), which fits a polynomial locally around each point and computes the derivative based on this polynomial. These two steps can introduce numerical differentiation artifacts (as we will see later), so they are not advised if accurate strain POD modes are sought.
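The two-step interpolate-then-differentiate procedure above can be illustrated with the following sketch, which uses SciPy's griddata for the cubic interpolation onto a regular grid and a plain finite-difference gradient in place of the BibImages local polynomial fit (an assumption made only to keep the example self-contained); the node coordinates and the displacement mode are synthetic placeholders.

```python
# Sketch of obtaining a strain-equivalent mode from a displacement POD mode:
# interpolate scattered nodal values onto a 256 x 256 grid, then differentiate.
# np.gradient replaces the local polynomial differentiation of the BibImages
# toolbox (assumption), and the inputs are synthetic placeholders.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
nodes_xy = rng.uniform(-10.0, 10.0, size=(4569, 2))   # placeholder node coordinates (mm)
mode_u = 1e-3 * np.sin(0.2 * nodes_xy[:, 0]) * np.cos(0.1 * nodes_xy[:, 1])

# Cubic interpolation onto a regular 256 x 256 grid (the Matlab griddata step)
grid_x, grid_y = np.meshgrid(np.linspace(-10, 10, 256), np.linspace(-10, 10, 256))
u_grid = griddata(nodes_xy, mode_u, (grid_x, grid_y), method="cubic")

# Differentiate to obtain the strain-equivalent quantity eps_x = dU/dx
dx = grid_x[0, 1] - grid_x[0, 0]
eps_x = np.gradient(u_grid, dx, axis=1)
print("max |eps_x| over the grid:", np.nanmax(np.abs(eps_x)))
```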


Note that we stop the graphical representation at 5 strain equivalent POD modes (Figure 6-6) precisely because of numerical artifacts. For mode 5 some artifacts can already be noticed.

POD Truncation

As a reminder, the POD decomposition makes it possible to represent a displacement field obtained with any combination of input parameters within the bounds of Table 6-1 as a linear combination of the POD modes. Even though we obtained 200 POD modes, typically substantially fewer modes are required to represent the fields with reasonable accuracy. In the present section we seek the truncation order that still allows reasonable accuracy to be kept. Table 6-3 provides the truncation error criterion defined in Equation 6-6.

Table 6-3. Error norm truncation criterion (ε is defined in Eq. 6-6).
K                 2           3           4            5
ε for U fields    2.439×10⁻⁷  4.701×10⁻⁹  7.280×10⁻¹¹  1.211×10⁻¹¹
ε for V fields    1.054×10⁻⁶  2.900×10⁻⁹  4.136×10⁻¹⁰  3.517×10⁻¹¹

Since ε is an overall error criterion, it is not easy to interpret in terms of errors at different points of the field. Accordingly we will also seek to visualize what happens with the errors for individual snapshots. Figure 6-7 provides the truncation error maps in the reconstruction of snapshot 1 when using only the first 3, 4 and 5 POD modes; Figure 6-8 does the same for snapshot 199.

We can note that already with 3 modes the maximum displacement error in the field is about 1000 times smaller than the maximum value of the field. The error further decreases by one to two orders of magnitude when reaching 5 modes. We can also note that the truncation error using K modes looks close to mode K+1, which is also what would be expected. Furthermore we can note that with 5 POD modes we start seeing artifacts in the error maps that are not likely to have a physical meaning. Since for the displacement maps we don't use any numerical differentiation,


these artifacts are likely to be due to the finite element discretization. We also note a slight asymmetry in the V displacement errors. This asymmetry is due to a slight asymmetry of the mesh. Indeed, the finite element V displacements have their origin at the edge of the plate. In order to carry out the POD decomposition we had to subtract the mean values from the snapshots. In case of a slight mesh dissymmetry, the numerical estimate of the mean is slightly off the true value, which causes a dissymmetry in the POD modes and thus in the subsequent error maps. Note that the dissymmetry of the mesh is quite small (see Figure 6-2). The effects of the dissymmetry are noticeable only from mode 5 on, meaning that their corresponding error is of the same order as the discretization error.

Finally, in Figures 6-9 and 6-10 we provide the strain equivalent truncation errors. This is to check that the POD modes have not missed any important features that have small characteristics on the displacement fields but larger characteristics on the strain maps. As for the displacement truncation errors, we can see that the strain equivalent truncation error field using K modes has shape similarities with the strain equivalent mode K+1 field. Note that the equivalent strain error maps were again obtained using numerical differentiation and thus exhibit more numerical artifacts. We note however the same trend as for the displacements as far as the truncation error behavior is concerned, the error for K=5 being of the order of 0.1 microstrains or less.

Cross Validation for Truncation Error

The error norm truncation criterion (cf. Table 6-3), while being a global error criterion, is relatively hard to interpret physically. Furthermore, the criterion is based only on the convergence of the snapshots that served for the POD basis construction. However, in most cases we will want to decompose a field that is not among the snapshots, so we also want to know the convergence of the truncation error in such cases.


Accordingly, we chose to construct a different error measure based on cross validation. The basic idea of cross validation is the following: if we have N snapshots, instead of using them all for the POD basis construction we can use only N-1 snapshots and compute the error between the actual fields of the snapshot that was left out and its truncated POD decomposition. By successively changing the snapshot that is left out we can thus obtain N errors. The root mean square of these N errors, which we denote by CVRMS, is then a global error criterion that can be used to assess the truncation inaccuracy.

In order to use the cross validation technique we need to define how to measure the error between two fields (the actual field and its truncated POD decomposition). We chose the maximum error between the two fields, that is, the maximum over all the points of the field of the absolute value of the difference between the two fields. This maximum error is computed at each of the N (N = 200 here) cross validation steps and the root mean square leads to the global error criterion CVRMS. Table 6-4 provides these values for different truncation orders, which are also illustrated in Figure 6-11. The relative CVRMS error with respect to the value of the field where the maximum error occurs is also given in Table 6-4.

Table 6-4. Cross validation CVRMS truncation error criterion.
K                       2          3          4          5
U field  CVRMS (mm)     9.35×10⁻⁶  1.05×10⁻⁶  1.65×10⁻⁷  7.83×10⁻⁸
         CVRMS (%)      9.96×10⁻²  1.13×10⁻²  2.37×10⁻³  9.49×10⁻⁴
V field  CVRMS (mm)     1.00×10⁻⁵  6.30×10⁻⁷  3.05×10⁻⁷  7.32×10⁻⁸
         CVRMS (%)      1.10×10⁻¹  4.71×10⁻²  3.71×10⁻³  1.84×10⁻³
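A compact sketch of this leave-one-out procedure is given below; the snapshot matrix used here is a small synthetic placeholder rather than the actual 4569 x 200 matrix, but the loop structure is the one described above.

```python
# Sketch of the cross validation error CV_RMS: each snapshot is left out in turn,
# the POD basis is rebuilt from the remaining snapshots, and the maximum absolute
# reconstruction error of the left-out snapshot is recorded.
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_snapshots, K = 500, 40, 5
X = rng.normal(size=(n_nodes, 8)) @ rng.normal(size=(8, n_snapshots))   # placeholder snapshots

max_errors = np.empty(n_snapshots)
for j in range(n_snapshots):
    X_train = np.delete(X, j, axis=1)                     # leave snapshot j out
    Phi_K = np.linalg.svd(X_train, full_matrices=False)[0][:, :K]
    left_out = X[:, j]
    reconstruction = Phi_K @ (Phi_K.T @ left_out)         # truncated POD projection
    max_errors[j] = np.max(np.abs(left_out - reconstruction))

cv_rms = np.sqrt(np.mean(max_errors ** 2))
print(f"CV_RMS for K = {K}: {cv_rms:.3e}")
```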


At this point we want to make the following remark. Truncating at K=5 means that the POD decomposition achieved a dimensionality reduction from 4569 to 5. The error maps for snapshots 1 and 199, as well as the global error criteria, show that this is achieved without losing any significant displacement information. It is important to note as well that the obtained reduction does not depend on the number of measured points (4569 here). As with other modal decompositions, the reduction relies on expressing the fields as a linear combination of the determined POD modes. As long as the discretization error is reasonably converged, whether each field and POD mode is described using 4569 or 1 million points is irrelevant to the fact that the field variations can be expressed with good accuracy as a linear combination of only five modes.

Material Properties Sensitivities Truncation Error

Our final goal is to use the POD decomposition for the identification of the orthotropic elastic constants. Based on the convergence behavior from Tables 6-3 and 6-4 it is somewhat hard to determine what the most appropriate truncation order is for identification purposes. The most relevant quantities for identification are the sensitivity fields of U and V with respect to the four different elastic constants. The sensitivity of a generic field X to a generic elastic constant Y is given by Equation 6-8.

\mathrm{Sensitivity}(Y) = \left. \frac{\partial X}{\partial Y} \right|_{Y = Y_0}    (6-8)

The sensitivity is calculated at the point Y0, which is the one defined in Table 6-5. Note that since X is a field, the sensitivity will also be a field.

Table 6-5. Input parameters to the finite element simulation for the sensitivity study and the simulated experiment.
Parameter   E1 (GPa)   E2 (GPa)   ν12   G12 (GPa)   t (mm)
Value       155        11         0.3   5           0.16

Keeping in mind the identification goal, we are looking for the truncation order that allows the sensitivities to the four elastic constants to be represented with enough accuracy. For this purpose we calculated the errors between the actual sensitivities based on finite element simulations and


the sensitivities obtained from the truncated POD decompositions of the FE simulations. The sensitivities were calculated in both cases by finite differences. The results are presented in Table 6-6 in terms of average error. Note that since the error is also a field, we defined the average relative error in percent as the average of the errors in the sensitivity field divided by the average value of the sensitivity field.

We can see that, while from Table 6-4 the U and V fields are already calculated with a relatively small error for K=2, this truncation order is not sufficient for describing the sensitivities to all four properties with enough accuracy. To determine a truncation order for the identification we set ourselves a threshold of 0.5% or less average error on the four sensitivities. We can see from Table 6-6 that K=4 for both fields is significantly below this threshold.

Table 6-6. Truncation errors for the sensitivities to the elastic constants.
K                                                2          3          4          5
U field  Average error (%) in Sensitivity(E1)    0.28       1.4×10⁻²   2.5×10⁻³   1.4×10⁻³
         Average error (%) in Sensitivity(E2)    11         0.14       4.7×10⁻²   4.1×10⁻²
         Average error (%) in Sensitivity(ν12)   5.2        4.6        5.4×10⁻²   2.9×10⁻²
         Average error (%) in Sensitivity(G12)   7.3×10⁻²   3.0×10⁻²   1.6×10⁻²   3.4×10⁻³
V field  Average error (%) in Sensitivity(E1)    0.81       1.8×10⁻²   1.3×10⁻²   7.5×10⁻³
         Average error (%) in Sensitivity(E2)    10         0.15       0.12       8.8×10⁻²
         Average error (%) in Sensitivity(ν12)   5.3        0.74       9.0×10⁻²   8.3×10⁻²
         Average error (%) in Sensitivity(G12)   0.71       3.2×10⁻²   2.0×10⁻²   1.3×10⁻²
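The comparison behind Table 6-6 can be sketched as follows: the finite-difference sensitivity is computed once from the full fields and once from their truncated POD reconstructions, and the average relative error between the two sensitivity fields is formed. Both the displacement model and the POD basis below are synthetic placeholders, so only the structure of the computation is meaningful.

```python
# Sketch of the sensitivity truncation check: finite-difference sensitivity of a
# field with respect to E1, from the full field and from its truncated POD
# reconstruction. model() and Phi are placeholders, not the actual FE model/basis.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, K = 800, 4
Phi = np.linalg.qr(rng.normal(size=(n_nodes, 20)))[0][:, :K]   # placeholder orthonormal basis

def model(E1):
    """Placeholder displacement field depending smoothly on E1 (GPa)."""
    x = np.linspace(0.0, 3.0, n_nodes)
    return np.sin(x) / E1 + 1e-3 * np.cos(2.0 * x) / E1 ** 2

def pod_reconstruct(field):
    return Phi @ (Phi.T @ field)                                # truncated representation

E1_0, dE = 155.0, 0.1                                           # central finite difference
sens_full = (model(E1_0 + dE) - model(E1_0 - dE)) / (2 * dE)
sens_pod = (pod_reconstruct(model(E1_0 + dE)) - pod_reconstruct(model(E1_0 - dE))) / (2 * dE)

avg_rel_error = 100 * np.mean(np.abs(sens_full - sens_pod)) / np.mean(np.abs(sens_full))
print(f"average relative sensitivity error: {avg_rel_error:.2f} %")
```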


POD Noise Filtering

Before concluding, we also want to draw the reader's attention to the fact that the POD reduction also acts as a filter. This can have beneficial effects in some situations.

The filter effect is obvious when analyzing the truncation error maps. When truncating at K=4 for example, the corresponding field reconstruction will obviously not include any of the higher modes. If the higher modes start representing mainly finite element discretization errors, then truncating at K=4 will filter out these errors.

A different filtering can arise when constructing the POD modes using finite element results and then projecting noisy fields (such as actual measurements) onto the POD basis. The projected representation will then filter out the noise. To illustrate this behavior we simulated a noisy experimental field. This was done by adding white noise to a finite element field obtained for a composite plate with the properties given in Table 6-5. A normally distributed noise value with zero mean and a standard deviation of 2.5% of the maximum value of the field was added to each of the 4569 points of the field. The obtained displacement maps are represented graphically in Figure 6-12.

Projecting the simulated experiment U and V fields onto the first five POD basis vectors filters out a significant part of the noise. This is shown in Table 6-7, which gives the maximum difference in the fields between the simulated experiment and the finite element fields with the parameters of Table 6-5 (i.e. the fields before the noise was added). While the first column gives the difference in mm, the second gives the difference relative to the value of the field at which the maximum difference occurs. The projection onto the POD basis achieved here a reduction in the noise level by a factor of more than 6 for the V field and a factor of more than 12 for the U field, illustrating the noise filtering capabilities of the approach.

Table 6-7. Difference (in absolute value) between the finite element field and the projection of the noisy field onto the first 5 POD basis vectors.
            Max. difference   Relative max. difference
U field     2.03×10⁻⁵ mm      0.203 %
V field     2.54×10⁻⁵ mm      0.426 %
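The filtering effect can be reproduced qualitatively with the short sketch below, in which a POD basis is built from noise-free synthetic snapshots and a noisy field is then projected onto it; all fields are placeholders, so the numerical factors will differ from those of Table 6-7.

```python
# Sketch of the noise filtering: white noise added to a smooth field is largely
# removed when the noisy field is projected onto a low-order POD basis built
# from noise-free snapshots. All fields here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_nodes, n_snapshots, K = 2000, 200, 5
x = np.linspace(0.0, 1.0, n_nodes)

# Noise-free snapshot family depending on two parameters, mean removed per snapshot
params = rng.uniform(0.5, 1.5, size=(n_snapshots, 2))
X = np.column_stack([a * np.sin(3 * x) + b * x ** 2 for a, b in params])
X = X - X.mean(axis=0)
Phi = np.linalg.svd(X, full_matrices=False)[0][:, :K]

# "Measured" field: a new member of the family plus white noise
clean = 1.1 * np.sin(3 * x) + 0.8 * x ** 2
clean = clean - clean.mean()
noisy = clean + rng.normal(0.0, 0.025 * np.max(np.abs(clean)), size=n_nodes)

filtered = Phi @ (Phi.T @ noisy)                        # projection onto the POD basis
print("max error of the noisy field   :", np.max(np.abs(noisy - clean)))
print("max error of the POD projection:", np.max(np.abs(filtered - clean)))
```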


Summary

In the present chapter we sought a reduced dimensional representation of full field displacement maps from a plate with a hole. For our problem we were interested in the variations of five parameters (four elastic constants and one plate dimension) within +/-30% bounds. We wanted to represent any field stemming from within this domain in a reduced dimensional basis of only a few vectors (less than a dozen). This was achieved by using the proper orthogonal decomposition (POD) method. We used finite element based displacement fields to calculate the POD modes required for the reduced dimensional representation. We then showed that using only the first five POD modes allows a representation of two of the field snapshots with an error of less than 0.1%. We thus achieved in our case a dimensionality reduction from 4569 to 5 without losing any significant information in the representation of the fields. Since the POD decomposition is not directly affected by the discretization used, reductions from several tens or hundreds of thousands of points to only a few POD mode coefficients are potentially possible. Finally, we illustrated the filtering capabilities of the POD projection, which is of great interest for actual experimental fields, which are usually noisy.


Figure 6-3. Displacement and strain maps for snapshot 1 (Max(U) = 0.0101 mm, Max(V) = 0.00806 mm).


Figure 6-4. First six POD modes (i = 1 to 6) for the U displacement fields.


Figure 6-5. First six POD modes (i = 1 to 6) for the V displacement fields.


Figure 6-6. First five strain equivalent POD modes (i = 1 to 5) for εx (first column), εy (second column) and γxy (third column).


Figure 6-7. Displacement truncation error in snapshot 1 using 3, 4 and 5 modes (maximum errors: 8.991×10⁻⁷ and 6.329×10⁻⁷ mm for K=3; 9.479×10⁻⁸ and 2.526×10⁻⁷ mm for K=4; 1.698×10⁻⁸ and 3.132×10⁻⁸ mm for K=5). The maximum error can be compared to the maximum of the U and V fields, Max(U) = 0.0101 mm, Max(V) = 0.00806 mm.


Figure 6-8. Displacement truncation error in snapshot 199 using 3, 4 and 5 modes (maximum errors: 1.353×10⁻⁶ and 1.552×10⁻⁶ mm for K=3; 6.929×10⁻⁷ and 1.355×10⁻⁶ mm for K=4; 1.623×10⁻⁷ and 1.257×10⁻⁷ mm for K=5). The maximum error can be compared to the maximum of the U and V fields, Max(U) = 0.0148 mm, Max(V) = 0.0105 mm.


Figure 6-9. Strain equivalent truncation error in snapshot 1 using 3, 4 and 5 modes for εx (first column), εy (second column) and γxy (third column). Maximum errors: 1.397×10⁻⁶, 6.728×10⁻⁷ and 8.579×10⁻⁷ for K=3; 1.567×10⁻⁷, 3.119×10⁻⁷ and 1.201×10⁻⁷ for K=4; 2.328×10⁻⁸, 1.206×10⁻⁷ and 4.482×10⁻⁸ for K=5.


Figure 6-10. Strain equivalent truncation error in snapshot 199 using 3, 4 and 5 modes for εx (first column), εy (second column) and γxy (third column). Maximum errors: 1.377×10⁻⁶, 1.007×10⁻⁶ and 1.097×10⁻⁶ for K=3; 1.138×10⁻⁶, 1.082×10⁻⁶ and 4.508×10⁻⁷ for K=4; 2.683×10⁻⁷, 2.589×10⁻⁷ and 1.689×10⁻⁷ for K=5.


Figure 6-11. Cross validation error (CVRMS) as a function of truncation order for the U and V fields.

Figure 6-12. Noisy U and V fields of the simulated experiment.


CHAPTER 7
BAYESIAN IDENTIFICATION OF ORTHOTROPIC ELASTIC CONSTANTS FROM FULL FIELD MEASUREMENTS ON A PLATE WITH A HOLE

Introduction

Identification of the four orthotropic elastic constants of a composite from a tensile test on a plate with a hole was carried out by several authors in the past (Lecompte et al. 2005, Silva et al. 2007, Silva 2009) using finite element model updating based on a least squares framework. One of the advantages of doing the identification from measurements on a plate with a hole is the ability to identify all four properties at the same time from a single experiment. This is possible because of the heterogeneous strain field exhibited during this test, which involves all four elastic constants, unlike tension tests on simple rectangular specimens, which exhibit uniform strain fields and usually involve only two of the elastic constants. A further advantage is that the heterogeneous fields can provide information on spatial variations of the material properties as well.

In order to capture the strain and displacement non-uniformity on the specimen used for identification, the measurements need to be obtained with high spatial resolution over a large part of the specimen. Techniques that achieve this are called full field measurement techniques. A large variety of optical full field measurement techniques exist and we can divide them into two main categories: white light techniques and interferometric techniques. The most common white light techniques are the grid method (Surrel 2005) and digital image correlation (Sutton et al. 2000). Interferometric full field techniques for displacement measurements are Moiré interferometry (Post et al. 1997) and speckle interferometry (Cloud et al. 1998). An excellent review of identification of material properties based on different full field techniques is provided by Avril et al. (2008).


The objective of the present chapter is to apply the previously developed approach to Bayesian identification to the case of identifying the four ply elastic constants from Moiré interferometry full field displacement measurements on a plate with a hole. Note that, as in the previous chapters, we seek here to identify the properties of the ply of the composite laminate. This is more challenging than identifying the homogenized orthotropic elastic constants of the plate because the ply properties affect the displacements in a more indirect way than the homogenized properties, which usually implies that the sensitivity of the displacements to the ply properties is smaller than to the homogenized properties, as has been shown in Chapter 5. The interest of identifying ply properties is that it makes it possible to obtain both the extensional and the bending stiffnesses of the laminate. However, due to the varying sensitivity of the strain and displacement fields to the different ply properties, it is of primary importance to estimate the uncertainty with which these properties are identified. The Bayesian approach that we developed in the previous chapters will allow us to do this by taking into account the physics of the problem (i.e. the different sensitivities of the fields to the different properties), measurement uncertainty, as well as uncertainty in other input parameters to the model such as the specimen geometry.

The rest of the chapter is organized as follows. First we give a quick overview of the open hole tension test and the way it was modeled. Second we give the Bayesian formulation for the present problem. We then provide details on the response surface construction. This is followed by the Bayesian identification results using a simulated experiment. In the final parts of this chapter we carry out Bayesian identification using actual experimental results on a plate with a hole. We present the Moiré interferometry measurement setup we used, followed by the results of the identification. Finally, we provide concluding remarks.


Open Hole Plate Tension Test

The experiment and numerical model used for identification are the same as the ones described in the previous chapter for the POD decomposition. A reader familiar with the previous chapter can thus skip this section.

The experiment considered for the identification is a tensile test on a composite plate with a hole. The laminate is made of graphite/epoxy with a stacking sequence of [45,-45,0]s. The plate has the dimensions given in Figure 7-1, with a ply thickness of 0.16 mm. The applied tensile force is 1200 N. The full field measurement takes place on a square area of 20 x 20 mm² centered on the hole.

Figure 7-1. Specimen geometry. The specimen material is graphite/epoxy and the stacking sequence is [45,-45,0]s. The tensile force is 1200 N.

The plate is modeled using the Abaqus finite element software with a total of 8020 S4R (general purpose, four nodes, reduced integration) elements. The center part of the mesh used is represented in Figure 6-2.

The U and V displacement fields from the finite element simulations are described by their POD coefficients α_k. These coefficients are obtained by projecting the fields onto the POD basis obtained in the previous chapter. Accordingly, a given field U is described approximately by the truncated representation Ũ as shown in Eq. 7-1, where Φ_k are the POD modes (i.e. basis vectors) calculated in the previous chapter.

U \approx \tilde{U} = \sum_{k=1}^{K} \alpha_k \Phi_k    (7-1)


The truncation order used in this chapter is K=4, which we have shown in the previous chapter to lead to a small enough truncation error on the sensitivity fields of the displacements with respect to the material properties to allow its use in the present identification problem.

Bayesian Identification Problem Formulation

The Bayesian formulation can be written as follows for the present plate with a hole problem:

f_{E}\left(E \,\middle|\, \alpha = \alpha^{measure}\right) = \frac{1}{K}\, f_{\alpha \mid E}\left(\alpha^{measure} \,\middle|\, E\right)\, f_{E}^{prior}(E)    (7-2)

where E = {E1, E2, ν12, G12} is the four-dimensional random variable of the ply elastic constants, α = {α_1^U, ..., α_4^U, α_1^V, ..., α_4^V} is the eight-dimensional random variable of the POD coefficients of the U and V fields, and α^{measure} = {α_1^{U,measure}, ..., α_4^{U,measure}, α_1^{V,measure}, ..., α_4^{V,measure}} is the vector of the eight measured POD coefficients, obtained by projecting the measured full fields onto the POD basis.

Equation 7-2 provides the joint probability density function (pdf) of the four elastic constants given the coefficients α^{measure}. This pdf is equal to a normalizing constant times the likelihood function of the elastic constants E given the coefficients α^{measure}, times the prior distribution of the elastic constants E. The prior distribution of E reflects the prior knowledge we have on the elastic constants based on the manufacturer's specifications. In our case we assumed that we have relatively vague prior knowledge by defining a joint normal prior distribution with relatively wide standard deviations (10%), as defined in Table 7-1. The prior distribution was truncated at the bounds given in Table 7-2, which were chosen in an iterative way such that


eventually the posterior pdf is approximately in the center of the bounds and their range covers approximately four standard deviations of the posterior pdf.

Table 7-1. Normal uncorrelated prior distribution of the material properties for a graphite/epoxy composite material.
Parameter            E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
Mean value           155        11.5       0.3    5
Standard deviation   15.5       1.15       0.03   0.5

Table 7-2. Truncation bounds on the prior distribution of the material properties.
Parameter                E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
Lower truncation bound   148        9          0.25   4.7
Upper truncation bound   162        12         0.35   5.3

The other term on the right hand side of Equation 7-2 is the likelihood function of the elastic constants given the POD coefficients α^{measure}. This function measures the probability of obtaining the test results for a given value of the elastic constants E, and consequently it provides an estimate of the likelihood of different E values given the test results. The uncertainty in the POD coefficients can have several causes, which are detailed next.

Sources of Uncertainty

A typical cause of uncertainty in the problem is measurement error. In the case of full field measurements we usually obtain a noisy field, which can be decomposed into a signal component and a white noise component.

Another uncertainty in the identification process is due to uncertainty in the other input parameters of the plate model, such as uncertainty in the thickness of the plate. Other sources of uncertainty, such as misalignment of the center of the hole or misalignment of the loading direction, can also be present. These could also be accounted for in the Bayesian identification by a more complex parameterization of the numerical finite element model. For simplicity, we


parameterized in the present chapter only the uncertainty in the thickness of the plate h, which was assumed to be normally distributed with a mean value of 0.96 mm (the prescribed specimen thickness) and a standard deviation of 0.005 mm (the typical accuracy of a microcaliper). Alignment uncertainty as well as other types of uncertainty can however still be considered, with somewhat decreased fidelity, through a generic uncertainty term on the POD coefficients.

The next section develops the implementation of the error model corresponding to these uncertainties. Monte Carlo simulation is then used to propagate the uncertainty effect to the POD coefficients and finally to the likelihood function.

Error Model

In the present section we describe the procedure we developed for moving from the measurement noise on the full field displacement fields to the corresponding measurement uncertainties on the POD coefficients α^{measure}. This involves understanding the way the noise is projected onto the POD basis.

We make here the assumption that the measurement noise is white Gaussian noise. In this case a measured field can be decomposed into the signal component and the white noise component as shown in Equation 7-3.

U^{measure} = U^{ms} + \epsilon, \quad \epsilon_i \sim \mathcal{N}(0, \sigma^2)    (7-3)

U^{measure} is the vector of the measured displacements and has a size of N x 1, with N being the number of points (pixels) where a measurement is available. U^{ms} is the vector of the signal part of the measured field, while ε is the vector of the noise, with each of its N components being normally distributed with a mean value of 0 and a standard deviation of σ.


We now look at what happens if we project the measured field onto the POD basis. Denoting by Φ the matrix of the Φ_k basis vectors, Φ = [Φ_1, ..., Φ_K], which has a size of N x K, the projection can be written as shown in Equation 7-4.

\alpha^{measure} = \Phi^T U^{measure} = \Phi^T U^{ms} + \Phi^T \epsilon    (7-4)

At this point we seek to characterize the noise on the POD coefficients, Φ^T ε, which results from the noise on the displacement field. Since Φ^T ε is a linear combination of Gaussian random variables, it is also Gaussian. Accordingly, we only need to calculate its mean and variance-covariance matrix, provided in Eq. 7-5 and 7-6 respectively.

E(\Phi^T \epsilon) = \Phi^T E(\epsilon) = 0    (7-5)

VCov(\Phi^T \epsilon) = E\big[(\Phi^T \epsilon)(\Phi^T \epsilon)^T\big] = \Phi^T E(\epsilon \epsilon^T) \Phi = \sigma^2 \Phi^T \Phi    (7-6)

Because the POD vectors form an orthonormal basis we have Φ^T Φ = I_K, where I_K represents the identity matrix of size K x K. This means that the variance-covariance matrix can be written simply as:

VCov(\Phi^T \epsilon) = \sigma^2 I_K    (7-7)

Equations 7-5 and 7-7 imply that the uncertainty on the POD coefficients resulting from the white noise on the displacement fields is also normally distributed and has zero mean, zero correlation and the same standard deviation as the white Gaussian noise on the displacement field.

Note that since we use a truncated POD projection of the fields, there is also truncation error involved in the modeling. Since we have shown in the previous chapter that the truncation error was smaller than 0.01% using four POD modes, it has a negligible effect compared to the other sources of uncertainty in the problem.
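The result of Equations 7-5 to 7-7 is easy to verify numerically; the short check below projects white Gaussian noise samples onto a placeholder orthonormal basis (standing in for the POD modes) and confirms that the resulting coefficient noise has approximately zero mean, the same standard deviation σ and negligible correlation.

```python
# Numerical check of Eqs. 7-5 to 7-7: white Gaussian noise on the displacement
# field maps to zero-mean, uncorrelated Gaussian noise of the same standard
# deviation on the POD coefficients, because the basis is orthonormal.
import numpy as np

rng = np.random.default_rng(5)
n_points, K, sigma, n_samples = 1000, 4, 1e-4, 5000

# Placeholder orthonormal basis standing in for the first K POD modes
Phi = np.linalg.qr(rng.normal(size=(n_points, K)))[0]

noise = rng.normal(0.0, sigma, size=(n_samples, n_points))   # white noise on the field
alpha_noise = noise @ Phi                                    # Phi^T eps for each sample

print("mean of the coefficient noise:", alpha_noise.mean(axis=0))      # ~ 0
print("std of the coefficient noise :", alpha_noise.std(axis=0))       # ~ sigma
print("largest off-diagonal corr.   :",
      np.max(np.abs(np.corrcoef(alpha_noise.T) - np.eye(K))))          # ~ 0
```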


Bayesian Numerical Procedure

As in the previous chapters, the major challenge is in constructing the likelihood function, which we again construct point by point. We evaluate it on a grid in the four-dimensional space of the material properties E = {E1, E2, ν12, G12}. We chose here a 17⁴ grid, which is determined by convergence and computational cost considerations.

At each of the grid points, E is fixed and we need to evaluate the probability density function (pdf) of the POD coefficients, f_{α|E=E_fixed}, at the point α = α^{measure}. The pdf of the POD coefficients is determined by propagating, through Monte Carlo simulation with 4000 samples, the uncertainty in the plate thickness and adding a sampled value of the normally distributed uncertainty on the POD coefficients resulting from the measurement noise, as described in the previous section. The resulting samples were found to be close to Gaussian; they were thus fitted by such a distribution by calculating their mean and variance-covariance matrix. This Gaussian nature is due to the fact that the uncertainty resulting from the measurement noise is Gaussian and the uncertainty due to the thickness is proportional to 1/h, which can in this case be well approximated by a normal distribution. The distribution f_{α|E=E_fixed} was then evaluated at the point α = α^{measure}, leading to f_{α|E=E_fixed}(α^{measure}). E_fixed is then changed to the next point on the grid and the procedure repeated, as sketched in the code below.
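A minimal sketch of a single likelihood evaluation follows. The polynomial response surface of the POD coefficients is replaced by a hypothetical placeholder function pod_coeff_surrogate, and the measured coefficients are generated synthetically, so only the propagate-fit-evaluate structure reflects the actual procedure.

```python
# Sketch of one likelihood evaluation: for fixed elastic constants, propagate the
# plate-thickness uncertainty by Monte Carlo through a surrogate of the POD
# coefficients, add the measurement-noise term, fit a Gaussian and evaluate it at
# the measured coefficients. pod_coeff_surrogate is a placeholder (assumption).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
K, n_mc, sigma_noise = 4, 4000, 1e-4

def pod_coeff_surrogate(E, h):
    """Hypothetical stand-in for the RSA alpha_k = PRS(E1, E2, nu12, G12, h)."""
    E1, E2, nu12, G12 = E
    base = np.array([-0.40, -3.4e-5, -2.2e-5, -8.4e-7])
    return base * (155.0 / E1) * (0.96 / h) * (1.0 + 0.05 * (nu12 - 0.3))

E_fixed = np.array([155.0, 11.0, 0.30, 5.0])               # one point of the 17^4 grid
alpha_measure = pod_coeff_surrogate(E_fixed, 0.96) + rng.normal(0.0, sigma_noise, K)

# Monte Carlo propagation of the thickness uncertainty plus the measurement noise
h_samples = rng.normal(0.96, 0.005, size=n_mc)              # thickness in mm
samples = np.array([pod_coeff_surrogate(E_fixed, h) for h in h_samples])
samples += rng.normal(0.0, sigma_noise, size=(n_mc, K))

# Fit a Gaussian to the propagated samples and evaluate it at the measured point
mean, cov = samples.mean(axis=0), np.cov(samples, rowvar=False)
likelihood = multivariate_normal(mean=mean, cov=cov).pdf(alpha_measure)
print("likelihood at this grid point:", likelihood)
```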


Response Surface Approximations of the POD coefficients

Even though we reduced the dimensionality of the full field using the POD decomposition, the calculation of the POD coefficients is up to now still based on finite element results. Since about 700 million evaluations are needed for the Bayesian identification procedure, finite element simulations remain prohibitive, so we will seek to construct response surface approximations (RSA) of the POD coefficients, α_k, as functions of the four elastic constants to be identified and the thickness of the plate, which has some uncertainty that we want to account for. We seek polynomial response surface (PRS) approximations of the form α_k = PRS(E1, E2, ν12, G12, h).

Note that we will not construct nondimensional RSA in the present chapter because the time that would have to be spent on going through the process did not seem to be justified by the advantages of doing so. The two main advantages of nondimensionalization are either a significantly reduced number of variables or increased accuracy due to the response having a more easily expressible form in terms of the nondimensional variables. For the plate with a hole problem the maximum reduction in the number of variables could have been only from 5 to 4. As far as accuracy goes, we found that the dimensional response surfaces already have very good accuracy, as will be shown a few paragraphs later. Accordingly, we chose to keep dimensional response surface approximations in this chapter.

Third degree polynomial response surface approximations were constructed from the same 200 samples that were used in the previous chapter to construct the POD basis. These 200 points were sampled using Latin hypercube sampling within the bounds given in Table 6-1. The error measures used to assess the quality of the RSA fits are given in Table 7-3 for the first four POD coefficients of the U fields and in Table 7-4 for those of the V fields. The second row gives the mean value of the POD coefficient across the design of experiments (DoE). The third row provides the standard deviation of the coefficients across the DoE, which gives an idea of the magnitude of variation in the coefficients. Row four provides R², the correlation coefficient obtained for the fit, while row five gives the root mean square error among the DoE points. The final row gives the cross validation PRESS error (Allen 1971, see also Appendix A), which was also used in the previous chapters.


Table 7-3. Error measures for the RSA of the U-field POD coefficients.
POD coefficient RSA          α1          α2          α3          α4
Mean value of αi             -4.04×10⁻¹  -3.40×10⁻⁵  -2.20×10⁻⁵  -8.35×10⁻⁷
Standard deviation of αi     8.19×10⁻²   6.92×10⁻⁴   2.01×10⁻⁴   2.80×10⁻⁵
R²                           0.99999     0.99993     0.99992     0.99951
RMS error                    2.77×10⁻⁴   6.32×10⁻⁶   2.01×10⁻⁶   6.75×10⁻⁷
PRESS error                  3.61×10⁻⁴   7.92×10⁻⁶   2.67×10⁻⁶   9.33×10⁻⁷

Table 7-4. Error measures for the RSA of the V-field POD coefficients.
POD coefficient RSA          α1          α2          α3          α4
Mean value of αi             -2.97×10⁻¹  -9.51×10⁻⁵  -2.14×10⁻⁵  9.76×10⁻⁷
Standard deviation of αi     5.40×10⁻²   2.26×10⁻³   3.10×10⁻⁴   1.50×10⁻⁵
R²                           0.99999     0.99992     0.99987     0.99830
RMS error                    1.69×10⁻⁴   2.26×10⁻⁵   3.88×10⁻⁶   6.89×10⁻⁷
PRESS error                  2.45×10⁻⁴   3.05×10⁻⁶   5.27×10⁻⁶   1.04×10⁻⁶

Comparing the error measures to the standard deviations of the coefficients, we considered that the RSA have good enough quality to be used in the identification process, with the approximation error being small enough to be considered negligible compared to the other sources of uncertainty.

As a summary, a flow chart of the cost and dimensionality reduction procedure used for the likelihood function construction is provided in Figure 7-2. The cost reduction achieved by the RSA is shown in green, while the dimensionality reduction achieved by the POD is shown in red. The construction of the POD basis from finite element samples was described in the previous chapter. The same finite element simulations also served for the construction of the response surface approximations in terms of the POD coefficients, as described in the present section. Finally, the experimental displacement fields are also projected onto the POD basis to obtain the terms α^{measure}.

Figure 7-2. Flow chart of the cost (in green) and dimensionality reduction (in red) procedure used for the likelihood function calculation.
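As an illustration of the surrogate step summarized above, the sketch below fits a third degree polynomial response surface to 200 Latin hypercube samples using scikit-learn; the "finite element" POD coefficient is a synthetic placeholder function, so the fit quality numbers are not those of Tables 7-3 and 7-4.

```python
# Sketch of fitting a cubic polynomial response surface alpha_1 = PRS(E1, E2,
# nu12, G12, h) to Latin hypercube samples. alpha1_placeholder stands in for the
# POD coefficient extracted from a finite element run (assumption).
import numpy as np
from scipy.stats import qmc
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

lower = np.array([126.0, 7.0, 0.189, 3.5, 0.12])    # E1, E2, nu12, G12 (GPa), t (mm)
upper = np.array([234.0, 13.0, 0.351, 6.5, 0.18])
design = qmc.scale(qmc.LatinHypercube(d=5, seed=0).random(200), lower, upper)

def alpha1_placeholder(p):
    """Synthetic stand-in for the first POD coefficient of a finite element field."""
    E1, E2, nu12, G12, t = p
    return -60.0 / (E1 * t) * (1.0 + 0.1 * nu12) + 1e-3 * E2 / G12

y = np.array([alpha1_placeholder(p) for p in design])

rsa = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
rsa.fit(design, y)
print("R^2 over the design of experiments:", rsa.score(design, y))
print("prediction at the nominal point   :",
      rsa.predict(np.array([[155.0, 11.0, 0.30, 5.0, 0.16]])))
```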


Bayesian Identification on a Simulated Experiment

In order to test our procedure we first carried out the Bayesian identification on a simulated experiment. The U and V fields of the simulated experiment were obtained by running the Abaqus finite element model of the plate with the parameters given in Table 7-5.

Coming from a finite element simulation, the fields are relatively smooth, and thus not representative of true experimental fields, which are usually noisier. We therefore added white noise on top of the finite element fields with a standard deviation of 2.5% of the mean value of the field. For a graphical representation of the noisy fields of the simulated experiment see Figure 6-12. The results of the identification are provided in Tables 7-6 and 7-7.

Table 7-5. Material properties used for the simulated experiment.
Parameter   E1 (GPa)   E2 (GPa)   ν12   G12 (GPa)
Value       155        11         0.3   5

Table 7-6. Mean values and coefficients of variation of the identified posterior distribution for the simulated experiment.
Parameter    E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
Mean value   155        10.7       0.29   5.01
COV (%)      2.23       5.67       9.33   3.02


Table 7-7. Correlation matrix (symmetric) of the identified posterior distribution for the simulated experiment.
        E1     E2      ν12     G12
E1      1      0.43    -0.18   0.60
E2             1       -0.22   -0.07
ν12                    1       0.62
G12                            1

The mean values of the identified distribution were found to be very close to the true values used in the Abaqus simulation, which is reassuring. As far as the coefficients of variation are concerned, there are, as for the vibration based identification, significant differences between the parameters. While the longitudinal Young's modulus E1 of the ply is identified most accurately, the Poisson's ratio ν12 of the ply is again identified with the highest uncertainty. Unlike for the vibration problem, E2 is identified here with a higher uncertainty than G12. This is due to the stacking sequence [45,-45,0]s, which does not have a 90° ply, thus making it more difficult to identify E2 from this traction test in the 1-direction.

As for the vibration based identification, the uncertainty with which ν12 is identified is relatively high, much higher than what would be obtained from a tensile test on a unidirectional laminate. The high uncertainty in this case results from the fact that we identify the ply properties of a laminate that is not unidirectional, which, as mentioned on previous occasions, is a more challenging problem than identifying the properties of a unidirectional laminate or the homogenized properties of a more complex laminate. As we have shown in the last section of Chapter 5 for the vibration problem, identifying the homogenized properties of the laminate results in a significantly reduced uncertainty in the Poisson's ratio of the plate, ν_xy.
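The summary statistics reported in Tables 7-6 and 7-7 (mean, coefficient of variation, correlation matrix) can be extracted from a gridded posterior as sketched below; the posterior used here is a synthetic, correlated placeholder rather than the actual likelihood-times-prior product, so only the bookkeeping is illustrative.

```python
# Sketch of extracting mean, COV and correlation matrix from an unnormalized
# posterior evaluated on a grid of the four elastic constants. The posterior here
# is a synthetic placeholder (a correlated Gaussian bump), not the identified one.
import numpy as np

axes = [np.linspace(140.0, 170.0, 9),     # E1 (GPa)   (coarse grid instead of 17^4)
        np.linspace(9.0, 13.0, 9),        # E2 (GPa)
        np.linspace(0.25, 0.35, 9),       # nu12
        np.linspace(4.5, 5.5, 9)]         # G12 (GPa)
grids = np.meshgrid(*axes, indexing="ij")

center = np.array([155.0, 11.0, 0.30, 5.0])
scale = np.array([3.5, 0.6, 0.028, 0.15])
quad = sum(((g - c) / s) ** 2 for g, c, s in zip(grids, center, scale))
cross = ((grids[0] - center[0]) / scale[0]) * ((grids[3] - center[3]) / scale[3])
posterior = np.exp(-0.5 * (quad + cross))          # placeholder unnormalized posterior

weights = (posterior / posterior.sum()).ravel()
theta = np.stack([g.ravel() for g in grids], axis=1)

mean = weights @ theta
cov = (theta - mean).T @ (weights[:, None] * (theta - mean))
cov_percent = 100.0 * np.sqrt(np.diag(cov)) / mean
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))

print("posterior mean     :", mean)
print("COV (%)            :", np.round(cov_percent, 2))
print("correlation matrix :\n", np.round(corr, 2))
```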


Bayesian Identification on a Moiré Full Field Experiment

To apply the Bayesian identification to actual experimental data we carried out full field measurements on an open hole tension test using the Moiré interferometry technique. This measurement technique was chosen because of its good resolution, its insensitivity to out-of-plane displacements, its ability to directly compensate for rigid body rotations and its very high spatial resolution. Furthermore, significant first-hand experience was available at the University of Florida through the Experimental Stress Analysis Laboratory of Dr. Peter Ifju, which facilitated the setup of the experiment. For details on the Moiré interferometry technique and the associated theory refer to Appendix D.

Full Field Experiment

Initially, specimens according to the specifications provided at the beginning of this chapter were planned to be fabricated. Due to various manufacturing constraints and tolerances however, the final specimen used for the measurements had slight differences in some of the dimensions: the width of the specimen was 24.3 mm, the hole radius was 2.05 mm and the total laminate thickness was 0.78 mm. The plate was made out of a Toray T800/3631 graphite/epoxy prepreg. The manufacturer's specifications for this material are given in Table 7-8, together with the properties obtained by Noh (2004). Noh obtained the material properties based on a four point bending test at the University of Florida on a composite made from the exact same prepreg roll that we used.

Table 7-8. Manufacturer's specifications and properties found by Noh (2004) based on a four point bending test.
Parameter                       E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
Manufacturer's specifications   162        7.58       0.34   4.41
Noh (2004) values               144        7.99       0.34   7.78


A picture of our specimen with the diffraction grating (1200 lines/mm) is shown in Figure 7-3. The specimen was loaded at 700 N for the measurements. The experiment was carried out together with Weiqi Yin from the Experimental Stress Analysis Laboratory. An ESM Technologies PEMI II 2020-X Moiré interferometer and a Pulnix TM-1040 digital camera were utilized. The traction machine was an MTI-30K. Rotations of the grips holding the specimen were allowed by using a lubricated ball bearing for the bottom grip and two lubricated shafts for the top grip. This reduced parasitic bending during the tension test. A picture of the experimental setup is given in Figure 7-4. The fringe patterns observed for a force of 700 N are shown in Figure 7-5. The two smaller holes and other parasitic lines are due to imperfections in the diffraction grating.

The phase shifting method (see Appendix D) was used to extract the displacement fields from the fringe patterns. A Matlab automated phase extraction tool developed by Yin (2008) was utilized and the corresponding displacement maps are provided in Figure 7-6. The two displacement maps will be used in the Bayesian identification procedure.

At this point it is worth noting that the identification procedure will use the POD projection of these maps, which will filter out some information present in the initial fields. This can have both positive and negative effects. An obvious negative effect is that the identification procedure will not be able to account for any information that was filtered out and that might have been useful to the identification or to the propagation of uncertainties.

Figure 7-3. Specimen with the Moiré diffraction grating (1200 lines/mm).


Figure 7-4. Experimental setup for the open hole tension test.

Figure 7-5. Fringe patterns in: A) the U direction; B) the V direction, for a force of 700 N.


On the other hand, if the information filtered out is mainly related to the analysis tools used (e.g. the phase extraction algorithm), it can be useful to leave out these artifacts since they do not have a physical meaning in relation to the material properties. An investigation of the errors left out is presented in Appendix E, where we found that, while the difference between the experimental fields and their POD projection is not necessarily negligible, a significant part of it seems to be related to the phase extraction algorithm, thus not having a physical meaning directly related to the experiment.

Figure 7-6. Displacement fields obtained from the fringe patterns (no filtering was used at all): A) the U direction; B) the V direction.

Bayesian Identification on Moiré Full Field Displacements

The dimensions of the specimen for the Moiré experiment were slightly different from those used to construct the response surface approximations (RSA) in the previous sections. Accordingly, new RSA of the POD coefficients were constructed for the dimensions of the Moiré specimen. A similar quality of fit was obtained, which was again considered good enough to use the RSA in the Bayesian procedure.


The experimental fields obtained in the previous section were projected onto the POD basis, providing the coefficients α_i^{measure}. The same Bayesian procedure was used as for the simulated experiment. However, an additional source of uncertainty had to be accounted for, representing various sources of modeling errors. This is because our Abaqus model does not perfectly model the actual experiment, and additional possible error sources are misalignment of the grips or of the traction force, or variation in the position of the hole. These potential errors are modeled as an additional normally distributed uncertainty on the POD coefficients. Zero mean is assumed and the magnitude of the uncertainty (standard deviation) is determined iteratively in an empirical way. A quick Bayesian identification is carried out using a high standard deviation, which is then reduced to the value of the difference between the experimental POD coefficients and the mean values of the identified POD coefficients (the noise uncertainty component on the POD coefficients is also subtracted). The difference was found to be of the order of 0.4% of the POD coefficients.

The prior distribution is assumed to have the parameters given in Table 7-9, based on the Toray prepreg specification (see Table 7-8). The prior distribution was truncated at the bounds given in Table 7-10, which were chosen in an iterative way such that eventually the posterior pdf is approximately in the center of the bounds and their range covers approximately four standard deviations of the posterior pdf. The results of the identification are provided in Tables 7-11 and 7-12. We will also provide later the results using a prior based on Noh's values. Note that we assumed a relatively wide prior with a 10% COV due to the relatively large differences between the manufacturer's specifications and Noh's values.


Table 7-9. Normal uncorrelated prior distribution of the material properties. Mean values are based on the Toray material specifications.
Parameter            E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
Mean value           162        7.58       0.34   4.41
Standard deviation   16         0.75       0.03   0.5

Table 7-10. Truncation bounds on the prior distribution of the material properties.
Parameter                E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
Lower truncation bound   126        6          0.26   4.25
Upper truncation bound   151        9.5        0.36   5.75

Table 7-11. Mean values and coefficients of variation of the identified posterior distribution from the Moiré interferometry experiment.
Parameter    E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
Mean value   138        7.48       0.33   5.02
COV (%)      3.12       9.46       10.3   4.29

Table 7-12. Correlation matrix (symmetric) of the identified posterior distribution from the Moiré interferometry experiment.
        E1     E2       ν12      G12
E1      1      0.020    -0.045   0.52
E2             1        -0.005   -0.17
ν12                     1        0.24
G12                              1

Overall, the mean values of the identified distribution are in agreement with the manufacturer's specifications. The largest difference is in the longitudinal Young's modulus, which could seem somewhat surprising. However, Noh (2004) found a similar value on the exact same prepreg roll that we used. The mean values of E2, ν12 and G12 are close to the specification values. G12 is however far from Noh's value, but it should be noted that the four point bending test is relatively poor for identifying G12.

In terms of uncertainty, we find the same trend as with the simulated experiment, namely E2 and ν12 being identified with the largest uncertainties. It seems however that the added


uncertainty accounting for misalignment does not affect the uncertainties of the four properties in the same way, E2 being the most affected.

In terms of correlation between the material properties we find that the same general trend is observed as for the simulated experiment. However, the current correlations are somewhat lower in absolute value, which is due to the added normal uncorrelated uncertainty on the POD coefficients compared to the simulated experiment.

In Table 7-13 we also provide the mean values and COV of the posterior distribution that was identified using a prior centered on the properties found by Noh (2004), see Table 7-8, again with a 10% standard deviation and the same truncation bounds (see Table 7-10). We can see that the identified parameters change very little. This is because the properties for which there is a big difference in the prior mean values can be determined quite accurately from this experiment, meaning that the prior has relatively little effect on them.

Table 7-13. Mean values and coefficients of variation of the identified posterior distribution from the Moiré interferometry experiment using a prior based on Noh's (2004) values.
Parameter    E1 (GPa)   E2 (GPa)   ν12    G12 (GPa)
Mean value   139        7.69       0.33   5.22
COV (%)      3.05       9.31       10.3   4.10

It is worth noting however that the prior distribution had a significant effect on the uncertainty with which E2 and ν12 were identified. If, for example, we had only a uniform prior (with bounds of +/-20% around the mean value of the likelihood function) instead of the Gaussian one, then ν12 would have been identified with a 14.5% COV instead of 10.3%. This is again due to the relative insensitivity of the fields to ν12. So while the prior had a large effect on the uncertainty of some of the properties, we did not assume a particularly narrow prior. Based on tests on unidirectional laminates, prior estimates of E2 and ν12 with a standard deviation of about 5% or less can probably be obtained. The applicability of priors obtained from tests on


unidirectional laminates to the identification of ply properties from more complex laminates needs however to be investigated, since differences in the manufacturing of the two kinds of laminates might increase the uncertainty.

As a final remark, it might seem surprising that the property that is identified with the lowest uncertainty (E1) is also the one which is the farthest away from the manufacturer's specifications. However, it is worth recalling that the identification does not account for inter-specimen variability of the material properties. The identified distribution of the properties is specific to the specimen and the experiment. Thus, if the specimen deviates somewhat from the manufacturer's specification, it is not contradictory that a property identified far away from the specifications can still be the property identified with the lowest uncertainty. The inter-specimen variability would then have to be estimated by repeating the test on multiple specimens.

Summary

In the present chapter we implemented the Bayesian identification for full field displacements on a plate with a hole. This required the combined use of response surface methodology and dimensionality reduction based on proper orthogonal decomposition to make the approach computationally tractable.

A test case on a simulated experiment showed that the proposed approach can accurately identify the orthotropic elastic constants of the ply from noisy displacement fields. A real test case based on Moiré interferometry measurements was then used. This allowed, as for the vibration based identification, the uncertainties with which the material properties can be identified from the given experiment to be quantified. At this point it is worth remembering that the uncertainty with which the properties are identified is specific to the specimen and the experiment; this can be easily verified by noting that the uncertainties identified from the


displacement fields are different from those identified from the natural frequencies in the earlier chapters.


CHAPTER 8
SUMMARY AND FUTURE WORK

Summary

Estimating the uncertainties with which material properties are identified can be of great interest in the context of reliability analysis. Bayesian identification can estimate these by providing a probability density function for the identified properties. The approach can account for measurement and modeling uncertainty as well as the intrinsic equations of the problem when determining the uncertainty (variance) and correlation (covariance) of the identified properties.

On simple analytical test problems we have identified situations where the statistical nature of the Bayesian approach allows it to systematically lead to more accurate results compared to a deterministic identification approach, the traditional least squares method. Such situations arise when the responses have different sensitivities to the parameters to be identified or when the responses have different uncertainties or are correlated. The impact of these effects on the identified properties is problem dependent but can reach significant levels, as we have shown on the three bar truss test problem.

A major issue in using the Bayesian identification to its full capabilities is however computational time. To be applicable in the general case, the Bayesian method requires Monte Carlo simulation, which substantially increases computational cost. While on the previous test problems the analytical nature of the response allowed reasonable computation times, this would no longer be the case if the Bayesian method were applied to more complex identifications that require numerical models for the response. To deal with the computational cost issue we chose to apply the response surface methodology to construct a surrogate model of the expensive numerical simulator. In order to obtain improved accuracy and reduced construction cost we developed a procedure that combines nondimensionalization and global sensitivity analysis for


the response surface approximation (RSA) construction. This procedure was first introduced and illustrated on a material selection problem for an integrated thermal protection system.

We then applied the procedure to construct nondimensional response surface approximations of the natural frequencies of free composite plates. These were to be used for the Bayesian identification of orthotropic elastic constants from plate vibration experimental results. We tested the accuracy of the approximations and showed that they had excellent fidelity. We also compared the use of these RSA and of analytical approximate vibration solutions in identification. We showed that only the high fidelity response surface approximations have a high enough accuracy to correctly identify the elastic constants for the vibration problem we considered.

To close the chapters on vibration based identification, we applied the Bayesian method employing the developed frequency surrogates to the identification of orthotropic elastic constants using actual vibration measurements. We developed an error model accounting for measurement uncertainty, modeling error and uncertainty in other model input parameters. The identified probability density function for the elastic constants showed that the different properties are identified with different confidence and that some are strongly correlated.

In the last two chapters we considered the identification of elastic constants based on full field displacement measurements. This problem introduced an additional challenge due to the very high number of measurements (each pixel of the full field images representing a measurement point). We addressed this issue by reducing the dimensionality of the fields through modal reduction. Proper orthogonal decomposition was used to find the optimal modes based on a given number of samples, and we have shown that only four modes for each of the two displacement components are sufficient to accurately represent the fields and carry out the


identification. It is worth noting that this approach allowed a dimensionality reduction from several thousands to only four parameters.

Finally, using the proper orthogonal decomposition in combination with response surface methodology, we applied the Bayesian identification approach to identifying orthotropic elastic constants from full field displacement measurements on a plate with a hole under tension. The approach was first validated on a simulated experiment. An actual Moiré interferometry experiment was then carried out, which provided the displacement full field measurements. The Bayesian identification was applied to these, providing the uncertainty structure with which the elastic constants are identified for this particular experiment.

Future Work

In this section we provide a few directions that we think would be worth pursuing, either in order to further enhance the Bayesian identification method or in order to make the best use of it.

In the present work we applied techniques such as response surface methodology and modal reduction in order to decrease the cost of Bayesian identification in cases that are more general than the usually considered Gaussian uncertainty structure. In spite of the drastic cost reduction achieved with these techniques, the approach remains on the verge of reasonable computational time. Since our technique is based on Monte Carlo simulation, we think that exploiting the independence of the different sources of uncertainty could further decrease computational time by several orders of magnitude. As mentioned in the Bayesian procedure section of Chapter 5, this involves using the expected improvement for multivariate distributions in order to sample the uncertainties separately.

A further interesting future work direction is the possible combination of several experiments and experimental techniques for identifying the material properties. In the absence


of statistical information it is relatively difficult to combine identified properties coming from different experiments, because there is no simple way to determine what confidence to assign to each experiment when combining them. Since the Bayesian approach identifies a probability density function, it includes a measure of the confidence in the properties. This makes combining properties from multiple experiments or experimental techniques straightforward, as long as the various sources of uncertainty in each can be accurately assessed. The potential of combining multiple experiments on the same specimen resides in narrowing the uncertainties for all four orthotropic elastic constants, since from one single experiment some properties can only be identified with relatively poor confidence. A first attempt in this direction is already planned by carrying out the Moiré experiment again at the laboratory in Saint Etienne. The use of digital image correlation on the reverse side of the specimen is also being planned.

Another point worth further investigation would be determining the best way to use all available uncertainty information from tests in reliability based design. The uncertainty identified through Bayesian identification is specimen specific, and thus represents only one part of the total uncertainty affecting design. The other part is inter-specimen variability, which would have to be estimated from multiple tests. Since this variability is usually estimated from a low number of tests, the confidence in the statistical estimates will play a large role in the total uncertainty that would then be used in reliability based design. Initial work on combining the various uncertainties in a reliability based design framework has been started by Smarslok (2009).
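To illustrate the benefit, for the combination of experiments discussed above, of carrying a full probability density rather than a point estimate, the following minimal sketch combines two hypothetical, approximately Gaussian posteriors for the same property: the combined estimate is simply the precision-weighted average, so the experiment identified with higher confidence automatically dominates. The property, the means and the standard deviations below are illustrative assumptions only.

```python
import numpy as np

# Hypothetical Gaussian posteriors for the same property (e.g., a shear
# modulus in GPa) identified from two independent experiments.
mu1, sigma1 = 4.8, 0.40   # experiment 1: identified with poorer confidence
mu2, sigma2 = 5.1, 0.15   # experiment 2: identified with better confidence

# For independent Gaussian posteriors (with a flat prior), the combined
# posterior is Gaussian with precision equal to the sum of the precisions.
w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
mu_comb = (w1 * mu1 + w2 * mu2) / (w1 + w2)
sigma_comb = np.sqrt(1.0 / (w1 + w2))

print(f"combined estimate: {mu_comb:.2f} +/- {sigma_comb:.2f} GPa")
# The tighter posterior dominates the combination, which is exactly the
# weighting that is unavailable when only point estimates are reported.
```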


APPENDIX A
POLYNOMIAL RESPONSE SURFACE APPROXIMATIONS

Background

Response surface approximations, also known as surrogate models, are widely accepted as an extremely useful tool for approximating the response of computationally expensive problems. A polynomial response surface approximation is the simplest form of surrogate model. Yet in spite of its relative simplicity it has been successfully applied to engineering problems in numerous fields. Kaufman et al. (1996), Balabanov et al. (1996, 1999), Papila and Haftka (1999, 2000), and Hosder et al. (2001) constructed polynomial response surface (PRS) approximations for multidisciplinary design optimization studies of high speed civil transport aircraft. Hill and Olson (2004) applied PRS to noise models for the conceptual design of transport aircraft. Rikards et al. (2001, 2003, 2004) and Ragauskas and Skukis (2007) applied PRS to the identification of elastic constants from natural frequency measurements.

Polynomial response surface approximation is a type of surrogate modeling employing polynomial functions of the variables to approximate the desired response. The aim is to fit a polynomial function to a limited number of simulations of the response. The basic idea is illustrated in Figure A-1 for a simple one-dimensional case. The circles are the available simulations yi of the response y at a given number of design points x(i). A polynomial function in x (a 2nd degree polynomial here) is fitted through these points using classical regression techniques.


Figure A-1. One dimensional polynomial response surface approximation

Polynomial Response Surface Approximation Modeling

General Surrogate Modeling Framework

The general steps in the construction of a polynomial response surface, as for all surrogate models, are shown in the flowchart of Figure A-2: design of experiments (DoE), numerical simulations at the points of the DoE, construction of the polynomial response surface, and response surface validation.

Figure A-2. Steps in surrogate modelling
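The four steps of Figure A-2 can be made concrete with a minimal one-dimensional sketch in the spirit of Figure A-1. The test function, the sample size and the fitting tool (numpy's polyfit) are illustrative assumptions, not the simulators used in this work; the Latin hypercube design used in step 1 is described in the next subsection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 -- design of experiments: a one-variable Latin hypercube design,
# one sample drawn in each of Ns equal-probability sections of [0, 1].
Ns = 6
x = (np.arange(Ns) + rng.uniform(size=Ns)) / Ns

# Step 2 -- "numerical simulations" at the DoE points; an inexpensive
# analytical stand-in is used here purely for illustration.
def simulator(x):
    return np.sin(2.5 * x) + 0.3 * x**2

y = simulator(x)

# Step 3 -- construction of the response surface: a 2nd degree polynomial
# fitted by least squares regression, as in Figure A-1.
coeffs = np.polyfit(x, y, deg=2)

# Step 4 -- validation: compare surrogate predictions with the true response
# at independent test points (in practice, extra simulations or cross-validation).
x_test = np.linspace(0.0, 1.0, 101)
rms = np.sqrt(np.mean((np.polyval(coeffs, x_test) - simulator(x_test)) ** 2))
print("polynomial coefficients (x^2, x, 1):", coeffs)
print("RMS validation error:", rms)
```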


Design of Experiments (DoE)

The first step is the design of experiments. This consists of sampling points in the original design space, points at which the numerical simulations will be carried out. Well known design of experiments strategies include factorial designs, central composite designs, Box-Behnken designs, variance optimal designs, orthogonal arrays and latin hypercube sampling. For a detailed overview of these techniques the reader can refer to Myers and Montgomery (2002). In the present work we will mostly use latin hypercube sampling. The basic idea consists in obtaining Ns sample points by dividing the range of each input variable into Ns sections of equal marginal probability 1/Ns and sampling once from each section.

Figure A-3. Latin hypercube sampling with sampling size Ns = 6.

Numerical Simulation

The next step in surrogate modeling consists in running the numerical simulations at the sampled points of the design of experiments. This step is straightforward and usually involves automating the running of the simulations.

Polynomial Response Surface Construction

The third step is the construction of the polynomial response surface approximation using regression techniques. Let us denote y the function of interest that we want to approximate based


on yi samples of its response. We seek to express y as a function of Nβ monomial basis functions fj. In order to determine the coefficients of the approximation in this monomial basis (i.e., determine the coefficients of the polynomial) we can write for each observation i the linear Equation A-1:

$$ y^{(i)} = \sum_{j=1}^{N_\beta} \beta_j f_j\!\left(\mathbf{x}^{(i)}\right) + \varepsilon_i, \qquad E(\varepsilon_i) = 0, \quad V(\varepsilon_i) = \sigma^2, \qquad \text{(A-1)} $$

where the errors εi are considered independent with expected value equal to zero and variance σ². The vector β represents the quantitative relation between the basis functions and the response, and f(x) is the vector of the monomial basis functions. The set of equations specified in Equation A-1 can be expressed in matrix form as

$$ \mathbf{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon}, \qquad E(\boldsymbol{\varepsilon}) = 0, \quad V(\boldsymbol{\varepsilon}) = \sigma^2 I, $$

where X is an Ns × Nβ matrix of the basis functions taken at the values of the sampled design variable points from the DoE. This matrix is also known as the Gramian design matrix. A Gramian design matrix for a full quadratic polynomial in two variables (Nv = 2; Nβ = 6) is shown in Equation A-2.

$$ X = \begin{bmatrix} 1 & x_1^{(1)} & x_2^{(1)} & \left(x_1^{(1)}\right)^2 & x_1^{(1)} x_2^{(1)} & \left(x_2^{(1)}\right)^2 \\ 1 & x_1^{(2)} & x_2^{(2)} & \left(x_1^{(2)}\right)^2 & x_1^{(2)} x_2^{(2)} & \left(x_2^{(2)}\right)^2 \\ \vdots & & & & & \vdots \\ 1 & x_1^{(N_s)} & x_2^{(N_s)} & \left(x_1^{(N_s)}\right)^2 & x_1^{(N_s)} x_2^{(N_s)} & \left(x_2^{(N_s)}\right)^2 \end{bmatrix} \qquad \text{(A-2)} $$

The polynomial response surface approximation of the observed response y(x) can then be expressed as


$$ \hat{y}(\mathbf{x}) = \sum_{j=1}^{N_\beta} b_j f_j(\mathbf{x}), $$

where bj is the estimated value of the coefficient associated with the jth basis function fj(x) (i.e., the jth polynomial coefficient). The error in approximation at a design point x is then given as e(x) = y(x) − ŷ(x). The coefficient vector b can be obtained by minimizing a loss function L defined as

$$ L = \sum_{i=1}^{N_s} e_i^{\,p}, $$

where ei is the error at the ith data point, p is the order of the loss function, and Ns is the number of sampled design points. A quadratic loss function (p = 2) is usually used, thus leading to least squares minimization. Such a loss function, which minimizes the variance of the error in approximation, has the advantage that the coefficient vector b can be estimated using an analytical expression, as shown in Equation A-3:

$$ \mathbf{b} = \left( X^T X \right)^{-1} X^T \mathbf{y}. \qquad \text{(A-3)} $$

The estimated polynomial coefficients b (Equation A-3) are unbiased estimates of β that minimize variance. For additional details on regression the reader can refer to Draper and Smith (1998, Section 5-1).

Response Surface Verification

Once the polynomial response surface approximation is constructed, several tools exist for assessing its fidelity to the original response. The regression analysis directly provides a certain number of basic tools, such as the correlation coefficient R² and the root mean square (RMS) error, to assess the quality of the fit. Another frequently used index of the quality of the fit has been proposed by Allen (1971): the predicted sum of squares (PRESS) error. The PRESS error is the


root mean square of the errors obtained by fitting a polynomial to each combination of Ns − 1 samples and calculating the prediction error at the point that was left out.
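The regression and verification steps above can be condensed into a short sketch: it builds the Gramian design matrix of Equation A-2, estimates the coefficients with Equation A-3 and evaluates the PRESS error by leave-one-out refitting. The two-variable test function and the random design are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Design of experiments: Ns points in two variables (plain random sampling
# in [0, 1]^2 here, for brevity).
Ns = 12
x1, x2 = rng.uniform(size=(2, Ns))

# "Simulations" of the response at the design points (illustrative function).
y = 1.0 + 2.0 * x1 - x2 + 0.5 * x1**2 + x1 * x2 + 0.1 * rng.normal(size=Ns)

# Gramian design matrix for a full quadratic polynomial in two variables
# (Equation A-2): columns 1, x1, x2, x1^2, x1*x2, x2^2.
X = np.column_stack([np.ones(Ns), x1, x2, x1**2, x1 * x2, x2**2])

# Least squares estimate of the coefficients (Equation A-3).
b = np.linalg.solve(X.T @ X, X.T @ y)

# PRESS error: refit with each point left out in turn and accumulate the
# prediction error at the left-out point.
press = []
for i in range(Ns):
    keep = np.delete(np.arange(Ns), i)
    b_i = np.linalg.solve(X[keep].T @ X[keep], X[keep].T @ y[keep])
    press.append(y[i] - X[i] @ b_i)
press_rms = np.sqrt(np.mean(np.square(press)))

print("coefficients b:", np.round(b, 3))
print("PRESS (RMS) error:", press_rms)
```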


APPENDIX B
GLOBAL SENSITIVITY ANALYSIS

Global sensitivity analysis (GSA) was initially formalized by Sobol (1993). The method is a variance based technique used to estimate the effect of different variables on the total variability of the function. Global sensitivity analysis is usually used to:
- assess the importance of the variables
- fix non-essential variables (which do not affect the variability of the function), thus reducing the problem dimensionality

Applications of GSA were presented by Homma and Saltelli (1996) (analytical functions and study of a chemical kinetics model), Saltelli et al. (1999) (analytical functions), Vaidyanathan et al. (2004b) (liquid rocket injector shape design), Jin et al. (2004) (piston shape design), Jacques et al. (2004) (flow parameters in a nuclear reactor), and Mack et al. (2005a) (bluff body shape optimization).

We provide next the theoretical formulation of GSA. Let us assume a square-integrable function f(x) of a vector of independent, uniformly distributed random input variables x in the domain [0, 1]. The function can be decomposed as the sum of functions of increasing dimensionality as

$$ f(\mathbf{x}) = f_0 + \sum_i f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j) + \dots + f_{1,2,\dots,N_v}(x_1, x_2, \dots, x_{N_v}), \qquad \text{(B-1)} $$

where $f_0 = \int f \, d\mathbf{x}$. If the condition $\int_0^1 f_{i_1 \dots i_s} \, dx_k = 0$ is imposed for $k = i_1, \dots, i_s$, then the decomposition described in Equation B-1 is unique. In the context of global sensitivity analysis, the total variance denoted as V(f) can be expressed as shown in Equation B-2:

$$ V(f) = \sum_{i=1}^{N_v} V_i + \sum_{i<j} V_{ij} + \dots + V_{1,\dots,N_v}, \qquad \text{(B-2)} $$


where $V(f) = E\!\left( (f - f_0)^2 \right)$ and each of the terms in Equation B-2 represents the partial contribution, or partial variance, of the independent variables (Vi) or sets of variables to the total variance, and provides an indication of their relative importance. The partial variances can be calculated using the following expressions:

$$ V_i = V\!\left( E[f \mid x_i] \right), $$
$$ V_{ij} = V\!\left( E[f \mid x_i, x_j] \right) - V_i - V_j, $$
$$ V_{ijk} = V\!\left( E[f \mid x_i, x_j, x_k] \right) - V_{ij} - V_{ik} - V_{jk} - V_i - V_j - V_k, $$

and so on, where V and E denote variance and expected value respectively. Note that the expected values and variances with respect to xi are computed by

$$ E[f \mid x_i] = \int_0^1 f \, d\mathbf{x}_{\sim i} \qquad \text{and} \qquad V\!\left( E[f \mid x_i] \right) = \int_0^1 f_i^2 \, dx_i, $$

where $\mathbf{x}_{\sim i}$ denotes all the variables except xi. This formulation facilitates the computation of the sensitivity indices corresponding to the independent variables and sets of variables. For example, the first and second order sensitivity indices can be computed as

$$ S_i = \frac{V_i}{V(f)}, \qquad S_{ij} = \frac{V_{ij}}{V(f)}. $$

Under the independent model inputs assumption, the sum of all the sensitivity indices is equal to one. The first order sensitivity index for a given variable represents the main effect of the variable, but it does not take into account the effect of the interactions of the variable. The total contribution of a variable to the total variance is given as the sum of all the interactions and the main effect of the variable. The total sensitivity index of a variable is then defined as


214 Note that the above referenced expressions ca n be easily evaluated us ing surrogate models of the objective functions. Sobol (1993) has prop osed a variance-based non-parametric approach to estimate the global sensitivity for any comb ination of design variables using Monte Carlo methods. To calculate th e total sensitivity of any design variable xi, the design variable set is divided into two complementary subsets of xi and Z ,1,;jv Z xjNji The purpose of using these subsets is to isolate the influence of xi from the influence of the remaining design variables included in Z The total sensitivity index for xi is then defined as ()totaltotal i iV S Vf Z total iiiVVV where iV is the partial variance of the objective with respect to xi, and Z iV is the measure of the objective variance that is depe ndent on interactions between xi and Z Similarly, the partial variance for Z can be defined as Vz. Therefore the total objective variability can be written as Z ZiiVVVV While Sobol had used Monte Carlo simulations to conduct the global sensitivity analysis, the expressions given above can be easily computed analytically if f (x) can be represented in a closed form (e.g., polynomial response surface approximation).


APPENDIX C
PHYSICAL INTERPRETATION OF THE BAYESIAN IDENTIFICATION RESULTS WITH EITHER MEASUREMENT OR INPUT PARAMETERS UNCERTAINTIES

In the present appendix we provide an interpretation of the results obtained in the last section of Chapter 3, which led to Figure 3-9. For convenience we provide this figure again, denoted Figure C-1.

Figure C-1. Identified posterior pdf for: A) only Gaussian measurement noise (case i); B) only model input parameters uncertainty (case ii); C) both Gaussian measurement noise and model input parameters uncertainty (case iii).

Figure C-1 shows that the identified pdfs differ substantially when only measurement noise is assumed and when only uncertainty on model input parameters is assumed. The third case,


where both uncertainties are assumed, is as expected a mix of the two previous ones. It is important to note that the pdf obtained with the combination of the two uncertainties differs substantially from the one with only Gaussian measurement noise, thus illustrating the interest of considering uncertainty in model input parameters as well.

Since these are important results, we provide next a basic physical interpretation based on a simplified model to confirm our findings. First we want to interpret the results of Figure C-1-B. For this purpose we rewrite the frequency formula of Equation 3-23 knowing that Ex = Ey and assuming that νxy ≈ 0 (in reality νxy = 0.05). Note that these assumptions are made only to facilitate the interpretation and not to obtain any results. Under these assumptions Equation 3-23 can be written as follows:

$$ f_{kl}^2 = \frac{\pi^2 h^2}{48\, \rho\, a^4} \left[ E_x \left( k^4 + l^4 \left(\tfrac{a}{b}\right)^{4} \right) + 4\, G_{xy}\, k^2 l^2 \left(\tfrac{a}{b}\right)^{2} \right] \qquad \text{(C-1)} $$

For the interpretation, let us assume first that there is only uncertainty in ρ and h, which appear factorized in front of all the rest in Equation C-1. In this case uncertainty on ρ and h will have the exact same effect on all the frequencies, meaning that it will lead to perfect positive correlation when Ex and Gxy are identified. Now we also have uncertainty in the aspect ratio a/b, which leads to variable uncertainty depending on the frequency. It turns out that when propagating this uncertainty on the aspect ratio, the 7th natural frequency (k = 3, l = 1) has an uncertainty almost an order of magnitude lower than that on the other frequencies. This means that when going back to Ex and Gxy, they will be identified with low uncertainty since the Bayesian approach mainly considers the measurement with the lowest uncertainty. When combining all the uncertainties on the input parameters it turns out that the uncertainties on the frequencies due to the aspect ratio are smaller than those due to the other parameters, which means


that the major factor of uncertainty is the one coming from the factor in front of the brackets in Equation C-1. As mentioned before, this leads to perfect positive correlation between Ex and Gxy, as well as to the same relative uncertainties on them. This is approximately what we see in Figure C-1-B (N.B. perfect correlation means that the axes of the ellipse are at a 45° angle from the component axes).

We now move to the interpretation of Figure C-1-A, where we have only Gaussian measurement noise on the frequencies. Since we assume that the noise is decorrelated, it is not as easy to interpret the results by looking at all nine frequencies at a time (through Equation C-1). Instead, we isolated a small segment and considered that we have only two noisy frequency measurements (f1 and f3) from which we want to determine Ex and Gxy. Using again the simplified Equation C-1, the problem boils down to solving a linear system of two equations (in f1 and f3) with two unknowns (Ex and Gxy). The uncertainty on f1 and f3 was thus easily propagated to Ex and Gxy, and we found that Ex has about 9 times smaller uncertainty than Gxy and that they are highly negatively correlated. This is approximately what we see in Figure C-1-A. Figure C-1-C is consistent with it being a combination of Figures C-1-A and C-1-B, which is what we would expect.
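The qualitative argument for Figure C-1-B can be checked numerically. The sketch below propagates uncertainty in ρ and h alone through a frequency expression of the type of Equation C-1 and solves back for Ex and Gxy from two modes; the resulting estimates come out perfectly positively correlated. The nominal dimensions, material values and uncertainty levels are illustrative assumptions, not the Chapter 3 data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative nominal values: plate dimensions (m), thickness (m),
# density (kg/m^3) and elastic constants (Pa).
a, b, h, rho = 0.2, 0.25, 0.003, 1535.0
Ex_true, Gxy_true = 57.0e9, 4.0e9

def f2(k, l, Ex, Gxy, rho_, h_, a_, b_):
    """Squared frequency of mode (k, l), Equation C-1 type expression."""
    r = a_ / b_
    return (np.pi**2 * h_**2 / (48.0 * rho_ * a_**4)) * (
        Ex * (k**4 + l**4 * r**4) + 4.0 * Gxy * k**2 * l**2 * r**2)

modes = [(1, 1), (2, 1)]
Ex_id, Gxy_id = [], []
for _ in range(2000):
    # Uncertainty only in rho and h (1% coefficient of variation each).
    rho_s = rho * (1.0 + 0.01 * rng.normal())
    h_s = h * (1.0 + 0.01 * rng.normal())
    y = np.array([f2(k, l, Ex_true, Gxy_true, rho_s, h_s, a, b) for k, l in modes])
    # Identification step: solve the 2x2 linear system for Ex, Gxy assuming
    # the *nominal* rho and h (the identifier does not know the true values).
    A = np.array([[f2(k, l, 1.0, 0.0, rho, h, a, b),
                   f2(k, l, 0.0, 1.0, rho, h, a, b)] for k, l in modes])
    Ex_s, Gxy_s = np.linalg.solve(A, y)
    Ex_id.append(Ex_s)
    Gxy_id.append(Gxy_s)

print("correlation(Ex, Gxy):", np.corrcoef(Ex_id, Gxy_id)[0, 1])  # ~ +1.0
```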


APPENDIX D
MOIRE INTERFEROMETRY FULL FIELD MEASUREMENT TECHNIQUE

Moiré interferometry is a measurement technique using the fringe patterns obtained by optical interference off a diffraction grating in order to obtain full field displacement or strain maps. A comprehensive description of the method and its applications is provided by Post et al. (1997). Among the main advantages of Moiré interferometry are its high signal to noise ratio, its excellent spatial resolution and its insensitivity to rigid body rotations (Walker 1994). Applications of Moiré interferometry include the mapping of the displacements of a tooth (Wood et al. 2003) and the characterization of advanced fabric composites (Lee et al. 2006). Additional applications are also provided by Post et al. (1997).

The schematic of a four beam Moiré interferometry setup, such as the one used for our experiment, is given in Figure D-1. It uses four collimated light beams and thus provides both the horizontal and vertical displacement fields. The interference is obtained by choosing the angle such that it corresponds to the first order diffraction angle.

Figure D-1. Schematic of a Moiré interferometry setup.


The fringe patterns that result from the interference of two of the beams can be described by either intensity or phase information. While intensity methods were developed first, a major issue limiting their accuracy resides in the determination of the exact maximum intensity locations. To address this issue, methods based on phase information were developed, such as phase shifting Moiré. All these methods use a carrier fringe pattern or a phase ramp in order to extract the phase, due to the fact that the cosine function is not bijective. Using a phase ramp, the intensity I can then be expressed as shown in Equation D-1:

$$ I_n(x,y) = I_{back}(x,y) + I_{mod}(x,y)\cos\!\left(\phi(x,y) + \delta_n\right), \qquad n = 1 \dots N, \qquad \text{(D-1)} $$

where δn is the imposed phase shift. Obtaining N fringe patterns (typically N = 4) shifted by the imposed phase ramp allows the phase φ(x,y) to be calculated. The displacement fields are then determined as follows:

$$ U(x,y) = \frac{\Delta\phi_x(x,y)}{2\pi f} \qquad \text{(D-2)} $$

$$ V(x,y) = \frac{\Delta\phi_y(x,y)}{2\pi f} \qquad \text{(D-3)} $$

where Δφ is the phase difference between the initial and the final loading step and f is the frequency of the grating (1200 lines/mm in our case).

An automated phase extraction procedure was developed under Matlab by Yin (2008) at the Experimental Stress Analysis Laboratory at the University of Florida. This toolbox carries out the phase extraction and unwrapping from the four phase shifted Moiré fringe patterns. It then provides the corresponding displacement maps.
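A minimal sketch of the phase extraction and of the conversion of Equations D-2 and D-3 is given below. It assumes, for illustration, four fringe patterns shifted by 0, π/2, π and 3π/2 (a common special case of the imposed phase shifts) and omits unwrapping, masking and carrier removal; the actual Matlab toolbox of Yin (2008) is considerably more elaborate.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four intensity maps shifted by 0, pi/2, pi, 3*pi/2.

    With I_n = I_back + I_mod * cos(phi + (n - 1) * pi / 2), one obtains
    tan(phi) = (I4 - I2) / (I1 - I3).
    """
    return np.arctan2(I4 - I2, I1 - I3)

def displacement_from_phase(phi_initial, phi_final, grating_freq=1.2e6):
    """Equations D-2 / D-3: U = delta_phi / (2*pi*f); f = 1200 lines/mm is
    expressed here in lines per meter, so the displacement is in meters."""
    delta_phi = phi_final - phi_initial   # phase unwrapping omitted in this sketch
    return delta_phi / (2.0 * np.pi * grating_freq)

# Synthetic illustration: a known phase map re-extracted from four shifted images.
x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
phi_true = 2.0 * x + 0.5 * y
I = [1.0 + 0.8 * np.cos(phi_true + n * np.pi / 2.0) for n in range(4)]
phi = four_step_phase(*I)
U = displacement_from_phase(np.zeros_like(phi), phi)
print("max |U| in micrometers:", 1e6 * np.abs(U).max())
```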


APPENDIX E
COMPARISON OF THE MOIRE INTERFEROMETRY FIELDS AND THEIR PROPER ORTHOGONAL DECOMPOSITION PROJECTION

In this appendix we investigate the difference between the displacement fields obtained from the Moiré interferometry fringe patterns and their POD projection, which is used for the Bayesian identification. Figure E-1 presents the maps of these differences for the U and V displacements.

Figure E-1. Difference between the displacement fields obtained from the Moiré fringe patterns and their POD projection for the: A) U field. B) V field.

We can see that the POD projection filters out any dissymmetries in the field. The variations that are filtered out are about an order of magnitude smaller than the field values. An important question at this point is whether the variations that are filtered out by the POD projection would influence the identified parameters. To have a rough estimate of the resulting error on the material properties we move to the strain fields that correspond to the error maps shown in Figure E-1. By numerically differentiating the fields of Figure E-1 using the


BibImages toolbox (Molimard 2006) we obtain the strain equivalent maps representing the difference between the strains and their POD projection. These are shown in Figure E-2.

Figure E-2. Strain equivalent difference maps between the fields obtained from the Moiré fringe patterns and their POD projection for: A) εx. B) εy. C) εxy.

We can note that while the displacement error fields in Figure E-1 had a non-negligible signal component, the major remaining component when calculating the strains is noise. We can also note that the noise is actually not random but seems to follow relatively well the Moiré fringe patterns on the initial images. This means that the noise is mainly induced by the fringe patterns themselves and the phase shifting algorithm that extracts the displacement fields from


these fringe patterns. In that case the filtered out components of the fields would be in large part independent of the material properties, thus almost not affecting the identification.

In order to nevertheless have an idea of the error committed on the estimation of the material properties by filtering out these variations, we calculated the average values of εx and εy over the fields and used these averages to obtain a rough estimate of the errors on the properties Ex and νxy using the following formulas:

$$ E_x = \frac{\sigma_x}{\varepsilon_x^{average}} \qquad \text{(E-1)} $$

$$ \nu_{xy} = -\frac{\varepsilon_y^{average}}{\varepsilon_x^{average}} \qquad \text{(E-2)} $$

Using the values given in Table E-1 we found that the error due to filtering would be of the order of 1.6% on Ex and 0.5% on νxy. These errors are not negligible, but it is important to keep in mind that, due to the very high noise levels and because the average errors on εx and εy are close to zero, these average values are affected by the noise, which we have shown is related to the fringe patterns and the displacement extraction algorithm rather than to the physics of the experiment itself. The error induced by filtering which is not due to this type of noise would then be somewhat smaller.

Table E-1. Average values of the experimental fields and of the components of the fields that were filtered out by POD projection.
  Average(experimental εx) = 6.29 × 10⁻⁴
  Average(filtered-out εx) = 1.02 × 10⁻⁵
  Average(experimental εy) = −4.87 × 10⁻⁴
  Average(filtered-out εy) = 2.19 × 10⁻⁶
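The 1.6% and 0.5% figures can be traced back to the ratios of the filtered-out averages to the experimental averages of Table E-1, obtained by perturbing the strain averages appearing in Equations E-1 and E-2. A two-line check of this reading (an interpretation of how the quoted numbers arise, using the table values) is given below.

```python
# Relative errors implied by Table E-1 (values copied from the table).
eps_x_exp, eps_x_filt = 6.29e-4, 1.02e-5
eps_y_exp, eps_y_filt = -4.87e-4, 2.19e-6

err_Ex = eps_x_filt / eps_x_exp        # perturbation of the denominator of E-1
err_nu = abs(eps_y_filt / eps_y_exp)   # perturbation of the numerator of E-2
print(f"relative error on Ex    ~ {100 * err_Ex:.2f}%")   # ~1.6% as quoted
print(f"relative error on nu_xy ~ {100 * err_nu:.2f}%")   # ~0.5% as quoted
```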


LIST OF REFERENCES

Acar, E., Kale, A., Haftka, R. T., and Stroud, W. J. (2006). "Structural Safety Measures for Airplanes." J.Aircr., 43(1), 30-38.
Allen, D. M. (1971). "Mean square error of prediction as a criterion for selecting variables." Technometrics, 13, 469-475.
Alvin, K. F. (1997). "Finite element model update via Bayesian estimation and minimization of dynamic residuals." AIAA J., 35(5), 879-886.
Amiot, F., Hild, F., and Roger, J. P. (2007). "Identification of elastic property and loading fields from full-field measurements." Int.J.Solids Structures, 44(9), 2863-2887.
Anderson, E., Bai, Z., Bischof, C., Blackford, S., Demmel, J., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A., and Sorensen, D. (1999). LAPACK User's Guide, 3rd edition, SIAM, Philadelphia.
Arafeh, M. H., Knopf-Lenoir Vayssade, C., and Rouger, F. (1995). "Conception optimale d'essais de flexion de plaques orthotropes et identification." Comptes Rendus De l'Académie Des Sciences, 321(II b), 351-354.
Araújo, A. L., Mota Soares, C. M., Moreira de Freitas, M. J., Pedersen, P., and Herskovits, J. (2000). "Combined numerical-experimental model for the identification of mechanical properties of laminated structures." Compos.Struct., 50(4), 363-372.
ASTM D 3039. (2002). Standard Test Method for Tensile Properties of Polymer Matrix Composite Materials. ASTM International, West Conshohocken.
Avril, S., Bonnet, M., Bretelle, A. S., Grédiac, M., Hild, F., Ienny, P., Latourte, F., Lemosse, D., Pagano, S., and Pagnacco, E. (2008). "Overview of Identification Methods of Mechanical Parameters Based on Full-field Measurements." Exp.Mech., 48(4), 381-402.
Ayorinde, E. O., and Yu, L. (1999). "On the Use of Diagonal Modes in the Elastic Identification of Thin Plates." J. Vib. Acoust., 121, 33.
Ayyub, B. M., and Chia, C. Y. (1992). "Generalized conditional expectation for structural reliability assessment." Struct.Saf., 11(2), 131-146.
Balabanov, V., Kaufman, M., Knill, D. L., Haim, D., Golovidov, O., Giunta, A. A., Haftka, R. T., Grossman, B., Mason, W. H., and Watson, L. T. (1996). "Dependence of optimal structural weight on aerodynamic shape for a high speed civil transport." Proc., 6th AIAA/NASA/USAF Multidisciplinary Analysis & Optimization Symp., 96-4046.
Balabanov, V. O., Giunta, A. A., Golovidov, O., Grossman, B., Mason, W. H., Watson, L. T., and Haftka, R. T. (1999). "Reasonable design space approach to response surface approximation." J.Aircr., 36(1), 308-315.


Bapanapalli, S. K., Martinez, O. M., Gogu, C., Sankar, B. V., Haftka, R. T., and Blosser, M. L. (2006). "Analysis and Design of Corrugated-Core Sandwich Panels for Thermal Protection Systems of Space Vehicles." Proc., 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conf., Newport, RI, AIAA Paper 2006-1942.
Bayes, T. (1763). "An essay towards solving a problem in the doctrine of chances." Philosophical Transactions of the Royal Society, 53, 370-418.
Beck, J. L., and Katafygiotis, L. S. (1998). "Updating Models and Their Uncertainties. I: Bayesian Statistical Framework." J. Engrg. Mech., 124(4), 455-461.
Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer, New York.
Berkooz, G., Holmes, P., and Lumley, J. L. (1993). "The proper orthogonal decomposition in the analysis of turbulent flows." Annu.Rev.Fluid Mech., 25(1), 539-575.
Björck, Å. (1996). Numerical methods for least squares problems. SIAM, Philadelphia.
Blevins, R. D. (1979). Formulas for natural frequency and mode shape. Van Nostrand Reinhold, New York.
Blosser, M. L., Chen, R. R., Schmidt, I. H., Dorsey, J. T., Poteet, C. C., Bird, R. K., and Wurster, K. E. (2004). "Development of advanced metallic thermal-protection-system prototype hardware." J.Spacecraft Rockets, 41(2), 183-194.
Buckingham, E. (1914). "On Physically Similar Systems; Illustrations of the Use of Dimensional Equations." Physical Review, 4(4), 345-376.
Cloud, G. L. (1998). Optical methods of engineering analysis. Cambridge University Press, Cambridge.
Cugnoni, J. (2004). "Identification par recalage modal et fréquentiel des propriétés constitutives de coques en matériaux composites." Ph.D. Thesis no. 3106, School of Engineering, Swiss Federal Institute of Technology, Lausanne.
Daghia, F., de Miranda, S., Ubertini, F., and Viola, E. (2007). "Estimation of elastic constants of thick laminated plates within a Bayesian framework." Compos.Struct., 80(3), 461-473.
Dascotte, E. (2004). "Identification of Random Material Properties using a Mixed Deterministic-Probabilistic Method." International Seminar on Modal Analysis, 20-22.
De Wilde, W. P., Narmon, B., Sol, H., and Roovers, M. (1984). "Determination of the material constants of an anisotropic lamina by free vibration analysis." Proc., 2nd International Modal Analysis Conf., Orlando, FL, 44-49.
De Wilde, W. P., Sol, H., and Van Overmeire, M. (1986). "Coupling of Lagrange interpolation, modal analysis and sensitivity analysis in the determination of anisotropic plate rigidities." Proc., 4th International Modal Analysis Conf., Los Angeles, CA, 1058-1063.


Deobald, L. R., and Gibson, R. F. (1988). "Determination of elastic constants of orthotropic plates by a modal analysis/Rayleigh-Ritz technique." J.Sound Vibrat., 124, 269-283.
Dickinson, S. M. (1978). "The buckling and frequency of flexural vibration of rectangular isotropic and orthotropic plates using Rayleigh's method." J.Sound Vibrat., 61(1), 1-8.
Draper, N. R., and Smith, H. (1998). Applied regression analysis. John Wiley & Sons, New York.
Elishakoff, I. (1991). "Convex versus probabilistic modelling of uncertainty in structural dynamics." Structural Dynamics: Recent Advances, London, Keynote Lecture, 3.
Elseifi, M. A., and Irvine, C. (2002). "Bayesian models for updated cured composite properties." Proc., 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conf., Denver, CO, AIAA Paper 2002-1273.
FAA. (2005). Airworthiness Standards: Transport Category Airplanes. Department of Transportation, Washington.
Filomeno Coelho, R., Breitkopf, P., and Knopf-Lenoir Vayssade, C. (2008). "Model reduction for multidisciplinary optimization - application to a 2D wing." Struct. Multidisc. Optim., 37(1), 29-48.
Filomeno Coelho, R., and Breitkopf, P. (2009). Optimisation multidisciplinaire en mécanique: Réduction de modèles, robustesse, fiabilité, réalisations logicielles. Hermes Science Publications, Paris.
Fishman, G. S. (1996). Monte Carlo: concepts, algorithms, and applications. Springer, New York.
Frederiksen, P. S. (1997). "Experimental Procedure and Results for the Identification of Elastic Constants of Thick Orthotropic Plates." J.Composite Mater., 31(4), 360.
Galilei, G. (1638). Discorsi e dimostrazioni matematiche intorno a due nuove scienze. Elsevirii, Leyden.
Genovese, K., Lamberti, L., and Pappalettere, C. (2005). "Improved global-local simulated annealing formulation for solving non-smooth engineering optimization problems." Int.J.Solids Structures, 42(1), 203-237.
Geymonat, G., Hild, F., and Pagano, S. (2002). "Identification of elastic parameters by displacement field measurement." Comptes Rendus-Mécanique, 330(6), 403-408.
Geymonat, G., and Pagano, S. (2003). "Identification of mechanical properties by displacement field measurement: a variational approach." Meccanica, 38(5), 535-545.
Ghanem, R. G., and Spanos, P. D. (2003). Stochastic finite elements: a spectral approach. Dover Publications, Dover.


Gibson, R. F. (2000). "Modal vibration response measurements for characterization of composite materials and structures." Composites Sci.Technol., 60(15), 2769-2780.
Gogu, C., Bapanapalli, S. K., Haftka, R. T., and Sankar, B. V. (2007a). "Comparison of Materials for Integrated Thermal Protection Systems for Spacecraft Reentry." Proc., 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conf., Honolulu, HI, AIAA Paper 2007-1860.
Gogu, C., Haftka, R. T., Bapanapalli, S. K., and Sankar, B. V. (2007b). "Reducing the number of variables in a response surface approximation - Application to thermal design." Proc., ASME IDETC/CIE 2007 Conf., Las Vegas, NV, Paper DETC2007-34674.
Granta Design. (2005). CES Selector Edupack 2005.
Grédiac, M. (1989). "Principe des travaux virtuels et identification." Comptes Rendus De l'Académie Des Sciences. Série 2, Mécanique, Physique, Chimie, Sciences De l'Univers, Sciences De La Terre, 309(1), 1-5.
Grédiac, M., and Vautrin, A. (1990). "A New Method for Determination of Bending Rigidities of Thin Anisotropic Plates." J. Appl. Mech., 57(1), 964.
Grédiac, M., and Paris, P. A. (1996). "Direct Identification of Elastic Constants of Anisotropic Plates by Modal Analysis: Theoretical and Numerical Aspects." J.Sound Vibrat., 195(3), 401-415.
Grédiac, M., Fournier, N., Paris, P. A., and Surrel, Y. (1998a). "Direct Identification of Elastic Constants of Anisotropic Plates by Modal Analysis: Experimental Results." J.Sound Vibrat., 210(5), 643-659.
Grédiac, M., and Pierron, F. (1998b). "A T-shaped specimen for the direct characterization of orthotropic materials." Int J Numer Methods Eng, 41(2), 293-309.
Gürdal, Z., Haftka, R. T., and Hajela, P. (1999). Design and Optimization of Laminated Composite Materials. Wiley-Interscience, New York.
Hill, G. A., and Olson, E. D. (2004). "Applications of Response Surface-Based Methods to Noise Analysis in the Conceptual Design of Revolutionary Aircraft." Proc., 10th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization Conf., AIAA Paper 2004-4437.
Homma, T., and Saltelli, A. (1996). "Importance measures in global sensitivity analysis of nonlinear models." Reliability Engineering & Systems Safety, 52(1), 1-17.
Hosder, S., Watson, L. T., Grossman, B., Mason, W. H., Kim, H., Haftka, R. T., and Cox, S. E. (2001). "Polynomial response surface approximations for the multidisciplinary design optimization of a high speed civil transport." Optim. Eng., 2(4), 431-452.


Hua, H., Sol, H., and De Wilde, W. P. (2000). "On a statistical optimization method used in finite element model updating." J.Sound Vibrat., 231(4), 1071-1078.
Isenberg, J. (1979). "Progressing from Least Squares to Bayesian Estimation." Proc., ASME Winter Annual Meeting, New York, NY, Paper 79-A/DSC-16.
Jacques, J., Lavergne, C., and Devictor, N. (2006). "Sensitivity analysis in presence of model uncertainty and correlated inputs." Reliab. Eng. Syst. Saf., 91(10-11), 1126-1134.
Jin, R., Chen, W., and Sudjianto, A. (2004). "Analytical Metamodel Based Global Sensitivity Analysis and Uncertainty Propagation for Robust Design." SAE Transactions, 113(5), 835-846.
Kaipio, J. P., and Somersalo, E. (2005). Statistical and computational inverse problems. Springer, New York.
Kaufman, M., Balabanov, V., Grossman, B., Mason, W. H., Watson, L. T., and Haftka, R. T. (1996). "Multidisciplinary optimization via response surface techniques." Proc., 36th Israel Annual Conf. on Aerospace Sciences, Tel Aviv and Haifa, Israel, A-57-A-67.
Kernevez, J. P., Knopf-Lenoir Vayssade, C., Touzot, G., and Verchery, G. (1978). "An identification method applied to an orthotropic plate bending experiment." Int. J. Numer. Meth. Eng., 12(1), 129-139.
Lacey, D., and Steele, C. (2006). "The use of dimensional analysis to augment design of experiments for optimization and robustification." J.Eng.Des., 17(1), 55-73.
Lai, T. C., and Ip, K. H. (1996). "Parameter estimation of orthotropic plates by Bayesian sensitivity analysis." Compos.Struct., 34(1), 29-42.
Lauwagie, T., Sol, H., Roebben, G., Heylen, W., Shi, Y., and Van der Biest, O. (2003). "Mixed numerical-experimental identification of elastic properties of orthotropic metal plates." NDT and E International, 36(7), 487-496.
Lauwagie, T., Sol, H., Heylen, W., and Roebben, G. (2004). "Determination of the in-plane elastic properties of the different layers of laminated plates by means of vibration testing and model updating." J.Sound Vibrat., 274(3-5), 529-546.
Lawson, C. L., and Hanson, R. J. (1974). Solving Least Squares Problems. Prentice Hall, Englewood Cliffs.
Le Magorou, L., Rouger, F., and Bos, F. (2000). "Identification of constitutive laws of wood based panels using inverse method." Proc., 12ème Journées Nationales Sur Les Composites, Cachan, France, 385-394.
Le Magorou, L., Bos, F., and Rouger, F. (2002). "Identification of constitutive laws for wood-based panels by means of an inverse method." Composites Sci.Technol., 62(4), 591-596.


Lecompte, D., Sol, H., Vantomme, J., and Habraken, A. M. (2005). "Identification of elastic orthotropic material parameters based on ESPI measurements." Proc., SEM Annual Conf. and Exposition on Experimental and Applied Mechanics.
Lecompte, D., Smits, A., Sol, H., Vantomme, J., and Van Hemelrijck, D. (2007). "Mixed numerical-experimental technique for orthotropic parameter identification using biaxial tensile tests on cruciform specimens." Int.J.Solids Structures, 44(5), 1643-1656.
Lee, J., Haftka, R. T., Griffin, O. H., Watson, L. T., and Sensmeier, M. D. (1994). "Detecting delaminations in a composite beam using anti-optimization." Struct. Multidisc. Optim., 8(2), 93-100.
Lee, J. R., Molimard, J., Vautrin, A., and Surrel, Y. (2006). "Diffraction grating interferometers for mechanical characterisations of advanced fabric laminates." Opt. Laser Technol., 38(1), 51-66.
Leissa, A. W. (1987). "Recent studies in plate vibrations, 1981-85, part II: complicating effects." Shock Vib. Dig., 19(3), 10.
Lucia, D. J., Beran, P. S., and Silva, W. A. (2004). "Reduced-order modeling: new approaches for computational physics." Prog.Aerospace Sci., 40(1-2), 51-117.
Mack, Y., Goel, T., Shyy, W., Haftka, R. T., and Queipo, N. V. (2005). "Multiple surrogates for the shape optimization of bluff body-facilitated mixing." Proc., 43rd AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, AIAA Paper 2005-0333.
Maletta, C., and Pagnotta, L. (2004). "On the determination of mechanical properties of composite laminates using genetic algorithms." Int. J. Mech. Mater. Des., 1(2), 199-211.
Marwala, T., and Sibisi, S. (2005). "Finite Element Model Updating Using Bayesian Framework and Modal Properties." J.Aircr., 42(1), 275-278.
Matheron, G. (1963). "Principles of geostatistics." Econ. Geol., 58, 1246-1266.
Mauvoisin, G., Bremand, F., and Lagarde, A. (1994). "Three-dimensional shape reconstruction by phase-shifting shadow moiré." Appl.Opt., 33, 2163-2169.
Molimard, J., Le Riche, R., Vautrin, A., and Lee, J. R. (2005). "Identification of the four orthotropic plate stiffnesses using a single open-hole tensile test." Exp.Mech., 45(5), 404-411.
Molimard, J. (2007). BibImages Toolbox v0.2 Users Manual.
Mottershead, J. E., and Friswell, M. I. (1993). "Model Updating In Structural Dynamics: A Survey." J.Sound Vibrat., 167(2), 347-375.


Myers, D. E., Martin, C. J., and Blosser, M. L. (2000). "Parametric Weight Comparison of Advanced Metallic, Ceramic Tile, and Ceramic Blanket Thermal Protection Systems." NASA/TM-2000-210289, 1-44.
Myers, R. H., and Montgomery, D. C. (2002). Response Surface Methodology: Process and Product in Optimization Using Designed Experiments. John Wiley & Sons, New York.
Noh, W. J. (2004). "Mixed Mode Interfacial Fracture Toughness of Sandwich Composites at Cryogenic Temperatures." Master's Thesis, University of Florida.
Norden, R. H. (1972). "A survey of maximum likelihood estimation." International Statistical Review/Revue Internationale De Statistique, 40(3), 329-354.
Orr, M. J. L. (1996). "Introduction to radial basis function networks." Center for Cognitive Science, Edinburgh University, Scotland, UK.
Orr, M. J. L. (1999). "Recent advances in radial basis function networks." Technical Report, Institute for Adaptive and Neural Computation, Division for Informatics, Edinburgh University, Scotland, UK.
Papazoglou, V. J., Tsouvalis, N. G., and Lazaridis, A. G. (1996). "A non-destructive evaluation of the material properties of a composite laminated plate." Appl. Compos. Mater., 3(5), 321-334.
Papila, M. H., and Haftka, R. T. (1999). "Uncertainty and Wing Structural Weight Approximations." Proc., 40th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conf. and Exhibit, St. Louis, MO, 988-1002.
Papila, M. H., and Haftka, R. T. (2000). "Response surface approximations: Noise, error repair, and modeling errors." AIAA J., 38(12), 2336-2354.
Pederson, P., and Frederiksen, P. S. (1992). "Identification of orthotropic material moduli by a combined experimental/numerical approach." Measurement, 10(3), 113-118.
Post, D., Han, B., and Ifju, P. (1997). High sensitivity moiré: experimental analysis for mechanics and materials. Springer, New York.
Poteet, C. C., Abu-Khajeel, H., and Hsu, S. Y. (2004). "Preliminary thermal-mechanical sizing of a metallic thermal protection system." J.Spacecraft Rockets, 41(2), 173-182.
Qu, Z. Q. (2004). Model Order Reduction Techniques: With Applications in Finite Element Analysis. Springer, New York.
Ragauskas, P., and Skukis, E. (2007). "Material properties identification. Comparison of two techniques." Mechanika, 6(68), 39-44.


Rebba, R., and Mahadevan, S. (2006a). "Statistical Methods for Model Validation under Uncertainty." Proc., 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conf., Newport, RI.
Rebba, R., Mahadevan, S., and Huang, S. (2006b). "Validation and error estimation of computational models." Reliab. Eng. Syst. Saf., 91(10-11), 1390-1397.
Rieutord, E. (1985). "Analyse dimensionnelle - Similitudes et essais sur modèle." Mécanique Des Fluides, INSA, Lyon.
Rikards, R., Chate, A., and Gailis, G. (2001). "Identification of elastic properties of laminates based on experiment design." Int.J.Solids Structures, 38(30-31), 5097-5115.
Rikards, R., Abramovich, H., Green, T., Auzins, J., and Chate, A. (2003). "Identification of Elastic Properties of Composite Laminates." Mech. Adv. Mater. Struct., 10(4), 335-352.
Rikards, R., and Auzins, J. (2004). "Response surface method for solution of structural identification problems." Inv. Prob. Sci. Eng., 12(1), 59-70.
Rouger, F., Khebibeche, M., and Govic, C. (1990). "Non-determined tests as a way to identify wood elastic parameters: the finite element approach." Proc., Euromech Colloquium 269 - Mechanical Identification of Composites, 82-90.
Rust, S. W., Todt, F. R., Harris, B., Neal, D., and Vangel, M. (1989). "Statistical methods for calculating material allowables for MIL-HDBK-17." Test Methods and Design Allowables for Fibrous Composites, C. C. Chamis, ed., ASTM International, 136-149.
Sacks, J., Schiller, S. B., and Welch, W. J. (1989). "Designs for computer experiments." Technometrics, 31, 41-47.
Sakamoto, S., and Ghanem, R. (2002). "Simulation of multi-dimensional non-Gaussian nonstationary random fields." Prob.Eng.Mech., 17(2), 167-176.
Saltelli, A., Tarantola, S., and Chan, K. (1999). "A quantitative model-independent method for global sensitivity analysis of model output." Technometrics, 41(1), 39-56.
Schenk, C. A., and Schueller, G. I. (2003). "Buckling analysis of cylindrical shells with random geometric imperfections." Int.J.Non-Linear Mech., 38(7), 1119-1132.
Schenk, C. A., Pradlwarter, H. J., and Schueller, G. I. (2005). "Non-stationary response of large, nonlinear finite element systems under stochastic loading." Comp. & Struct., 83(14), 1086-1102.
Shi, Y., Sol, H., and Hua, H. (2005). "Transverse shear modulus identification by an inverse method using measured flexural resonance frequencies from beams." J.Sound Vibrat., 285(1-2), 425-442.


Silva, G., Le Riche, R., Molimard, J., Vautrin, A., and Galerne, C. (2007). "Identification of Material Properties Using FEMU: Application to the Open Hole Tensile Test." Adv. Exp. Mech., 7-8, 73-78.
Silva, G. H. C. (2009). "Identification of material properties using finite elements and full-field measurements with focus on the characterization of deterministic experimental errors." Ph.D. Thesis, Ecole des Mines de Saint Etienne, France.
Smarslok, B. P. (2009). "Measuring, using and reducing experimental and computational uncertainty in reliability analysis of composite laminates." Ph.D. Dissertation, University of Florida.
Sobol, I. M. (1993). "Sensitivity estimates for nonlinear mathematical models." Math. Model. Comput. Exp., 1(4), 407-411.
Sol, H. (1986). "Identification of anisotropic plate rigidities using free vibration data." Ph.D. Thesis, Free University of Brussels, Belgium.
Sonin, A. A. (1997). The physical basis of dimensional analysis. Massachusetts Institute of Technology, Cambridge, MA.
Surrel, Y. (2005). "Les techniques optiques de mesure de champ: essai de classification." Instr. Mes. Metrol., 4(3-4), 11-42.
Sutton, M. A., McNeill, S. R., Helm, J. D., and Chao, Y. J. (2000). "Advances in two-dimensional and three-dimensional computer vision." Topics Appl. Phys., 77, 323-372.
Szirtes, T. (1997). Applied dimensional analysis and modeling. McGraw-Hill, New York.
Tarantola, A. (2004). Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM, Philadelphia.
Vaidyanathan, R., Goel, T., Shyy, W., Haftka, R. T., Queipo, N. V., and Tucker, P. K. (2004). "Global sensitivity and trade-off analyses for multi-objective liquid rocket injector design." Proc., 40th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, Fort Lauderdale, FL, AIAA Paper 2004-4007.
Vapnik, V. (1998). Statistical learning theory. Wiley, New York.
Vaschy, A. (1892). "Sur les lois de similitude en physique." Annales Télégraphiques, 19, 25-28.
Vautrin, A. (2000). "Mechanical identification of composite materials and structures." Proc., 2nd Asian-Australasian Conf. on Composite Materials, Kyongju, Korea, 1305-1323.
Venter, G., Haftka, R. T., and Starnes Jr, J. H. (1998). "Construction of Response Surface Approximations for Design Optimization." AIAA J., 36(12), 2242-2249.


Viana, F. A. C., and Goel, T. (2008). "Surrogates Toolbox 1.1 Users Manual." http://fchegury.googlepages.com/surrogatestoolbox [last consulted 8.17.2009].
Vignaux, V. A., and Scott, J. L. (1999). "Simplifying Regression Models Using Dimensional Analysis." Aust. N.Z. J. Stat., 41(1), 31-41.
Waller, M. D. (1939). "Vibrations of free square plates: part I. Normal vibrating modes." Proc. Phys. Soc., 51, 831-844.
Waller, M. D. (1949). "Vibrations of Free Rectangular Plates." Proc. Phys. Soc., B, 62(5), 277-285.
Walker, C. A. (1994). "A historical review of moiré interferometry." Exp. Mech., 34(4), 281-299.
Wood, J. D., Wang, R., Weiner, S., and Pashley, D. H. (2003). "Mapping of tooth deformation caused by moisture change using moiré interferometry." Dent. Mater., 19(3), 159-166.
Yin, W. (2008). "Automated strain analysis system: development and applications." Ph.D. Proposal, University of Florida.


BIOGRAPHICAL SKETCH

Christian Gogu obtained in 2006 the Ingénieur Civil des Mines degree as well as a master's degree in Mechanical Engineering from the Ecole des Mines de Saint Etienne in France. The same year he started a joint PhD program between the Ecole des Mines de Saint Etienne in France and the University of Florida. His research interests include response surface methodology, dimensionality reduction methods, multidisciplinary optimization, probabilistic approaches and composite materials.