SENSITIVITY ANALYSIS OF PERIODIC ERRORS IN HETERODYNE INTERFEROMETRY

By

VASISHTA P. GANGULY

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2010

2010 Vasishta P. Ganguly

To my parents

ACKNOWLEDGMENTS

I would like to thank my parents, Parthasarathy and Lalitha Ganguly, and my sister, Surabhi, for their love and support. A special thanks to my advisor, Dr. Schmitz, for his patience and support and for the freedom extended to me over the course of my graduate studies. Also, thanks to my committee members, Dr. Kim and Dr. Greenslet, for their guidance in my research.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER
1 INTRODUCTION
2 LITERATURE REVIEW
3 PERIODIC ERROR IN HETERODYNE INTERFEROMETRY
  Heterodyne Interferometer Description
  Sources of Error in Heterodyne Interferometry
    Sources of Periodic Error
    Non-periodic Sources of Error
  Analytical Model for Heterodyne Interferometry
  Experimental Verification of Analytical Model
4 SENSITIVITY AND UNCERTAINTY ANALYSIS OF PERIODIC ERROR MODEL
  Local Sensitivity Analysis
  Global Sensitivity Analysis
    The Sobol Method
      Calculation of Sobol sensitivity indices
      Characteristics of Sobol sensitivity indices
    Linear Regression Method for Global Sensitivity
  Computation of Maximum Permissible Input Variation
  Uncertainty Analysis
5 PREDICTING INTERFEROMETER SETUP MISALIGNMENTS
6 RESULTS AND DISCUSSION
  Experimental Measurement of Periodic Error
  Local Sensitivity
  Global Sensitivity
    Sobol Sensitivity Indices for Cosijns et al. Model
    Effect of Variation of Input Uncertainty on Global Sensitivity Indices
    Linear Regression Sensitivity Analysis
  Maximum Permissible Input Uncertainty
  Uncertainty Analysis
  Predicting Interferometer Setup Misalignments
7 CONCLUSIONS

APPENDIX: MATLAB CODE
  Cosijns et al. Model
    Code to evaluate fast Fourier transform
  Local Sensitivity
  Sobol Global Sensitivity Indices
  Linear Regression Method for Global Sensitivity
  Maximum Permissible Input Uncertainty
    NSGA-II
    Objective and Constraint Function
  Particle Swarm Optimization

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

6-1 Errors between analytical and experimental periodic error
6-2 Uncertainty ranges for input parameters
6-3 Sobol method sensitivity indices for first order error
6-4 Sobol method sensitivity indices for second order error
6-5 Variation in input uncertainty range (proportional)
6-6 Individual effect Si for first order periodic error with varying uncertainty ranges (proportionally varying)
6-7 Total effect STi for first order periodic error with varying uncertainty ranges (proportionally varying)
6-8 Interactive effects for first order periodic error with varying uncertainty ranges (proportionally varying)
6-9 Individual effect Si for second order periodic error with varying uncertainty ranges (proportionally varying)
6-10 Total effect STi for second order periodic error with varying uncertainty ranges (proportionally varying)
6-11 Interactive effects for second order periodic error with varying uncertainty ranges (proportionally varying)
6-12 Variation in uncertainty range (angles only)
6-13 Individual effect Si for first order periodic error with varying and
6-14 Total effect STi for first order periodic error with varying and
6-15 Interactive effects for first order periodic error with varying and
6-16 Individual effect Si for second order periodic error with varying and
6-17 Total effect STi for second order periodic error with varying and
6-18 Interactive effects for second order periodic error with varying and
6-19 Permissible input uncertainty (first and second order periodic error)
6-20 Permissible input uncertainty (first order periodic error)
6-21 Permissible input uncertainty (second order periodic error)
6-22 Output uncertainty

LIST OF FIGURES

3-1 Schematic representation of ideal heterodyne interferometer
3-2 Actual interferometer setup with frequency leakage
3-3 Data analysis and periodic error computation
3-4 Experimental heterodyne interferometer setup
4-1 Scatter plots for first order periodic error
4-2 Scatter plots for second order periodic error
4-3 Random sampling
4-4 Latin hypercube sampling
4-5 Flow chart for NSGA-II optimization algorithm
5-1 PSO process
5-2 Data flow chart for PSO with periodic error input
5-3 Data flow chart for PSO with experimental data input
5-4 Data flow chart for PSO with analytical data input
6-1 First and second order periodic error (analytical and experimental)
6-2 Error in first and second order periodic errors
6-3 Local sensitivity with respect to
6-4 Local sensitivity with respect to
6-5 Sensitivity with respect to
6-6 Local sensitivity with respect to 1
6-7 Local sensitivity with respect to 2
6-8 Local sensitivity with respect to
6-9 Local sensitivity with respect to
6-10 Local sensitivity with respect to
6-11 Local sensitivity with respect to
6-12 Local sensitivity with respect to err
6-13 Sobol sensitivity indices for first order error (proportional variation)
6-14 Sobol sensitivity indices for second order error (proportional variation)
6-15 Sobol sensitivity indices for first order error with varying and
6-16 Sobol sensitivity indices for second order error with varying and
6-17 Regression analysis sensitivity indices for periodic error (proportional variation)
6-18
6-19 Scatter plots for first order error
6-20 Scatter plots for second order error
6-21 PSO output for optimized variable data (all seven parameters considered)
6-22 PSO objective function values
6-23
6-24 PSO objective function values
6-25 Scatter plots of and data with respect to periodic error
6-26 Periodic error (experimental and analytical)
6-27 Errors in periodic error measurement
6-28 PSO algorithm results with periodic error input
6-29 Error in PSO algorithm results ( and values) with periodic error input
6-30 Propagation of errors in optimization process
Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

SENSITIVITY ANALYSIS OF PERIODIC ERRORS IN HETERODYNE INTERFEROMETRY

By Vasishta Ganguly

May 2010

Chair: Tony Schmitz
Major: Mechanical Engineering

Nonlinearities in displacement measurement when using heterodyne interferometry arise due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first and second order periodic errors, which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. First order periodic error has a spatial frequency of one cycle per displacement fringe, while second order periodic error has a frequency of two cycles per fringe. An analytical model for these nonlinearities was suggested by Cosijns et al.; it takes into account rotational misalignments of the polarizing beam splitter and mixing polarizer, nonorthogonality of the two laser frequencies, ellipticity in the polarizations of the two independent laser beams, and different transmission coefficients in the beam splitter. This study implements the Cosijns et al. model in order to identify the sensitivities of the periodic errors with respect to the input parameters. A local sensitivity analysis is conducted to examine the sensitivities of the first and second order periodic errors with respect to each input parameter about the nominal input values. Also, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol sensitivity indices using a Monte Carlo approach. A Latin hypercube sampling technique was employed for sampling the input parameter space. This analysis assists in factor prioritization in order to rank the input parameters according to their importance in influencing the output. The study also examines the effect of variation in the input uncertainty on the computed sensitivity indices. It is seen that the first order periodic error is highly sensitive to nonorthogonality of the two laser frequencies, while second order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. Further, a multi-objective optimization problem is solved using a genetic algorithm, where the permissible uncertainty in input parameters was maximized while minimizing the probability of periodic error exceeding a predetermined value (0.5 nm). This revealed that, in order to maintain low periodic error, stringent constraints on the uncertainties in the half wave plate angle and the orthogonality of the two frequencies are required. A particle swarm optimization technique is used to predict the possible setup imperfections based on experimentally generated values for periodic error. The study explores the possibility of using this information as a corrective tool in order to realize the desired measurement accuracy.

CHAPTER 1
INTRODUCTION

Heterodyne displacement measuring interferometry offers high resolution, high accuracy displacement measurement for noncontact applications. However, the accuracy of the displacement measurement is limited by nonlinearities, or periodic errors, which depend on the optical setup [1-19]. An ideal heterodyne interferometer consists of a laser source which emits two coherent, collinear, orthogonally polarized beams of different frequencies. These beams are separated at a polarizing beam splitter and are directed into the two separate arms of the interferometer: the reference arm and the measurement arm. These independent beams reflect from targets in each arm (retroreflectors are applied in this research) so that, ideally, a single frequency is transmitted to each target.
The measurement arm retroreflector is mounted on the moving axis, while the reference retroreflector position is fixed. Axis motion results in a Doppler shift in the measurement arm frequency. This change in frequency is measured as a phase shift by the phase measuring electronics and is converted into displacement. The accuracy of heterodyne laser interferometers is limited due to the presence of periodic errors [1-7]. In practice, there is leakage of each frequency into both arms. As a result, there is a contamination of the interference signal produced at the phase measuring electronics. This frequency mixing is caused by the nonideal performance of the polarization dependent optics and setup misalignments. The resulting errors are periodic in nature, i.e., they repeat each fraction of a wavelength of slide displacement and are noncumulative. Cosijns et al. [9] suggested an analytical model for calculating periodic error. The model included the effects of ellipticity of polarizations (i.e., nonplanar polarizations) of the laser beams, nonorthogonality of the polarizations of the laser beams, rotational misalignment between the orthogonally polarized laser beams and the polarizing beam splitter axes, rotational misalignment error of the mixing polarizer, and different transmission coefficients in the beam splitter. Apart from periodic errors, there are also other sources of error which are not periodic in nature. Cosine error occurs due to misalignment between the measurement beam and motion axes. Abbe error is present when there is an offset between the measurement and motion axes coupled with rotational errors in the single axis motion. Deadpath error occurs due to a difference in the measurement and reference path lengths at initialization, accompanied by an uncompensated variation in the refractive index of air in these paths. Phase errors may also occur due to photonic noise.
Thermal deformations, beam shear, nonplanar wavefronts, wavelength instability of the laser beam source, and changes in refractive index due to air turbulence are also sources of errors in heterodyne interferometry [8]. In this work, only the effects of periodic errors on displacement measurement accuracy are considered. Chapter 3 discusses the sources of periodic error, as well as the experimental setup used to verify the Cosijns et al. model [9]. The purpose of this study is to complete a sensitivity analysis using the Cosijns et al. [9] periodic error model. Sensitivity analysis is used to study how the uncertainty in model output is related to the uncertainty in the input parameters [20-22]. Sensitivity for factor prioritization provides an estimate of which input parameter, if fixed, would yield the greatest reduction in output variation [20-22]. In this way, the input parameters may be sorted in order of importance. By ranking the input parameters, it is possible to concentrate efforts on accurately measuring the important parameters while ignoring the less important ones [20-22]. Another application for sensitivity analysis is factor fixing. If the effect of a certain input variable, over its range of uncertainty, on the output is negligible, the value of this variable can be fixed at any value over its range. This indicates that this input variable is completely non-influential on the model output [20-22]. A local sensitivity analysis is carried out to estimate the effect of input variation on first and second order periodic error independently. A gradient-based approach is used where the output is differentiated with respect to a single input variable while all the other variables are kept fixed at their nominal values.
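The gradient-based approach can be approximated numerically by perturbing one input at a time about the nominal point with a central difference. The sketch below is illustrative Python (the thesis's actual implementation is the MATLAB code listed in the Appendix); the model function and parameter names are hypothetical stand-ins, not the Cosijns et al. model.

```python
def local_sensitivities(model, nominal, h=1e-6):
    """Central-difference estimate of d(output)/d(input) at the nominal point,
    varying one input at a time while all others stay at their nominal values."""
    sens = {}
    for name, x0 in nominal.items():
        hi = dict(nominal); hi[name] = x0 + h
        lo = dict(nominal); lo[name] = x0 - h
        sens[name] = (model(hi) - model(lo)) / (2 * h)
    return sens

# Stand-in "periodic error" model for illustration only (not the real model):
toy_model = lambda p: p["alpha"] ** 2 + 3 * p["beta"]
s = local_sensitivities(toy_model, {"alpha": 1.0, "beta": 2.0})
```

Because each parameter is perturbed independently, the cost grows only linearly in the number of inputs, but the result is valid only near the chosen nominal point, which motivates the discussion that follows.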
However, the issue with calculating local derivatives with respect to a certain input variable at a single point is that the result is only informative about the variation of the model output with respect to the selected input about that single point. This issue is not significant in linear models, but for non-linear models, where the output varies nonlinearly with respect to the input(s), this approach does not provide an accurate estimate of output sensitivity [20,23]. Also, if the output is a function of a large number of variables, then the number of data points at which local sensitivities must be calculated grows exponentially and this approach is no longer feasible. This study shows the local sensitivities of first and second order periodic error with respect to all input variables taken independently, with all other variables fixed at their nominal values. Scatter plots provide a good initial estimate of sensitivity, since they provide an immediate visual depiction of the relative importance of the input parameters [20]. The sensitivity of the model output can be judged from the shape of the scatter plot. Plots with little shape indicate that the parameter has little influence on the output. Scatter plots do not, however, provide a good estimate of the combined effects of two or more input parameters on output variation. Also, in cases where there are several input parameters, scatter plots cannot be used to rank these parameters according to their influence on the model output. This study briefly discusses the sensitivity using scatter plots. In order to provide a better estimate of model sensitivity, global sensitivity techniques are used [20-22].
Variance-based global sensitivity analysis methods explore the space of input parameters by selecting a judicious number of randomly selected data points from the input parameter space and provide a more informative and robust estimate of the behavior of the model output with respect to variation in the model input [20]. In this study, a variance-based Monte Carlo approach is used to estimate both the individual and total effect Sobol sensitivity indices for each of the model inputs [20,24,25]. A Latin hypercube sampling technique is used to select individual data points for the Monte Carlo evaluation in order to obtain efficient sampling of the entire input parameter space. The study reveals that rotational misalignment of the polarizing beam splitter and nonorthogonality in the polarizations of the two laser frequencies are most influential on the output uncertainty. The study also examines the variation in the sensitivity indices with variation in the input variables' uncertainties. Once the sensitivity indices are calculated, a multi-objective optimization problem is solved to estimate the maximum permissible input uncertainty for each parameter while keeping the periodic error below a predetermined value. The permissible input uncertainty is maximized while minimizing the probability of failure. A genetic algorithm (NSGA-II) for solving multi-objective problems [29,30] is used in the optimization problem, where the variables are only allowed to take certain discrete preset values. This study validates the results of the sensitivity analysis, as the optimized solution applies stringent restrictions on the uncertainties in rotational misalignment of the polarizing beam splitter and non-orthogonality in polarization of the two laser frequencies while allowing higher uncertainties on the other parameters. Further, an uncertainty analysis is performed to study the propagation of input uncertainties to output uncertainty in periodic error.
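To make the variance-based Monte Carlo procedure concrete, the sketch below estimates individual (Si) and total (STi) Sobol indices from two Latin hypercube sample matrices, using the standard Saltelli/Jansen estimators. This is illustrative Python rather than the thesis's MATLAB implementation, and a simple additive test function with known indices stands in for the periodic error model.

```python
import random

def lhs(n, d, rng):
    """Latin hypercube sample of n points in the unit cube [0,1]^d:
    each dimension is stratified into n intervals, one point per interval,
    with the intervals randomly permuted across dimensions."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [[cols[j][i] for j in range(d)] for i in range(n)]

def sobol_indices(f, d, n, seed=0):
    """Monte Carlo Sobol indices: Saltelli estimator for first-order Si,
    Jansen estimator for total-effect STi, from sample matrices A and B."""
    rng = random.Random(seed)
    A, B = lhs(n, d, rng), lhs(n, d, rng)
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mu = sum(fA + fB) / (2 * n)
    var = sum((y - mu) ** 2 for y in fA + fB) / (2 * n)
    S, ST = [], []
    for i in range(d):
        # A with column i replaced by the corresponding column of B
        AB = [A[k][:i] + [B[k][i]] + A[k][i + 1:] for k in range(n)]
        fAB = [f(x) for x in AB]
        S.append(sum(fB[k] * (fAB[k] - fA[k]) for k in range(n)) / (n * var))
        ST.append(sum((fA[k] - fAB[k]) ** 2 for k in range(n)) / (2 * n * var))
    return S, ST

# Additive test function 2*x0 + x1 with known indices S = [0.8, 0.2]:
S, ST = sobol_indices(lambda x: 2 * x[0] + x[1], d=2, n=20000)
```

For an additive function the individual and total effects coincide; interactive effects show up as STi exceeding Si, which is how the interaction tables in Chapter 6 are interpreted.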
The uncertainties obtained from the optimization process are used in the uncertainty analysis. Chapter 4 describes the techniques used to compute sensitivity and to find the permissible uncertainty. The study also explores the utility of optimization algorithms as a predictive tool in estimating setup misalignment errors in order to achieve desired performance levels. A particle swarm optimization approach [31-34] is used to determine the model input parameters (setup misalignments) given the periodic error data. These results are validated against experimental data and it is shown that the optimization was generally successful in predicting setup misalignments. The particle swarm optimization process is discussed in Chapter 5. Finally, the results of the analysis and the conclusions are discussed in Chapters 6 and 7, respectively.

CHAPTER 2
LITERATURE REVIEW

Laser interferometry has found multiple applications in systems which require high accuracy, high resolution displacement measurement. However, the accuracy of heterodyne interferometry is limited due to leakage-induced nonlinearities (i.e., non-cumulative periodic errors). Quenelle [1] first described this nonlinearity for heterodyne interferometers and predicted a worst-case peak-to-peak error of 5 nm with a period of one wavelength change of optical path. Sutton [2] verified this prediction experimentally by using a pressure scanning technique to measure periodic error. Over the last couple of decades, most research efforts in this field have focused on identifying the sources of periodic error, including the corresponding development of analytical models, as well as demonstrating measurement and compensation techniques. In an ideal heterodyne interferometer the polarizing beam splitter separates the two source frequencies into the measurement path and the reference path. The leakage of the two frequencies into both paths, however, results in periodic error.
Early investigations attributed this frequency leakage to nonorthogonality between the polarizations of the two laser beams and the ellipticity in the polarization of each beam [3]. Misalignment in the polarization axis of the two laser beams with the polarizing beam splitter, different transmission coefficients between the two arms of the interferometer, and mixing polarizer orientation errors were also found to produce periodic error [4]. Other investigators have studied the effect of ghost reflections, which can occur at every interface the laser beams traverse [5]. More recently, researchers have studied the influence of high velocity target motion and non-linearities in the phase measuring electronics amplifier on periodic error [6]. Beam shearing effects on variation of periodic error have also been observed [7]. Several different approaches have been employed to develop analytical models and measurement techniques for nonlinearity in heterodyne interferometry. Sutton [2] first used a pressure scanning technique to measure periodic error. However, pressure scanning is a slow process and cannot be used for real-time compensation of periodic error. Hou and Wilkening [10] described an interferometer configuration which used a beam splitter to generate a reference signal which, when compared to the measurement signal using phase measuring electronics, provided information on periodic error. Stone and Howard [11] described an analytical model for nonlinearities using a Jones calculus technique. They implemented a setup for measurement of optical path changes which involved the gross mechanical rotation of the entire interferometer assembly, thereby limiting its application for calibrating installed interferometers. Bobroff [4] described a method where amplitude modulation of the beat frequency signal was used to determine periodic error.
The drawback of this method is that it cannot differentiate between first and second order periodic errors. Badami and Patterson [12] suggested a frequency domain method of measuring periodic error. This method is capable of isolating nonlinearity due to optical misalignments from that due to the phase measuring electronics. Other researchers used spectrum analyzer data to predict periodic error [13]. More recently, Cosijns et al. [9] developed an analytical model for periodic error which accounts for misalignment in the polarization axis of the laser beam with the polarizing beam splitter, ellipticity in the polarizations of the independent laser beams, different transmission coefficients for each independent laser frequency, and nonorthogonality between the polarizations of the two laser beams, as well as misalignment of the mixing polarizer. This model considers all other contributing parameters to be ideal, including ideal corner cubes and ideal coupling of interferometer components with regard to polarization states. Other analytical models have also been suggested [14]. Much research effort has also focused on in situ compensation of periodic error [15-19]. This study uses the Cosijns et al. [9] model to study the sensitivity of periodic error to the contributing parameters. A local sensitivity approach is implemented to determine the variation of the output, in this case the periodic error, with respect to the input parameters about their nominal values. Although local sensitivity provides an estimate of the dependence of the model output on input variation, these measures are only informative at the point at which they are calculated. For non-linear models this is especially problematic. Therefore, the concept of global sensitivity is applied to estimate the importance of the input uncertainty on output variance for each input parameter [21,22]. Cukier et al.
[23] proposed a method based on conditional variances for calculating first order sensitivity indices using a Fourier amplitude sensitivity test (FAST) method. However, this method did not gain much popularity among practitioners because of its inability to calculate higher order indices; i.e., it was capable of estimating the effect of an individual input on the output variation, but was unable to calculate the effects of interactions of two or more variables on the output [20]. This issue was later addressed by Saltelli [21,22]. Sobol introduced global sensitivity indices, or Sobol sensitivity indices [25]. The Sobol sensitivity indices for a certain input parameter are representative of the amount of variance in the output which would be reduced by fixing that input. Sobol also explored the use of Monte Carlo methods for computing global sensitivity indices [26]. Subsequent research work focused on the implementation and augmentation of these techniques to improve accuracy and computational efficiency [27]. Saltelli suggested a method to calculate higher order sensitivity indices at lower computational cost [28]. This study uses a genetic algorithm to solve a multi-objective optimization problem where the permissible uncertainty in the input parameters is maximized while minimizing the probability of the periodic error being over 0.5 nm [29,30]. Also, the possibility of using a particle swarm optimization algorithm as a predictive tool in estimating setup misalignment errors is explored [31-34].

CHAPTER 3
PERIODIC ERROR IN HETERODYNE INTERFEROMETRY

Heterodyne Interferometer Description

Heterodyne laser interferometry lends itself to demanding noncontact displacement measuring applications requiring high accuracy, resolution, and range. Typical applications include position feedback sensors for high precision manufacturing equipment and for transducer calibration. Figure 3-1 shows a schematic diagram of an ideal interferometer setup.
For this ideal case, the two coherent, collinear, orthogonally polarized laser frequencies (f1 and f2) are perfectly split at the polarizing beam splitter and are directed into the measurement and reference arms of the interferometer. The measurement arm contains the moving retroreflector (or corner cube) mounted on the stage whose displacement is to be measured, while the reference arm retroreflector is stationary. Motion of the moving retroreflector causes the measurement arm frequency to be Doppler shifted by fd. The two frequencies from the measurement and reference arms then pass through a mixing linear polarizer (to cause interference) and are collected by the photodetector, which carries the interference signal to the phase measuring electronics (often using fiber optics). This Doppler shifted measurement signal is compared to a reference interference signal in the phase measuring electronics and is used to determine the displacement information. However, due to imperfect optics (Figure 3-2), there is leakage of each frequency into both the measurement and reference arms, thereby contaminating the measurement interference signal. As described previously, this frequency leakage gives rise to nonlinearities in the displacement signal.

Sources of Error in Heterodyne Interferometry

Error in heterodyne interferometry can be broadly classified into two categories: 1) periodic error, which is noncumulative in nature and repeats for every unit change in the optical path; and 2) nonperiodic error, which depends on several factors, including interferometer alignment and the environmental conditions.

Sources of Periodic Error

Misalignment in the optical setup and an imperfect laser beam emitted from the laser source are the major sources of frequency leakage in heterodyne interferometers.
1) Nonorthogonality of laser beams: Ideal separation of the coherent, collinear laser beam into the separate arms at the polarizing beam splitter (PBS) requires that the two frequencies emitted from the laser source are orthogonally (plane) polarized with respect to each other. Nonorthogonality leads to leakage of the frequencies into both arms, resulting in frequency mixing.

2) Rotational misalignment of laser beam with PBS: For ideal separation at the PBS, the incoming laser beam should be accurately oriented with respect to the PBS. Rotational misalignment of the incoming laser beam with the PBS is a source of frequency leakage. In some cases, a half wave plate can be used to accurately align the laser beam with the PBS.

3) Ellipticity of the laser beams: Ideal behavior requires that the two frequencies which are incident on the PBS are plane polarized. However, a departure from planar polarization, or ellipticity, can exist.

4) Rotational misalignment of mixing linear polarizer (LP): Rotational misalignment of the mixing polarizer located before the photodetector (connected to the phase measuring electronics) affects the interference signal by selecting different magnitudes of the measurement and reference arm signals in the interferometer.

5) Transmission coefficients: Different transmission coefficients for the two orthogonally polarized frequencies at the PBS cause the magnitudes of the light signals in the measurement and reference arms to differ.

Other sources of periodic error include ghost reflections, which occur every time a laser beam passes through an optical interface in its path. Nonlinearities in the phase measuring electronics are another source of periodic error. Badami and Patterson [12] suggested a method of using spectral data of the interference signal to calculate periodic error. This isolates errors due to optical sources by eliminating the influence of the phase measuring electronics.
However, the measurement is then susceptible to errors in the spectrum analyzer. Other researchers have identified overlap in the interference terms due to high Doppler shifts during high velocity motion as a source of error [6]. Beam shear, which may arise as a result of cosine misalignment error in the interferometer setup, has also been shown to affect periodic error [7].

Non-periodic Sources of Error

Non-periodic error may be random or cumulative in nature. Sources of non-periodic error include:

1) Abbe error: Abbe error, also called sine error, exists whenever the measurement beam axis is not collinear with the motion axis (i.e., an Abbe offset exists) and there is a rotational error in the motion axis.

2) Cosine error: Cosine error occurs when the measurement beam is not parallel to the motion axis. Whenever cosine error exists, the measured displacement is always smaller than the true displacement (a bias is introduced). It is important to note the difference between Abbe error and cosine error. Abbe error occurs when there is an offset between the parallel measurement and motion axes, while cosine error occurs when there is angular misalignment between the measurement beam and the motion axis.

3) Deadpath error: Deadpath error occurs when there is a difference in the distance from the PBS to the reference arm and measurement arm corner cubes at initialization and there is an uncompensated change in the refractive index of the propagating medium (typically air).

4) Thermal effects: Deformation of the interferometer associated with changes in temperature gives rise to error in displacement measurement.

5) Refractive index of propagating medium: Changes in the refractive index of the propagating medium cause apparent displacements, even with no motion of the moving target (the retroreflector in this study).

6) Wavelength stability: Changes in the wavelength of the laser source can give rise to error in displacement measurement.
Typically, laser sources take some time to stabilize and emit a constant wavelength.

Analytical Model for Heterodyne Interferometry

Several researchers have developed analytical models for heterodyne interferometry. Stone and Howard [11] describe a Jones matrix calculus method of estimating periodic error, while Bobroff [4] used information ascertained from the amplitude modulation of the beat frequency signal to calculate periodic error. Cosijns et al. [9] developed an analytical model for periodic error taking into consideration the rotational misalignment of the input laser beam with respect to the PBS, nonorthogonality of the linearly polarized laser beams, ellipticity in the polarizations of the laser beams, rotational misalignment of the mixing LP, and the different transmission coefficients of the PBS. All other parameters are assumed to be ideal. This study uses the Cosijns et al. model to compute the sensitivity of the periodic error with respect to the contributing optical imperfections. The nonlinearity in displacement as defined by Cosijns et al. is expressed in terms of the nonlinear phase shift as

Δφ = tan⁻¹[ (A + B sin(2φ) + C cos(2φ)) / (D + E sin(2φ) + F cos(2φ)) ]   (3-1)

where φ is the ideal (Doppler) phase and the six coefficients A through F are functions of the optical imperfections: the polarization misalignment angles α and β, the ellipticities d1 and d2, the transmission coefficients of the PBS, and the mixing polarizer misalignment θ. The full coefficient expressions are given in Cosijns et al. [9]. The nonorthogonality and the rotational misalignment of the laser beams with respect to the PBS axes are given by the angles α and β. For orthogonal, but rotationally misaligned, beams α = β. When the beams are perfectly aligned with the PBS, α = β = 0.
The ellipticities in the polarizations of the beams are defined as d1 and d2, and the transmission coefficients describe the (generally unequal) transmission of the two polarizations at the PBS. The angle θ represents the rotational misalignment of the mixing polarizer. In this study, the model is modified in order to introduce an error term in the nonorthogonality, so that β = α + βerr. The nonlinearity in the measured displacement is determined from the nonlinear phase shift as

ΔL = (λ1 / (4πn)) Δφ   (3-2)

where n is the refractive index of air and λ1 is the wavelength of the laser frequency traversing the measurement arm. Here, it is important to note that, although two different frequencies are used, the difference between the frequencies is very small; therefore, the difference between the two wavelengths is also very small. The refractive index of air is assumed to be constant at 1 for the purposes of this study.

The fast Fourier transform (FFT) of this displacement nonlinearity may be used to isolate first and second order periodic error. Figure 3-3 shows the steps for computing the periodic error. Part (a) shows the measured displacement with periodic error superimposed on it. As the amplitude of displacement is much larger than the periodic error, the periodic error cannot be readily observed. Part (b) shows the periodic error only, obtained by subtracting a linear fit from the constant velocity displacement data. Part (c) shows the FFT of the periodic error with content due to first and second order periodic errors. Higher order periodic error terms may also be present, but typically have negligible magnitudes.

Experimental Verification of Analytical Model

Figure 3-4 shows the experimental setup used to verify the Cosijns et al. model for periodic error in heterodyne interferometry. The laser beam emitted from the laser head passes through a half wave plate to a nonpolarizing beam splitter, which splits it into two parts: the reference part (used in the phase measuring electronics) and the measurement part (which travels to the interferometer).
The reference part is collected using the reference fiber optic pickup and forms the reference signal. The measurement part passes through the PBS, where one frequency is (ideally) directed into the reference arm of the interferometer with the stationary retroreflector, while the other frequency is directed into the measurement arm with the moving retroreflector. The frequency in the measurement arm is Doppler shifted during motion of the moving retroreflector. The two laser beams recombine at the PBS and are directed to the mixing linear polarizer. The interference signal is then collected by the measurement fiber optic pickup. A half wave plate is used to artificially vary the rotational misalignment between the source laser beams and the PBS. The orientation of the mixing polarizer can also be varied to vary periodic error magnitudes. The nonorthogonality of the two frequencies emitted from the laser source, the ellipticities of the two laser beams, and the different transmission coefficients are errors inherent in the system and cannot be externally manipulated in this setup. Experiments are carried out for a range of α and θ values, where α is the rotational misalignment between the laser beam and the PBS and θ is the rotational misalignment of the mixing linear polarizer.

Figure 3-1. Schematic representation of ideal heterodyne interferometer.

Figure 3-2. Actual interferometer setup with frequency leakage.

Figure 3-3. Data analysis and periodic error computation.

Figure 3-4. Experimental heterodyne interferometer setup.

CHAPTER 4
SENSITIVITY AND UNCERTAINTY ANALYSIS OF PERIODIC ERROR MODEL

This work studies the sensitivities of periodic error with respect to the contributing optical imperfections included in the Cosijns et al. model. Saltelli et al.
define sensitivity as "the study of how the uncertainty in the output of a model can be apportioned to different sources of uncertainty in model input." There are several possible motivations for a modeler to perform a sensitivity analysis of a given model [20]:

1) Factor prioritization: A factor prioritization setting assists in determining which input parameter, when fixed, would result in the largest reduction of output variation. This allows the input variables to be ranked in order of importance according to their effect on reducing output variance. Sensitivity analysis therefore helps in identifying in which direction the research effort should be focused in order to reduce output variance, and in identifying which input parameters should be more accurately measured.

2) Factor fixing: A factor fixing setting is used to identify which factors, or groups of factors, have the least effect on output variance. The modeler can then confidently fix these factors at any value within their range without significantly affecting the variation of the output. This is particularly helpful in models involving a large number of input parameters; identifying the parameters which have little bearing on output variation allows the modeler to simplify the model.

3) Variance cutting: The variance cutting setting is used for applications where the analyst wants to ensure that the output variance is restricted below a certain value.

4) Factor mapping: This setting is used to identify which parameters, or combinations of parameters, are responsible for driving the output into a particular region of the output space.

This study focuses on identifying which parameters (optical imperfections) in the heterodyne interferometry setup are most influential on periodic error. Typical errors encountered when completing a sensitivity analysis are classified into three categories [20]. Type I errors occur when a noninfluential parameter is identified as an important one. Type II errors occur when an important factor is identified as a noninfluential one. Type III errors are typically framing errors, where the sensitivity analysis is not set up so as to provide accurate results.

Local Sensitivity Analysis

Local sensitivity analysis involves computing the derivative of the model output with respect to the input. The derivative ∂Y/∂Xi of a model output Y with respect to an input variable Xi can be thought of as the mathematical definition of the sensitivity of the output with respect to that input. However, this definition is local in nature: the derivative must be evaluated at a fixed point in the input space and therefore provides information about the variation in output only at that point. This is particularly problematic for nonlinear problems. For linear models, behavior at points away from the evaluation location can be linearly extrapolated, but this is not possible for nonlinear problems. Also, local sensitivity methods do not account for the uncertainty in the input variables and are therefore not recommended for models with uncertain inputs. For high dimensional models with a large number of input parameters, it becomes particularly difficult to compute sensitivities with respect to each variable over the entire range of the input parameter space. However, local sensitivity analysis does aid in identifying those inputs that are most influential on the output. Input/output scatter plots provide a visual depiction of the variation of the output with respect to each input parameter. Figure 4-1 and Figure 4-2 show the scatter plots for the first and second order periodic error, respectively, for each of the contributing parameters. The shape of the scatter plot is suggestive of whether the output is dependent on that particular input.
Scatter plots with significant structure, or good shape, indicate that the output is dependent on that particular input variable, while those with relatively little structure, or poor shape, indicate that the influence of that input on the output is negligible. For example, Figure 4-1 suggests that first order periodic error depends mainly on βerr, while Figure 4-2 shows that second order periodic error is most strongly influenced by α. However, while scatter plots do provide an initial estimate of important factors, they consider each factor independently and do not capture interactive effects of different input parameters in driving output variation. Two factors are said to interact when their effect on the output cannot be expressed as the sum of their individual effects. Also, scatter plots provide only a qualitative estimate of sensitivity, as one cannot easily quantify the importance of each parameter just by studying them.

Global Sensitivity Analysis

Although local sensitivity analysis does produce a measure of the local response of the outputs obtained by varying individual input parameters one at a time, it is ineffective for exploring the entire input parameter space in the case of uncertain inputs. The volume of the space explored is zero, as the sensitivity is calculated only at one point with respect to one parameter while all the other parameters are kept constant. In order to obtain better estimates of the variation of the outputs over the entire input parameter space, the concept of global sensitivity analysis is introduced. In this study, a global sensitivity analysis of the Cosijns et al. model is conducted to estimate which imperfection(s) in the optical setup of a heterodyne interferometer is/are most influential on periodic error. Given this information, efforts in aligning and setting up the interferometer may be optimized.
Sobol's global sensitivity method is used here to calculate the Sobol global sensitivity indices. A linear regression based method is also used to obtain similar results. Higher order terms of up to the fourth order are used in the regression analysis to provide a better approximation of the model by accounting for the nonlinearities in the Cosijns et al. model. Both of these methods are variance based methods, which allow for a better exploration of the entire input parameter space.

The Cosijns et al. model produces accurate estimates of the periodic error given a deterministic set of inputs. As noted previously, a sensitivity analysis for the model can be performed by computing the model output while varying one parameter at a time. However, this assumes that the model inputs are accurately known. This deterministic computation of sensitivity does not take into account uncertainties in the input parameters. Also, sensitivities are computed at fixed discrete points in the input space and are not representative of the entire input parameter space in general. Variance based methods, on the other hand, enable the computation of output sensitivities given uncertain inputs. Variance based methods compute the output values at a number of points in the input parameter space and use information ascertained from a large number of model evaluations to calculate the sensitivity of the output with respect to each input. Global sensitivity measures provide an estimate of how much output variance would be reduced if a particular input parameter were fixed at a certain value. Some features of variance based methods are [20]:

1) The sensitivity indices are model independent; they depend only on the value of the model output without any bearing on the nature of the model.

2) They are capable of capturing the influence of the full range of input uncertainty on model output.
3) They are capable of estimating the effect each input variable has on the output independently, as well as interactive effects with other input variables.

4) Variance based methods are capable of handling the effects of groups of different input factors when there is logical dependence of one or more input factors on others.

Various sampling techniques have been suggested for sampling individual input parameter sets, including random sampling, Latin hypercube sampling, and quasi-random number generators. This study employs the Latin hypercube sampling method for generating the input parameter data set. In Latin hypercube sampling, each input parameter uncertainty range is divided into M intervals, where M is the number of samples, and one sample appears in each interval. The size of each interval is inversely proportional to the probability density over that interval. The individual elements are then selected so as to satisfy the Latin hypercube condition. It is important to note that, in order to employ Latin hypercube sampling, the uncertainty range of each input parameter must be divided into the same number M of intervals. The advantage of Latin hypercube sampling is that it provides a better spread over the entire input parameter space, thereby providing a better representation of the input space. It also avoids the formation of clusters in the input parameter space which can occur when using random sampling techniques. Figure 4-3 and Figure 4-4 demonstrate a randomly sampled data set and a Latin hypercube data set, respectively. As can be seen from the figures, the data points selected using Latin hypercube sampling are more effectively distributed over the entire input parameter space.

The Sobol Method

Sobol suggested a Monte Carlo method for computing global sensitivity indices of arbitrary groups of factors [26].
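The Latin hypercube construction described above can be sketched in Python. This is an illustrative sketch only (the function name and sample sizes are chosen for the example, not taken from this work), and it assumes each parameter has been scaled to the unit interval:

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Draw a Latin hypercube sample on the unit hypercube [0, 1]^k.

    Each parameter's range is split into n_samples equal-probability
    intervals and exactly one point is drawn from each interval, with
    the interval pairings randomized across parameters.
    """
    rng = np.random.default_rng(rng)
    edges = np.arange(n_samples) / n_samples           # interval lower edges
    u = rng.random((n_samples, n_params)) / n_samples  # offset inside interval
    samples = edges[:, None] + u
    # Independently permute each column so interval pairings are random.
    for j in range(n_params):
        samples[:, j] = samples[rng.permutation(n_samples), j]
    return samples

# Example: 100 samples of 7 parameters (the number of inputs in the
# Cosijns et al. model); each column covers all 100 intervals exactly once.
X = latin_hypercube(100, 7, rng=0)
```

Each resulting column is a random permutation of one point per interval, which is what produces the even coverage seen in Figure 4-4.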
The individual effect of an input is defined as the reduction in output variance when the input is fixed at a certain value. The sensitivity index can be defined in terms of the unconditional variance of the output, V(Y), and the variance V[E(Y|Xi)] of the output Y when the input parameter Xi is fixed (Eq. 4-1):

Si = V[E(Y|Xi)] / V(Y)   (4-1)

This first order effect represents the main effect of a certain variable on the output without taking into consideration interaction effects. These individual effect sensitivity indices are importance measures in terms of determining which parameters most influence output variation and can be used in a factor prioritization setting.

The total effect sensitivity index takes into consideration the interactive effects of the various input parameters on output variance. The total effect sensitivity index can be expressed in terms of the unconditional output variance V(Y) and the output variance V[E(Y|X~i)], which is the variance computed when all parameters apart from Xi are maintained at fixed values. The total effect sensitivity index can then be expressed by Eq. 4-2 as

STi = 1 − V[E(Y|X~i)] / V(Y)   (4-2)

The total effect sensitivity indices can be used in a factor fixing setting. If STi ≈ 0, then Xi can be fixed at any value within its input range without affecting the output variance.

Calculation of Sobol sensitivity indices

A Monte Carlo method is used to compute the Sobol sensitivity indices, where larger sample sizes yield increased accuracy of the results. The step-by-step procedure adopted to calculate the sensitivity indices follows.

1) Define two N × k matrices, A and B, using a Latin hypercube sampling technique, where N is the number of samples and k is the number of input parameters. Each column of a matrix corresponds to an input parameter, with values within its input uncertainty range. Note that A and B are generated independently and are separate matrices.
Each row of A (and similarly of B) is one sampled input point (x_j1, x_j2, …, x_jk), j = 1 to N.

2) Define a Ci matrix for each input parameter, obtained by replacing the ith column (i = 1 to k) of the B matrix with the ith column of the A matrix.

3) Compute the model output using the input samples from the A, B, and Ci matrices; this generates the output vectors yA = f(A), yB = f(B), and yCi = f(Ci). Each row of the A, B, and Ci matrices forms a single input data point.

4) The individual effect sensitivity indices can then be computed by Eq. 4-3 as

Si = [ (1/N) Σ_{j=1..N} yA^(j) yCi^(j) − f0² ] / [ (1/N) Σ_{j=1..N} (yA^(j))² − f0² ]   (4-3)

where

f0² = [ (1/N) Σ_{j=1..N} yA^(j) ]²   (4-4)

5) The total effect sensitivity indices are computed by Eq. 4-5 as

STi = 1 − [ (1/N) Σ_{j=1..N} yB^(j) yCi^(j) − f0² ] / [ (1/N) Σ_{j=1..N} (yA^(j))² − f0² ]   (4-5)

Characteristics of Sobol sensitivity indices

1) Si is a measure of how much the output variance can be reduced by fixing Xi. This is independent of the interactive effects or the total effect sensitivity indices.

2) STi is greater than Si in the presence of interactive effects; it is equal to Si when there are no interactive effects.

3) The difference between STi and Si is a measure of the interactive effects of a particular parameter.

4) If STi = 0 (Si is then also zero), the parameter has no influence on the output and can be fixed at any value within its input uncertainty range.

5) The sum of all Si is equal to 1 for models with no interactive effects (additive models) and is less than 1 for nonadditive models (models with interactive effects). The difference 1 − Σ Si is an indicator of how significant the interactive effects are in the model.

6) The sum of all STi is always greater than 1 for models with interactive effects and equal to 1 for additive models.

Linear Regression Method for Global Sensitivity

A linear regression method for calculating sensitivity is also explored in this study.
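Before turning to the regression approach, the step-by-step Monte Carlo estimator of Eqs. 4-3 and 4-5 can be sketched in Python. The additive test function, sample counts, and helper name below are chosen for the example only and are not part of this work:

```python
import numpy as np

def sobol_indices(model, A, B):
    """Estimate first order (Si) and total effect (STi) Sobol indices
    using the A/B/Ci matrix procedure (Eqs. 4-3 and 4-5)."""
    N, k = A.shape
    yA = model(A)
    yB = model(B)
    f0_sq = yA.mean() ** 2                  # f0^2 of Eq. 4-4
    var = (yA ** 2).mean() - f0_sq          # unconditional variance V(Y)
    Si, STi = np.empty(k), np.empty(k)
    for i in range(k):
        Ci = B.copy()
        Ci[:, i] = A[:, i]                  # ith column of B replaced by A's
        yC = model(Ci)
        Si[i] = ((yA * yC).mean() - f0_sq) / var
        STi[i] = 1.0 - ((yB * yC).mean() - f0_sq) / var
    return Si, STi

# Illustration on an additive test function Y = X1 + 2*X2 (X3 inert),
# with all inputs uniform on [0, 1].
rng = np.random.default_rng(1)
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
A = rng.random((100_000, 3))
B = rng.random((100_000, 3))
Si, STi = sobol_indices(model, A, B)
```

For this additive test function the interaction terms vanish, so Si ≈ STi, and the three first order indices are approximately 0.2, 0.8, and 0 (the inert input).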
The actual model is replaced by a simplified model expressed as a linear combination of the input variables multiplied by the corresponding regression coefficients, in order to obtain an expression (Eq. 4-6) of the form

Y = β0 + Σ_{i=1..k} βi Xi   (4-6)

A Monte Carlo method is used to calculate the regression coefficients. An input parameter data set is identified using the Latin hypercube sampling technique to obtain data points from the k-dimensional unit hypercube, where k is the number of input parameters. The input data set is assembled into an N × (k+1) matrix X, whose first column is all ones (for the constant term) and whose remaining columns contain the sampled input parameters, where N is the number of data points. A corresponding output vector y = [y1 y2 … yN]ᵀ is obtained by evaluating the model at each input data point (each row of the X matrix). Note that in this study two separate output vectors (for first and second order periodic error) must be calculated, and different regression coefficients are obtained for each. Given the input data set and the output vector, a least squares method (Eq. 4-7) is used to calculate the regression coefficients as

β = (XᵀX)⁻¹ Xᵀ y   (4-7)

The magnitudes of the regression coefficients then provide an estimate of the importance measure, or sensitivity, of the corresponding input parameter. This analysis is applicable to linear models, where the regression model provides an accurate estimate of the model in question. However, the Cosijns et al. model is highly nonlinear. Therefore, the regression analysis is modified to include higher order terms, which provides a better estimate of the original model. In this study, a regression model including terms up to fourth order was found to provide a good approximation of the original model. The regression model can then be expressed in the form (Eq. 4-8)

Y = β0 + Σ_i βi Xi + Σ_i Σ_{j≥i} βij Xi Xj + Σ_i Σ_{j≥i} Σ_{l≥j} βijl Xi Xj Xl + Σ_i Σ_{j≥i} Σ_{l≥j} Σ_{m≥l} βijlm Xi Xj Xl Xm   (4-8)

For the Cosijns et al. model with seven input parameters there are 330 regression coefficients.
The regression coefficients are computed using the same least squares formula. However, the input matrix in this case must be augmented to include the higher order terms, giving an input matrix of dimension N × 330. There are, however, only seven input sensitivity indices to be computed. The sensitivity index for a certain input parameter is computed by adding all the regression coefficients corresponding to any combination involving that parameter (Eq. 4-9):

S_Xi = Σ_{T ∋ i} βT   (4-9)

where T ranges over the index combinations of the first through fourth order terms in Eq. 4-8 that contain parameter i. The advantage of extending the model to include higher order terms is that it can also expose the interactive effects of several input parameters acting together to drive output variance. The regression coefficients can be ranked in order of magnitude to illustrate the importance measures of different combinations of input parameters; the combinations corresponding to the regression coefficients of largest magnitude are the most influential. It is also possible to have higher order terms of a single parameter. Combinations of different input parameters with correspondingly large regression coefficients suggest interactive effects. Scatter plots of model output can be developed with respect to the highest ranking combinations of parameters.

Computation of Maximum Permissible Input Variation

In setting up a heterodyne interferometer, it is of interest to estimate how much uncertainty in each setup parameter is permissible in order to maintain the output variation below a certain acceptable value. Extra effort can then be exerted toward accurate alignment of the setup parameters whose misalignment would significantly increase periodic error, while less attention can be given to those parameters with little influence on periodic error.
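The probability of failure used in the optimization described next can be estimated by Monte Carlo sampling. The following sketch is illustrative only: the stand-in model and standard deviations are invented for the example and do not represent the Cosijns et al. model or the values used in this work:

```python
import numpy as np

def probability_of_failure(periodic_error, sigmas, n_samples=20_000, rng=None):
    """Monte Carlo estimate of the probability that the periodic error
    magnitude exceeds 0.5 nm, given normally distributed setup parameters
    with standard deviations `sigmas` about nominal (zero) values.

    `periodic_error` maps an (n, k) parameter array to an (n,) array of
    error magnitudes in nm; here it stands in for the Cosijns et al. model.
    """
    rng = np.random.default_rng(rng)
    X = rng.normal(0.0, sigmas, size=(n_samples, len(sigmas)))
    return np.mean(periodic_error(X) > 0.5)

# Toy stand-in model: error magnitude grows with the parameter vector norm.
toy_model = lambda X: np.linalg.norm(X, axis=1)
pf = probability_of_failure(toy_model, sigmas=[0.1, 0.1, 0.2], rng=3)
```

Widening the input standard deviations increases the estimated failure probability, which is exactly the trade-off the two optimization objectives balance.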
A multi-objective optimization problem is solved in order to determine the maximum permissible input uncertainties while minimizing the probability of the periodic error exceeding a preselected value. A Monte Carlo method is used to predict the probability of failure. Evolutionary algorithms are advantageous in solving optimization problems when the nature of the objective function surface is not easily predictable. While derivative-based optimization algorithms may be computationally more efficient, they are susceptible to finding local solutions (local minima) in the permissible design space. Evolutionary optimization algorithms are more effective at finding the global optimal solution [33,34]. Evolutionary algorithms efficiently handle design constraints by eliminating (or giving a lower rank to) individuals which violate constraints, and they are well suited to optimization problems where the variables are allowed to take discrete values.

In this study, a nondominated sorting genetic algorithm (NSGA-II) is used for solving the optimization problem [29,30]. The NSGA-II algorithm can handle multiple objective functions, as well as constraints, making it suitable for this problem. In genetic algorithms, a population is a set of different individuals distributed within the input design space; the size of the population is decided by the analyst. Each individual is made up of the different variables being optimized, and each variable in an individual is called a gene. Figure 4-5 shows the flow chart for the NSGA-II algorithm. First, an initial population is randomly generated from the permissible input data space, which forms the first generation. The objective function is evaluated for each individual in the population, and the individuals are ranked on the basis of their objective function values. In a multi-objective optimization problem, the individuals are ranked according to the domination criterion.
The domination criterion is as follows: a feasible individual A dominates another feasible individual B if both of the following conditions are true:

1) Solution A is no worse than B in all objectives.

2) Solution A is strictly better than B in at least one objective.

If two individuals are compared and neither dominates the other, they are said to be nondominated with respect to each other. A Pareto optimal front of the two objective functions can also be obtained; it is the curve joining all the nondominated individuals in a generation. All individuals on the Pareto optimal front are assigned rank 1. The rank 1 individuals are then removed from the population, and the remaining individuals are once again ranked on the basis of the domination criterion; the nondominated individuals in this reduced population are assigned rank 2. This process of ranking and removing the top ranked individuals is repeated until every individual in the population has received a rank. The higher ranked individuals are then selected as parents.

A crossover operation is then completed to form new individuals; it typically consists of mixing and matching genes from two or more parents. There are various methods of crossover, which are not discussed here. A mutation operation is also performed on a certain fixed fraction (decided by the analyst) of the population, where new individuals are created by adding random values to each gene. Crossover and mutation operations are carried out to form a new population of the same size as the original, called the child population. The value of the objective function is then calculated for each individual in the child population.
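The domination criterion and the rank-1 (Pareto front) selection described above can be sketched as follows (illustrative Python, assuming both objectives are minimized; the example objective vectors are invented):

```python
def dominates(a, b):
    """Return True if objective vector `a` dominates `b` (minimization):
    no worse in every objective and strictly better in at least one."""
    no_worse = all(x <= y for x, y in zip(a, b))
    strictly_better = any(x < y for x, y in zip(a, b))
    return no_worse and strictly_better

def pareto_front(population):
    """Rank-1 individuals: those not dominated by any other individual."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Example with two objectives, both minimized:
pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
front = pareto_front(pop)   # (3.0, 3.0) is dominated by (2.0, 2.0)
```

Repeatedly calling `pareto_front` on the remaining individuals and removing each front reproduces the rank-1, rank-2, … assignment used by NSGA-II.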
The individuals in the new population, together with the individuals in the original population, are ranked according to their corresponding objective function values. The best individuals are then selected to form the next generation (of the same size as the original); this process is called elitism. If the objective function values of any of the individuals in the current generation are satisfactory, the operation is stopped. Otherwise, crossover and mutation are applied to form the next generation and the process continues until a satisfactory solution is obtained. Two main features of evolutionary algorithms are the exploitative and exploratory parameters. The exploitative parameters use information already available to close in on the optimal solution, while the exploratory parameters explore the input parameter space for better solutions. In genetic algorithms, the crossover operation exploits the available information by selecting the best parents to generate children, while the mutation operation explores the design space for better individuals by adding randomness to the creation of the child population. The mutation operation assists in finding the global minimum of the objective function. Objective function: The optimization problem is set up to maximize the standard deviation σi of each input variable (i = 1, ..., 7) while minimizing the probability of failure Pf, where failure is defined to occur if the magnitude of the first or second order error is greater than 0.5 nm. Uncertainty Analysis The permissible input uncertainty ranges, once obtained from the optimization algorithm, can be used to perform an uncertainty analysis in order to estimate the uncertainty in the output. A Monte Carlo method is used to compute output uncertainty, with a Latin hypercube sampling technique used to sample the input parameter space.
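The failure-probability evaluation that the objective function relies on can be sketched as follows. Latin hypercube samples are drawn from the input space and the fraction of samples whose periodic error magnitude exceeds 0.5 nm is counted. Here `error_fn` is a hypothetical stand-in for the Cosijns et al. periodic error model, and mapping the unit samples to ±3σ about the nominal values is an assumption made for illustration only:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n stratified uniform(0, 1) samples in d dimensions: exactly one
    sample falls in each of the n equal-width bins along every axis,
    with the bin order shuffled independently per dimension."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

def probability_of_failure(error_fn, sigmas, threshold=0.5, n=500, seed=0):
    """Fraction of sampled parameter sets whose |periodic error| exceeds
    the threshold, with each input spanning +/-3 sigma about nominal."""
    rng = np.random.default_rng(seed)
    x = (latin_hypercube(n, len(sigmas), rng) - 0.5) * 6.0 * np.asarray(sigmas)
    errors = np.array([error_fn(row) for row in x])
    return float(np.mean(np.abs(errors) > threshold))
```

The stratification is what distinguishes Figure 4-4 from the plain random sampling of Figure 4-3: every axis-aligned bin contributes exactly one sample, so the input space is covered evenly even for modest sample sizes.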
The output uncertainty can then be calculated by computing the mean and standard deviation of the output for a large number of samples. Figure 4-1. Scatter plots for first order periodic error considering all input parameters individually. Figure 4-2. Scatter plots for second order periodic error considering all input parameters individually. Figure 4-3. Random sampling. Figure 4-4. Latin hypercube sampling. Figure 4-5. Flow chart for the NSGA-II optimization algorithm. CHAPTER 5 PREDICTING INTERFEROMETER SETUP MISALIGNMENTS The periodic error obtained from the phase measuring electronics can be used to determine the contributing interferometer setup misalignments, so that corrective measures can then be taken to minimize errors. An optimization problem can be solved to find the setup misalignments which would produce the same periodic error as the measured error. The objective function of the optimization problem is defined so as to minimize the difference between the periodic error measured using the phase measuring electronics and the periodic error predicted using the Cosijns et al. model with the setup misalignment values of the candidate solution: F = (FOactual − FOcandidate)² + (SOactual − SOcandidate)², where FOactual and SOactual are the measured first and second order periodic errors, while FOcandidate and SOcandidate are the periodic errors computed using the Cosijns et al. model for a candidate solution in the optimization process. A particle swarm optimization (PSO) technique is used to solve the optimization problem. The main difference between genetic algorithms (GA) and PSO is the absence of a selection mechanism in PSO. While GAs use a selection mechanism, such as crossover, to generate child individuals, PSO uses information based on the performance of each individual in the swarm, as well as its history, to determine its new position on the basis of a velocity vector.
Although GAs perform better in identifying global minima [33, 34], they are best suited to solving discrete-valued optimization problems. The continuous nature of the velocity vector in PSO, on the other hand, lends itself to the solution of optimization problems with continuous design variables. In PSO, a swarm of candidate individuals is initialized in the design space. This swarm is analogous to the population in genetic algorithms and is typically described as a set of particles in the design space, each with a position and a velocity. The value of the objective function is calculated and the best individual in the swarm is identified. The velocity vector consists of four terms: the inertia term, the neighborhood term, the global term and the personal term. The C coefficients determine the weight of each term (its contribution to the velocity), while the R values are random numbers. The velocity vector is computed by Eq. 5-1 as

Vi = w·Vi−1 + Cn·Rn·(Xn − Xi) + Cg·Rg·(Xg − Xi) + Cp·Rp·(Xp − Xi) (5-1)

where w = inertia weight, Vi−1 = velocity in the previous step, Cn = neighborhood weight, Rn = random number, Xn = position of the neighborhood best, Cg = global weight, Rg = random number, Xg = position of the global best, Cp = personal weight, Rp = random number, Xp = position of the personal best, and Xi = current position. The new position of the particle can then be computed by Eq. 5-2 as

Xi+1 = Xi + Vi (5-2)

The inertia weight controls the influence of the previous velocity on the new velocity. A large inertia weight facilitates greater exploration by encouraging the search of new areas in the design space; this gives the optimization process its global nature. The inertia weight is usually reduced linearly with each iteration and finally drops to zero. The neighborhood weight makes use of the intelligence of particles in the local neighborhood in determining the velocity vector. This neighborhood term is not used in many standard PSO algorithms.
The global weight determines how much influence the global best particle has on the velocity vector, while the personal weight determines how much the velocity vector of a particle depends on its personal best. In this study the velocity equation is truncated so as to include only the global and personal terms. It was found through trial and error that the inertia term and the neighborhood term do not contribute to improving the result or increasing computational efficiency, and they are therefore not considered; the results obtained using only the personal and global weights were found to be sufficiently robust. The modified velocity vector is defined by Eq. 5-3 as

Vi = Cg·Rg·(Xg − Xi) + Cp·Rp·(Xp − Xi) (5-3)

Figure 5-1 shows an example of the optimization process, where the swarm is plotted at different stages with arrows indicating the velocity components. Red dots indicate particles close to the optimum solution, while blue dots indicate particles furthest away. The migration toward a single combination of the two alignment angles by the 50th iteration identifies the parameter combination that yields the measured first and second order periodic error. For Figure 5-1 the optimization is performed with target first and second order errors of 10 nm each; the optimization process returns a combination of the two alignment angles (the second equal to 6.6) which would produce this combination of periodic error. In this study, the optimization algorithm was initially run taking into consideration all seven variables/setup parameters. However, in most interferometer setups, only the rotational alignment of the laser beams relative to the polarizing beam splitter and the rotational alignment of the mixing linear polarizer are controlled, while the other parameters depend on component manufacturing and cannot be changed in situ. The optimization algorithm is therefore modified to consider only these two parameters.
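A minimal sketch of a PSO iteration using only the global and personal terms of Eq. 5-3 follows. The bounds, coefficient values, and the quadratic test function are illustrative assumptions, not the thesis settings:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, cg=1.5, cp=1.5, seed=0):
    """Particle swarm with the truncated velocity update: attraction
    toward the global best and each particle's personal best only."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    pbest_x = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest_x[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        rg = rng.random((n_particles, 1))  # random numbers Rg
        rp = rng.random((n_particles, 1))  # random numbers Rp
        v = cg * rg * (gbest - x) + cp * rp * (pbest_x - x)  # Eq. 5-3
        x = np.clip(x + v, lo, hi)  # Eq. 5-2, clipped to the bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest_x[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        gbest = pbest_x[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())
```

For a convex bowl such as f(x, y) = (x − 1)² + (y + 2)², the swarm collapses onto the minimizer within a few dozen iterations, mirroring the particle migration shown in Figure 5-1.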
It is observed that most periodic error can be corrected by modifying just these two parameters. In this study three different types of analysis are performed. First, the optimization problem is solved by supplying the PSO algorithm with a range of first and second order periodic error magnitudes and calculating the combination of the two alignment angles which would produce that error; Figure 5-2 shows the data flow chart. Second, the optimization is applied to experimental periodic error data obtained over a range of the two angle settings in the experimental setup; Figure 5-3 shows the data flow chart. Lastly, as a check for repeatability, periodic errors computed with the Cosijns et al. model over a range of the two angle settings are supplied as input, and the angles returned by the PSO are compared with those supplied to the model; Figure 5-4 shows the data flow chart. Figure 5-1. PSO process. Figure 5-2. Data flow chart for PSO with periodic error input (input: first and second order periodic error; PSO: run the PSO algorithm; output: the alignment angle values which would produce the same error). Figure 5-3. Data flow chart for PSO with experimental data input (input: first and second order periodic error measured experimentally for a range of alignment angle values; PSO: run the PSO algorithm; output: the alignment angle values which would produce the same combination of periodic error; error: compare the angle values obtained from PSO with those from the experimental setup). Figure 5-4. Data flow chart for PSO with analytical data input (input: first and second order periodic error calculated using the Cosijns et al. model for a range of alignment angle values; PSO: run the PSO algorithm; output: the alignment angle values which would produce the same combination of periodic error; error: compare the angle values obtained from PSO with those supplied to the Cosijns et al. model). CHAPTER 6 RESULTS AND DISCUSSION Experimental Measurement of Periodic Error The first and second order errors obtained from the phase measuring electronics are compared with analytical values computed using the Cosijns et al. model for a range of the two alignment angle values.
One angle is set by the orientation of the half-wave plate, while the other is varied by changing the mixing linear polarizer angle. The analytical model assumes that all other parameters are at their nominal values. Figure 6-1 shows the comparison of experimental and analytical results for first and second order periodic errors; they are found to be in good agreement. Figure 6-2 shows the differences between the experimental and analytical values, and Table 6-1 gives the maximum, mean and standard deviation of these differences. Local Sensitivity The local sensitivities of the first and second order periodic errors obtained from the Cosijns et al. model with respect to the input parameters are calculated by a finite difference method. The sensitivities are estimated one at a time by varying only one input variable while keeping all the other parameters at their nominal values. Figures 6-3 to 6-12 show the variation in first and second order periodic errors with respect to the seven input parameters. From the plots, it is noted that the angular misalignment of the mixing linear polarizer and the transmission coefficients have no individual effect on periodic error when the other variables take their ideal values. While they do not produce periodic error when acting in isolation, they do produce variation in periodic error when acting in combination with other setup imperfections. Therefore, the local sensitivities of these parameters are calculated with one of the angular misalignments set to 1 instead of its nominal value of 0 so as to induce some periodic error; this enables the variation in periodic error with respect to these parameters to be determined. The figures indicate that the first order errors are most influenced by the angular misalignments and the err parameter. Variation in the transmission coefficients produces changes in first order periodic error only at very low values (the ideal values are 1), which are not normally observed in practice.
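The one-at-a-time finite difference scheme described here can be sketched as follows, with a hypothetical scalar `model` standing in for the periodic error expressions:

```python
import numpy as np

def local_sensitivities(model, nominal, h=1e-6):
    """Central-difference sensitivity of a scalar model to each input,
    varying one parameter at a time about the nominal vector."""
    nominal = np.asarray(nominal, dtype=float)
    sens = np.empty(nominal.size)
    for i in range(nominal.size):
        up, down = nominal.copy(), nominal.copy()
        up[i] += h
        down[i] -= h
        sens[i] = (model(up) - model(down)) / (2.0 * h)
    return sens
```

For example, for model(p) = 3·p0 + p1², the sensitivities at the nominal point (1, 2) are 3 and 4. Note that this scheme returns zero for a parameter whose individual effect vanishes at the nominal point, which is exactly why a second parameter must be offset from its ideal value before these sensitivities become informative.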
The second order periodic error depends mainly on a single angular misalignment. These one-at-a-time local sensitivities identify influential parameters, but provide no estimate of how much the output variance depends on each individual input parameter, and interactive effects are not considered. Global sensitivity indices are used to better study the dependence of output variance on the input parameters. Global Sensitivity Sobol Sensitivity Indices for the Cosijns et al. Model The Sobol individual effect and total effect sensitivity indices are computed using the Monte Carlo approach described previously with a sample size of 1000. The uncertainty ranges in the input parameters used in the analysis are listed in Table 6-2, and the input variables are assumed to vary normally within these ranges. Tables 6-3 and 6-4 show the individual effect, total effect and interactive effect sensitivity indices computed for first and second order errors. As can be seen from the results, for the given input range, the first order error variance is mainly dominated by the err parameter; however, the total effect indices of the angular misalignment parameters are also significant. From the difference between the sum of all the total effect indices and the sum of all the individual effect indices, it can be deduced that interactive effects do play a role in driving first order error variance; the angular misalignments and err are the main interacting parameters. This also validates the result obtained using the regression analysis. Second order error, on the other hand, is dominated by a single angular misalignment. It is important to note that the total effect sensitivity indices are larger than the corresponding individual effect sensitivity indices by definition; in some cases, however, they may be marginally smaller due to numerical errors in the computation. Effect of Variation of Input Uncertainty on Global Sensitivity Indices The variation of the global sensitivity indices with changes in the input uncertainties identifies how much care should be given to determining accurate estimates of particular input uncertainties. In this study input uncertainty is varied in two ways. First, the uncertainty of each input parameter is changed proportionately as a percentage of its range; the uncertainty ranges for the different steps are defined in Table 6-5. Figure 6-13 and Figure 6-14 show area plots depicting the variation in sensitivity indices as a function of percentage variation in input uncertainty. It is interesting to note that as input uncertainty increases, the individual effect of err on first order error, which is most prominent for low uncertainties, decreases, while the total effects of the angular misalignments increase slightly, indicating that the angles and err start interacting to drive output variance. Varying the uncertainty does not have any significant effect on the sensitivity indices for the second order periodic error, which is always dominated by a single angular misalignment. The figures also show normalized sensitivity indices to highlight the relative importance of each parameter over the entire range. It should be emphasized, however, that the normalization is applied for visual representation only (to indicate the important parameters) and that the normalized numerical values have no physical significance. Tables 6-6 to 6-11 show the tabulated results. The second approach is to vary only the uncertainties of the angular alignment parameters, keeping all other uncertainty ranges fixed as defined in Table 6-12; these angles are inherent in the optical setup and are assigned the same range. Figure 6-15 and Figure 6-16 show the variation in sensitivity indices with the angular uncertainty. The individual effect of err on first order periodic error is high for a low standard deviation of the angles; the effect of err is suppressed much more quickly for second order periodic error, where a single angular misalignment becomes the dominating factor. Interactive effects between the angular misalignments and err increase for first order error with increasing standard deviation, while they are negligible for second order error over the entire range. Tables 6-13 to 6-18 show the tabulated results. Linear Regression Sensitivity Analysis
The linear regression sensitivity analysis was performed with a sample size of 1000. Figure 6-17 and Figure 6-18 show the variation in sensitivities, where the input standard deviations are varied as described in Table 6-12. Although the results show trends similar to those obtained from the Sobol method, they are not as consistent. This may be attributed to the approximation of a nonlinear model as a linear model with higher order terms. Scatter plots for the input parameter combinations with the highest corresponding regression coefficients are provided in Figure 6-19 and Figure 6-20; these figures include the scatter plots for the ten highest ranked input parameter combinations. As expected, the scatter plots for high ranking parameters have a more defined shape. Maximum Permissible Input Uncertainty The maximum permissible uncertainty in the input parameters required to maintain periodic error below a predetermined value can be found by solving the optimization problem using the NSGA-II genetic algorithm. A multi-objective optimization problem is solved where the uncertainty of each input parameter is maximized while minimizing the probability of failure, with failure defined as the periodic error exceeding 0.5 nm. A Monte Carlo simulation with a sample size of 500 data points is performed to evaluate the probability of failure, and the point with a probability of failure of less than 1% is selected. Three different cases are considered: a) both first and second order periodic error must be below the predetermined value; b) only first order error must be below the predetermined value; and c) only second order error must be below the predetermined value. Table 6-19 shows the results obtained when both first and second order errors are considered simultaneously. Smaller uncertainties are allowed for the angular misalignments and err, while the uncertainties for the other parameters are relatively lax.
This can be expected because: 1) first order error is susceptible to interaction effects between the angular misalignments and err; and 2) the individual effect of the mixing polarizer angle is negligible. The polarizer angle is therefore allowed to take high values, which are compensated by stringent limitations on the other angular misalignments and err. Also, second order error is highly sensitive to one angular misalignment, placing additional limitations on its uncertainty level. Table 6-20 shows the permissible deviations when only first order errors are considered; it shows significant limits on the angular misalignment and err uncertainties, for the same reasons as the combined case. Table 6-21 shows the permissible deviations when only second order errors are considered; it shows the most stringent limits on the dominant angular misalignment, because second order error is highly influenced by that parameter. It is important to note that these uncertainties represent three standard deviations for the input parameters, and that a Latin hypercube sampling technique was used to sample the individual data points. In the extremely unlikely case that all the input parameters were simultaneously at the extreme values of their uncertainty ranges, the periodic error would exceed 0.5 nm. The permissible uncertainties reported here instead indicate that, for a Latin hypercube sample of input parameters drawn from within these uncertainty ranges, 99% of the sample points yield a periodic error of less than 0.5 nm. Uncertainty Analysis An uncertainty analysis of the output of the Cosijns et al. model can be performed to understand the propagation of input uncertainty to model output (periodic error) uncertainty. A Monte Carlo method with 10,000 sample points is used.
Table 6-22 shows the mean and standard deviation of periodic error obtained for the case when the input uncertainties are defined by Table 6-19. Predicting Interferometer Setup Misalignments From the knowledge of first and second order periodic error, it is possible to use the Cosijns et al. model in an optimization problem to compute the values of the interferometer setup misalignments so that corrective measures can be applied. A PSO technique is used to solve this optimization problem, as discussed before. First, arbitrary values of first and second order periodic errors are substituted into the algorithm and the optimization is completed to find the combination of input parameters which would produce the same error. Figure 6-21 shows the results obtained when all seven imperfections are considered simultaneously. There are definite trends in the two alignment angles, while all the other parameters vary randomly within their ranges; this suggests that the two alignment angles dominate the periodic error. This result does not contradict the sensitivity analysis, in which err was identified as an influential parameter on first order error, because the ranges of err are reduced here to keep them within practical limits. Figure 6-23 shows the corresponding results when only the two alignment angles are allowed to vary while all other parameters are taken at their nominal values; the same trends are observed. The first and second order errors must always be considered in conjunction with each other: a given combination of first and second order periodic error corresponds to particular angle values on their respective three-dimensional (3D) surfaces. It is observed that for both cases there are some combinations of first and second order periodic error (high first order error and low second order error) which cannot be achieved, irrespective of whether the optimization considers all seven parameters or only the two angles. Figure 6-22 and Figure 6-24 show the objective function values for the optimization process and identify these regions in the first order-second order error plane for both conditions.
Therefore, if combinations of periodic errors are obtained which fall in this region, they can be deemed either erroneous results or as arising from external factors not considered in the Cosijns et al. model (phase measuring electronics, ghost reflections, etc.). Scatter plots of the alignment angle data with respect to periodic error are shown for both analytical and experimental data (see Figure 6-25). These plots also indicate that high first order error combined with low second order error does not generally occur under the conditions considered. The optimization is next performed with periodic error obtained experimentally over a range of the two alignment angle settings. Figure 6-26 shows the first and second order periodic error obtained both analytically and experimentally, and Figure 6-27 shows the difference between the two, which corresponds to the error in the periodic error measurement. From Figure 6-28, the alignment angle values obtained from the PSO algorithm match closely with the analytical input data values. Although similar trends are observed for the experimental data, the results are not as closely matched. Figure 6-29 shows the errors in the angle values returned by the optimization algorithm for both analytical and experimental data. The largest errors occur when the angle set by the half-wave plate is near zero: the periodic errors are then extremely low, so the other angle is effectively free to take any value. The remaining discrepancies can be attributed to errors in the periodic error data itself (Figure 6-29); some error always exists due to nonlinearities unaccounted for by the Cosijns et al. model. As the angles increase, these errors are reduced and the optimization algorithm produces acceptable results. Figure 6-30 describes the propagation of error in the optimization process, where errors in the measured periodic errors, which are the source input data for the optimization, translate into errors in the predicted angular misalignments. The figures show both analytical and experimental data overlapped on each other. Table 6-1. Errors between analytical and experimental periodic error.
(columns: error in first order periodic error (nm), error in second order periodic error (nm))
Mean: 0.8, 0.6
Standard deviation: 0.9, 0.5
Maximum error: 6.3, 3.7
Table 6-2. Uncertainty ranges for input parameters.
Parameter ranges: 5, 5, 1, 1, 0.025, 0.025; err: 1
Table 6-3. Sobol method sensitivity indices for first order error (columns: individual effect Si, total effect STi, interactive effect STi − Si).
0.065 0.278 0.212
0.076 0.263 0.187
0.066 0.088 0.023
0.085 0.080 0.005
0.034 0.010 0.025
0.031 0.008 0.023
err: 0.436 0.660 0.224
Table 6-4. Sobol method sensitivity indices for second order error (columns: individual effect Si, total effect STi, interactive effect STi − Si).
0.966 0.964 0.003
0.043 0.033 0.010
0.042 0.033 0.009
0.042 0.033 0.009
0.042 0.033 0.009
0.042 0.033 0.009
err: 0.046 0.046 0.001
Table 6-5. Variation in input uncertainty range (proportional).
Step (% of range): 0.01 10 20 30 40 50 60 70 80 90 100
0.05 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
0.05 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
0.01 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
0.01 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
0.00025 0.0025 0.005 0.0075 0.01 0.0125 0.015 0.0175 0.02 0.0225 0.025
0.00025 0.0025 0.005 0.0075 0.01 0.0125 0.015 0.0175 0.02 0.0225 0.025
err: 0.01 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
Table 6-6. Individual effect Si for first order periodic error with varying uncertainty ranges (proportionally varying).
Individual effect sensitivity indices Si (columns: 0.01 10 20 30 40 50 60 70 80 90 100 % of range)
0.007 0.005 0.019 0.017 0.069 0.029 0.054 0.041 0.035 0.039 0.065
0.007 0.006 0.026 0.018 0.063 0.023 0.025 0.027 0.027 0.058 0.076
0.073 0.044 0.068 0.074 0.100 0.060 0.080 0.090 0.026 0.079 0.066
0.052 0.071 0.046 0.074 0.085 0.074 0.081 0.078 0.053 0.050 0.085
0.007 0.006 0.025 0.018 0.056 0.023 0.028 0.031 0.002 0.017 0.034
0.007 0.006 0.025 0.019 0.056 0.022 0.028 0.033 0.002 0.020 0.031
err: 0.731 0.691 0.736 0.688 0.667 0.580 0.608 0.584 0.501 0.474 0.436
Table 6-7. Total effect STi for first order periodic error with varying uncertainty ranges (proportionally varying).
Total effect sensitivity indices STi (columns: 0.01 10 20 30 40 50 60 70 80 90 100 % of range)
0.026 0.055 0.039 0.032 0.084 0.128 0.131 0.179 0.250 0.278 0.278
0.026 0.056 0.041 0.041 0.079 0.150 0.132 0.214 0.241 0.260 0.263
0.188 0.250 0.177 0.185 0.177 0.206 0.168 0.132 0.155 0.091 0.088
0.205 0.229 0.201 0.180 0.180 0.199 0.179 0.143 0.146 0.102 0.080
0.026 0.051 0.020 0.024 0.040 0.053 0.049 0.013 0.050 0.028 0.010
0.026 0.052 0.021 0.024 0.040 0.053 0.048 0.016 0.051 0.027 0.008
err: 0.752 0.779 0.739 0.752 0.715 0.791 0.709 0.692 0.759 0.712 0.660
Table 6-8. Interactive effects for first order periodic error with varying uncertainty ranges (proportionally varying).
Interactive effects (columns: 0.01 10 20 30 40 50 60 70 80 90 100 % of range)
0.018 0.050 0.020 0.015 0.014 0.100 0.077 0.139 0.216 0.239 0.212
0.018 0.050 0.015 0.023 0.016 0.127 0.108 0.187 0.213 0.202 0.187
0.115 0.206 0.108 0.111 0.077 0.146 0.088 0.042 0.130 0.011 0.023
0.153 0.158 0.155 0.106 0.095 0.125 0.097 0.065 0.093 0.052 0.005
0.018 0.045 0.005 0.006 0.016 0.029 0.021 0.018 0.048 0.011 0.025
0.018 0.046 0.004 0.006 0.016 0.032 0.020 0.017 0.049 0.007 0.023
err: 0.021 0.088 0.004 0.064 0.048 0.211 0.101 0.108 0.259 0.238 0.224
Table 6-9. Individual effect Si for second order periodic error with varying uncertainty ranges (proportionally varying).
Individual effect sensitivity indices Si (columns: 0.01 10 20 30 40 50 60 70 80 90 100 % of range)
0.983 0.979 0.975 0.994 0.985 0.980 0.978 0.984 0.971 0.986 0.966
0.007 0.008 0.031 0.011 0.021 0.011 0.008 0.016 0.015 0.035 0.043
0.007 0.008 0.031 0.010 0.021 0.011 0.008 0.016 0.015 0.035 0.042
0.007 0.008 0.031 0.010 0.021 0.011 0.008 0.016 0.015 0.035 0.042
0.007 0.008 0.031 0.010 0.021 0.011 0.008 0.016 0.015 0.035 0.042
0.007 0.008 0.031 0.010 0.021 0.011 0.008 0.016 0.015 0.035 0.042
err: 0.009 0.010 0.025 0.010 0.017 0.015 0.008 0.016 0.013 0.035 0.046
Table 6-10. Total effect STi for second order periodic error with varying uncertainty ranges (proportionally varying).
Total effect sensitivity indices STi (columns: 0.01 10 20 30 40 50 60 70 80 90 100 % of range)
1.009 1.005 0.976 0.993 0.996 1.005 1.008 0.998 0.987 0.964 0.964
0.010 0.000 0.011 0.000 0.015 0.035 0.024 0.056 0.041 0.050 0.033
0.010 0.000 0.011 0.000 0.015 0.035 0.025 0.056 0.041 0.050 0.033
0.010 0.000 0.011 0.000 0.015 0.035 0.025 0.056 0.041 0.050 0.033
0.010 0.000 0.011 0.000 0.015 0.035 0.025 0.056 0.041 0.050 0.033
0.010 0.000 0.011 0.000 0.015 0.035 0.025 0.056 0.041 0.050 0.033
err: 0.022 0.000 0.011 0.001 0.022 0.053 0.031 0.061 0.048 0.064 0.046
Table 6-11. Interactive effects for second order periodic error with varying uncertainty ranges (proportionally varying).
Interactive effects (columns: 0.01 10 20 30 40 50 60 70 80 90 100 % of range)
0.026 0.025 0.001 0.001 0.010 0.024 0.030 0.014 0.016 0.022 0.003
0.002 0.008 0.020 0.011 0.006 0.024 0.016 0.040 0.026 0.015 0.010
0.003 0.008 0.020 0.010 0.006 0.024 0.017 0.040 0.027 0.015 0.009
0.002 0.008 0.021 0.010 0.006 0.024 0.017 0.040 0.026 0.015 0.009
0.002 0.008 0.020 0.010 0.006 0.024 0.017 0.040 0.026 0.015 0.009
0.002 0.008 0.020 0.010 0.006 0.024 0.017 0.040 0.026 0.015 0.009
err: 0.014 0.010 0.014 0.009 0.006 0.038 0.022 0.045 0.036 0.029 0.001
Table 6-12. Variation in uncertainty range (angles only).
Step: 0 2 4 6 8 10 12 14 16 18 20
0 2 4 6 8 10 12 14 16 18 20
0 2 4 6 8 10 12 14 16 18 20
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
0.025 0.025 0.025 0.025 0.025 0.025 0.025 0.025 0.025 0.025 0.025
0.025 0.025 0.025 0.025 0.025 0.025 0.025 0.025 0.025 0.025 0.025
err: 1 1 1 1 1 1 1 1 1 1 1
Table 6-13. Individual effect Si for first order periodic error with varying angular uncertainties.
Individual effect sensitivity indices Si (columns: 0 2 4 6 8 10 12 14 16 18 20)
0.017 0.015 0.043 0.124 0.192 0.231 0.280 0.380 0.305 0.350 0.363
0.017 0.013 0.045 0.077 0.286 0.329 0.296 0.340 0.330 0.405 0.377
0.063 0.094 0.101 0.044 0.060 0.023 0.004 0.018 0.010 0.020 0.007
0.053 0.107 0.095 0.047 0.051 0.024 0.003 0.017 0.009 0.020 0.007
0.017 0.013 0.052 0.029 0.046 0.025 0.001 0.018 0.010 0.019 0.007
0.017 0.017 0.052 0.029 0.042 0.023 0.003 0.018 0.010 0.022 0.008
err: 0.723 0.701 0.613 0.276 0.108 0.045 0.005 0.028 0.015 0.021 0.007
Table 6-14. Total effect STi for first order periodic error with varying angular uncertainties.
Total effect sensitivity indices STi (columns: 0 2 4 6 8 10 12 14 16 18 20)
0.026 0.031 0.120 0.438 0.639 0.696 0.648 0.607 0.644 0.530 0.598
0.026 0.029 0.106 0.485 0.582 0.699 0.665 0.642 0.689 0.585 0.681
0.197 0.175 0.090 0.116 0.024 0.142 0.031 0.081 0.067 0.034 0.041
0.232 0.175 0.080 0.107 0.023 0.142 0.031 0.081 0.068 0.034 0.041
0.026 0.021 0.035 0.037 0.009 0.132 0.031 0.080 0.069 0.034 0.042
0.026 0.019 0.033 0.036 0.011 0.134 0.028 0.083 0.068 0.034 0.041
err: 0.777 0.758 0.670 0.566 0.275 0.231 0.089 0.111 0.088 0.037 0.053
Table 6-15. Interactive effects for first order periodic error with varying angular uncertainties.
Interactive effects (columns: 0 2 4 6 8 10 12 14 16 18 20)
0.010 0.016 0.077 0.314 0.447 0.465 0.368 0.226 0.339 0.181 0.234
0.010 0.016 0.060 0.408 0.296 0.371 0.369 0.302 0.359 0.180 0.304
0.135 0.081 0.011 0.072 0.036 0.118 0.027 0.063 0.057 0.014 0.034
0.179 0.068 0.015 0.061 0.027 0.118 0.028 0.064 0.058 0.014 0.034
0.010 0.008 0.018 0.007 0.037 0.107 0.029 0.062 0.059 0.015 0.035
0.009 0.003 0.019 0.006 0.031 0.111 0.025 0.065 0.058 0.012 0.033
err: 0.054 0.057 0.057 0.290 0.167 0.186 0.084 0.083 0.074 0.015 0.045
Table 6-16. Individual effect Si for second order periodic error with varying angular uncertainties.
Individual effect sensitivity indices Si (columns: 0 2 4 6 8 10 12 14 16 18 20)
0.018 0.882 0.970 0.989 0.996 0.982 0.992 0.989 0.973 0.971 0.983
0.017 0.006 0.015 0.006 0.009 0.014 0.000 0.013 0.014 0.024 0.027
0.068 0.005 0.015 0.006 0.007 0.012 0.000 0.012 0.013 0.021 0.020
0.015 0.006 0.015 0.006 0.007 0.012 0.000 0.012 0.013 0.021 0.020
0.017 0.005 0.015 0.006 0.007 0.012 0.000 0.012 0.013 0.021 0.020
0.017 0.005 0.015 0.006 0.007 0.012 0.000 0.012 0.013 0.021 0.020
err: 0.872 0.008 0.013 0.006 0.006 0.014 0.000 0.013 0.014 0.019 0.019
Table 6-17. Total effect STi for second order periodic error with varying angular uncertainties.
Total effect sensitivity indices STi (columns: 0 2 4 6 8 10 12 14 16 18 20)
0.078 0.994 1.001 1.022 1.006 0.992 1.016 0.999 0.999 0.969 0.977
0.066 0.017 0.041 0.017 0.008 0.035 0.024 0.028 0.024 0.048 0.031
0.121 0.017 0.040 0.017 0.009 0.035 0.021 0.026 0.019 0.031 0.007
0.095 0.018 0.040 0.017 0.010 0.035 0.021 0.026 0.019 0.031 0.007
0.066 0.017 0.040 0.017 0.009 0.035 0.021 0.025 0.019 0.031 0.007
0.066 0.017 0.040 0.017 0.010 0.035 0.021 0.026 0.019 0.031 0.007
err: 0.934 0.092 0.060 0.036 0.016 0.036 0.025 0.027 0.020 0.034 0.009
Table 6-18. Interactive effects for second order periodic error with varying angular uncertainties.
Interactive effects (columns: 0 2 4 6 8 10 12 14 16 18 20)
0.060 0.112 0.031 0.033 0.010 0.010 0.024 0.011 0.026 0.002 0.005
0.050 0.011 0.026 0.011 0.001 0.020 0.024 0.016 0.009 0.024 0.004
0.052 0.012 0.025 0.011 0.002 0.023 0.021 0.014 0.006 0.010 0.013
0.079 0.012 0.025 0.011 0.002 0.023 0.021 0.014 0.006 0.010 0.013
0.049 0.011 0.025 0.011 0.002 0.023 0.021 0.013 0.006 0.010 0.012
0.050 0.011 0.025 0.011 0.002 0.023 0.021 0.014 0.006 0.010 0.013
err: 0.062 0.084 0.047 0.031 0.011 0.022 0.025 0.015 0.006 0.015 0.010
Table 6-19. Permissible input uncertainty (first and second
order periodic error).
α      1.2
θ      11.333
δ1     0.667
δ2     0.667
ζ      0.193
ξ      0.193
β_err  0.35

Table 6-20. Permissible input uncertainty (first order periodic error).
α      0.4
θ      20
δ1     0.6
δ2     0.6
ζ      0.3
ξ      0.3
β_err  0.475

Table 6-21. Permissible input uncertainty (second order periodic error).
α      3.467
θ      18
δ1     9.333
δ2     9.333
ζ      0.45
ξ      0.45
β_err  6.25

Table 6-22. Output uncertainty.
            First order error    Second order error
Mean (nm)   0.177  0.0027        0.1007  0.0037

Figure 6-1. First and second order periodic error (analytical and experimental).
Figure 6-2. Error in first and second order periodic errors.
Figure 6-3. Local sensitivity with respect to
Figure 6-4. Local sensitivity with respect to alone does not induce periodic error.
Figure 6-5. Sensitivity with respect to error.
Figure 6-6. Local sensitivity with respect to δ1.
Figure 6-7. Local sensitivity with respect to δ2.
Figure 6-8. Local sensitivity with respect to alone does not induce periodic error.
Figure 6-9. Local sensitivity with respect to periodic error.
Figure 6-10. Local sensitivity with respect to induce periodic error.
Figure 6-11. Local sensitivity with respect to in order to induce some periodic error.
Figure 6-12. Local sensitivity with respect to β_err.
Figure 6-13. Sobol sensitivity indices for first order error (proportional variation).
Figure 6-14. Sobol sensitivity indices for second order error (proportional variation).
Figure 6-15. Sobol sensitivity indices for first order error with varying and
Figure 6-16. Sobol sensitivity indices for second order error with varying and
Figure 6-17. Regression analysis sensitivity indices for periodic error (proportional variation).
Figure 6-18.
Figure 6-19. Scatter plots for first order error with respect to 10 most important input parameter combinations.
Figure 6-20. Scatter plots for second order error with respect to 10 most important input parameter combinations.
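Tables 6-16 through 6-18 are related by construction: each interactive effect entry in Table 6-18 is the total effect minus the individual effect (STi - Si) for that input, i.e., the share of output variance carried by interactions involving that input. A minimal NumPy check using the values transcribed from the 2-degree column of Tables 6-16 and 6-17:

```python
import numpy as np

# Individual (S_i) and total (S_Ti) effect indices for the seven model
# inputs (alpha, theta, delta1, delta2, zeta, xi, beta_err), transcribed
# from the 2-degree column of Tables 6-16 and 6-17.
S_i  = np.array([0.882, 0.006, 0.005, 0.006, 0.005, 0.005, 0.008])
S_Ti = np.array([0.994, 0.017, 0.017, 0.018, 0.017, 0.017, 0.092])

# Table 6-18 lists the interactive effects, the variance share due to
# interactions involving each input:
interactive = S_Ti - S_i   # e.g. 0.994 - 0.882 = 0.112 for alpha
```

The recomputed differences (0.112 for alpha, 0.084 for beta_err, and so on) agree with the Table 6-18 entries to within rounding, which is a useful consistency check when transcribing Monte Carlo sensitivity results.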
Figure 6-21. PSO output for optimized variable data (all seven parameters considered).
Figure 6-22. PSO objective function values. The shaded part indicates the infeasible region in the periodic error plane.
Figure 6-23.
Figure 6-24. PSO objective function values. The shaded part indicates the infeasible region in the periodic error plane.
Figure 6-25. Scatter plots of and data with respect to periodic error.
Figure 6-26. Periodic error (experimental and analytical).
Figure 6-27. Errors in periodic error measurement.
Figure 6-28. PSO algorithm results with periodic error input: (left) analytical periodic error data; (right) experimental periodic error data.
Figure 6-29. Error in PSO algorithm results ( and values) with periodic error input: (left) analytical periodic error data; (right) experimental periodic error data.
Figure 6-30. Propagation of errors in optimization process.

CHAPTER 7
CONCLUSIONS

In a first test, periodic error magnitudes for a heterodyne interferometer setup were measured for a wide range of half wave plate and linear polarizer angles and compared to the periodic errors predicted by an analytical model [9]. In the setup, a half wave plate was used to artificially modulate the rotational misalignment of the fiber optic pickup. All the other parameters were component dependent and were not varied externally. The measured periodic error was found to be in close agreement with the analytically computed results obtained from the Cosijns et al. model.

A local and global sensitivity analysis was next performed to study the effect of each parameter on periodic error. Although local sensitivity analysis identified the parameters with the strongest individual influence on first and second order periodic error, it does not take input uncertainty into consideration and therefore does not provide a holistic picture of periodic error variation over the entire input data space. For this purpose a global sensitivity approach may be applied.
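The global approach referenced here estimates the Sobol indices by Monte Carlo "pick-freeze" sampling: two independent input samples A and B are drawn, and for each input a hybrid sample C is formed by replacing one column of B with the corresponding column of A, exactly as in the appendix MATLAB code. A compact Python/NumPy sketch of that estimator, exercised on a toy additive model (the model, sample size, and seed are illustrative, not from the thesis):

```python
import numpy as np

def sobol_indices(model, n, k, seed=0):
    """Pick-freeze Monte Carlo estimates of the Sobol' individual (S_i)
    and total (S_Ti) sensitivity indices for a scalar model of k inputs."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, k))   # first independent input sample
    B = rng.standard_normal((n, k))   # second independent input sample
    yA, yB = model(A), model(B)
    f0_sq = yA.mean() ** 2            # squared mean of the output
    var = yA @ yA / n - f0_sq         # total output variance
    S, ST = np.empty(k), np.empty(k)
    for i in range(k):
        C = B.copy()
        C[:, i] = A[:, i]             # 'freeze' input i at the A values
        yC = model(C)
        S[i] = (yA @ yC / n - f0_sq) / var
        ST[i] = 1.0 - (yB @ yC / n - f0_sq) / var
    return S, ST

# Additive test model y = x1 + 2*x2: the variance splits 1:4, so the exact
# indices are S = ST = (0.2, 0.8) and the interactive effects are zero.
S, ST = sobol_indices(lambda X: X[:, 0] + 2.0 * X[:, 1], n=20000, k=2)
```

For a purely additive model the individual and total indices coincide; a gap between them, as in Table 6-18, is the signature of parameter interactions.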
The Sobol global sensitivity indices method was used to evaluate individual and total effect sensitivity indices. The evaluated indices suggested that first order periodic error is driven by inputs that interact with each other and with β_err, while second order periodic error is dominated by the individual effect of a single input. It was also found that the sensitivity indices are considerably influenced by variations in input uncertainty, making it imperative for the analyst to have accurate knowledge of the input uncertainty. A linear regression method with higher order error terms also provided similar results.

The multi-objective optimization algorithm was used to find the maximum permissible input uncertainties that keep the maximum periodic error below 0.5 nm. The tightest limits fell on the inputs with the largest sensitivity indices; this was expected since periodic error is most susceptible to these inputs. Consistent with the sensitivity analysis, which suggested dependence of first order error on β_err, a tight permissible β_err uncertainty was also obtained using the optimization process.

Finally, the prediction of setup misalignments from measured periodic error in interferometer setups was explored. The periodic error obtained for the phase measuring electronics was used to find the corresponding setup imperfections. Discrepancies may be attributed to contamination of the measured periodic error by sources of nonlinearity not taken into account in the Cosijns et al. model (e.g., ghost reflections, beam shear, phase measuring electronics) or to uncertainties associated with the experimental setup used in the periodic error measurements. The optimization method worked well with analytical data; performance with experimental data would improve if an analytical model which includes all possible error sources were available. However, noise may still be an issue for small periodic error magnitudes.

APPENDIX
MATLAB CODE

Cosijns et al.
Model

% Cosijns et al. model
% Code used to evaluate the Cosijns et al. model to compute periodic error
% data from known setup misalignments.
function [fo, so] = objfun(x)
% Define all input parameters
a   = x(1)*pi/180;      % alpha, rad
th  = x(2)*pi/180;      % theta, rad
de1 = x(3)*pi/180;      % ellipticity beam 1, rad
de2 = x(4)*pi/180;      % ellipticity beam 2, rad
z   = x(5);             % transmission coefficient
X   = x(6);             % transmission coefficient
b   = a + x(7)*pi/180;  % beta (orthogonality error), rad
n  = 1;            % refractive index
L  = 633;          % lambda, nm
dl = 0:L/2^10:L;   % displacement, nm
dp = 4*pi*n*dl/L;  % nominal phase change, rad
dx = dl(2) - dl(1);
fs = 1/dx;
% Calculate the coefficients used in the Cosijns model
A = (-(z^2*sin(b)^2+X^2*cos(b)^2)*cos(de1/2)*sin(de2/2) - ...
    (z^2*cos(a)^2+X^2*sin(a)^2)*sin(de1/2)*cos(de2/2))*cos(dp) + ...
    (z^2*cos(a)*sin(b)+X^2*sin(a)*cos(b))*cos(de1/2+de2/2)*sin(dp);
B = ((z^2*sin(b)^2-X^2*cos(b)^2)*cos(de1/2)*sin(de2/2) + ...
    (z^2*cos(a)^2-X^2*sin(a)^2)*sin(de1/2)*cos(de2/2))*cos(dp) + ...
    (z^2*cos(a)*sin(b)+X^2*sin(a)*cos(b))*cos(de1/2+de2/2)*sin(dp);
C = z*X*(cos(b)*sin(b)*cos(de1/2)*sin(de2/2)*(1-cos(2*dp)) + ...
    sin(a)*sin(b)*cos(de1/2)*cos(de2/2)*sin(2*dp) - cos(a)*cos(b)* ...
    sin(de1/2)*sin(de2/2)*sin(2*dp) - sin(a)*cos(a)*sin(de1/2)* ...
    cos(de2/2)*(1+cos(2*dp)));
D = ((z^2*sin(b)^2+X^2*cos(b)^2)*cos(de1/2)*sin(de2/2) + ...
    (z^2*cos(a)^2+X^2*sin(a)^2)*sin(de1/2)*cos(de2/2))*sin(dp) + ...
    (z^2*cos(a)*sin(b)+X^2*sin(a)*cos(b))*cos(de1/2+de2/2)*cos(dp);
E = ((z^2*sin(b)^2+X^2*cos(b)^2)*cos(de1/2)*sin(de2/2) + ...
    (-z^2*cos(a)^2+X^2*sin(a)^2)*sin(de1/2)*cos(de2/2))*sin(dp) + ...
    (-z^2*cos(a)*sin(b)+X^2*sin(a)*cos(b))*cos(de1/2+de2/2)*cos(dp);
F = z*X*(cos(b)*sin(b)*cos(de1/2)*sin(de2/2)*sin(2*dp) + cos(a)* ...
    cos(b)*(cos(de1/2)*cos(de2/2)-sin(de1/2)*sin(de2/2)* ...
    cos(2*dp)) + sin(a)*sin(b)*(sin(de1/2)*sin(de2/2)+cos(de1/2)* ...
    cos(de2/2)*cos(2*dp)) + sin(a)*cos(a)*sin(de1/2)*cos(de2/2)* ...
    sin(2*dp));
% Calculate the nonlinear phase shift using the Cosijns equation
dp_nl = atan2((A+B*sin(2*th)+C*cos(2*th)), ...
    (D+E*sin(2*th)+F*cos(2*th)));
% Convert phase data to displacement data
e = dp_nl*L/(4*pi*n);  % error, nm

% Perform FFT analysis of periodic nonlinearities
steps  = length(e);
power2 = floor(log(steps)/log(2));
e = e(1:2^power2)';
e = e - mean(e);
[ER, f] = spec(e, fs);
ER = ER/(2^(power2-1));
% Identify first and second order periodic errors
index = f*L/2 == 1;
fo = abs(ER(index));
index = f*L/2 == 2;
so = abs(ER(index));

Code to evaluate the fast Fourier transform

% Fast Fourier Transform
% Code used to compute the fft X of signal x and the corresponding
% frequency vector f given the sampling frequency fs.
function [X,f] = spec(x,fs)
N = length(x);
X = fft(x);
f = [0:fs/N:(1 - 1/(2*N))*fs]';
X = X(1:N/2+1,:);
f = f(1:N/2+1,:);

Local Sensitivity

% Local Sensitivity
% Code to compute local sensitivities w.r.t. alpha. Similar codes are used
% for the other parameters.
clear all
close all
clc
% Define alpha range
alpha  = -20:0.01:20;               % deg
dalpha = abs(alpha(2) - alpha(1));  % difference step size, deg
th  = 0;    % theta (polarizer angle), deg
de1 = 0;    % ellipticity beam 1, deg
de2 = 0;    % ellipticity beam 2, deg
z = 1;      % transmission coefficient
X = 1;      % transmission coefficient
b_err = 0;  % beta error, deg
n  = 1;     % refractive index
L  = 633;   % lambda, nm
dl = 0:L/2^10:L;     % displacement, nm
dp = 4*pi*n*dl/L;    % nominal phase change, rad
dx = dl(2) - dl(1);  % step size

% Calculate periodic error for the complete range of alpha
fo = zeros(1,length(alpha));
so = zeros(1,length(alpha));
for cnt = 1:length(alpha)
    % objfun expects its angular inputs in degrees
    [fo(cnt),so(cnt)] = objfun([alpha(cnt) th de1 de2 z X b_err]);
end

% Now we calculate derivatives by the forward difference method
slope_first  = zeros(1, length(alpha)-1);
slope_second = zeros(1, length(alpha)-1);
for cnt = 1:(length(alpha)-1)
    % Calculate derivative by forward finite difference
    slope_first(cnt) = (fo(cnt+1) - fo(cnt))/dalpha;
    slope_second(cnt) = (so(cnt+1) - so(cnt))/dalpha;
end

Sobol Global Sensitivity Indices

% Sobol' global sensitivity indices
% Code to compute sensitivity indices using a Monte Carlo technique
close all
clear all
clc
% Define uncertainty ranges for input parameters
mu    = [0 0 0 0 0 0 0];          % mean
sigma = [5 5 1 1 0.05 0.05 1]/3;  % standard deviation
Var   = diag(sigma.^2);           % covariance matrix
% Define process parameters
sample_size = 1000;  % sample size
% Generate A and B matrices
Xa = lhsnorm(mu,Var,sample_size);  % A matrix
Xb = lhsnorm(mu,Var,sample_size);  % B matrix
% Correct the transmission coefficients since their nominal value is 1
Xa(:,5:6) = 1 - abs(Xa(:,5:6));
Xb(:,5:6) = 1 - abs(Xb(:,5:6));
% Generate C matrices for each input variable by replacing the ith column
% of B with the ith column of A
C = zeros(sample_size,7,7);
for i = 1:7
    C(:,:,i) = Xb;
    C(:,i,i) = Xa(:,i);
end
% Compute the output vectors, in this case periodic error from the Cosijns
% et al. model. Note that there are two output vectors in this case.
ya = zeros(sample_size,2);
yb = zeros(sample_size,2);
yc = zeros(sample_size,2,7);
for i = 1:sample_size
    [ya(i,1),ya(i,2)] = objfun(Xa(i,:));
    [yb(i,1),yb(i,2)] = objfun(Xb(i,:));
end
for cnt = 1:7
    for i = 1:sample_size
        [yc(i,1,cnt),yc(i,2,cnt)] = objfun(C(i,:,cnt));
    end
end
% Compute squared mean values for first and second order error
fo_sq_FO = (sum(ya(:,1))/sample_size)^2;
fo_sq_SO = (sum(ya(:,2))/sample_size)^2;
% Now we calculate first order sensitivity indices
S_FO = zeros(7,1);
S_SO = zeros(7,1);
for cnt = 1:7
    % First order sensitivity (individual effect)
    S_FO(cnt) = (dot(ya(:,1),yc(:,1,cnt))/sample_size - fo_sq_FO)/ ...
        (dot(ya(:,1),ya(:,1))/sample_size - fo_sq_FO);
    % Second order sensitivity (individual effect)
    S_SO(cnt) = (dot(ya(:,2),yc(:,2,cnt))/sample_size - fo_sq_SO)/ ...
        (dot(ya(:,2),ya(:,2))/sample_size - fo_sq_SO);
end
% Now we calculate total effect indices
S_FO_t = zeros(7,1);
S_SO_t = zeros(7,1);
for cnt = 1:7
    % First order sensitivity (total effect)
    S_FO_t(cnt) = 1 - ((dot(yb(:,1),yc(:,1,cnt))/sample_size - fo_sq_FO) ...
        /(dot(ya(:,1),ya(:,1))/sample_size - fo_sq_FO));
    % Second order sensitivity (total effect)
    S_SO_t(cnt) = 1 - ((dot(yb(:,2),yc(:,2,cnt))/sample_size - fo_sq_SO) ...
        /(dot(ya(:,2),ya(:,2))/sample_size - fo_sq_SO));
end

Linear Regression Method for Global Sensitivity

% Linear regression sensitivity analysis
% Code to compute a global sensitivity analysis using a linear regression
% technique
close all
clear all
clc
% Define process parameters
sample_size = 1000;
variables   = 7;
% Define uncertainty ranges for input parameters
sdeviation = [5 5 1 1 0.025 0.025 1];  % standard deviation (3 sigma)
mu = [0 0 0 0 0 0 0];                  % mean
% Initially we assume that all the variables vary within the unit
% k-dimensional hypercube, where k is the number of variables
sigma = diag(([1 1 1 1 1 1 1]/3).^2);  % covariance matrix
X = lhsnorm(mu,sigma,sample_size);     % create LHS sample
% Generate augmented matrix
for i = 1:sample_size
    % Linear terms
    X_linear = [];
    index1 = [0;0;0;0];
    for cnt = 1:variables
        X_linear = [X_linear X(i,cnt)];
        index1 = [index1 [cnt;0;0;0]];
    end
    % Quadratic terms
    X_quadratic = [];
    index2 = [];
    for cnt1 = 1:variables
        for cnt2 = cnt1:variables
            X_quadratic = [X_quadratic X(i,cnt1)*X(i,cnt2)];
            index2 = [index2 [cnt1;cnt2;0;0]];
        end
    end
    % Cubic terms
    X_cubic = [];
    index3 = [];
    for cnt1 = 1:variables
        for cnt2 = cnt1:variables
            for cnt3 = cnt2:variables
                X_cubic = [X_cubic X(i,cnt1)*X(i,cnt2)*X(i,cnt3)];
                index3 = [index3 [cnt1;cnt2;cnt3;0]];
            end
        end
    end
    % Quartic terms
    X_quar = [];
    index4 = [];
    for cnt1 = 1:variables
        for cnt2 = cnt1:variables
            for cnt3 = cnt2:variables
                for cnt4 = cnt3:variables
                    X_quar = [X_quar X(i,cnt1)*X(i,cnt2)*X(i,cnt3)*X(i,cnt4)];
                    index4 = [index4 [cnt1;cnt2;cnt3;cnt4]];
                end
            end
        end
    end
    % Assemble all
    x(i,:) = [1 X_linear X_quadratic X_cubic X_quar];
end
% Assemble index
index = [index1 index2 index3 index4];
% We now scale the input sample space generated for the unit hypercube
% by the true standard deviations to get the true sample space.
for cnt = 1:variables
    X_cor(:,cnt) = X(:,cnt)*sdeviation(cnt);
end
X_cor(:,5:6) = 1 - abs(X_cor(:,5:6));
% We evaluate first and second order periodic error for each data point
% in the input sample space
y = zeros(sample_size,2);
for i = 1:sample_size
    [y(i,1),y(i,2)] = objfun(X_cor(i,:));
end
% Perform a least squares fit to find the regression coefficients
beta = (x'*x) \ (x'*y);
% We are interested in the magnitude of the regression coefficients
Beta = abs(beta);
% Find all the regression coefficients corresponding to an input parameter
for i = 1:variables
    [row,col] = find(index == i);
    Col = [];
    for j = 1:length(col)-1
        if col(j+1) == col(j)
            % repeated column index; skip
        elseif j == length(col)-1
            Col = [Col col(j) col(j+1)];
        else
            Col = [Col col(j)];
        end
    end
    % Calculate sensitivity indices
    % First order sensitivity
    Sen_var_fo(i) = sum(Beta(Col,1))/sum(Beta(:,1));
    % Second order sensitivity
    Sen_var_so(i) = sum(Beta(Col,2))/sum(Beta(:,2));
end

Maximum Permissible Input Uncertainty (NSGA-II)

% Nondominated Sorting Genetic Algorithm (NSGA-II)
% Code to find the maximum permissible input uncertainty
clc
close all
clear all
% Define process parameters
N_pop = 30;     % size of the population
N_gen = 30;     % number of generations
cross = 1.0;    % crossover probability
mut = 0.2;      % mutation probability
N_var = 5;      % number of variables (we assume that the transmission
                % coefficients and ellipticities have the same uncertainty)
N_obj_fns = 2;  % number of objective functions (hardcoded into the program)
N_con = 1;      % number of constraints
% Set lower and upper bounds
% Each variable can take a value between 5 and 30.
% The true value of each variable will depend upon its independent scaling
% factor defined in the code for evaluating the objective function.
LB = [5 5 5 5 5];
UB = [30 30 30 30 30];
N_increments = UB - LB + ones(1,N_var);  % number of discrete values permitted

%% Initialize population
% We form the initial population from a random set. Note that the floor
% command has been used to round off to the lower integer value.
for i = 1:N_var
    pop(:,i) = floor(rand(N_pop,1)*N_increments(i))/(N_increments(i)-1)* ...
        (UB(i)-LB(i)) + LB(i);
end

%% Evaluate the performance of each individual
for i = 1:N_pop
    [obj(i,:),con(i,:)] = objcon(pop(i,:));
end
% Find population members which violate constraints
violation = abs(sum(con.*(con<0),2));

%% Loop:
for q = 1:N_gen
    %% Ranking:
    temp = [1:N_pop]';  % initialize the temp design set for sorting
    rank_level = 1;     % initialize the rank
    % Rank individuals based on the domination criterion
    while temp  % till temp is not empty
        for i = 1:length(temp)
            foo = 0;
            for j = 1:length(temp)
                if violation(temp(i)) > 0 && violation(temp(j)) > 0
                    if violation(temp(j)) < violation(temp(i))
                        foo = 1;
                    end
                elseif violation(temp(j)) == 0 && violation(temp(i)) > 0
                    foo = 1;
                elseif violation(temp(j)) == 0 && violation(temp(i)) == 0
                    if obj(temp(j),1) > obj(temp(i),1) && ...
                            obj(temp(j),2) <= obj(temp(i),2)
                        foo = 1;
                    end
                end
            end
            if foo == 0
                % design belongs to the 'non-dominated' set for the current rank
                rank(temp(i),1) = rank_level;
            else
                % design is 'dominated'; assign rank = current rank + 1
                rank(temp(i),1) = rank_level + 1;
            end
        end
        rank_level = rank_level + 1;
        % remove non-dominated designs from the temp design set in every run
        temp = temp(rank(temp) == rank_level);
    end
    % After having ranked all the individuals, the parents are selected
    % from the present generation.
    % Selection of parents is on the basis of the roulette wheel, which
    % means that individuals with a better fitness value have a better
    % chance of being randomly selected as parents than individuals with a
    % low fitness value. The fitness value is an indicator of the area a
    % certain individual occupies on the 'roulette wheel'. Note that it is
    % still possible that the highest ranked individual is not selected as
    % a parent, since the selection process is random; however, its chances
    % of being selected are higher because it occupies a greater area on
    % the wheel.
    %% Select Parents:
    c = max(rank)*1.5;
    fit = (c - rank)/sum(c - rank);  % fitness based on rank
    roulette(1,1) = 0;
    for i = 1:N_pop
        roulette(i+1,1) = roulette(i,1) + fit(i);  % creating roulette wheel
    end
    Q = rand(N_pop,1);
    for i = 1:N_pop
        for j = 2:N_pop+1
            if Q(i) < roulette(j) && Q(i) > roulette(j-1)
                parent(i,1) = j - 1;
            end
        end
    end
    % Children are then created from the individuals using crossover. In
    % this program one-point crossover is used, where the crossover point
    % is randomly selected. This is the exploitation part of the algorithm.
    %% Create Children:
    for i = 1:N_pop
        par1 = randperm(N_pop); par1 = par1(1); par1 = parent(par1);
        par2 = randperm(N_pop); par2 = par2(1); par2 = parent(par2);
        while par2 == par1  % need to have 2 different parents
            par2 = randperm(N_pop); par2 = par2(1); par2 = parent(par2);
        end
        if rand > cross
            child(i,:) = pop(par1,:);
        else
            cross_pt = round((N_var-2)*rand + 1);
            % one-point crossover
            child(i,:) = [pop(par1,1:cross_pt),pop(par2,cross_pt+1:N_var)];
        end
    end
    % Mutation of a certain percentage of individuals is done by randomly
    % adding values to their individual chromosomes. This is the
    % exploration part of the code.
    Q = zeros(N_pop,N_var);
    Q(rand(N_pop,N_var)
    % (the remainder of the mutation step and the code that combines the
    % parent and child populations into comb, obj_comb, con_comb, and
    % violation_comb is missing at a page break in this copy)
violation_comb(temp(j)) > 0 if violation_comb(temp(j)) < violation_comb(temp(i)) foo = 1; end elseif violation_comb(temp(j)) == 0 &... violation_comb(temp(i)) > 0 foo = 1; elseif violation_comb(temp(j)) == 0 &... violation_comb(temp(i)) == 0 if obj_comb(temp(j),1) > obj_comb(temp(i),1) && ... obj_comb(temp(j),2) <= obj_comb(temp(i),2) foo = 1; end end end if foo == 0 rank_comb(temp(i),1) = rank_level; else rank_comb(temp(i),1) = rank_level+1; end end rank_level = rank_level + 1; temp= temp(rank_comb(temp) == rank_level); end % Elitism and niching is a process where indivuals which are spread evenly % over the pareto front are preferred to individuals which are crowded % together. This spreading out of individuals is done by the niching % process by giving the individuals on the pareto front with a better % crowding functions a higher rank than individuals with poorer % crowding functions. %% Elitism, with niching [i,i] = sort(rank_comb); rank_comb = rank_comb(i,1); comb = comb(i,:); obj_comb = obj_comb(i,:); con_comb = con_comb(i,:); violation_comb = violation_comb(i ,:); rank_div = rank_comb(N_pop); foo = [1:N_pop*2]'; foo = foo(rank_comb == rank_div); foo1 = foo; obj_foo = obj_comb(foo,:); con_foo = con_comb(foo,:); violation_foo = violation_comb(foo,:); [i,i] = sort(obj_foo(:,1)); obj_foo = obj_foo(i,:); con_foo = con_foo(i,:); violation_foo = violation_foo(i,:); PAGE 106 106 foo = foo(i,:); crowd = zeros(length(foo),1); crowd(1) = inf; crowd(length(foo)) = inf; for j = 2:length(foo) 1 crowd(j) = crowd(j) + (obj_foo(j+1,1)obj_foo(j 1,1))/... (max(obj_foo(:,1))min(obj_foo(:,1))); end [i,i] = sort(obj_foo(:,2)); obj_foo = obj_foo(i,:); con_foo = con_foo(i,:); violation_foo = violation_foo(i,:); foo = foo(i,:); crowd = crowd(i,:); for j = 2:length(foo) 1 crowd(j) = crowd(j) + (obj_foo(j+1,2)obj_foo(j 1,2))/... 
            (max(obj_foo(:,2)) - min(obj_foo(:,2)));
    end
    [i,i] = sort(crowd);
    i = flipud(i);
    foo = foo(i);
    crowd = crowd(i);
    rank_comb(foo1,:) = rank_comb(foo,:);
    obj_comb(foo1,:) = obj_comb(foo,:);
    con_comb(foo1,:) = con_comb(foo,:);
    violation_comb(foo1,:) = violation_comb(foo,:);
    comb(foo1,:) = comb(foo,:);
    % The new generation is selected from the combined generation by
    % selecting a number of individuals, equal to the defined population
    % size, which have the highest ranks.
    pop = comb(1:N_pop,:);
    obj = obj_comb(1:N_pop,:);
    con = con_comb(1:N_pop,:);
    violation = violation_comb(1:N_pop,:);
end

Objective and Constraint Function

% NSGA-II (objective function and constraint violation)
% This code is used to return the values of the objective functions and the
% constraint violations.
function [obj,con] = objcon(individual)
% Define process parameters
sample_size = 500;
% 'scaling' defines the scaling factor for the variables. This scaling
% factor will differ for each case: (1) when both first and second order
% periodic error are considered, (2) when only first order error is
% considered, and (3) when only second order error is considered.
% The scaling factor changes as the optimized permissible range changes for
% each case, as we must provide appropriate scaling factors for the process.
% The scaling factors are selected by a trial and error process.
% case 1: both first and second order periodic error
scaling = [6 20 2 0.2 1.5]/(3*30);
% case 2: only first order error
% scaling = [3 25 2 0.3 0.75]/(3*30);
% case 3: only second order error
% scaling = [8 45 20 0.5 7.5]/(3*30);

% Define mean, standard deviation, and covariance matrix
mu = [0 0 0 0 0 0 0];
sdev = individual.*scaling;
sigma = [sdev(1) sdev(2) sdev(3) sdev(3) sdev(4) sdev(4) sdev(5)];
Var = diag(sigma.^2);
% Form input sample set
Sample_Set = lhsnorm(mu,Var,sample_size);
% Correct the transmission coefficients since their nominal value is 1
Sample_Set(:,5:6) = 1 - abs(Sample_Set(:,5:6));
% Find maximum periodic error
max_err = zeros(sample_size,1);
fo = zeros(sample_size,1);
so = zeros(sample_size,1);
for cnt = 1:sample_size
    [fo(cnt),so(cnt)] = objfun(Sample_Set(cnt,:));
    max_err(cnt) = max([fo(cnt),so(cnt)]);  % case 1
    % max_err(cnt) = fo(cnt);  % case 2
    % max_err(cnt) = so(cnt);  % case 3
end
% Find the number of samples with periodic error greater than 0.5 nm
counter = find(max_err > 0.5);
prob_failure = length(counter)/sample_size;  % probability of failure
% Define objective functions
obj_fun1 = sum(individual);
obj_fun2 = prob_failure;
obj = [obj_fun1 obj_fun2];
% In order to avoid stray terms, we add a constraint that the maximum
% probability of failure is less than 0.05 so as to avoid individuals with
% a very high probability of failure
con = 0.05 - prob_failure;

Particle Swarm Optimization

% Particle Swarm Optimization
% Code to find the input variables which would produce a desired
% combination of first and second order periodic errors
close all
clear all
clc
% Define lower and upper bounds for the variables being optimized
lb = [0 0];
ub = [90 90];
% Define process parameters
Max_Iterations = 50;  % maximum number of iterations
Tolerance = 10e-6;    % objective function tolerance
SwarmSize = 50;       % swarm size
Variables = 2;        % number of variables
% Define target periodic error
FO_target = 10;
SO_target = 10;
% Generate first swarm
Swarm_initial = rand(SwarmSize,Variables);
Swarm = zeros(SwarmSize,Variables);
for i = 1:SwarmSize
    Swarm(i,:) = Swarm_initial(i,:).*(ub - lb) + lb;
end
% Find objective function values for the first swarm
fun_val = zeros(SwarmSize,1);
for i = 1:SwarmSize
    [fo_cand,so_cand] = objfun([Swarm(i,:) 0 0 1 1 0]);
    fun_val(i) = sqrt((fo_cand - FO_target)^2 + (so_cand - SO_target)^2);
end
% Generate personal best position matrix and personal best value matrix
per_best = Swarm;
fun_val_per_best = fun_val;
% Identify the swarm best individual and its corresponding position
[fun_val_glo_best, glo_best_ind] = min(fun_val_per_best);
success = 0;  % success flag
iter = 0;     % iteration counter
while (success == 0) && (iter <= Max_Iterations)
    iter = iter + 1;
    % Set GBest
    Global_best = repmat(Swarm(glo_best_ind,:), SwarmSize, 1);
    % Generate random numbers
    Rp = rand(SwarmSize,Variables);
    Rg = rand(SwarmSize,Variables);
    % Define weights
    cp = 2;  % personal weight
    cg = 2;  % global weight
    % Calculate velocity
    Vel = cp*Rp.*(per_best - Swarm) + cg*Rg.*(Global_best - Swarm);
    % Apply lower and upper bound constraints
    for cnt = 1:SwarmSize
        if (sum((Swarm(cnt,:) + Vel(cnt,:)) < ub) == Variables) && ...
                (sum((Swarm(cnt,:) + Vel(cnt,:)) > lb) == Variables)
            Swarm(cnt,:) = Swarm(cnt,:) + Vel(cnt,:);
        else
            Swarm(cnt,:) = Swarm(cnt,:);
        end
    end
    % Evaluate the objective function for the new swarm
    for i = 1:SwarmSize
        [fo_cand,so_cand] = objfun([Swarm(i,:) 0 0 1 1 0]);
        fun_val(i) = sqrt((fo_cand - FO_target)^2 + (so_cand - SO_target)^2);
    end
    % Find which particles have improved their location
    index_imp = fun_val < fun_val_per_best;
    per_best(index_imp,:) = Swarm(index_imp,:);
    fun_val_per_best(index_imp) = fun_val(index_imp);
    % Update the position of the best particle
    [fun_val_glo_best, glo_best_ind] = min(fun_val_per_best);
    % Apply stopping criterion
    if fun_val_glo_best < Tolerance
        success = 1;
    elseif iter > Max_Iterations
        success = 1;
    end
end
% Output best function value and best position
obj_fun_min = fun_val_glo_best;
best_pos = Swarm(glo_best_ind,:);

LIST OF REFERENCES

[1] Quenelle R. Nonlinearity in interferometric measurements. Hewlett-Packard J 1983;34(4):10.
[2] Sutton C. Nonlinearity in length measurement using heterodyne laser Michelson interferometry. J Phys E: Sci Instrum 1987;20:1290-2.
[3] Xie Y, Wu YZ. Zeeman laser interferometer errors for high-precision measurement. Applied Opt 1992;31:881-4.
[4] Rosenbluth AE, Bobroff N. Optical sources of nonlinearity in heterodyne interferometers. Precision Eng 1990;12:7-11.
[5] Wu C. Periodic nonlinearity resulting from ghost reflections in heterodyne interferometry. Opt Comm 2003;215:17-23.
[6] Schmitz T, Beckwith J. An investigation of two unexplored periodic error sources in differential-path interferometry. Precision Eng 2002;27:311-22.
[7] Schluchter C, Chu D, Ganguly V, Schmitz T. Real-time periodic error compensation with low/zero velocity parameter updates. In: Proceedings of the 24th American Society for Precision Engineering (ASPE), Monterey, California, 2009.
[8] Schmitz T, Kim H. Monte Carlo evaluation of periodic error uncertainty. Precision Eng 2007;21:251-9.
[9] Cosijns S, Haitjema H, Schellekens P. Modeling and verifying nonlinearities in heterodyne displacement interferometry. Precision Eng 2002;26:445-55.
[10] Hou W, Wilkening G. Investigation and compensation of nonlinearity of heterodyne interferometers. Precision Eng 1992;14:91-8.
[11] Stone JA, Howard LP. A simple technique for observing periodic nonlinearities in Michelson interferometers. Precision Eng 1998;22:220-32.
[12] Badami VG, Patterson SR. A frequency domain method for the measurement of nonlinearity in heterodyne interferometry. Precision Eng 2000;24:41-49.
[13] Schmitz T, Kim HS. Periodic error calculation from spectrum analyzer data. Precision Eng DOI:10.1016/j.precisioneng.2009.06.001
[14] Wu C, Deslattes R. Analytical modeling of the periodic nonlinearity in heterodyne interferometry. Applied Opt 1998;37:6696-700.
[15] Schmitz T, Chu D, Houck III L. First-order periodic error correction: validation for constant and non-constant velocities with variable error magnitudes. Meas Sci Technol 2006;17:3195-203.
[16] Chu D, Ray A. Nonlinearity measurement and correction of metrology data from an interferometer system. In: Proceedings of the 4th euspen International Conference, 2004. p. 300-1.
[17] Patterson SR, Beckwith J. Reduction of systematic errors in heterodyne interferometric displacement measurement. In: Proceedings of the 8th International Precision Engineering Seminar (IPES), 1995. p. 101-4.
[18] Kim H, Schmitz T, Beckwith J, Rueff M. A new heterodyne interferometer with zero periodic error and tunable beat frequency. In: Proceedings of the 23rd American Society for Precision Engineering (ASPE), Portland, Oregon, 2008.
[19] Schmitz T, Chu D, Kim H. First and second order periodic error measurement for non-constant velocity motions. Precision Eng 2009;33:353-61.
[20] Saltelli A, Andres T, Campolongo F. Global Sensitivity Analysis. A Primer. Wiley, 2008.
[21] Saltelli A.
Sensitivity analysis for importance assessment. Risk Analysis 2002;22:579-90.
[22] Homma T, Saltelli A. Importance measures in global sensitivity analysis of nonlinear models. Rel Eng Sys Safety 1996;52:1-17.
[23] Saltelli A, Tarantola S, Chan K. A quantitative model-independent method for global sensitivity analysis of model output. Technometrics 1999;41:39-56.
[24] Cukier R, Levine H, Schuler K. Nonlinear sensitivity analysis of multiparameter model systems. J Comp Phys 1978;26:1-42.
[25] Sobol IM. Sensitivity analysis for non-linear mathematical models. Math Modeling Comp Exp 1993;1:407-414.
[26] Sobol IM. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Math Comp Simul 2001;55:271-80.
[27] Archer G, Saltelli A, Sobol IM. Sensitivity measures, ANOVA-like techniques and the use of bootstrap. J Stat Comp Simul 1997;58:99-120.
[28] Saltelli A. Making best use of model evaluations to compute sensitivity indices. Comp Phys Comm 2002;145:280-97.
[29] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 2002;6:182-97.
[30] Jagdale V, Stanford B, Patil A, Ifju P. Conceptual design of a bendable UAV wing considering aerodynamic and structural performance. In: Proceedings of the 50th AIAA Structures, Structural Dynamics, and Materials Conference, Palm Springs, CA, 2009.
[31] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Evolutionary Computation, Piscataway, NJ. IEEE Press, 1998. p. 69-73.
[32] Hu X, Eberhart R. Solving constrained nonlinear optimization problems with particle swarm optimization. In: Proceedings of the Sixth World Multiconference on Systemics, Cybernetics and Informatics, Orlando, FL, 2002. p. 203-6.
[33] Angeline P.
Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In: Annual Conference on Evolutionary Programming, San Diego, 1998.
[34] Eberhart R, Shi Y. Comparison between genetic algorithms and particle swarm optimization. In: Annual Conference on Evolutionary Programming, San Diego, 1998.

BIOGRAPHICAL SKETCH

Vasishta Ganguly was born in Chennai, India, to Parthasarathy and Lalitha Ganguly. He attended Maharashtra Institute of Technology, Pune, between 2002 and 2006 and graduated with a bachelor's degree in mechanical engineering awarded by the University of Pune. He worked at Larsen & Toubro Limited, Powai, from July 2006 to June 2008, when he started his graduate studies at the University of Florida. He now works in the Machine Tool Research Center (MTRC) in the Department of Mechanical and Aerospace Engineering at the University of Florida.