
Development of New Source Diagnostic Methods and Variance Reduction Techniques for Monte Carlo Eigenvalue Problems with a Focus on High Dominance Ratio Problems

Permanent Link: http://ufdc.ufl.edu/UFE0042505/00001

Material Information

Title: Development of New Source Diagnostic Methods and Variance Reduction Techniques for Monte Carlo Eigenvalue Problems with a Focus on High Dominance Ratio Problems
Physical Description: 1 online resource (213 p.)
Language: english
Creator: Wenner, Michael
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2010

Subjects

Subjects / Keywords: Nuclear and Radiological Engineering -- Dissertations, Academic -- UF
Genre: Nuclear Engineering Sciences thesis, Ph.D.
bibliography (marcgt)
theses (marcgt)
government publication (state, provincial, territorial, dependent) (marcgt)
born-digital (sobekcm)
Electronic Thesis or Dissertation

Notes

Abstract: Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy. DEVELOPMENT OF NEW SOURCE DIAGNOSTIC METHODS AND VARIANCE REDUCTION TECHNIQUES FOR MONTE CARLO EIGENVALUE PROBLEMS WITH A FOCUS ON HIGH DOMINANCE RATIO PROBLEMS. By Michael Todd Wenner. December 2010. Chair: Alireza Haghighat. Major: Nuclear Engineering Sciences. Obtaining the solution to the linear Boltzmann equation is often a daunting task. The time-independent form is an equation of six independent variables which cannot be solved analytically in all but some special problems. Instead, numerical approaches have been devised; this work focuses on improving Monte Carlo methods for its solution in eigenvalue form. First, a statistical method of stationarity detection called the KPSS test was adapted as a Monte Carlo eigenvalue source convergence test. The KPSS test analyzes the source center of mass series, which was chosen since it should be indicative of overall source behavior and is physically easy to understand; a source center of mass plot alone serves as a good visual source convergence diagnostic. The KPSS test and three different information theoretic diagnostics were implemented into the well-known KENO V.a code inside the SCALE (version 5) code package from Oak Ridge National Laboratory and compared through analysis of a simple problem and several difficult source convergence benchmarks. Results showed that the KPSS test can add to overall confidence by identifying more problematic simulations than would be identified without it. Moreover, the source center of mass information visually aids in understanding the problem physics. The second major focus of this dissertation concerned variance reduction methodologies for Monte Carlo eigenvalue problems. The CADIS methodology, based on importance sampling, was adapted to eigenvalue problems. It was shown that the straight adaptation of importance sampling can provide a significant variance reduction in the determination of keff (up to 30% in the cases studied). A modified version of this methodology was developed which utilizes independent deterministic importance simulations. In this new methodology, each particle is simulated multiple times, once to every other discretized source region, utilizing the importance for that region only. Since each particle is simulated multiple times, this methodology often slows down the final keff convergence, but increased coupling is observed between source zones with important yet low-probability interactions. This is an important finding for loosely coupled systems and may be useful in their analysis. The third major focus of this dissertation concerns the use of the standard cumulative fission matrix methodology for high dominance ratio problems, which results in high source correlation. Source eigenvector confidence is calculated utilizing a Monte Carlo iterated confidence approach and is shown to be superior to the currently used plus and minus fission matrix methodology. Utilizing the fission matrix based approach with appropriate meshing and particle density, it is shown that the fission matrix elements tend to be independent. As a result, keff and the source eigenvector can be calculated without bias, which is not the case for the standard methodology due to the source correlation. This approach was tested with a 1-D multigroup eigenvalue code developed for this work. A preliminary automatic mesh and particle population diagnostic was formulated to ensure independent and normal fission matrix elements. The algorithm was also parallelized, demonstrating the favorable speedup possible with the fission matrix based approach.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Michael Wenner.
Thesis: Thesis (Ph.D.)--University of Florida, 2010.
Local: Adviser: Haghighat, Alireza.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2010
System ID: UFE0042505:00001

DEVELOPMENT OF NEW SOURCE DIAGNOSTIC METHODS AND VARIANCE REDUCTION TECHNIQUES FOR MONTE CARLO EIGENVALUE PROBLEMS WITH A FOCUS ON HIGH DOMINANCE RATIO PROBLEMS

By

MICHAEL T. WENNER

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2010

© 2010 Michael Wenner

To my parents, who always believed in me even when I didn't believe in myself

ACKNOWLEDGMENTS

I extend my thanks to my thesis advisor, Dr. Alireza Haghighat, for his guidance over the years. I would also like to extend my gratitude to all of my committee, Dr. Glenn Sjoden, Dr. David Hintenlang, Dr. Alireza Entezari and Dr. John Wagner, for their help and support. I would also like to thank the many members of the University of Florida Transport Theory Group for their thoughtfulness and willingness to listen to discussions about my work; in particular, Dr. Benoit Dionne, Dr. Ce Yi, Mr. William Walters and Mr. Kevin Manalo, who have provided valuable insight and also their friendship. I would like to thank all of my family and friends for their love and support over the years. I would not have been able to accomplish this without you. Finally, I would like to thank Jen for putting up with me through the difficult times of this work and encouraging me to finish what I had started.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
    Objective
    Literature Review
        Introduction to Monte Carlo Source Convergence
        Introduction to Monte Carlo Source Acceleration

2 GENERAL THEORY
    Forward Transport
    Adjoint Transport and Neutron Importance
    The Eigenvalue Problem
    The Monte Carlo Method
        Random Variable
        Random Numbers and the Fundamental Formulation of Monte Carlo (FFMC)
    Particle Transport with Monte Carlo Methods
        Statistics for Monte Carlo Analyses
        Tallying in a Monte Carlo Simulation
            Collision estimators
            Path length estimators
            Surface crossing estimators
            Other estimators
            keff estimators
    Non-Analog Monte Carlo
        Common Variance Reduction Techniques
        Efficiency of a Monte Carlo Simulation
    Monte Carlo keff Solution Strategies
        Source Iteration Method
        Fission Matrix Method
    Deterministic SN Method
        Multigroup Energy Approximation
        Particle Streaming and Scattering in 3-D
        Angular Discretization via Numerical Quadrature
        Discretization of the Spatial Domain
        Source Iteration Scheme
        Differencing Scheme
    Computer Codes Utilized

3 SOURCE CONVERGENCE DIAGNOSTIC METHODOLOGY
    The Dominance Ratio and Source Convergence
    Review of Information Theory, Stationarity and Undersampling Tests
        Criterion One
        Criterion Two
        Criterion Three
    Source Center of Mass
        Higher Moments of the Center of Mass
        Skewness
        Kurtosis
    Generalized KPSS Test for Stationarity Detection in Monte Carlo Eigenvalue Problems
        Significance Testing Procedure
        Long Run Variance Estimation

4 IMPLEMENTATION AND TESTING OF STATIONARITY DIAGNOSTICS
    Implementation of the Information Theory Methodology
    Implementation of the COM
    Implementation of the KPSS Methodology
    Final Table Output Additions in KENO V.a
    Implementations into 1Dcrit
    Benchmark Problems Introduction and Analysis
        Stationarity Benchmark Problem 1
        Benchmark Problem 1 Analysis
        Stationarity Benchmark Problem 2
            Modified 3 x 3 array of spheres analysis
            Analysis of a misdiagnosed case 1 replication
            Analysis of a misdiagnosed case 2 replication
            Full 5 x 5 array of spheres analysis
        Stationarity Benchmark Problem 3
        Conclusions of Stationarity Analyses

5 DEVELOPMENT AND IMPLEMENTATION OF NEW FISSION SOURCE ACCELERATION METHODOLOGIES
    Extension of CADIS Methodology to Eigenvalue Problems
        Review of the CADIS Methodology
            Integration by Monte Carlo
            Importance sampling
            Source biasing
            Transport biasing
        Extension of CADIS Methodology to Eigenvalue Problems
            Adjoint source
            Other considerations
            Modified source-detector algorithm
    Fission Matrix Methods
        Standard Fission Matrix Method Review
        Combined Fission Matrix Acceleration with CADIS
        Implementation into 1Dcrit

6 TESTING OF NEW SOURCE ACCELERATION METHODOLOGIES
    PENTRAN Importance Calculations
    Source Acceleration Test Problems
    E-CADIS Test Problem Results
    ME-CADIS Test Problem Results
    E-CADIS/ME-CADIS Final Remarks
    Fission Matrix Biasing Method with E-CADIS/ME-CADIS

7 DEVELOPMENT OF FMBMC FOR HIGH DOMINANCE RATIO PROBLEMS
    Fission Matrix Based Monte Carlo Method
    Test Problems for High Dominance Ratio Problems
        Fission Matrix Based Monte Carlo Method (FMBMC) Solution Procedure
        Confidence Estimation with FMBMC
        FMBMC Test Problem Results
            Test problem 1 analysis
            Test problem 2 analysis
            Test problem 3 analysis
        Cycle-wise Fission Matrix Analysis
        Effect of Initialized Source Distribution
        Automatic Independence Method with Minimum Sampling (AIMS)
        Extension to Parallel Computing Environment

8 CONCLUSIONS AND FUTURE WORK
    Conclusions
    Future Work

APPENDIX A: 1DCRIT INPUT DESCRIPTION
    Standard Input File Format
    Standard Input File Example
    1Dcrit.src Input File Structure
    Bias.dat Input File Structure
    Seedwrapper Parallel Driver Information

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Null hypotheses possible with Generalized KPSS test
3-2 Upper tail critical values of w, w and wo
4-1 Problem 1 stationarity diagnostic summary
4-2 Material properties for benchmark problem 2
4-3 Comparison of the number of replications with correct predictions for the entropy tests and Generalized KPSS test
4-4 Comparison of the number of stationary diagnoses for the full benchmark 2 problem for the information entropy tests and Generalized KPSS test
4-5 Material properties for benchmark problem 3, checkerboard storage of assemblies material data (atoms/barn-cm)
4-6 Problem 2 stationarity diagnostic summary
5-1 Material properties for algorithm test problem
5-2 Comparison of keff predicted by different codes for algorithm test problem
6-1 Material component radii for test problems 1 and 2
6-2 keff data for test problem 1
6-3 keff data for benchmark problem 2
6-4 keff data for benchmark problem 1 comparing ME-CADIS to the standard algorithm
6-5 Fission matrix elements and standard error for benchmark problem 1
6-6 keff data for benchmark problem 2 comparing ME-CADIS to the standard algorithm
6-7 Fission matrix elements and standard error for benchmark problem 2
7-1 keff comparison for FMBMC test problem 1 with the standard and E-CADIS algorithms with 1 source region
7-2 keff comparison for FMBMC test problem 1 with the E-CADIS algorithm with 1, 5 and 12 source regions
7-3 keff comparison for FMBMC test problem 1 with the E-CADIS algorithm and the FMBMC algorithm (conservative and Monte Carlo iterated confidence intervals)
7-4 keff comparison for FMBMC test problem 1 with the E-CADIS algorithm and the FMBMC algorithm (conservative and MC confidence intervals)
7-5 Speedup (relative to the standard algorithm) for FMBMC test problem 1 with the E-CADIS and FMBMC algorithms with 1, 5 and 12 fission matrix regions
7-6 Source convergence diagnostic results for FMBMC test problem 1 with the E-CADIS and FMBMC algorithms with 1, 5 and 12 fission matrix regions
7-7 FMBMC test problem 1 final converged source fractions for the 5 source region case
7-8 FMBMC test problem 1 combined converged sources with associated combined standard deviation
7-9 FMBMC test problem 1 final converged source fraction comparison for source meshes 1 and 3 with the standard algorithm and FMBMC (5 total source meshes)
7-10 FMBMC test problem 1 final converged source fraction comparison for source meshes 1, 6 and 12 with the standard algorithm and FMBMC
7-11 Case input parameter summary for E-CADIS and FMBMC
7-12 Speedup data for FMBMC test problem 2 with the E-CADIS and FMBMC algorithms (10 cm and 100 cm model length)
7-13 Source convergence diagnostic results for FMBMC test problem 2 with the E-CADIS and FMBMC algorithms (10 cm and 100 cm model length)
7-14 FMBMC test problem 2 final converged source fraction for source meshes 1, 3 and 5 for both the 10 cm and 100 cm pincells
7-15 FMBMC test problem 2 final converged source fraction comparison for source meshes 1, 3 and 5 for the standard algorithm and FMBMC for the 100 cm pincell
7-16 FMBMC test problem 2 subcase input parameters (E-CADIS, no importance)
7-17 FMBMC test problem 3 final converged source for source meshes 1-3
7-18 FMBMC test problem 2 AIMS cases input
7-19 FMBMC test problem 2 AIMMS cases input

LIST OF FIGURES

2-1 3-D Cartesian space-angle coordinate system
4-1 Schematic of the benchmark 1 problem
4-2 Problem 1 keff results (first 50 values represent 2500 histories per generation, last 50 values represent 20000 histories per generation)
4-3 Benchmark problem 1 average source fraction in cylinders 1 and 5 normalized to their average beyond 50 skipped cycles for replication 1
4-4 Benchmark problem 1 source fraction in cylinders 6 and 8 normalized to their average beyond 50 skipped cycles for replication 9
4-5 Illustration of a white Gaussian series with mean 1.0 and standard deviation 0.1 with and without a small deterministic trend (m=0.0001)
4-6 Illustration of a white Gaussian series with mean 1.0 and standard deviation 0.0001 with and without a small deterministic trend (m=0.0000001)
4-7 Schematic of the simplified benchmark 2 problem
4-8 Schematic of the full benchmark 2 problem
4-9 COM series plot for replication 1 for case 1 (vacuum boundary condition) and case 2 (reflective boundary condition) of modified version of benchmark 2
4-10 xg, yg, zg and COM series for replication 21 for case 1 (vacuum boundary condition) of benchmark problem 2
4-11 xg, yg, zg and COM series plot for replication 29 for case 1 (vacuum boundary condition)
4-12 xg, yg, zg and COM series plot for replication 1 for case 1 of the full benchmark 2 problem (vacuum boundary condition)
4-13 xg, yg, zg and COM series plot for replication 5 for case 2 (reflective boundary condition)
4-14 Schematic of the benchmark 3 problem
4-15 KPSS benchmark problem 3 COM plot for replication 1 with 10000 histories per generation
4-16 Benchmark problem 2 COM plot for replication 1 with 10000 histories per generation and the single replication case of 100000 histories per generation
4-17 Source fraction plot of problem 3 for assemblies 9, 11, 13, 15, 17, 19, 21 and 23 with 100000 histories per generation
4-18 Source fraction plot of benchmark problem 3 with 1 million histories per generation
4-19 Problem 2 keff results for all benchmark problem 2 cases
5-1 Source discretization with source transfer procedure for 2 region ME-CADIS problem
5-2 Source discretization with source transfer procedure for 2 region ME-CADIS problem
6-1 Schematic of source acceleration test problem 1
6-2 Geometry for test problem 2
6-3 Benchmark problem 1 (3 source region) global importance to production (plotted at deterministic mesh center)
6-4 Problem 1 standard and E-CADIS source COM by generation
6-5 Problem 2 (2 source region) global importance to production (plotted at deterministic mesh center)
6-6 Problem 2 standard and E-CADIS source COM by generation
6-7 Problem 2 source fractions for the left and right slab
6-8 Benchmark problem 1 (3 source region) global importance to production (plotted at deterministic mesh center)
6-9 Benchmark problem 2 (2 source region) global importance to production (plotted at deterministic mesh center)
6-10 Benchmark problem 2 fission matrix element A12 for both E-CADIS and ME-CADIS
6-11 Benchmark problem 2 fission matrix element A12 E-CADIS autocorrelation plot
6-12 Benchmark problem 2 fission matrix element A12 ME-CADIS autocorrelation plot
7-1 Geometry and material data for test problem 1
7-2 1Dcrit input flow paths for E-CADIS/ME-CADIS and source initialization (dashed lines represent optional input flow path)
7-3 Standard and fission matrix keff determination flowchart
7-4 Importance to production for FMBMC test problem 1
7-5 Replication 5 FMBMC COM behavior
7-6 FMBMC test problem 2 replication 1 source mesh 1 convergence
7-7 FMBMC test problem 1 keff summary for cases 1-50
7-8 Converged source bins for cases 1, 11 and 31
7-9 Mesh 1 source fraction for cases 1-10
7-10 Mesh 1 source fraction for cases 11-20
7-11 Mesh 1 source fraction for cases 21-30
7-12 Mesh 1 source fraction for cases 31-40
7-13 Mesh 1 source fraction for cases 41-50
7-14 Autocorrelation of the A12 fission matrix element for representative cases 1, 11, 21, 31 and 41
7-15 FMBMC test problem 2 keff summary (5 source meshes)
7-16 FMBMC benchmark problem 2 replication 1 COM for the 10 cm (renormalized for plot) and 100 cm cases
7-17 FMBMC test problem 2 source mesh 1 convergence (50000 particles per generation)
7-18 FMBMC test problem 2 replication 1 A12 fission matrix element autocorrelation plot
7-19 FMBMC test problem 2 standard algorithm COM for subcase 2, replication 1
7-20 FMBMC test problem 2 subcase 2 replication 1 mesh 1 source fraction evolution
7-21 FMBMC test problem 2 replication 1 subcase 2 A12 fission matrix element autocorrelation plot
7-22 FMBMC test problem 2 standard algorithm COM for subcase 4, replication 1
7-23 FMBMC test problem 2 subcase 4 replication 1 mesh 1 source fraction evolution
7-24 FMBMC test problem 2 subcases 3 and 4 source fractions
7-25 FMBMC test problem 2 subcase 4 A12 fission matrix element autocorrelation plot
7-26 FMBMC test problem 2 subcases 5 and 6 source fractions
7-27 FMBMC test problem 2 subcase 6 A12 fission matrix element autocorrelation plot
7-28 AIMS flowchart
7-29 Parallel speedup with FMBMC for 3 and 5 processors

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

DEVELOPMENT OF NEW SOURCE DIAGNOSTIC METHODS AND VARIANCE REDUCTION TECHNIQUES FOR MONTE CARLO EIGENVALUE PROBLEMS WITH A FOCUS ON HIGH DOMINANCE RATIO PROBLEMS

By

Michael Todd Wenner

December 2010

Chair: Alireza Haghighat
Major: Nuclear Engineering Sciences

Obtaining the solution to the linear Boltzmann equation is often a daunting task. The time-independent form is an equation of six independent variables which cannot be solved analytically in all but some special problems. Instead, numerical approaches have been devised; this work focuses on improving Monte Carlo methods for its solution in eigenvalue form.

First, a statistical method of stationarity detection called the KPSS test was adapted as a Monte Carlo eigenvalue source convergence test. The KPSS test analyzes the source center of mass series, which was chosen since it should be indicative of overall source behavior and is physically easy to understand; a source center of mass plot alone serves as a good visual source convergence diagnostic. The KPSS test and three different information theoretic diagnostics were implemented into the well-known KENO V.a code inside the SCALE (version 5) code package from Oak Ridge National Laboratory and compared through analysis of a simple problem and several difficult source convergence benchmarks. Results showed that the KPSS test can add to overall confidence by identifying more problematic simulations than would be identified without it. Moreover, the source center of mass information visually aids in understanding the problem physics.

The second major focus of this dissertation concerned variance reduction methodologies for Monte Carlo eigenvalue problems. The CADIS methodology, based on importance sampling, was adapted to eigenvalue problems. It was shown that the straight adaptation of importance sampling can provide a significant variance reduction in the determination of keff (up to 30% in the cases studied). A modified version of this methodology was developed which utilizes independent deterministic importance simulations. In this new methodology, each particle is simulated multiple times, once to every other discretized source region, utilizing the importance for that region only. Since each particle is simulated multiple times, this methodology often slows down the final keff convergence, but increased coupling is observed between source zones with important yet low-probability interactions. This is an important finding for loosely coupled systems and may be useful in their analysis.

The third major focus of this dissertation concerns the use of the standard cumulative fission matrix methodology for high dominance ratio problems, which results in high source correlation. Source eigenvector confidence is calculated utilizing a Monte Carlo iterated confidence approach and is shown to be superior to the currently used plus and minus fission matrix methodology. Utilizing the fission matrix based approach with appropriate meshing and particle density, it is shown that the fission matrix elements tend to be independent. As a result, keff and the source eigenvector can be calculated without bias, which is not the case for the standard methodology due to the source correlation. This approach was tested with a 1-D multigroup eigenvalue code developed for this work. A preliminary automatic mesh and particle population diagnostic was formulated to ensure independent and normal fission matrix elements. The algorithm was also parallelized, demonstrating the favorable speedup possible with the fission matrix based approach.

CHAPTER 1
INTRODUCTION

Objective

Since the Boltzmann equation was first introduced, it has found application in many areas such as the kinetic theory of gases (Alexev, 2004) and charged (Cercignani, 1975) and neutral particle transport (Bell and Glasstone, 1985; Lewis and Miller, 1993). Although very powerful, obtaining a solution to the Boltzmann equation is often a daunting task. The time-independent form of the Linear Boltzmann Equation (LBE) is an equation of six variables which cannot be solved analytically in all but some simplified problems. Numerical approaches devised to solve the LBE can be broken into two distinct groups: deterministic and Monte Carlo methods.

In the deterministic approach, the equation itself is discretized and then is either solved directly or via iteration schemes. There are a large variety of deterministic methods available due to the different discretizations and approximations utilized, but some of the more common methods include the discrete ordinates (SN) methods, nodal methods, the method of characteristics (MOC) and the collision probability methods (CPM) (Lewis and Miller, 1993). Errors from these methodologies are systematic in nature since they arise out of the inability to represent the system exactly due to the different discretizations utilized. If the final solution is obtained iteratively, a convergence criterion is also utilized, adding to the error.

In the Monte Carlo Method, a stochastic model is constructed and the expected value of a desired random variable, which can be an integral quantity of interest, is calculated (Kalos and Whitlock, 1986; Metropolis and Ulam, 1949). The expected value is obtained by calculating the average of many independent samples. Often, the method is likened to performing an experiment on a computer. Given that the stochastic model utilized is accurate (i.e., the underlying physical characteristics of the system are well known), the approach is accurate.
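To make the expected-value idea concrete, here is a minimal sketch (an illustration added for this discussion, not taken from the dissertation) of Monte Carlo used as a numerical integration tool: a one-dimensional integral is estimated as the average of the integrand over uniformly sampled points, and the statistical uncertainty follows from the sample variance via the Central Limit Theorem.

    import random

    # Estimate I = integral of f(x) dx over [0, 1] by averaging f at points
    # sampled from the uniform pdf p(x) = 1. The sample mean converges to the
    # expected value; the spread of the samples gives its standard error.
    def mc_integrate(f, n_samples=100_000):
        total = 0.0
        total_sq = 0.0
        for _ in range(n_samples):
            fx = f(random.random())      # x ~ U(0, 1)
            total += fx
            total_sq += fx * fx
        mean = total / n_samples
        # variance of the sample mean, s^2 / n
        var_mean = (total_sq / n_samples - mean * mean) / (n_samples - 1)
        return mean, var_mean ** 0.5

    estimate, std_err = mc_integrate(lambda x: x * x)   # exact value: 1/3
    print(f"I = {estimate:.5f} +/- {std_err:.5f}")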

Random numbers are used to sample from probability density functions (pdfs) representing the basic physical phenomena to provide the desired independent samples. Alternatively, the Monte Carlo Method can be used to directly solve deterministic equations as a numerical integration tool. Monte Carlo error arises both from an incorrect stochastic model and from the statistical uncertainties of the calculated results. The former is reduced by modeling the process as close to nature as possible, and the latter is estimated using principles of statistics to be reviewed later, such as expectation values, variances, use of the Central Limit Theorem, etc. Recently, more researchers have begun to utilize both deterministic and Monte Carlo methods together to benefit from the strengths of each method.
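As a concrete instance of sampling from a physical pdf (a standard textbook construction for a uniform medium, not code from this work), the distance to the next collision is exponentially distributed, and its cumulative distribution function can be inverted analytically:

    import math
    import random

    # In a uniform medium the free-flight distance s has pdf
    # p(s) = sigma_t * exp(-sigma_t * s). Setting the CDF equal to a uniform
    # random number xi and solving gives s = -ln(1 - xi) / sigma_t; since
    # 1 - xi is itself uniform on (0, 1), -ln(xi) / sigma_t is equivalent.
    def sample_flight_distance(sigma_t):
        """Distance to the next collision for total cross section sigma_t (1/cm)."""
        return -math.log(random.random()) / sigma_t

    # The mean of many samples approaches the mean free path 1 / sigma_t.
    sigma_t = 0.5
    samples = [sample_flight_distance(sigma_t) for _ in range(100_000)]
    print(sum(samples) / len(samples))   # close to 2.0 cm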

Particle transport via the Monte Carlo Method has long been lauded for its ability to provide accurate solutions to some of the most challenging problems. Monte Carlo analysis of this type usually focuses on a specific local objective since the process is time consuming. With ever-increasing computer capabilities, problems of increasing complexity are being resolved like never before, but these problems pose new challenges and also bring to light some new issues. Problem-specific objectives are no longer the sole target, and full phase space analysis is sought, with the idea that even 3-D burnup with Monte Carlo is now an obtainable goal. In addition, the solution of criticality (eigenvalue) problems via the Monte Carlo Method is standard practice, and although reliable eigenvalue estimates may be easily obtained in most cases, full phase space information is often much more difficult to obtain. Even with current computing technology this can be a very time consuming process. Furthermore, source convergence difficulties have long plagued the solution of criticality problems, particularly for high dominance ratio (DR) problems, as has been pointed out for decades (Blomquist and Gelbard, 2000; Brissenden and Garlick, 1986; Gelbard, 1990; Gelbard and Prael, 1974; Ueki et al., 1997; Whitesides, 1971). This work aims at providing additional information regarding source convergence and providing new techniques for difficult source convergence problems, such as those with high DR, while attempting to reduce variance and improve accuracy.

Literature Review

Current source convergence diagnostics have improved diagnostic ability with the introduction of several new approaches; however, none of the diagnostic tools are failsafe. Consequently, it is believed that a multilevel testing approach using a combined methodology may be best. One focus of this work is the addition of a new diagnostic test for any Monte Carlo eigenvalue problem, with analysis indicating combined use of available diagnostics as the most tractable approach.

Source acceleration for criticality problems has not had the level of success that shielding problems have enjoyed. Nevertheless, some excellent ideas and successes have been realized with past and more recent methods such as stratified source sampling (Blomquist, 2000; Gelbard and Roussel, 1995), superhistory powering (Blomquist, 2000; Brissenden and Garlick, 1986), fission matrix methods (Kadotani et al., 1991; Morton, 1956), and more recently some interesting ideas such as Wielandt's method (Brown, 2007; Yamamoto and Miyoshi, 2004), the Vacation Matrix approach (Finch, 2006), a semi-fixed source fission matrix approach (Dufek, 2007), and a zero variance scheme using importance sampling (Dufek, 2007). A good summary of their strengths and weaknesses (except the Vacation Matrix approach) is given in Accelerated Monte Carlo Eigenvalue Calculations (Dufek, 2007). Source initialization via deterministic methods may also be utilized in conjunction with these approaches, which in itself can accelerate convergence due to a much shorter period of source initialization (Wenner et al., 2000). Strengths and weaknesses are evident in these and other approaches, and the need still exists to improve these calculations with regard to both accuracy (of the results and their confidence) and calculation time. In fact, many of these methods are in the acceleration category, but are really used to reduce bias and help with fission source convergence of high DR problems due to their inherent poor convergence properties. The work in this dissertation also puts forth new methodologies addressing the aforementioned issues.

The multiplication factor (keff), or dominant eigenvalue, of a fissile system without an external source can be interpreted physically as the number of neutrons in one generation divided by the number of neutrons in the previous generation. In order to calculate this parameter via the Monte Carlo Method, one starts with an initial source distribution (guess) of neutrons inside the fissionable material, since the source distribution is not known. Using the known interaction probability distribution functions, the events in the life of each neutron (e.g., free flight, absorption, scattering) are randomly sampled. Along the way, the results of this process, such as the number of fission neutrons produced and their locations and directions, are stored for future use.

When all of the original source particles are exhausted, one can determine keff by tabulating the ratio of the number of neutrons generated to the number of neutrons started. This is the first estimate of the eigenvalue and is denoted the first generation (or cycle) keff. This procedure is repeated for many generations until the source converges, and then the active generations begin. Until this period has ended, the initial generations must be discarded. Once the source has converged, the final keff is taken as an average of the keff of all active generations. A numerical problem that arises out of this strategy is that if the eigenvalue is not 1.0, the source will eventually die out (keff < 1.0) or increase exponentially (keff > 1.0). In order to account for this, a normalization scheme is utilized to keep the approximate number of source particles constant, using the current keff estimate as a normalization factor.
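The generation loop just described can be condensed into a short sketch. This is schematic only: the transport physics is replaced by a toy reproduction model (track_neutron is a hypothetical stand-in, and a production code would also handle edge cases such as an empty bank), so it illustrates the bookkeeping rather than the algorithm of any particular code.

    import random

    N_PER_GEN = 10_000   # nominal neutrons per generation
    N_SKIP = 50          # inactive (discarded) generations for source convergence
    N_TOTAL = 250        # total generations

    def track_neutron(site):
        """Toy stand-in for transport: each neutron yields 0-3 fission
        neutrons near its starting site (mean 1.0)."""
        n_fission = random.choices([0, 1, 2, 3], weights=[40, 30, 20, 10])[0]
        return [site + random.gauss(0.0, 0.1) for _ in range(n_fission)]

    bank = [random.uniform(0.0, 1.0) for _ in range(N_PER_GEN)]  # initial guess
    active_keffs = []
    for gen in range(N_TOTAL):
        next_bank = []
        for site in bank:
            next_bank.extend(track_neutron(site))
        k_gen = len(next_bank) / len(bank)   # neutrons produced / neutrons started
        if gen >= N_SKIP:
            active_keffs.append(k_gen)       # only active generations are tallied
        # renormalize: resample the bank so the population stays near N_PER_GEN
        bank = [random.choice(next_bank) for _ in range(N_PER_GEN)]

    print("keff =", sum(active_keffs) / len(active_keffs))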

Many sources of bias can become evident during a Monte Carlo eigenvalue simulation. A bias of the fission source has been shown to be related to the normalization of the source population at the end of each generation and autocorrelation of the fission source (Brissenden and Garlick, 1986; Gelbard, 1991; Gelbard and Gu, 1994; Gelbard and Prael, 1974). This bias can be made insignificant for many problems; however, significant issues can still be prevalent depending on the problem simulated (especially high DR problems). The variance estimators can also be biased for similar reasons (Gelbard, 1990; MacMillian, 1973). This bias can become more pronounced if keffs from an unconverged source are utilized in the calculation of keff, or if the underlying source is not independent.

Introduction to Monte Carlo Source Convergence

Source convergence in Monte Carlo eigenvalue problems is one of the most important issues the analyst must deal with in regards to a criticality simulation. All tally information should be deferred until the source has converged. The identification of a converged source, however, is not always an easy task. Many widely used codes such as MCNP (X-5 Monte Carlo Team, 2003) and KENO (SCALE, 2005) have multiple checks regarding tally information, including the keff. These checks are not adequate, though, when pertaining to identification of a converged fission source. Most standard statistical tests for tallies are not adequate when applied to source convergence and may be biased for reasons previously discussed in this chapter. Most code developers have begun or have already implemented new source convergence algorithms.

The difficulty in identifying source convergence is that although the keff may have acceptable sample variance, it may be inaccurate due to many reasons, not least of which are an unconverged source (from a poor initial guess) and/or strong autocorrelation. Traditional variance estimators are not pertinent when independence of samples is not guaranteed. Since keff is an integral quantity over the entire phase space, these errors may be small or nonexistent, yet other tallies may suffer dramatically from an improperly converged source. One major objective of this work is to provide a new diagnostic tool for detection of a converged source distribution, with which bias will be reduced and therefore overall accuracy improved for all tallies as a secondary result.

In order to understand current and past work for identifying source convergence, the concept of a time series is introduced. A time series is a random or non-deterministic function x of an independent variable t for which future behavior cannot be predicted exactly (Jenkins and Watts, 1968). The keff series is a time series that is no more than a random variable of a complex stochastic process. One of the most important decisions (often assumptions) made is whether the time series is said to be stationary. If stationary, the stochastic process can be described by its mean, variance, covariance, and other related parameters (Jenkins and Watts, 1968). This is important, since a diagnosis of stationarity implies that the statistical properties of the series do not change with time (or generation in this case). Physically, without a stationary series, the source has not converged and reliable estimates of random variables such as flux, keff, neutron current, etc., are not attainable. Note that even with a diagnosis of stationarity, bias can still be present in both the tallies and their error estimation. Furthermore, a converged stationary source may not be the true source. For the remainder of this dissertation, however, a stationary source will be deemed converged unless otherwise stated.

General statistical tests for stationarity are difficult to find. Some of the more well known methodologies concerning source convergence diagnostics are as follows:

- Sandwich approach (Yang and Naito, 2004)
- Information (Shannon) entropy based diagnostics (Ueki, 2005)
- Intergenerational Correlation Length (Shim and Kim, 2007)
- Limiting Distribution Approach (Richet et al., 2003)

The sandwich method (Yang and Naito, 2004) utilizes two calculations, one for which all the source is started in the most reactive area (obtained through pre-calculations), and one with a completely flat initial guess. Provided the results of both simulations are within statistics of each other, it is assumed the calculation is successful. This approach suffers from several issues. For one, two calculations are needed instead of one. Second, both calculations may converge in keff, yet the source may not have converged; it is well known that keff convergence does not guarantee fission source convergence, further complicating the issue (Ueki et al., 2003).

The information (Shannon) entropy approach utilizes concepts from information theory in order to characterize whether the fission source is changing in an unacceptable manner. The Shannon entropy is a measure of the randomness associated with data; the Shannon entropy of a probability density distribution gives the minimum number of bits with which the density function can be stored in a computer memory. A related definition, the relative entropy, is a measure between two distributions in terms of the information they carry. Here, the idea is that if the fission source is tallied over a finite binning structure, it can be characterized by its Shannon entropy (Cover and Thomas, 1991; Ueki and Brown, 2002).

Several criteria have been developed to identify if the fission source is changing too much. To identify what is too much, one criterion (Ueki and Brown, 2002) relies on the concept of minimum descriptive length and the Kraft inequality to show that the source should be converged if the fluctuation of the relative entropy is insignificant relative to the fluctuation of the entropy itself, through the use of their mean squares. A second criterion (Ueki and Brown, 2002) is derived from a large sample property and is another inequality comparing the difference between the entropy of the average source and the entropy of each generation with the relative entropy of each generation. A third criterion (Ueki and Brown, 2002) is derived from the concavity of the Shannon entropy and indicates that the difference between the Shannon entropy of the average source and the average Shannon entropy should be within a small tolerance (Cover and Thomas, 1991; Ueki, 2005).

The Shannon entropy approach has proven fruitful for many problems; however, assumptions must be made in the convergence criteria, none more important than the use of the average source from the second half of the simulation as the true source. Also, convergence of the Shannon entropy does not guarantee fission source convergence (Ueki et al., 2003), and the Shannon entropy tests can suffer from a reduced diagnostic ability for problems which may exhibit symmetric oscillations (L'Abbate et al., 2007).
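To illustrate the binned-entropy idea (a generic sketch of the quantity being tracked, not the specific implementation or criteria of Ueki and Brown), the fission source can be tallied on a spatial mesh each generation and its Shannon entropy computed; a source still migrating shows a drifting entropy, while a converged source shows it fluctuating about a constant. The bin counts below are hypothetical.

    import math

    # Shannon entropy of a binned source distribution:
    # H = -sum_i p_i * log2(p_i), with p_i the fraction of fission sites in bin i.
    def shannon_entropy(bin_counts):
        total = sum(bin_counts)
        h = 0.0
        for c in bin_counts:
            if c > 0:
                p = c / total
                h -= p * math.log2(p)
        return h

    print(shannon_entropy([2500, 2500, 2500, 2500]))  # 2.0 bits: fully spread
    print(shannon_entropy([9700, 100, 100, 100]))     # ~0.24 bits: concentrated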

concavity of the Shannon entropy and indicates that the difference between the Shannon entropy of the average source and the average Shannon entropy should be within a small tolerance (Cover and Thomas, 1991; Ueki, 2005). The Shannon entropy approach has proven fruitful for many problems; however, assumptions must be made in the convergence criterion, none more important than the use of the average source from the second half of the simulation as the true source. Also, convergence of the Shannon entropy does not guarantee fission source convergence (Ueki et al., 2003), and the Shannon entropy tests can suffer from a reduced diagnostic ability for problems which may exhibit symmetric oscillations (L'Abbate et al., 2007).
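To make the entropy diagnostic concrete, the sketch below computes the generation-wise Shannon entropy of a spatially binned fission source; a trace that settles to a flat band is the behavior the criteria above are designed to detect. This is a minimal Python illustration under assumed inputs (the binning, the stand-in fission-site coordinates, and the generation count are hypothetical), not the KENO V.a or MCNP implementation.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned fission source distribution."""
    p = counts / counts.sum()
    p = p[p > 0.0]                     # treat 0*log(0) as 0
    return -np.sum(p * np.log2(p))

# Hypothetical generation-wise entropy series from banked fission-site positions
rng = np.random.default_rng(1)
entropy_series = []
for generation in range(100):
    sites = rng.normal(0.5, 0.2, size=5000)   # stand-in fission bank x-coordinates
    counts, _ = np.histogram(sites, bins=32, range=(0.0, 1.0))
    entropy_series.append(shannon_entropy(counts))
# Plotting entropy_series versus generation gives the usual visual diagnostic;
# the formal criteria compare its fluctuations against the relative entropy.
```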

The intergeneration correlation length (Shim and Kim, 2007) approach attempts to modify the normal statistical testing procedure that would apply if the fission source had no autocorrelation. In order to do this, the equation must be modified to add the intergeneration correlation length (Shim and Kim, 2007). This term is the minimum number of calculation generations apart which separate two uncorrelated fission sources. This method aims at an on-the-fly determination of the skipped generations. Approximations to the variance of the binned fission source matrix and the covariance matrix between fission sources of different generations must be estimated. Good results have been published in the literature; nevertheless, as with all the other methods, estimators must be acquired which are problem dependent (Shim and Kim, 2007). Furthermore, problems exist with very strong and long-lasting correlation, which is compounded in high DR problems.

The limiting distribution approach (LDA) uses convergence of probability measures (Billingsley, 1968), for which an autocorrelated time series is said to converge to the Brownian Bridge if the time series is stationary. This methodology was originally developed for simulations in operations research (Heidelberger and Welch, 1983; Schruben, 1982). It was applied to the keff time series, which will converge to the Brownian Bridge, identified through the use of significance tests applied to the keff series (Richet et al., 2003). This approach suffers from several drawbacks in that it tests the keff series, which is not a good indicator of source convergence, as previously mentioned. Also, an estimate of a parameter called the long run variance must be determined, which may be inadequate or impractical using the estimators derived from the work in operations research. Many different estimators have been used and proposed (Heidelberger and Welch, 1981; Heidelberger and Welch, 1983; Schruben, 1982). Further, stationarity is only being indirectly tested for, and only preliminary statistical power and size analysis has been undertaken for the proposed significance tests.

A major issue with the LDA is obtaining a parameter which is closely related to the Monte Carlo source convergence in order to perform the diagnostic test. A first guess has been the use of the keff series, which does not imply source convergence. A natural choice appears to be the use of the source center of mass (COM) (Wenner and Haghighat, 2007). The source COM is a single value at each generation which should closely follow the spatial behavior of the fission source. Given the source COM, it was desired to utilize this series with analysis similar to that of the LDA. In this dissertation a formal testing procedure has been implemented which shows that the sum of residuals from a least squares fit of a time series should follow a Brownian Bridge related distribution, and a significance test has been implemented for this purpose, called the Generalized KPSS test for stationarity (Kwiatkowski et al., 1992). Distinct differences exist between this approach and the LDA approach, making it a novel approach when compared to the LDA. The statistical testing procedure has been formalized, and the COM series behavior has been chosen as the parameter to characterize source behavior. Also, estimation of the long run
variance has been updated to provide more accurate testing. More details on this method will be given in Chapter 3.
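As an illustration of the COM series itself, the sketch below accumulates the generation-wise source center of mass from banked fission-site coordinates and weights; the KPSS machinery of Chapter 3 is then applied to a series of this kind. The bank arrays here are hypothetical stand-ins, not output from a production code.

```python
import numpy as np

def source_center_of_mass(positions, weights):
    """Weighted center of mass of one generation's fission bank.

    positions : (N, 3) array of fission-site coordinates
    weights   : (N,) array of particle weights
    """
    return np.average(positions, axis=0, weights=weights)

# Hypothetical fission bank over several generations
rng = np.random.default_rng(2)
com_series = []
for generation in range(50):
    bank_xyz = rng.uniform(-10.0, 10.0, size=(2000, 3))
    bank_wgt = np.ones(2000)
    com_series.append(source_center_of_mass(bank_xyz, bank_wgt))
com_series = np.array(com_series)   # shape (50, 3); one component per axis
# A drifting COM component suggests the source is still converging; a stationary
# COM series is the input to the KPSS significance test described in Chapter 3.
```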

Introduction to Monte Carlo Source Acceleration

Accuracy of solution is the first priority for a Monte Carlo eigenvalue simulation. Nevertheless, if computation of an accurate solution is not practical within a limited time, results are not useful. In order to overcome this, variance reduction techniques have been developed to speed up the calculation process. Variance reduction has enjoyed great success concerning fixed source problems where localized objectives are desired. In an eigenvalue simulation, however, all fissile regions are important; therefore many conventional variance reduction techniques are not effective. Several methods have been proposed that are unique to the eigenvalue calculation. Some of these are as follows:

Stratified Source Sampling (Gelbard and Roussel, 1995)
Super History Powering (Brissenden and Garlick, 1986)
Fission Matrix Acceleration (Dufek, 2007; Kaplan, 1958; Morton, 1956; Urbatsch, 1995)
Vacation Matrix Approach (Finch, 2006)
Wielandt's Method (Brown, 2007; Yamamoto and Miyoshi, 2004)
Importance Sampling Methodologies (Christoforou and Hoogenboom, 2006; Dufek, 2007; Wagner et al., 2007)

The stratified source sampling method attempts to force at least one fission neutron to be in each (user defined) fissile region. This method has the potential to improve both accuracy and simulation time. Success is limited since ensuring at least one fission neutron may not be enough, and at times it is still possible to lose the fission source entirely in a region (Blomquist, 2000; Gelbard and Roussel, 1995).

In super history powering, cycles and generations are not equivalent. Super history powering alters the source normalization scheme such that each cycle is not just one neutron generation but l generations contained within a cycle. Instead of normalizing the weight for each cycle to the number of neutrons per generation, m, it is normalized to the total source for that cycle and generation. The fission source for the first generation of each cycle is from the last generation in the previous cycle. Up to 10 generations per cycle is recommended. This approach can reduce source bias by increasing priority in highly multiplying regions; however, any localized information in less multiplying areas may be less reliable (Brissenden and Garlick, 1986; Dufek, 2007).

If the system is discretized spatially, a matrix equation can be formulated such that each element of the matrix (fission matrix) represents the transfer probability of fission from one cell to another (or itself). This information can be used to bias the source and achieve speedup. The fission matrix method has had some success, but problems in convergence of the fission matrix elements due to statistical noise have limited its usefulness. If the cumulative fission matrix is utilized, however, this problem may be reduced but not eliminated, and contamination from unreliable source cycles still becomes an issue (Dufek, 2007; Kaplan, 1958; Morton, 1956; Urbatsch, 1995).

Until recently, the fission matrix method has been the preferred method of fission source convergence acceleration. Statistical noise in computing the fission matrix elements, especially in those problems with loose coupling between different source areas, has prohibited the method from widespread success. A new approach called the Vacation Matrix method has been devised by identifying which variables in the matrix are independent of one another; it uses them to devise a new spatial binning, thereby reducing the noise associated with these variables. The variables are the leakage probability and the vacated source probability (Finch, 2006).

Wielandt's method is a deterministic acceleration technique for problems with a high dominance ratio causing slow convergence. In this approach, the eigenvalue in the transport equation is simply replaced by two components, and it can be shown that the dominance ratio of this equation will be reduced under certain conditions. In order to adapt this method to Monte Carlo eigenvalue simulations, the fission bank and fission site selection are modified to incorporate these two components, which facilitates the tracking of all fission neutrons and their progeny (with respect to this new formulation). The method causes more neutrons to be followed per generation, yet there are fewer banked neutrons per collision for use with the next generation. This causes a larger dispersion of source neutrons inside each generation, thereby reducing problems of loose coupling between fission sites. A drawback to this method is the fact that often the simulation time has been shown to increase (Brown, 2007; Yamamoto and Miyoshi, 2004).

A semi-fixed source fission matrix method has recently been devised which samples all regions equally, since it is known that the sampling of fission matrix elements with different numbers of fission neutrons is one cause of instability in the fission matrix method. In order to achieve this, the spatial discretization must be done such that an equal amount of source is in each volume. This discretization is not possible to know a priori; however, if the mesh is sufficiently small, bias caused from this approximation is minimal or eliminated. Difficulties arise due to the size of the fission matrix utilized (Dufek, 2007).

Importance sampling related methods include a zero variance based scheme derived for eigenvalue calculations which can be implemented in a similar manner to conventional deterministic biasing. By looking at the components of the calculated keff, a detector-response-like formulation can be derived and applied. Initial investigation in this dissertation shows
favorable speedup is possible, as is also shown in (Christoforou and Hoogenboom, 2006) and (Dufek, 2007). This work is a straightforward extension of the zero-variance formulations for shielding calculations, which were extended to Monte Carlo shielding problems by Wagner and Haghighat in their Consistent Adjoint Driven Importance Sampling (CADIS) methodology (Wagner and Haghighat, 1998). Recently, the CADIS methodology has been extended to provide a means to obtain a similarly converged global solution over the problem phase space and is known as Forward Weighted CADIS (FW-CADIS) (Wagner et al., 2007).

Lastly, a fission matrix based Monte Carlo method related to the semi-fixed source method has recently been devised (Dufek, 2009). In this work it is shown that the operator in the eigenvalue problem is independent of the source; however, when the operator is discretized to write the fission matrix algorithm, it gains its spatial dependence. Through the use of limit theorems it is shown that in the limit of fission matrix mesh sizes approaching zero, the fission matrix elements are independent of the source distribution (Dufek, 2009). This method shows some interesting characteristics of the fission matrix algorithm and is extended in this dissertation.

A solution methodology is arrived at in this dissertation for difficult source convergence problems, including high DR problems, via the use of the fission matrix algorithm with a deterministically obtained initialized source option, all included in a parallel algorithm. Under problem conditions which lead to a high DR, the distributions of the fission matrix elements remain independent (and Gaussian) if a large enough particle density is utilized with a fine fission matrix mesh density. Utilization of a reasonable initialized source will assure that the bias is reduced, while parallelization provides added efficiency. Also developed in this dissertation is a new source iteration strategy extending the CADIS methodology in an alternative fashion for reducing the variance of the fission matrix elements. This
methodology requires multiple deterministic importance calculations to acquire an estimate of the importance to fission for a user defined coarse source region. Once obtained, the power iteration method is modified such that the source in each region is transported to itself and to every other region in separate calculations, reusing the same initial conditions. The weight window approach is utilized to control population weight as in the CADIS methodology, instead of biasing the actual sampling probabilities as in the zero variance method. This methodology can improve accuracy for loosely coupled systems, as well as help to mitigate some of the difficulties seen with high DR problems.

Chapter 2 of this dissertation describes the general theory underpinning this work. Chapter 3 gives a detailed discussion of the methodology utilized for the KPSS test for stationarity detection applied to Monte Carlo eigenvalue calculations, as well as a review of some information theoretic diagnostics utilized for comparison. Chapter 4 discusses implementation and testing of the diagnostics discussed in Chapter 3. Chapter 5 discusses the new source acceleration methodologies based on the CADIS importance sampling methodology and the fission matrix acceleration methodology. Chapter 6 discusses the implementation and testing of the acceleration methodologies developed in Chapter 5. Chapter 7 gives new revised methodologies/algorithms for use in high dominance ratio problems by extending the fission matrix based Monte Carlo (FMBMC) approach. Lastly, Chapter 8 gives some conclusions and suggestions for future work.

CHAPTER 2
GENERAL THEORY

A general description of the Monte Carlo eigenvalue problem used to determine the multiplication factor keff is given here. The eigenvalue form of the equation is derived from the time-independent Boltzmann transport equation. Once the equation is obtained, its mathematical adjoint is discussed in relation to particle importance. After that, a brief discussion of Monte Carlo transport is given, followed by a short discussion of probability and statistics related to error estimation. Next, the estimators for keff are given and the two most common Monte Carlo keff algorithms are discussed. Furthermore, a discussion of stationary stochastic processes is given in the context of Monte Carlo source convergence. In addition, a short description of Monte Carlo variance reduction techniques is included. After that, a brief description of the deterministic SN method is given. Lastly, computer codes utilized in this dissertation are described.

Forward Transport

Neutral particle transport can accurately be described by the linear Boltzmann transport equation (LBE). Since neutron kinetics parameters will not be important for this work, the time-independent form is utilized. The LBE is nothing more than a balance equation ensuring neutron conservation and can be derived from physical arguments. It does, however, contain six independent variables, making direct solution, except in special cases, not possible. The time-independent integro-differential form of the LBE is given by

\[ \hat{\Omega}\cdot\vec{\nabla}\psi(\vec{r},E,\hat{\Omega}) + \sigma(\vec{r},E)\,\psi(\vec{r},E,\hat{\Omega}) = \int_0^\infty dE' \int_{4\pi} d\Omega'\,\sigma_s(\vec{r},E'\rightarrow E,\hat{\Omega}'\rightarrow\hat{\Omega})\,\psi(\vec{r},E',\hat{\Omega}') + \frac{\chi(E)}{4\pi}\int_0^\infty dE'\,\nu\sigma_f(\vec{r},E')\int_{4\pi} d\Omega'\,\psi(\vec{r},E',\hat{\Omega}') + S(\vec{r},E,\hat{\Omega}) \]  (2-1)

where
\(\vec{r}\) = particle coordinate in space
E = particle energy
\(\hat{\Omega}\) = unit vector in direction of particle motion
\(\psi\) = angular flux
\(\sigma\) = total macroscopic cross section
\(\sigma_s\) = double differential scattering cross section
\(\chi\) = fission spectrum
\(\nu\) = average number of neutrons produced during fission
\(\sigma_f\) = fission cross section
S = external independent source

The first term (streaming) represents leakage into or out of the system. The second term (collision) is a loss term due to collision. The first term on the right hand side of the equation (scattering) represents neutron scattering from all energies E' and angles \(\hat{\Omega}'\) into dE about E and \(d\Omega\) about \(\hat{\Omega}\). Next is the fission term, which gives the total rate at which fission neutrons are born at position \(\vec{r}\) with an energy distribution given by the fission spectrum \(\chi(E)\). This term assumes all neutrons are prompt, and since neutron kinetics is not being studied, this is adequate. Lastly, \(S(\vec{r},E,\hat{\Omega})\) is the rate of neutrons appearing in dV about \(\vec{r}\), dE about E, and \(d\Omega\) about \(\hat{\Omega}\) from an external source. Note that even though lowercase Greek symbols are utilized, these are still the macroscopic (as opposed to microscopic) cross sections, following the convention of most textbooks concerning neutron transport theory. The solution to this equation will provide the angular flux throughout the phase space.

Often, to simplify the writing of EQ. (2-1), operator notation is used. The operator H is defined as

\[ H = \hat{\Omega}\cdot\vec{\nabla} + \sigma(\vec{r},E) - \int_0^\infty dE' \int_{4\pi} d\Omega'\,\sigma_s(\vec{r},E'\rightarrow E,\hat{\Omega}'\rightarrow\hat{\Omega}) \]  (2-2)

Neglecting the fission source term, EQ. (2-1) can then be written as

\[ H\psi = S \]  (2-3)

Adjoint Transport and Neutron Importance

Utilizing operator notation and neglecting dependencies, the adjoint property is shown in EQ. (2-4) (Bell and Glasstone, 1985). The dagger (†) in EQ. (2-4) identifies the adjoint, and the brackets are referred to as Dirac brackets and signify integration over all independent variables. It can be shown that in order for the adjoint property to hold, vacuum boundary conditions for the angular flux and its adjoint must be present, as shown in EQ. (2-5) and EQ. (2-6):

\[ \langle \psi^{\dagger}, H\psi \rangle = \langle \psi, H^{\dagger}\psi^{\dagger} \rangle \]  (2-4)

\[ \psi(\vec{r}_s,E,\hat{\Omega}) = 0, \quad \hat{n}\cdot\hat{\Omega} < 0 \]  (2-5)

\[ \psi^{\dagger}(\vec{r}_s,E,\hat{\Omega}) = 0, \quad \hat{n}\cdot\hat{\Omega} > 0 \]  (2-6)

Here \(\vec{r}_s\) lies on the boundary surface and \(\hat{n}\) is an outward unit vector normal to the surface. Utilizing the adjoint property, the adjoint operator can be derived and shown to be

\[ H^{\dagger} = -\hat{\Omega}\cdot\vec{\nabla} + \sigma(\vec{r},E) - \int_0^\infty dE' \int_{4\pi} d\Omega'\,\sigma_s(\vec{r},E\rightarrow E',\hat{\Omega}\rightarrow\hat{\Omega}') \]  (2-7)

Neglecting the fission term, the adjoint equation to the LBE can then be written as

\[ H^{\dagger}\psi^{\dagger} = S^{\dagger} \]  (2-8)

subject to the boundary conditions of EQs. (2-5) and (2-6). Using physical arguments, however, it is possible to derive a balance equation for particle importance. This equation turns out to be the same as the adjoint transport equation, but is not subject to the vacuum boundary restriction (Bell and Glasstone, 1985). This implies that the solution to the adjoint LBE relates to the importance of a particle to an objective represented by the adjoint source.

To illustrate the power of adjoint transport theory, the calculation of detector response for a fixed source problem is reviewed. Detector response (R) can be calculated from the angular flux and detector cross section \(\sigma_d(\vec{r},E)\) as shown in EQ. (2-9):

\[ R = \langle \sigma_d\,\psi \rangle \]  (2-9)

Forming a commutation relation by multiplying the LBE by the adjoint flux and the adjoint LBE by the angular flux and subtracting yields

\[ \langle \psi^{\dagger}, H\psi \rangle - \langle \psi, H^{\dagger}\psi^{\dagger} \rangle = \langle \psi^{\dagger} S \rangle - \langle \psi S^{\dagger} \rangle \]  (2-10)

Utilizing the adjoint property in EQ. (2-4), EQ. (2-10) is reduced to

\[ \langle \psi^{\dagger} S \rangle = \langle \psi S^{\dagger} \rangle \]  (2-11)

Considering the adjoint source to be the detector cross section, the detector response can be determined alternately as

\[ R = \langle \psi^{\dagger} S \rangle \]  (2-12)

Physically, the adjoint flux is the importance toward producing a detector count. Note that if only a single adjoint calculation is performed, the detector response can be calculated for any forward source without the need for another calculation. These results were obtained by neglecting the fission term; this does not affect the resulting formulation for detector response, and the term was omitted for simplicity. Understanding the concept of importance is necessary in utilizing it for biasing in the Monte Carlo method, to be discussed later. Briefly, the neutron importance can be used to bias the different physical processes in order to sample more important events relating to the desired objective. It is easy to see that when the objective is localized, the method is very powerful. In a criticality problem, though, the objective will be spread throughout the problem and utilizing the neutron importance to accelerate the solution will be more difficult.
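The forward/adjoint duality of EQ. (2-12) can be checked numerically on a discretized stand-in problem. In the sketch below, H is an arbitrary nonsingular matrix playing the role of the discretized transport operator (for a discrete operator, the adjoint is the transpose); the forward response and the adjoint response agree to round-off. This is only a linear-algebra illustration of the identity, not a transport solver, and all data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
H = np.eye(n) * 5.0 + rng.random((n, n)) * 0.1   # stand-in discretized operator
q = rng.random(n)            # forward source
sigma_d = np.zeros(n)        # "detector cross section", localized adjoint source
sigma_d[-3:] = 1.0

psi = np.linalg.solve(H, q)              # forward problem:  H psi = q
psi_adj = np.linalg.solve(H.T, sigma_d)  # adjoint problem:  H^T psi_adj = sigma_d

R_forward = sigma_d @ psi                # <sigma_d, psi>, per EQ. (2-9)
R_adjoint = psi_adj @ q                  # <psi_adj, q>,   per EQ. (2-12)
assert np.isclose(R_forward, R_adjoint)  # the two responses coincide
```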

The Eigenvalue Problem

In multiplying systems, a system is critical if it has a time-independent nonnegative solution to EQ. (2-1). Since it is not easy to know a priori the combination of cross section and geometry necessary to achieve criticality, EQ. (2-1) is modified into the form of an eigenvalue problem. The eigenvalue that will be discussed in this work is the multiplication factor (keff) eigenvalue. First, neglecting the external source, keff is introduced into EQ. (2-1), which will give a measure of the criticality of the system:

\[ \hat{\Omega}\cdot\vec{\nabla}\psi(\vec{r},E,\hat{\Omega}) + \sigma(\vec{r},E)\,\psi(\vec{r},E,\hat{\Omega}) = \int_0^\infty dE' \int_{4\pi} d\Omega'\,\sigma_s(\vec{r},E'\rightarrow E,\hat{\Omega}'\rightarrow\hat{\Omega})\,\psi(\vec{r},E',\hat{\Omega}') + \frac{1}{k_{eff}}\,\frac{\chi(E)}{4\pi}\int_0^\infty dE'\,\nu\sigma_f(\vec{r},E')\int_{4\pi} d\Omega'\,\psi(\vec{r},E',\hat{\Omega}') \]  (2-13)

It is of interest to find the fundamental mode solution of EQ. (2-13). This eigenvalue is the multiplication factor (keff) of the system. The system is critical (self-sustaining) if keff = 1, supercritical if keff > 1, and subcritical if keff < 1.

The form of the LBE in EQ. (2-13) is the integro-differential form of the equation. The equation can be transformed to the integral form using the Green's function shown here, or the method of characteristics. If the Green's function \(G(\vec{r}',E',\hat{\Omega}' \rightarrow \vec{r},E,\hat{\Omega})\) is defined as the neutron angular flux at \((\vec{r},E,\hat{\Omega})\) due to a unit point source at \(\vec{r}'\) emitting one neutron/sec in the direction \(\hat{\Omega}'\) with energy \(E'\), then the solution for the angular flux can be obtained by

\[ \psi(\vec{r},E,\hat{\Omega}) = \int dV' \int_0^\infty dE' \int_{4\pi} d\Omega'\, G(\vec{r}',E',\hat{\Omega}'\rightarrow\vec{r},E,\hat{\Omega})\,\frac{\chi(E')}{4\pi\,k_{eff}}\int_0^\infty dE''\,\nu\sigma_f(\vec{r}',E'')\,\phi(\vec{r}',E'') \]  (2-14)

This is so because, looking at the fission source density given by \(\frac{\chi(E')}{4\pi k_{eff}}\int dE''\,\nu\sigma_f(\vec{r}',E'')\,\phi(\vec{r}',E'')\), it is apparent that the Green's function (whatever it may be) in EQ. (2-14) acts like a transfer function for the source, providing its impact on the flux
by effectively summing up all the point sources totaling the actual problem source. This is important since the Monte Carlo method is often said to solve the integral form of the LBE, because of the Fundamental Formulation of Monte Carlo discussed later.

Starting with either EQ. (2-13) or EQ. (2-14), the same matrix eigenvalue problem can be derived. As previously stated, the transport equation is often written in operator form (Lewis and Miller, 1993), where the transport operator H is as before, and the fission operator F is defined as

\[ F = \frac{\chi(E)}{4\pi}\int_0^\infty dE'\,\nu\sigma_f(\vec{r},E')\int_{4\pi} d\Omega' \]  (2-15)

Without explicitly writing the dependencies, EQ. (2-13) can then be written as

\[ H\psi = \frac{1}{k_{eff}}\,F\psi \]  (2-16)

Inverting H and operating by F on both sides of EQ. (2-16), the resulting form of the equation is

\[ F\psi = \frac{1}{k_{eff}}\,\left[ F H^{-1} \right] F\psi \]  (2-17)

Finally, EQ. (2-17) can be rewritten as

\[ A\,S = k_{eff}\,S \]  (2-18)

where \(S = F\psi\) is the source and \(A = F H^{-1}\). Since the operator A applied on S gives the next generated fission source, with keff the eigenvalue, A S can then be defined as

\[ k_{eff}\,S(\vec{r}) = \int dV'\,A(\vec{r}' \rightarrow \vec{r})\,S(\vec{r}') \]  (2-19)

Physically, one can think of \(A(\vec{r}' \rightarrow \vec{r})\) as the expected number of fission neutrons produced per unit volume at \(\vec{r}\) from fission neutrons born at \(\vec{r}'\). EQ. (2-19) is solved using the power iteration method (Bell and Glasstone, 1985; Lewis and Miller, 1993). If the system is discretized spatially, the operator A can be approximated with the fission matrix \(a_{ij}\) (Kaplan, 1958; Morton, 1956). The fission matrix elements \(a_{ij}\) can be considered as the probability that a neutron born in cell j will generate a fission neutron in cell i, as
\[ a_{ij} = \frac{\displaystyle\int_{V_i} dV \int_{V_j} dV'\,A(\vec{r}' \rightarrow \vec{r})\,S_0(\vec{r}')}{\displaystyle\int_{V_j} dV'\,S_0(\vec{r}')} \]  (2-20)

where \(S_0(\vec{r})\) is the fundamental mode source. Of course \(S_0(\vec{r})\) is not known a priori, and an approximation is used with biasing methods to be discussed in Chapter 5.
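EQs. (2-18) through (2-20) can be exercised directly once a fission matrix is in hand. The sketch below power-iterates a small hypothetical matrix, whose (i, j) element is the expected number of next-generation fission neutrons in cell i per fission neutron born in cell j, to obtain keff and the source eigenvector; it assumes the matrix is already known rather than tallied, and the numerical values are illustrative only.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=10000):
    """Dominant eigenpair of a fission matrix A via power iteration.

    Returns (k_eff, S) with S normalized to unit sum, satisfying A S = k_eff S.
    """
    n = A.shape[0]
    S = np.full(n, 1.0 / n)             # flat initial source guess
    k = 1.0
    for _ in range(max_iter):
        S_new = A @ S
        k_new = S_new.sum() / S.sum()   # generation-to-generation source ratio
        S_new /= S_new.sum()
        if abs(k_new - k) < tol and np.allclose(S_new, S, atol=tol):
            return k_new, S_new
        k, S = k_new, S_new
    return k, S

# Hypothetical 3-cell fission matrix (column j: neutrons born in cell j)
A = np.array([[0.60, 0.20, 0.02],
              [0.20, 0.55, 0.20],
              [0.02, 0.20, 0.60]])
k_eff, S = power_iteration(A)
print(k_eff, S)   # dominant eigenvalue and fundamental-mode source shape
```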

The Monte Carlo Method

The Monte Carlo method is a statistical method to solve mathematical and/or physical problems. Unlike deterministic methods, such as the discrete ordinates (SN) method (Lewis and Miller, 1993), which explicitly solve the LBE, Monte Carlo methods do not explicitly solve any equation. Instead, the average particle behavior is determined from simulating individual particles (Metropolis and Ulam, 1949; X-5 Monte Carlo Team, 2003). Many in the nuclear engineering discipline point out that Monte Carlo solves the integral form of the LBE. Depending on your viewpoint, this may or may not be the case. First, Monte Carlo solves a transport problem by simulating individual particle histories, for which an equation doesn't need to be written. In this way nothing must be said of the LBE. However, the Monte Carlo method can be used as a mathematical tool to solve complex integrals (or simple ones). Since the transport equation can be written in integral form, many say that Monte Carlo is used to solve this integral equation. This statement is unclear, since the exact pdf of the complex process needed in the integral analysis, such as transporting particles through a 3-D geometry, is not known. Instead it is sampled by tracking all the microscopic events (which constitute the complex process) in the histories of a large number of particles, thus solving the integral equation by Monte Carlo methods (Dionne, 2007; X-5 Monte Carlo Team, 2003). Because of this fact, importance sampling can be effectively utilized through the use of an approximate importance function.

Random Variable

All elementary physical processes appear to be random at the microscopic level. Individual process behavior cannot be determined with any certainty, but these random events can often be characterized by their macroscopic effects, which represent the average behavior of a physical system. In general, every outcome of an experiment can be associated with a number by specifying a rule of association. This rule can then be called a random variable, since different numerical values are possible and because the observed value depends on the possible experimental outcome (Devore, 2000).

Two functions are defined for any random variable. These are the probability density function (pdf) and the cumulative distribution function (cdf). For a continuous random variable, the pdf is the probability that the outcome of a random process lies within dx about x. For example, for a cubical die, the probability of a particular event to occur is 1/6 because there are six sides and the cube is fair. The pdf p(x) is normalized as

\[ \int p(x)\,dx = 1 \]  (2-21)

The cdf is the probability that the outcome of the random process has a value not exceeding some value x. The cdf P(x) is then defined by

\[ P(x) = \int_{-\infty}^{x} p(x')\,dx' \]  (2-22)

Random Numbers and the Fundamental Formulation of Monte Carlo (FFMC)

Random numbers are a sequence of numbers for which the occurrence of each number is unpredictable. Commonly, random numbers (\(\xi\)) are defined in the range [0,1] and their pdf is given by

\[ p(\xi) = 1, \quad 0 \le \xi < 1 \]  (2-23)

This leads to a cdf given by
\[ P(\xi) = \int_0^{\xi} p(\xi')\,d\xi' = \xi \]  (2-24)

Assuming that the basic physics associated with a problem are known, i.e., that the pdfs associated with the physical properties have been obtained, it is then coherent to desire to simulate the problem. Assuming that random numbers can be generated, it is desired to obtain a random variable x, relating to a random process with pdf p(x), by relating the random variable to a random number \(\xi\) as

\[ P(x) = P(\xi), \quad 0 \le \xi \le 1 \]  (2-25)

Substituting EQ. (2-23) into EQ. (2-25), the result is

\[ P(x) = \int_a^{x} p(x')\,dx' = \xi \]  (2-26)

EQ. (2-25) is given the name the Fundamental Formulation of Monte Carlo (FFMC) (class notes on Monte Carlo Methods by Alireza Haghighat).

Particle Transport with Monte Carlo Methods

Using the concept of the FFMC, individual particles can be simulated from their birth to death. This assumes that the pdfs associated with the particle behavior are known, such as particle interaction data, which are given through interaction cross sections. Even particle birth is a random process that has an associated probability. Consider the events that are possible in the life of a particle. The first event that must be present is the birth of a particle. Using a density function to represent the spatial distribution of source, one can sample this density function to give birth to source particles. Alternatively, in an eigenvalue problem, once an initial starting source distribution is given, subsequent generations of neutron births are a direct result of fission from the previous generation. This takes care of the
spatial distribution of source, but similarly, the initial particle direction and energy must be determined. These as well can be determined similarly to the spatial distribution.

Aside from particle birth, interactions may occur. This creates several physical phenomena which must be characterized. First, how far does the particle travel between interactions? To show how the basic physics are represented utilizing the FFMC, the equation for free flight between collisions is derived here. From basic particle physics it is known that the probability of no collision in a distance r is \(e^{-\sigma_t r}\) and that the probability of collision in dr is \(\sigma_t\,dr\). This leads to EQ. (2-27):

\[ p(r)\,dr = e^{-\sigma_t r}\,\sigma_t\,dr \]  (2-27)

Utilizing the FFMC, it can be written that

\[ P(r) = \int_0^{r} \sigma_t\,e^{-\sigma_t r'}\,dr' = 1 - e^{-\sigma_t r} = \xi \]  (2-28)

Performing the integration and rearranging terms (noting that \(\xi\) and \(1-\xi\) are identically distributed), the particle free flight length between interactions is determined by

\[ r = -\frac{\ln \xi}{\sigma_t} \]  (2-29)
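The free-flight result of EQ. (2-29) is a direct application of the FFMC: equate the cdf to a random number and invert. A minimal sketch, with a hypothetical total cross section:

```python
import numpy as np

def sample_free_flight(sigma_t, rng):
    """Distance to next collision via the FFMC: P(r) = 1 - exp(-sigma_t r) = xi."""
    xi = rng.random()
    return -np.log(xi) / sigma_t   # xi and 1 - xi are identically distributed

rng = np.random.default_rng(5)
sigma_t = 0.5                      # hypothetical macroscopic total xs (1/cm)
samples = np.array([sample_free_flight(sigma_t, rng) for _ in range(100000)])
print(samples.mean())              # approaches the mean free path 1/sigma_t = 2.0
```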

All of the physical phenomena are sampled using this technique (analytical inversion) or by using another sampling technique (such as the rejection technique) when direct analytical inversion is not possible or practical. Knowing the original particle location, its initial direction of travel, and the distance to collision, the interaction site location is determined and the material or isotope of interaction is determined. Selection of interaction type can be performed by forming a cumulative distribution function relating the different interactions. This ensures that each interaction is statistically chosen according to its interaction probability, where x represents a generic interaction of type x. If the particle has been absorbed, its life has been ended; however, if the absorption event created new particles, their initial position(s), energy/energies, and direction(s) are then temporarily stored, or banked. Once the original source particle has been terminated, transport of the secondary particle(s) is performed. If the calculation is for neutrons only, gammas, electrons, or other particles that are generated are not banked and are ignored. If the particle is not absorbed, a new energy and direction of travel are determined from random sampling of the physics of the interaction type, and a new distance to collision is calculated. The governing physics equation utilized in selection of a scattering angle and energy after collision is the differential scattering cross section of interest and subsequent kinematics information. Continuing then with the original particle, this process is repeated until the particle is either absorbed or exits the problem boundaries. Upon particle termination, any secondary particles generated by that particle are transported until termination (except for neutrons in an eigenvalue problem). At this time a new source particle is generated, until a user defined number of histories have been simulated (Wagner, 1997).

In an eigenvalue problem, fission neutrons are stored instead of transported. Upon completion of all particles in the initial batch (generation) of starting neutrons, the banked source particles are started as if they were the original source particles. Eventually, the source distribution will settle, and a user defined number of active cycles of source generations are simulated. To obtain observable results from the system, tally regions are set up to score any contribution a particular particle has toward an objective. Since each particle is simulated, an adequate number of trials is necessary to obtain a statistically reasonable average. To estimate the reliability of the results, the relative error of each tally is evaluated. In an eigenvalue
problem, all scoring is deferred until a sufficient number of generations have been run such that the source has converged. These deferred generations are commonly called skipped generations.

Statistics for Monte Carlo Analyses

Since the information obtained from Monte Carlo simulations consists of average quantities, it is desirable that these averages be within a specified precision. To achieve this, a basic review of pertinent probability and statistics is given. The expected value of a function of a random variable x is the true mean of the function. If a function g(x) is defined on an interval [a,b], then the expected value of the function is given as

\[ E[g(x)] = \int_a^b g(x)\,p(x)\,dx \]  (2-30)

For a continuous random variable, where g(x) = x, the true mean of x is given by

\[ \mu = E(x) = \int x\,p(x)\,dx \]  (2-31)

and its variance is given by

\[ \sigma^2 = E[(x-\mu)^2] = \int (x-\mu)^2\,p(x)\,dx \]  (2-32)

It is of interest, however, to obtain averages and their associated variances when sampling from unknown populations, whereas the above formulae are for known populations. Similar to the above equations, the sample mean and sample variance can be defined respectively as

\[ \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i \]  (2-33)

and

\[ S^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \bar{x})^2 \]  (2-34)

Since inferential statistics is utilized, an unbiased estimate of the population parameters is sought through the use of the sample parameters. Care must be taken to ensure that the parameters are unbiased estimators of the population parameters. To accomplish this, the expected values of the
sample mean and sample variance are computed. The expected value of the sample mean is given by

\[ E[\bar{x}] = \frac{1}{N}\sum_{i=1}^{N} E[x_i] = \frac{1}{N}\,N\mu = \mu \]  (2-35)

As a result, EQ. (2-33) is an unbiased estimator of the true mean. For the sake of brevity, only the result for the expected value of the sample variance is written, as

\[ E[S^2] = E\left[\frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})^2\right] = \frac{N-1}{N}\,\sigma^2 \]  (2-36)

This demonstrates that EQ. (2-34) is not an unbiased estimator of the true variance. Hence, a new unbiased estimator is defined as

\[ S^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i-\bar{x})^2 \]  (2-37)

To proceed from here, the Central Limit Theorem (CLT) and its validity are discussed. The CLT deals with a random sample of x1, x2, ..., xN from a random distribution with mean \(\mu\) and variance \(\sigma^2\). It states that \(\bar{x}\) has a normal distribution with \(\mu_{\bar{x}} = \mu\) and \(\sigma^2_{\bar{x}} = \sigma^2/N\) if N is sufficiently large. This statement is true even when the original population distribution is not normal. Extending this to the sample variance, the relative error for \(\bar{x}\) is then given by

\[ R = \frac{S_{\bar{x}}}{\bar{x}} = \frac{S}{\bar{x}\sqrt{N}} \]  (2-38)

This discussion assumes independence of samples. If samples are dependent, then the formulae for mean, variance, and relative error become biased, since one value will influence another and this correlation must be accounted for in proper error estimation. This will not be dealt with explicitly in this dissertation.
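The estimators of EQs. (2-33), (2-37), and (2-38) translate directly into code. A minimal sketch for a batch of independent tally scores (the score distribution here is hypothetical):

```python
import numpy as np

def tally_statistics(x):
    """Sample mean, unbiased variance (EQ. 2-37), and relative error (EQ. 2-38)."""
    N = len(x)
    mean = x.sum() / N
    var_unbiased = ((x - mean) ** 2).sum() / (N - 1)
    rel_err = np.sqrt(var_unbiased / N) / mean   # std. dev. of the mean over the mean
    return mean, var_unbiased, rel_err

rng = np.random.default_rng(6)
scores = rng.exponential(scale=3.0, size=10000)  # hypothetical i.i.d. tally scores
mean, var, R = tally_statistics(scores)
print(mean, var, R)   # R should scale as 1/sqrt(N) as more histories are added
```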

Tallying in a Monte Carlo Simulation

Now that the tools necessary for transport of particles and statistics analysis have been discussed, a review of tallying (scoring) is given. The calculation of physical quantities such as neutron flux, neutron current, reaction rates, gamma heating, or other quantities of interest is often desired. In order to determine any quantities in a localized phase space, a discretization of space, energy, angle, or time must be undertaken according to the desired information. Recall also that results from Monte Carlo tallies are normalized per source particle. This should be apparent from an understanding of the method. If actual physical results are desired, the tally information must be multiplied by the total source strength. Another quantity of interest is the effective multiplication factor keff. Here, the parameter in question is not localized and will be dependent on the entire fissionable region. The keff estimators are discussed later. In an eigenvalue calculation, the average of the keff estimators over all active cycles is often utilized. Some standard computer codes use more than one estimator combined, while others utilize only the collision estimator. Time dependence is not important in this dissertation and will be neglected. The tallies discussed are algorithmic in nature.

Collision estimators

A counter is set up in order to estimate the number of collisions in a certain discretized volume element \(V_i\), with direction within a solid angle \(\Delta\Omega_n\), and with energy E within \(\Delta E_j\). This counter can be represented as in EQ. (2-39):

\[ C_c(V_i,\Delta\Omega_n,\Delta E_j) = C_c(V_i,\Delta\Omega_n,\Delta E_j) + \frac{w}{\sigma_t(E)} \]  (2-39)

The parameter w is the particle weight, and for an unbiased simulation w = 1. The reason the counter is divided by the total cross section at each score will be discussed.

Physically, if the total number of collisions in a region were counted and normalized per simulation source particle, what would be obtained is the total reaction rate in that volume. The relationship between neutron flux and reaction rate, first introduced in EQ. (2-9), is repeated here with the scalar flux and macroscopic cross section as EQ. (2-40):

\[ R(\vec{r},E) = \sigma(\vec{r},E)\,\phi(\vec{r},E) \]  (2-40)

Recall the relationship between angular flux and scalar flux is

\[ \phi(\vec{r},E) = \int_{4\pi} d\Omega\,\psi(\vec{r},E,\hat{\Omega}) \]  (2-41)

The counter in EQ. (2-39) can be utilized to yield the angular flux, scalar flux, and reaction rate of interest. First, the angular flux is given by

\[ \bar{\psi}_{i,n,j} = \frac{C_c(V_i,\Delta\Omega_n,\Delta E_j)}{H\,V_i\,\Delta\Omega_n} \]  (2-42)

where H is the number of histories. If the counter in EQ. (2-39) did not include the total cross section, EQ. (2-42) would be nothing more than the total collisions within the phase space unit, and is in effect a discretized reaction rate. To estimate the angular flux, a factor of the total cross section must be included in accordance with EQ. (2-40), which is energy dependent. As a result, the counter must be divided by the cross section at the time of collision. Following from EQ. (2-42), the scalar flux can be shown to be

\[ \bar{\phi}_{i,j} = \sum_n \bar{\psi}_{i,n,j}\,\Delta\Omega_n = \frac{C_c(V_i,\Delta E_j)}{H\,V_i} \]  (2-43)

Lastly concerning localized data, the reaction rate collision estimator is written as

\[ R(V_i,\Delta E_j) = \frac{C_R(V_i,\Delta E_j)}{H\,V_i} \]  (2-44)

where
\[ C_R(V_i,\Delta E_j) = C_R(V_i,\Delta E_j) + w\,\frac{\sigma_x(E)}{\sigma_t(E)} \]  (2-45)

Here \(\sigma_x\) is the cross section of the reaction of interest.

Path length estimators

The concept of the path length estimator has its origin in the physical interpretation of the definition of the scalar flux as the total path length of particles per unit volume. As before, discretization is necessary, and a counter sums particle path lengths within volume element \(V_i\), with direction within a solid angle \(\Delta\Omega_n\), and with energy E within \(\Delta E_j\); it can be expressed as

\[ C_p(V_i,\Delta\Omega_n,\Delta E_j) = C_p(V_i,\Delta\Omega_n,\Delta E_j) + w\,p \]  (2-46)

where p is the path length. The angular and scalar flux can then be written in EQs. (2-47) and (2-48), respectively, as

\[ \bar{\psi}_{i,n,j} = \frac{C_p(V_i,\Delta\Omega_n,\Delta E_j)}{H\,V_i\,\Delta\Omega_n} \]  (2-47)

and

\[ \bar{\phi}_{i,j} = \frac{C_p(V_i,\Delta E_j)}{H\,V_i} \]  (2-48)

The reaction rate is then given by

\[ R(V_i,\Delta E_j) = \frac{C_{R,p}(V_i,\Delta E_j)}{H\,V_i} \]  (2-49)

where

\[ C_{R,p}(V_i,\Delta E_j) = C_{R,p}(V_i,\Delta E_j) + w\,p\,\sigma_x(E) \]  (2-50)
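A minimal sketch of the path length estimator of EQs. (2-46) and (2-48): each track segment inside the cell contributes weight times path length, and the accumulated counter is normalized per history and per cell volume. The segment data below are hypothetical stand-ins for what a random walk would produce.

```python
def track_length_flux(segments, volume, histories):
    """Scalar flux in one cell from (weight, path_length) track segments.

    phi = sum_i w_i * p_i / (H * V), per EQ. (2-48).
    """
    counter = sum(w * p for w, p in segments)   # EQ. (2-46) accumulation
    return counter / (histories * volume)

# Hypothetical tracks crossing a 2 cm^3 cell during 1000 histories
segments = [(1.0, 0.8), (1.0, 1.3), (0.5, 2.0)]  # (weight, path length in cm)
phi = track_length_flux(segments, volume=2.0, histories=1000)
print(phi)  # flux per source particle; multiply by source strength for physical units
```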

Surface crossing estimators

Surface crossing estimators lend themselves to easy calculation of the partial currents, since they are associated with a surface in their definition. Simple counters can be set up for this task. It is still possible, however, to calculate the flux using a surface crossing estimator as well. The end result for the flux, equivalent to a volume estimator in a thin foil whose thickness approaches zero, is shown in EQ. (2-51) as

\[ \bar{\phi}_{j} = \frac{C_s(A,\Delta E_j)}{H\,A} \]  (2-51)

where

\[ C_s(A,\Delta E_j) = C_s(A,\Delta E_j) + \frac{w}{|\cos\theta|} \]  (2-52)

and A is the area of the surface being crossed, with \(\theta\) the angle between the particle direction and the surface normal. A difficulty will arise as \(\theta\) approaches 90 degrees, since the formulation will approach infinity. In order to overcome this problem, an exclusion region must be utilized.

Other estimators

Other estimators may exist where an analytical formulation is utilized to improve the tallying process. These tally estimators must be used with caution, as biased results may occur since actual Monte Carlo transport is not done for all of the particle history.

keff estimators

Several keff estimators can be devised. The collision estimator for keff for a single generation of neutrons is easily obtained. In a single region problem, the equation to calculate keff for a single generation is given by

\[ k_{eff}^{(n)} = \frac{1}{N}\sum_{i} w_i\,\frac{\nu\sigma_f(E_i)}{\sigma_t(E_i)} \]  (2-53)

where

n = current generation
i = collision number, over all collisions in an entire generation
N = source particles in current generation
\(w_i\) = source particle weight before collision

For algorithms without implicit capture (a variance reduction technique), the absorption estimator can be written as
\[ k_{eff}^{(n)} = \frac{1}{N}\sum_{i} w_i\,\frac{\nu\sigma_f(E_i)}{\sigma_a(E_i)} \]  (2-54)

where i runs over analog capture events. Last, a track length estimator can be written as

\[ k_{eff}^{(n)} = \frac{1}{N}\sum_{i} w_i\,p_i\,\nu\sigma_f(E_i) \]  (2-55)

where i runs over every path length segment in a fissionable region for all particles in a generation. A combination of these estimators can be used, as in the MCNP code, or a single estimator (usually the collision estimator) may be used. This procedure is easily extended to multi-region problems, as one only needs to modify the interaction cross sections as a particle moves from region to region.

Non-Analog Monte Carlo

Quite often results from a Monte Carlo calculation require significant computation time, and some results are even impractical to obtain. In order to make these simulations viable, techniques have been developed to reduce the variance associated with the tallies. In an apropos manner, these techniques are referred to as variance reduction techniques. They range from the very simple (and often effective) to the very elaborate. Usually, the effectiveness of variance reduction is measured using the figure of merit (FOM) discussed later in this chapter.

Common Variance Reduction Techniques

Variance reduction techniques deal with the concept of particle weight. One can think that in an analog (standard) Monte Carlo simulation, each particle has a weight of unity. Utilizing variance reduction techniques, a particle's weight may be adjusted. Bias should not be introduced, however, and the weight adjustments will follow physical arguments. The weight is usually modified according to the strategy shown in EQ. (2-56):

\[ w = w_0\,\frac{p(x)}{\tilde{p}(x)} \]  (2-56)
Here w is the biased weight after sampling from the biased pdf \(\tilde{p}(x)\), and \(w_0\) is the weight before biasing. A list of some of the most common variance reduction techniques follows:

1. Cutoff Methods: Cutoff methods take advantage of the fact that particles at some locations, energies, and/or times may not be important. Using this fact, particles can be selectively killed.

a. Spatial Cutoff: The spatial cutoff would typically be the problem boundaries. Particles that go outside of this region are terminated. It is possible to set any region of space as a region to terminate particles if so desired.

b. Energy Cutoff: Particles whose energies are not in the range of interest are not tracked and are effectively killed.

c. Time Cutoff: Particles not in the time range of interest are not tracked.

2. Splitting Methods with Russian Roulette: These methods involve splitting particles in regions of high importance into lower weight particles, since the probability of contributing to the desired result is high. The more particles that contribute, the faster the variance is reduced. Here, the word region may be geometric as in geometric splitting, an energy region for energy splitting, or a certain angular bin for angular splitting. Time splitting is also possible for time dependent problems. If available, the neutron importance function can be utilized. For instance, if the importance map is known in space, a particle moving from a region of lower importance I1 to higher importance I2 should be split into n = I2/I1 particles (any fractions are kept by comparing a random number to their weight and killing the fractional particle if the random number is higher than the weight). For moving to lower
importance regions, Russian roulette is performed, which always accompanies splitting methods. After a particle has fallen below a chosen weight cutoff, a random number (\(\xi\)) is generated and compared to 1/d, where typically d is between 2 and 10. If \(\xi < 1/d\), the particle survives and is given a weight d*w; otherwise the history is terminated. This ensures that particles whose weight has fallen too low to be of consequence are not tracked and, as a result, simulation time is not wasted on their behalf. This action conserves the expected total particle weight so that a fair game is still being played (a sketch of both weight games follows this list).

3. Weight Window: The weight window is a set of lower and upper weights within which a particle weight must be maintained; outside of this window, splitting or Russian roulette is applied to keep the particle within the weight window.

4. Survival Biasing (Implicit Capture): Here, each collision is treated as a fraction of individual interactions according to their interaction probability. The particle then continues after interaction with a weight modified by its scattering probability (effectively, the fraction of the particle that scatters survives). If the weight falls below the weight cutoff, Russian roulette is performed.

5. Exponential Transform: To transport particles long distances, the distance between collisions in a preferred direction is artificially increased utilizing an exponential transformation (and decreased in the opposite direction), and the weight is correspondingly adjusted according to EQ. (2-56).

6. Source Biasing: Source particles that have a higher importance are sampled more often than those with a lower importance. To account for this, they are given a lower
weight to preserve the physics (conserve total particle weight). This can be angle, energy, or space dependent, or any combination thereof.

7. Forced Collision: For very optically thin media, particles may be split into two components, an uncollided component and a collided component. This forces interactions to occur in the region of interest.

8. Analytical Biasing: Involves deterministically transporting particles on collision to the neighborhood of a tally and then calculating contributions to the tally from these particles. Care must be taken here to be certain the deterministic transport is accurate, or severely biased (yet precise) results may be obtained.

9. Importance Sampling: Importance sampling utilizes known importance information (or estimated information) to sample particles that may contribute more to the objective with higher frequency. Many of the methods already shown touch on this idea. More will be said about this technique in Chapter 5.

These are many of the most widely used variance reduction techniques. Specific acceleration methods for criticality problems were introduced earlier, and more detail will be given in Chapter 5.
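The splitting and roulette games above conserve expected weight. A minimal sketch of one common variant of both, with the importance ratio and roulette survival factor as hypothetical parameters (the fractional split is handled here by stochastically rounding the number of copies, which conserves expected weight):

```python
import numpy as np

def split(weight, importance_ratio, rng):
    """Split one particle into ~I2/I1 copies; fractional part handled stochastically."""
    n = int(importance_ratio)
    if rng.random() < importance_ratio - n:   # keep the fractional particle
        n += 1
    return [weight / importance_ratio] * n    # expected total weight is conserved

def russian_roulette(weight, d, rng):
    """Survive with probability 1/d at weight d*w, else terminate (fair game)."""
    if rng.random() < 1.0 / d:
        return weight * d
    return None   # history terminated

rng = np.random.default_rng(8)
daughters = split(weight=1.0, importance_ratio=2.5, rng=rng)  # I2/I1 = 2.5
survivor = russian_roulette(weight=0.05, d=5.0, rng=rng)
print(daughters, survivor)
```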

Efficiency of a Monte Carlo Simulation

In general, the squared relative error (\(R^2\)) of a Monte Carlo tally should be proportional to 1/N, the total number of particles simulated, if the Central Limit Theorem is valid. This is easily confirmed for the eigenvalue case as well, where generations of particles are simulated. Also, the computer simulation time T is proportional to the total number of particles simulated, N. This implies that the factor \(R^2 T\) should remain approximately constant. A good simulation will result in a reliable tally in a short time, thus desiring a minimal relative error in a short time. As a result, the figure of merit (FOM) is utilized as a measure of the efficiency of the simulation, whereby the larger the figure of merit, the more effective the calculation concerning the tally. It is given by

\[ \mathrm{FOM} = \frac{1}{R^2\,T} \]  (2-57)

To compare two different techniques (or variance reduction options), the speedup can be obtained by comparing the FOMs from two different simulations as

\[ \mathrm{speedup} = \frac{\mathrm{FOM}_2}{\mathrm{FOM}_1} \]  (2-58)

If the same relative error is achieved in both simulations, the speedup reduces to a ratio of simulation times (\(T_1/T_2\)).
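A short worked example of EQs. (2-57) and (2-58), with hypothetical run data:

```python
def figure_of_merit(rel_err, time_min):
    """FOM = 1 / (R^2 * T), per EQ. (2-57)."""
    return 1.0 / (rel_err ** 2 * time_min)

# Hypothetical comparison: an analog run vs. a variance-reduced run of the same tally
fom_analog = figure_of_merit(rel_err=0.050, time_min=120.0)  # ~3.33
fom_biased = figure_of_merit(rel_err=0.010, time_min=150.0)  # ~66.7
speedup = fom_biased / fom_analog                            # EQ. (2-58), ~20x
print(fom_analog, fom_biased, speedup)
```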

Monte Carlo keff Solution Strategies

For simplicity, the formulations are written in 1-D with a uniform material. Note that these formulations can easily be expanded to 3-D and multi-region problems.

Source Iteration Method

The power iteration method solves for the keff eigenvalue as the ratio of two successive neutron generation sources as

\[ k_{eff}^{(n)} = \frac{\int S^{(n)}(x)\,dx}{\int S^{(n-1)}(x)\,dx} \]  (2-59)

where \(S^{(n)}(x)\) is the source distribution related to the nth generation. Final results are calculated through a recursive procedure that starts with an initial guessed spatial source distribution \(S^{(0)}(x)\). The simulation is assumed to have N neutrons per generation. Survival biasing is assumed, combined with Russian roulette. The procedure is as follows:

1. Either from the initial source or the previous generation source, N source neutrons are transported.

2. At each collision within the generation (for all particles) several things must occur (recall this problem is a one-region problem, so all collisions are in a fissile region):

a. The contribution to fission is calculated by one of the keff estimators, such as the collision estimator, using the counter

\[ C_k^{(n)} = C_k^{(n)} + w\,\frac{\nu\sigma_f(E)}{\sigma_t(E)} \]  (2-60)

where w is the weight of the particle at the time of collision and n is the current generation number.

b. Store information of fission neutrons:

i. Update the fission storage bank with the number of fission neutrons to include for the next generation source. In order to keep the particle population stable, a normalization factor of 1/keff is utilized as follows:

\[ n_{fiss} = \left\lfloor \frac{w}{k_{eff}}\,\frac{\nu\sigma_f(E)}{\sigma_t(E)} + \xi \right\rfloor \]  (2-61)

where \(\xi\) is a random number between 0 and 1. This is done to conserve the weight from fractional particles; i.e., if the expected number of fission neutrons is 2.2, 20% of the time three fission neutrons are generated and 80% of the time two fission neutrons.

ii. Store the location of the fission neutrons.

iii. Sample the fission spectrum in order to provide the energy of the fission neutron(s) and store the/their energy/energies.

iv. Sample and store the fission neutron(s) initial direction of travel.

c. Calculate the weight of the initial particle after collision.
d. Apply Russian roulette as needed.

e. Sample the scattered particle direction.

f. Repeat until the particle is terminated.

NOTE: If splitting techniques are utilized, a run bank must be utilized to store split particles; when the original split particle is terminated, particles are taken from the run bank (other parts of the original split particle) until it is exhausted, whereupon the next particle is selected.

3. Repeat for all particles in the generation.

4. Calculate the generation keff as

\[ k_{eff}^{(n)} = \frac{C_k^{(n)}}{N} \]  (2-62)

5. Normalize the weight of ALL the fission neutrons to the input number per generation (npg) as

\[ w = \frac{npg}{N_{bank}^{(n)}} \]  (2-63)

(This number should be close to 1, as the formulation should generate close to npg neutrons.)

6. Repeat the process for all generations.

7. Calculate the final keff and its associated relative error (a condensed sketch of this loop follows).
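The procedure above, stripped to its essentials, is sketched below for a one-group, single-region infinite medium. For brevity this is an analog variant (survival biasing is omitted), positions are irrelevant in an infinite medium, and the cross sections, neutrons per generation, and generation count are hypothetical. It exercises the collision estimator of EQs. (2-60) and (2-62) together with the fission-bank sampling and normalization of EQs. (2-61) and (2-63).

```python
import numpy as np

rng = np.random.default_rng(10)
# Hypothetical one-group infinite-medium data; expected k = nu_sig_f/sig_a = 1.25
sig_s, sig_a, nu_sig_f = 0.6, 0.4, 0.5
sig_t = sig_s + sig_a
npg, n_gen = 5000, 60

n_bank, k, k_history = npg, 1.0, []
for generation in range(n_gen):
    w = npg / n_bank                               # EQ. (2-63) weight normalization
    counter, new_bank = 0.0, 0
    for _ in range(n_bank):
        while True:                                # analog random walk to absorption
            counter += w * nu_sig_f / sig_t        # EQ. (2-60) collision estimator
            new_bank += int(w * nu_sig_f / (sig_t * k) + rng.random())  # EQ. (2-61)
            if rng.random() >= sig_s / sig_t:      # absorbed -> history ends
                break
    k = counter / npg                              # EQ. (2-62)
    k_history.append(k)
    n_bank = max(new_bank, 1)
print(np.mean(k_history[20:]))                     # skip early generations
```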

Fission Matrix Method

The fission matrix method does nothing different in the transport of particles, or in the storage of the fission source for each generation. An addition to the algorithmic logic must be the discretization of the source into spatial regions. To this end, only the following modifications to the source iteration method are needed:

1. Store the number of source particles born in each discretized spatial region as \(S^{(n)}(j)\).

2. At each collision for every particle in the generation, increment a fission neutron counter to add to the proper fission matrix element as

\[ C^{(n)}(i,j) = C^{(n)}(i,j) + w\,\frac{\nu\sigma_f(E)}{\sigma_t(E)} \]  (2-64)

3. After all histories for the current generation have been exhausted, the fission matrix elements are approximated as shown in EQ. (2-65):

\[ a_{ij}^{(n)} = \frac{C^{(n)}(i,j)}{S^{(n)}(j)} \]  (2-65)

4. Calculate \(k_{eff}^{(n)}\) as the dominant eigenvalue of the fission matrix \(a^{(n)}\).

Two different approaches for formulating the fission matrix solution are available. The elements of \(a^{(n)}\) can be zeroed after every generation, resulting in a cyclewise fission matrix, or a cumulative fission matrix can be utilized, where the elements of \(a^{(n)}\) are obtained using cumulative data from all active generations (the bookkeeping is sketched below).
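A sketch of the fission matrix bookkeeping of EQs. (2-64) and (2-65): scores are binned by the birth region j of each history and the collision region i, then normalized by the number of source particles born in each region. The data generation below is a hypothetical stand-in for the random walk; only the tally and normalization logic is the point.

```python
import numpy as np

n_reg = 4
counter = np.zeros((n_reg, n_reg))   # EQ. (2-64) accumulator, indexed [i, j]
births = np.zeros(n_reg)             # S(j): source particles born in region j

def tally_collision(i, j, w, nu_sig_f, sig_t):
    """Score a collision in region i for a neutron born in region j (EQ. 2-64)."""
    counter[i, j] += w * nu_sig_f / sig_t

# ... the random walk would call tally_collision(...); hypothetical scores below ...
rng = np.random.default_rng(11)
for _ in range(20000):
    j = int(rng.integers(n_reg)); births[j] += 1
    i = int(np.clip(j + rng.integers(-1, 2), 0, n_reg - 1))  # near-diagonal transfer
    tally_collision(i, j, w=1.0, nu_sig_f=0.5, sig_t=1.0)

a = counter / births                 # EQ. (2-65): a[i, j] = counter[i, j] / S(j)
k_eff = np.linalg.eigvals(a).real.max()  # dominant eigenvalue of the fission matrix
print(k_eff)
```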

Deterministic SN Method

For deterministic analysis, since the LBE is too complex to solve analytically in all but a few special cases, numerical methods are utilized. One of the most widely used methods of solving the LBE is the so-called SN method. Some of the work performed for this dissertation utilizes results of deterministic SN calculations to perform variance reduction; therefore, a brief background on the method is given. The discrete ordinates (SN) method has its origin in the SN method that was proposed by Carlson (Carlson, 1953). The discrete ordinates method approximates a solution to the transport equation by solving the transport equation along a discrete number of directions using a numerical quadrature. The directions and their associated weights define a numerical quadrature set. Discretization over energy and space is also required; these are discussed below.

Multigroup Energy Approximation

In order to solve the LBE numerically with the discrete ordinates method, the energy variable is discretized. This discretization is performed by dividing the energy domain into a discrete number of intervals (G) called groups, starting from high energy at g = 1 to the lowest energy group g = G. The procedure is known as the multigroup approximation. To obtain the multigroup form of the LBE, EQ. (2-1) (with the eigenvalue keff included, and noting that the scattering cross section is only dependent on the scattering angle) is integrated over the energy intervals (Lewis and Miller, 1993), resulting in

\[ \hat{\Omega}\cdot\vec{\nabla}\psi_g(\vec{r},\hat{\Omega}) + \sigma_g(\vec{r})\,\psi_g(\vec{r},\hat{\Omega}) = \sum_{g'=1}^{G}\int_{4\pi} d\Omega'\,\sigma_{s,g'\rightarrow g}(\vec{r},\hat{\Omega}'\cdot\hat{\Omega})\,\psi_{g'}(\vec{r},\hat{\Omega}') + \frac{\chi_g}{4\pi\,k_{eff}}\sum_{g'=1}^{G}\nu\sigma_{f,g'}(\vec{r})\,\phi_{g'}(\vec{r}), \quad g = 1,\ldots,G \]  (2-66)

Particles in group g are considered to be at one energy between \(E_g\) and \(E_{g+1}\). The group angular flux is defined by

\[ \psi_g(\vec{r},\hat{\Omega}) = \int_{E_{g+1}}^{E_g} dE\,\psi(\vec{r},E,\hat{\Omega}) \]  (2-67)

and the multigroup cross sections are defined as

\[ \sigma_g(\vec{r}) = \frac{\displaystyle\int_{E_{g+1}}^{E_g} dE\,\sigma(\vec{r},E)\,\phi(\vec{r},E)}{\displaystyle\int_{E_{g+1}}^{E_g} dE\,\phi(\vec{r},E)} \]  (2-68)

\[ \nu\sigma_{f,g}(\vec{r}) = \frac{\displaystyle\int_{E_{g+1}}^{E_g} dE\,\nu\sigma_f(\vec{r},E)\,\phi(\vec{r},E)}{\displaystyle\int_{E_{g+1}}^{E_g} dE\,\phi(\vec{r},E)} \]  (2-69)

and

\[ \sigma_{s,g'\rightarrow g}(\vec{r},\mu_0) = \frac{\displaystyle\int_{E_{g'+1}}^{E_{g'}} dE' \int_{E_{g+1}}^{E_g} dE\,\sigma_s(\vec{r},E'\rightarrow E,\mu_0)\,\phi(\vec{r},E')}{\displaystyle\int_{E_{g'+1}}^{E_{g'}} dE'\,\phi(\vec{r},E')} \]  (2-70)

and finally, the fission spectrum is defined as
\[ \chi_g = \int_{E_{g+1}}^{E_g} dE\,\chi(E) \]  (2-71)

Particle Streaming and Scattering in 3-D

The most common numerical solvers utilize the discrete ordinates method to represent the phase space in a Cartesian coordinate system. The necessary trigonometric information for this representation is shown in Figure 2-1.

Figure 2-1. 3-D Cartesian space-angle coordinate system

The streaming operator of EQ. (2-1) is then written in Cartesian coordinates as

\[ \hat{\Omega}\cdot\vec{\nabla}\psi = \mu\,\frac{\partial\psi}{\partial x} + \eta\,\frac{\partial\psi}{\partial y} + \xi\,\frac{\partial\psi}{\partial z} \]  (2-72)

where

\[ \mu = \hat{\Omega}\cdot\hat{e}_x = \sqrt{1-\xi^2}\,\cos\varphi \]  (2-73)

\[ \eta = \hat{\Omega}\cdot\hat{e}_y = \sqrt{1-\xi^2}\,\sin\varphi \]  (2-74)

and

\[ \xi = \hat{\Omega}\cdot\hat{e}_z = \cos\theta \]  (2-75)

Utilizing this form of the streaming term, and expanding the scattering term in EQ. (2-1) using a set of truncated spherical harmonics, the Legendre expanded multigroup form of the transport
equation in 3-D Cartesian coordinates is given in EQ. (2-76). For brevity, the derivation is omitted (Sjoden, 1997).

\[ \begin{aligned} &\mu_m\,\frac{\partial\psi_g^m}{\partial x} + \eta_m\,\frac{\partial\psi_g^m}{\partial y} + \xi_m\,\frac{\partial\psi_g^m}{\partial z} + \sigma_g(\vec{r})\,\psi_g^m(\vec{r}) \\ &\quad = \sum_{g'=1}^{G}\sum_{l=0}^{L}(2l+1)\,\sigma_{s,g'\rightarrow g}^{l}(\vec{r})\,\Bigg[ P_l(\xi_m)\,\phi_{g',l}(\vec{r}) + 2\sum_{k=1}^{l}\frac{(l-k)!}{(l+k)!}\,P_l^k(\xi_m)\,\Big( \phi_{g',l}^{C,k}(\vec{r})\cos(k\varphi_m) + \phi_{g',l}^{S,k}(\vec{r})\sin(k\varphi_m) \Big) \Bigg] \\ &\qquad + \frac{\chi_g}{4\pi\,k_{eff}}\sum_{g'=1}^{G}\nu\sigma_{f,g'}(\vec{r})\,\phi_{g',0}(\vec{r}) \end{aligned} \]  (2-76)

where

\(\mu, \eta, \xi\) = direction cosines
\(\varphi = \arctan(\eta/\mu)\) = azimuthal angle
\(\psi_g\) = angular flux of group g
\(\sigma_g\) = total macroscopic cross section of group g
\(\sigma_{s,g'\rightarrow g}^{l}\) = lth Legendre moment of the macroscopic differential scattering cross section from group g' to g
\(P_l\) = lth Legendre polynomial
\(\phi_{g,l}\) = lth Legendre flux moment of group g
\(P_l^k\) = lth, kth Associated Legendre polynomial
\(\phi_{g,l}^{C,k}\) = lth, kth Associated Cosine Legendre flux moment of group g
\(\phi_{g,l}^{S,k}\) = lth, kth Associated Sine Legendre flux moment of group g
\(\chi_g\) = fission spectrum of group g
\(\nu\sigma_{f,g}\) = fission generation cross section of group g

Finally, the flux moments are defined as

\[ \phi_{g,l}(\vec{r}) = \int_0^{2\pi} d\varphi \int_{-1}^{1} d\xi\,P_l(\xi)\,\psi_g(\vec{r},\xi,\varphi) \]  (2-77)

\[ \phi_{g,l}^{C,k}(\vec{r}) = \int_0^{2\pi} d\varphi \int_{-1}^{1} d\xi\,P_l^k(\xi)\,\cos(k\varphi)\,\psi_g(\vec{r},\xi,\varphi) \]  (2-78)

\[ \phi_{g,l}^{S,k}(\vec{r}) = \int_0^{2\pi} d\varphi \int_{-1}^{1} d\xi\,P_l^k(\xi)\,\sin(k\varphi)\,\psi_g(\vec{r},\xi,\varphi) \]  (2-79)

Angular Discretization via Numerical Quadrature

Numerical quadrature is often utilized in mathematics as a powerful numerical integration technique (Atkinson, 1988). In the discrete ordinates method, the transport equation is solved for a finite set of directions \(\hat{\Omega}_m = (\mu_m, \eta_m, \xi_m),\; m = 1, 2, \ldots, M\). Each angle has a weight associated with it. This collection of angles and weights replaces angular integration with summation. A good quadrature set for discrete ordinates methods preserves the moments of the angular flux. The directions are located on a unit sphere such that the sum of the squares of the direction cosines is equal to unity for any direction, as

\[ \mu_m^2 + \eta_m^2 + \xi_m^2 = 1 \]  (2-80)

The most common quadrature set utilized with the discrete ordinates method is the level symmetric quadrature (Carlson and Lee, 1961). The total number of directions in 3-D for level symmetric SN quadrature is N(N+2). The level symmetric set is produced based on preservation of the moments of the direction cosines and physical symmetry (Lewis and Miller, 1993). Other quadrature sets are utilized above order S20, since some of the weights of the level symmetric quadrature become negative (Lathrop and Carlson, 1965). Still other quadrature sets may be used for biasing toward a particular direction or directions, in which ordinates are split utilizing techniques such as ordinate splitting (OS) (Longoni, 2004; Longoni and Haghighat, 2001) and regional angular refinement (RAR) (Longoni, 2004; Longoni and Haghighat, 2002).
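The sketch below builds a small symmetric quadrature on the unit sphere and verifies the constraint of EQ. (2-80) together with the zeroth- and first-moment conditions a usable set must satisfy (weights summing to the full solid angle, odd moments vanishing). The S2-like set used here is illustrative only, not the production level-symmetric tables.

```python
import numpy as np

# Illustrative S2-like set: one ordinate per octant at mu = eta = xi = 1/sqrt(3)
c = 1.0 / np.sqrt(3.0)
signs = [(sx, sy, sz) for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
ordinates = np.array([(sx * c, sy * c, sz * c) for sx, sy, sz in signs])
weights = np.full(len(ordinates), 4.0 * np.pi / len(ordinates))

# EQ. (2-80): every direction lies on the unit sphere
assert np.allclose((ordinates ** 2).sum(axis=1), 1.0)
# Zeroth moment: the weights integrate the full 4*pi of solid angle
assert np.isclose(weights.sum(), 4.0 * np.pi)
# First moments: odd moments of a symmetric set vanish
assert np.allclose(ordinates.T @ weights, 0.0)
```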

Discretization of the Spatial Domain

In order to solve for the angular flux throughout the geometry of the problem, the spatial domain is partitioned into discrete spatial meshes. In Cartesian geometry, these meshes will correspond to the x, y, and z variables for which the transport equation has been written. Commonly, the equation is also recast in cylindrical or spherical coordinates, but this is not shown here. If EQ. (2-76) is written without energy dependence, combining source terms (including scattering), and incorporating the angular discretization, the abbreviated result shown in EQ. (2-81) is obtained, where \(\vec{r}\) represents the spatial domain (x,y,z):

\[ \mu_m\,\frac{\partial\psi_m}{\partial x} + \eta_m\,\frac{\partial\psi_m}{\partial y} + \xi_m\,\frac{\partial\psi_m}{\partial z} + \sigma(\vec{r})\,\psi_m(\vec{r}) = q_m(\vec{r}), \quad m = 1, 2, \ldots, M \]  (2-81)

Note that the \(q_m(\vec{r})\) term can be represented by either the scattering source plus the fission source (eigenvalue problem), or the scattering source plus a fixed source, since these terms don't explicitly depend on the angular flux. Integrating EQ. (2-81) over a cell volume \(\Delta x_i\,\Delta y_j\,\Delta z_k = V_{i,j,k}\) and simplifying terms yields

\[ \frac{\mu_m}{\Delta x_i}\left(\psi_{m,i+1/2} - \psi_{m,i-1/2}\right) + \frac{\eta_m}{\Delta y_j}\left(\psi_{m,j+1/2} - \psi_{m,j-1/2}\right) + \frac{\xi_m}{\Delta z_k}\left(\psi_{m,k+1/2} - \psi_{m,k-1/2}\right) + \sigma_{i,j,k}\,\bar{\psi}_{m,i,j,k} = \bar{q}_{m,i,j,k} \]  (2-82)

where

\[ \psi_{m,i\pm 1/2} = \frac{1}{\Delta y_j\,\Delta z_k}\int_{\Delta y_j} dy \int_{\Delta z_k} dz\;\psi_m(x_{i\pm 1/2},y,z) \]  (2-83)

\[ \psi_{m,j\pm 1/2} = \frac{1}{\Delta x_i\,\Delta z_k}\int_{\Delta x_i} dx \int_{\Delta z_k} dz\;\psi_m(x,y_{j\pm 1/2},z) \]  (2-84)

\[ \psi_{m,k\pm 1/2} = \frac{1}{\Delta x_i\,\Delta y_j}\int_{\Delta x_i} dx \int_{\Delta y_j} dy\;\psi_m(x,y,z_{k\pm 1/2}) \]  (2-85)

\[ \bar{\psi}_{m,i,j,k} = \frac{1}{V_{i,j,k}}\int_{V_{i,j,k}} dV\;\psi_m(\vec{r}) \]  (2-86)

\[ \bar{q}_{m,i,j,k} = \frac{1}{V_{i,j,k}}\int_{V_{i,j,k}} dV\;q_m(\vec{r}) \]  (2-87)
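The discretized balance equation (2-82) must be closed with a differencing relation before it can be swept. The sketch below performs a one-dimensional diamond-difference sweep, the 1-D analogue of the 3-D equation, assuming \(\bar{\psi} = (\psi_{in} + \psi_{out})/2\), for a single direction \(\mu > 0\) with a known source; the geometry and data are hypothetical.

```python
import numpy as np

def diamond_sweep_1d(mu, sigma_t, q, dx, psi_in=0.0):
    """Left-to-right 1-D diamond-difference sweep for one ordinate mu > 0.

    Solves mu*(psi_out - psi_in)/dx + sigma_t*psi_bar = q in each cell,
    closing with psi_bar = (psi_in + psi_out)/2.
    """
    n = len(q)
    psi_bar = np.zeros(n)
    for i in range(n):
        a = mu / dx[i]
        psi_out = (q[i] + psi_in * (a - 0.5 * sigma_t[i])) / (a + 0.5 * sigma_t[i])
        psi_bar[i] = 0.5 * (psi_in + psi_out)
        psi_in = psi_out                 # outgoing face feeds the next cell
    return psi_bar

# Hypothetical uniform slab: 20 cells, constant source, vacuum left boundary
n = 20
psi = diamond_sweep_1d(mu=0.5773, sigma_t=np.full(n, 1.0),
                       q=np.full(n, 0.25), dx=np.full(n, 0.5))
print(psi)   # cell-average angular flux along the sweep direction
```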

Source Iteration Scheme

The source iteration scheme proceeds by assuming the source term \(\bar{q}_{m,i,j,k}\) for the first iteration; it is updated from previous results for each subsequent iteration. With \(\bar{q}_{m,i,j,k}\) assumed constant, EQ. (2-82) can be utilized to solve for the angular flux \(\psi_m\) of group g, and then the flux moments can be backed out from EQ. (2-77), EQ. (2-78), and EQ. (2-79). These newly calculated values can then be used to update the source for the next iteration. This process is repeated (iterated) until the zeroth flux moment is acceptably converged. Note that one typically starts with the fastest energy group, convergence proceeds from higher groups to lower, and the group iteration is only performed once when there is no upscattering, since in that case the scattering source of the current group only depends on the converged higher groups. For upscattering, an outer iteration scheme is necessary, while for eigenvalue problems another outer loop is necessary, since the fission source and keff need to be updated between the outer iterations and a convergence criterion for keff is utilized for this outer iteration.

Differencing Scheme

One final step is necessary to solve the LBE. Note that the streaming term in EQ. (2-82) contains seven unknowns: the incoming and outgoing cell fluxes and the average cell flux. Recall that the average cell flux can be obtained from EQ. (2-86), but it is done so using sweeps for every fine mesh for a given direction, repeated for all directions. Utilizing the sweep methodology provides the fluxes through the fine mesh boundary conditions at the interfaces. To obtain the other three unknowns, a differencing scheme is assumed to represent the angular flux dependence in the spatial mesh. Many differencing schemes are available, from simple linear assumptions to advanced differencing algorithms like the Exponential Directional Weighted (EDW), Directional Theta Weighted (DTW), and Exponential Directional Iterative (EDI) schemes

Computer Codes Utilized

The work in this dissertation builds on the theory and algorithms discussed throughout this chapter. To that end, several well established computer codes which utilize these theories and algorithms have been drawn upon for this work. These codes are discussed here. In addition, a 1-dimensional Monte Carlo eigenvalue code was developed, and its basic features are described here. New methodologies added to the code are detailed in the chapter(s) pertinent to the features developed.

Used extensively in this work concerning source convergence diagnostic analysis with Monte Carlo eigenvalue problems is the KENO V.a code. The KENO V.a code is not used as standalone software and is part of an analytical sequence inside of the Standardized Computer Analyses for Licensing Evaluation (SCALE) version 5.0 code package. SCALE is a modular code system that was originally developed by Oak Ridge National Laboratory. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) provide an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem dependent cross section processing and analysis of criticality safety, shielding, depletion/decay, and reactor physics problems (SCALE, 2005). KENO V.a is a three dimensional (3-D), multigroup, Monte Carlo criticality transport program. It is run through the use of the Criticality Safety Analysis Sequence 25 (CSAS25) analytical sequence. This sequence links KENO V.a to cross section processing codes in order to obtain an automated procedure for running the 3-D Monte Carlo transport problem with problem dependent cross sections.


Used sparingly in this work is the well known Monte Carlo N-Particle (MCNP) version 5 code. MCNP is a general purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. The code treats an arbitrary three dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and fourth-degree elliptical tori. Pointwise cross section data are used (X-5 Monte Carlo Team, 2003).

The PENTRAN code is utilized in this work for the determination of an initialized source distribution for Monte Carlo analysis, as well as for obtaining importance data for use with importance sampling techniques. The PENTRAN code is a 3-D discrete ordinates code with full phase space decomposition, providing parallel processing capability in energy, space and angle. Some advanced features of PENTRAN include:

- Parallel I/O
- Parallel memory
- Space, angle, and energy decomposition
- Adaptive differencing strategy
- Taylor Projection Mesh Coupling
- Rebalance and alternating direction sweeping
- Flux moment preconditioning
- Advanced quadrature with refinement options
- Input/output processing codes (PENMSHXP, PENDATA)

Lastly, a 1-dimensional multigroup Monte Carlo code (1Dcrit) was written in FORTRAN90/95 with both a traditional solution algorithm and a modified solution algorithm (discussed in Chapter 6). Implicit capture is always utilized, in conjunction with Russian roulette. The standard solution algorithm computes keff utilizing a standard collision estimator.


The number of fission neutrons is kept near the initial user-input particles per generation, utilizing normalization with the current keff estimate and an error component which reduces the likelihood of too few particles. Scattering is currently treated as isotropic only. The particle weight is normalized such that the sum of banked particle weights is equal to the initial particles input, as is done in the KENO V.a code. 1Dcrit is utilized for testing of new methodologies for source acceleration and fission matrix implementation. Although it is only 1-D with isotropic scattering, the same behavior associated with difficult problems, in terms of slow and poor convergence, can be adapted to one dimension with isotropic scattering. Note that while a Monte Carlo code may be 1-D, particle movement is still treated as if it were in three dimensions, with tracking only in one dimension. More advanced features will be described in the next chapter and subsequent chapters as needed.

The fission matrix capability is adapted in a separate version of 1Dcrit. This version includes the option to utilize importance sampling with and without a new source iteration scheme, as well as traditional fission matrix acceleration methods. Two starting source options are available: random sampling in all fissionable material, or a source read as user input. Source convergence diagnostics developed in Chapters 3 and 4 for use with the KENO V.a code have also been incorporated into 1Dcrit. Since the solution algorithm memory structure was adapted differently for the fission matrix algorithm and the fission matrix algorithm with the new source iteration methodology, much of the code logic for these solutions is similar, yet different. Each version has between ~4000 and ~5000 source lines, with another ~1000 lines added for the parallel implementation discussed in Chapter 7.
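The generation structure just described can be summarized with the following simplified sketch (an assumed illustration, not the 1Dcrit source): implicit capture with Russian roulette, a collision estimator for keff, and renormalization of the banked fission sites back toward the user-input population size.

```python
import numpy as np

# Simplified one-group, 1-D slab generation loop: implicit capture,
# Russian roulette, collision estimator for k-eff, and population
# renormalization. All data are illustrative.
rng = np.random.default_rng(1)
sig_t, sig_s, sig_f, nu = 1.0, 0.6, 0.25, 2.43
npg, n_skip, n_gen, slab = 1000, 20, 120, 10.0

sites = rng.uniform(0.0, slab, npg)          # initial uniform source guess
k_hist = []
for gen in range(n_gen):
    k_coll, bank, bank_w = 0.0, [], []
    for x0 in sites:
        x, wgt, mu = x0, 1.0, rng.uniform(-1.0, 1.0)
        while True:
            x += mu * rng.exponential(1.0 / sig_t)
            if x < 0.0 or x > slab:          # leakage
                break
            f = wgt * nu * sig_f / sig_t     # expected fission neutrons here
            k_coll += f                      # collision estimator tally
            bank.append(x); bank_w.append(f) # bank fission site and weight
            wgt *= sig_s / sig_t             # implicit capture
            if wgt < 0.1:                    # Russian roulette
                if rng.random() < 0.5:
                    wgt *= 2.0
                else:
                    break
            mu = rng.uniform(-1.0, 1.0)      # isotropic scatter
    k_hist.append(k_coll / npg)
    # renormalize: resample npg sites proportional to banked weights
    p = np.array(bank_w) / np.sum(bank_w)
    sites = rng.choice(bank, size=npg, p=p)
active = k_hist[n_skip:]
print(np.mean(active), np.std(active, ddof=1) / np.sqrt(len(active)))
```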


Several processing utilities were developed for formatting PENTRAN output into a format usable by 1Dcrit. These utilities are a combination of shell scripts and FORTRAN codes.


CHAPTER 3
SOURCE CONVERGENCE DIAGNOSTIC METHODOLOGY

Source convergence diagnostics currently available and under development were listed in Chapter 1. Strengths and weaknesses were identified, and the point was made that there is no perfect diagnostic tool. With that in mind, the LDA approach was discussed as a promising diagnostic tool; however, its application appears to have fallen short. In the spirit of this work, a new diagnostic methodology, called the generalized KPSS test for stationarity detection in Monte Carlo eigenvalue problems, is developed here. Since the generation-to-generation source may be, and often is, correlated, a converged source may not behave as white noise. Source diagnostics aim at diagnosing source stationarity. Stationarity will only guarantee a well defined mean and variance (correlation may still be present); it does not guarantee convergence to the fundamental mode. This cannot be stressed enough. Nevertheless, a stationary source can be used as one indicator that a particular solution can be accurately obtained. This chapter will discuss the information theoretic diagnostics utilized for comparison to the KPSS methodology, then the utilization of the source Center of Mass (COM) series and the theory and implementation of the generalized KPSS test for stationarity detection in Monte Carlo eigenvalue problems. Before going on to the discussion of the different diagnostics, a word about high dominance ratio problems is given.

The Dominance Ratio and Source Convergence

The dominance ratio is defined as the ratio between the first and second eigenvalues, $DR = k_1/k_0 < 1$, where $k_0$ is keff and $k_1$ is the second eigenvalue. It can be shown at iteration n that

$$\Phi^{(n)} \approx \Phi_0 + C_1 \left(\frac{k_1}{k_0}\right)^{n+1} \Phi_1 \quad (3\text{-}1)$$

and

$$k_{eff}^{(n)} \approx k_0 + C_2 \left(1 - \frac{k_1}{k_0}\right)\left(\frac{k_1}{k_0}\right)^{n+1} \quad (3\text{-}2)$$

where $C_1$ and $C_2$ are constants and $\Phi_0$ and $\Phi_1$ are the first (source vector) and second eigenvectors. Eq. (3-1) shows that the error associated with the neutron flux convergence dies off as $(k_1/k_0)^{n+1}$, while Eq. (3-2) shows that keff converges faster for high dominance ratio problems, since the $(1 - k_1/k_0)$ factor tends to vanish. This means that many difficult source convergence problems will be problems with high dominance ratios. Not only this, but the high DR will then cause high correlation in the source distribution, from which bias in a tally and its confidence can result.
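The effect of Eqs. (3-1) and (3-2) is easy to reproduce numerically. The sketch below (illustrative values only, not from this work) runs power iteration on a toy 2 x 2 fission-matrix-like operator whose dominance ratio is near one: the eigenvalue estimate settles quickly while the error in the source vector decays only as (k1/k0)^n.

```python
import numpy as np

# Power iteration on a toy operator with a high dominance ratio.
A = np.array([[0.99, 0.02],
              [0.02, 0.99]])            # eigenvalues 1.01 and 0.97
vals, vecs = np.linalg.eig(A)
order = np.argsort(-vals.real)
k0, k1 = vals.real[order[0]], vals.real[order[1]]
u0 = np.abs(vecs[:, order[0]]); u0 /= u0.sum()
print("dominance ratio =", k1 / k0)     # ~0.96 here

s = np.array([1.0, 0.0])                # poor initial source guess
for n in range(1, 201):
    s = A @ s
    k_est = s.sum()                     # generation multiplication estimate
    s /= s.sum()
    if n % 50 == 0:
        # the source error shrinks roughly by a factor DR per iteration,
        # while the k estimate error is already far smaller
        print(n, abs(k_est - k0), np.abs(s - u0).max())
```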


Review of Information Theory, Stationarity and Undersampling Tests

Three complementary source stationarity and undersampling diagnostics have been proposed recently (Ueki, 2005). These diagnostics rely on information theoretic concepts in their derivation (Cover and Thomas, 1991) and require a source meshing scheme. Only a brief discussion of the criteria is given here. The Shannon and relative entropies of the Monte Carlo source distribution are given in Eq. (3-3) and Eq. (3-4), respectively:

$$H(S_B) = -\sum_{i=1}^{B} S_B(i) \log S_B(i) \quad (3\text{-}3)$$

$$D(S_B \| T_B) = \sum_{i=1}^{B} S_B(i) \log \frac{S_B(i)}{T_B(i)} \quad (3\text{-}4)$$

where B is the number of spatial bins, i is the bin number, $S_B(i)$ is the source density for the i-th source bin, and $T_B(i)$ is the average source density for the i-th source bin over the second half of the active cycles.
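As a concrete illustration, the sketch below (assumed, with arbitrary bin counts) evaluates Eqs. (3-3) and (3-4) for one generation's binned source against a reference (average) source distribution.

```python
import numpy as np

# Shannon entropy (Eq. 3-3) and relative entropy (Eq. 3-4) of a binned
# Monte Carlo fission source; bins with zero probability contribute zero.
def shannon_entropy(s):
    s = s[s > 0.0]
    return -np.sum(s * np.log(s))

def relative_entropy(s, t):
    m = s > 0.0               # requires t > 0 wherever s > 0
    return np.sum(s[m] * np.log(s[m] / t[m]))

counts = np.array([120.0, 80.0, 50.0, 250.0])  # source sites per bin (example)
s_b = counts / counts.sum()
t_b = np.full(4, 0.25)        # e.g., average source over second half of cycles
print(shannon_entropy(s_b), relative_entropy(s_b, t_b))
```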


Shannon entropy can be utilized to characterize the state of a random variable (i.e., how well a pdf, in this case a source distribution, is known), while the relative entropy can provide a measure of the difference between the states of two random variables. These are the basic concepts utilized in the information theoretic diagnostics. Based on this theory and extensions thereof, Ueki has derived the following three criteria (Ueki, 2005):

Criterion One

The first criterion deals with the Shannon and relative entropies for the instantaneously decodable code. Utilizing these concepts and the Kraft inequality, the following inequality holds (Ueki, 2005):

$$H(S_B) \le \min_{\text{instantaneously decodable encoding schemes}} E\!\left[\ell(S_B)\right] \le H(S_B) + 1 \quad (3\text{-}5)$$

where $\ell(S_B)$ is the descriptive length of the particles born under $S_B$. Since the true $S_B$ is not known, however, the following inequality can also be derived:

$$H(S_B) + D(S_B \| T_B) \le \ell_{T_B}(S_B) \le H(S_B) + D(S_B \| T_B) + 1 \quad (3\text{-}6)$$

where $\ell_{T_B}(S_B)$ is the descriptive code length (for an instantaneously decodable code) of particles born according to $S_B$ while characterizing them with the distribution $T_B$ (Cover and Thomas, 1991; Ueki, 2005; Ueki and Brown, 2003). What this inequality represents is that the relative entropy between $S_B$ and $T_B$, namely $D(S_B \| T_B)$, is the penalty for utilizing $T_B$ as the true source distribution. Comparing Eq. (3-5) and Eq. (3-6), the penalty $D(S_B \| T_B)$ is negligible if the fluctuation of $H(S_B)$ is larger than $D(S_B \| T_B)$. In other words, since the relative entropy is a measure of the difference between the density functions $S_B$ and $T_B$, if it is negligible compared to the fluctuation of the entropy of $S_B$ itself, then $S_B$ is deemed stationary.


In previous work by Ueki and Brown (Ueki and Brown, 2003), the stationarity of the source distribution is diagnosed in a posterior manner by checking whether or not $D(S_B \| T_B)$ crosses its mean level, determined from the second half of the active cycles, before the first active generation. This scheme is utilized in the released version of the MCNP5 code system (X-5 Monte Carlo Team, 2003). To improve upon this scheme and ensure that $D(S_B \| T_B)$ in Eq. (3-6) is negligible, Ueki proposed the following inequality:

$$msq(D) \le cmsq(H) \quad (3\text{-}7)$$

where msq is the mean square posterior relative entropy and cmsq the centered mean square Shannon entropy, which are defined in Eq. (3-8) and Eq. (3-9), respectively:

$$msq(D) = \frac{2}{M}\sum_{j=M/2+1}^{M}\left[D_j(S_B \| T_B)\right]^2 \quad (3\text{-}8)$$

$$cmsq(H) = \frac{2}{M}\sum_{j=M/2+1}^{M}\left[H_j(S_B) - \bar{H}(S_B)\right]^2 \quad (3\text{-}9)$$

where M is the number of active cycles, j is the generation number, B is as before, and

$$\bar{H}(S_B) = \frac{2}{M}\sum_{j=M/2+1}^{M} H_j(S_B) \quad (3\text{-}10)$$


This criterion works due to the fact that $D(S_B \| T_B)$ is convex as a function of $(S_B, T_B)$ and assumes a minimum value of zero when $S_B = T_B$, while $H(S_B)$ is concave and is maximum when $S_B$ is uniform. The criterion should hold for all cases but those where the source distribution is uniform (or very close to it) (Ueki, 2005).

Criterion Two

Criterion two relies on the large sample property based on the asymptotic equipartition property (AEP) (Cover and Thomas, 1991) in relation to thermodynamic equilibrium, and also on the method of types for its derivation (Cover and Thomas, 1991). The result is shown in Eq. (3-11):

$$D_j(S_B \| T_B) \le \varepsilon \quad \text{(through active cycles)} \quad (3\text{-}11)$$

The AEP is a fundamental concept of the typical set, relating to the method of types, and implies that, though many series of results may be produced by a random process, there is a high probability that the one realized is from a loosely defined set of outcomes with roughly the same chance of being realized. This criterion assumes independence of source bins, which is often not the case; however, it is a zeroth order method for dealing with an equilibrium property of a large neutron population. Note that $\varepsilon$ has currently been set to 0.1 (Ueki, 2005).

Criterion Three

The third criterion is a direct result of the concavity of Shannon entropy (Cover and Thomas, 1991; Ueki, 2005) and is written as

$$H(T_B) \ge \frac{1}{M}\sum_{j=1}^{M} H_j(S_B) \quad (3\text{-}12)$$

where an acceptable difference $\delta$ is defined and written as

$$H(T_B) - \frac{1}{M}\sum_{j=1}^{M} H_j(S_B) < \delta \quad (3\text{-}13)$$
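These criteria reduce to simple checks on the per-generation entropy series. The sketch below (assumed, following Eqs. (3-7) through (3-10) as reconstructed above) implements the criterion-one comparison over the second half of the active cycles.

```python
import numpy as np

# Criterion one: compare the mean square posterior relative entropy to
# the centered mean square Shannon entropy over the second half of the
# active cycles (Eqs. 3-7 through 3-10).
def criterion_one(H, D):
    """H, D: per-active-generation entropy and relative entropy arrays."""
    M = len(H)
    h2, d2 = H[M // 2:], D[M // 2:]
    msq = np.mean(d2 ** 2)
    cmsq = np.mean((h2 - h2.mean()) ** 2)
    return msq <= cmsq        # True -> source deemed stationary
```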


Source Center of Mass

In order to diagnose stationarity with the KPSS stationarity test, a single scalar variable must be obtained which is indicative of the source behavior: the source COM series (Wenner and Haghighat, 2007). The source COM is a natural parameter to consider when dealing with source convergence and is given by

$$\overline{COM}_g = \sqrt{\bar{x}_g^2 + \bar{y}_g^2 + \bar{z}_g^2}, \qquad \bar{x}_g = \frac{\sum_{i=1}^{npg} m\, x_i}{\sum_{i=1}^{npg} m}, \qquad g = 1, \ldots, G \text{ over active generations} \quad (3\text{-}14)$$

where G is the number of active generations, npg is the number of particles per generation, g is the current generation number, m is the neutron mass, i is the source particle number, and x, y and z are the magnitudes of the components of the vector extending from the geometric center of the model to the i-th neutron ($\bar{y}_g$ and $\bar{z}_g$ are defined analogously to $\bar{x}_g$).

Intuitively, one would expect the source COM for each Monte Carlo generation of a converged problem to fluctuate in a stationary manner about the true COM. One can look at the individual direction components ($x_g$, $y_g$, $z_g$) and/or use $\overline{COM}_g$ as defined in Eq. (3-14); in this work all of the data are utilized. Since correlation is often present, as with other parameters in Monte Carlo source convergence identification, purely random behavior cannot be tested for, and standard statistical tests do not directly apply. This difficulty of applicability of standard statistical tests applies to the COM and any related parameters, such as higher moments of the COM. Nevertheless, higher moments still give some idea about the symmetry and how peaked the source COM series is, and may provide some physical insight. These parameters were some of the first attempts at better characterizing the variability of a sample in statistics.
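A minimal sketch of the bookkeeping implied by Eq. (3-14) is given below (an assumed illustration; since every source neutron carries the same mass, the mass cancels out of the ratio).

```python
import numpy as np

# Per-generation source COM: mean source position relative to the
# geometric center of the model, and its distance from that center.
def com_components(sites):
    """sites: (npg, 3) array of source positions, center-relative."""
    return sites.mean(axis=0)                # (x_g, y_g, z_g)

def com_series(generations):
    """generations: list of (npg, 3) arrays, one per active generation."""
    return np.array([np.linalg.norm(com_components(s)) for s in generations])
```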


Higher Moments of the Center of Mass

Moments in the statistical sense are usually central moments. The first central moment should then by definition be zero, and the second moment is the numerator of the standard sample variance formulation. The general formulation for the k-th central moment of a random variable X about a mean is given by

$$\mu_k = E\left[(X - E[X])^k\right] = \int_{-\infty}^{\infty} (x - \mu)^k f(x)\,dx \quad (3\text{-}15)$$

Although this variance estimate is most likely biased for any difficult convergence problem, it is still a measure of the variability in the sample and may provide some useful qualitative analysis if nothing else. The same can be said for higher moments, which should also stabilize and could themselves theoretically be tested for stationary behavior. The next two higher moments are called the skewness and kurtosis.

Skewness

The skewness of a probability distribution is a measure of the asymmetry of the distribution and is related to the third central moment about the mean. Skewness can be estimated with Eq. (3-16):

$$\hat{\gamma}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^3}{(n-1)\,s^3} \quad (3\text{-}16)$$

where s is the sample standard deviation. The expected value of skewness for a sample from a normal distribution is zero, as it is for any symmetric distribution. Negative values indicate negatively skewed asymmetry in the tails of the data, and positive values indicate positively skewed asymmetry in the tails of the data. As a result, the skewness tells how symmetric the data are, and a change in skewness over time (or generation) should be indicative of a change in the amount of symmetry.


This series should then settle on some stationary result even if correlated, and can itself be tested for stationarity of skewness if desired.

Kurtosis

The kurtosis of a probability distribution is a measure of the peakedness of the distribution and is related to the 4th central moment about the mean. Kurtosis can be estimated with Eq. (3-17):

$$\hat{\gamma}_2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^4}{(n-1)\,s^4} - 3 \quad (3\text{-}17)$$

Higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly sized deviations. The more positive the kurtosis, the more peaked the distribution; the more negative, the flatter. Here, the kurtosis can give insight into the variance of the COM and whether outliers may contribute significantly to the sample variance. Indeed, since correlation is present, the kurtosis estimator will be biased; however, insight is still gained into the variance of the COM and how peaked or flat the source distribution is, and it should also be stationary upon convergence.
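Both estimators are one-liners; the sketch below (assumed) applies Eqs. (3-16) and (3-17) to a series with the (n-1), sample-s normalization used above.

```python
import numpy as np

# Sample skewness (Eq. 3-16) and excess kurtosis (Eq. 3-17) of a series.
def skewness(x):
    s = np.std(x, ddof=1)
    return np.sum((x - np.mean(x)) ** 3) / ((len(x) - 1) * s ** 3)

def kurtosis(x):
    s = np.std(x, ddof=1)
    return np.sum((x - np.mean(x)) ** 4) / ((len(x) - 1) * s ** 4) - 3.0

# e.g., skewness(com) and kurtosis(com) for the active-generation COM series
```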


Generalized KPSS Test for Stationarity Detection in Monte Carlo Eigenvalue Problems

A formal test procedure is developed here based on analysis of the residuals of a regression of a scalar series. To investigate the stationarity of the source COM series, a statistical model is utilized, given by

$$\overline{COM}_t = \alpha + \beta t + d\, r_t + \varepsilon_t, \qquad r_t = r_{t-1} + u_t, \qquad t = 1, \ldots, T \quad (3\text{-}18)$$

where $\varepsilon_t$ and $u_t$ are both covariance stationary and short term memory (future values related to recent past values only) with mean zero, $d \in \{0,1\}$ (Hobijn et al., 2004), and T is the total number of terms in the series. Each term represents a different characteristic of the series. The first two terms together represent the linear behavior in the model, while the third term corresponds to a random walk and the fourth term is a stationary error component. This model differs slightly from the model proposed by Kwiatkowski et al. for the original KPSS methodology (Kwiatkowski et al., 1992). The procedure continues by assuming that the source COM follows this statistical model. Different hypotheses are then formed from assumptions concerning the different terms of the model. These hypotheses are given by the following linear regressions of the COM:

$$\overline{COM}_t = \alpha + \beta t + \varepsilon_t \quad (3\text{-}19)$$

$$\overline{COM}_t = \alpha + \varepsilon_t \quad (3\text{-}20)$$

$$\overline{COM}_t = \varepsilon_t \quad (3\text{-}21)$$

where $H_\tau$, Eq. (3-19), is the null hypothesis of trend stationarity; $H_\mu$, Eq. (3-20), is the null hypothesis of level stationarity; and $H_0$, Eq. (3-21), is the null hypothesis of zero mean stationarity (Hobijn et al., 2004). These null hypotheses correspond to the restrictions d = 0, d = β = 0, and d = β = α = 0 in Eq. (3-18) and are summarized in Table 3-1.

Table 3-1. Null hypotheses possible with the generalized KPSS test
Hypothesis | Condition
H_τ, trend stationarity | d = 0
H_μ, level stationarity | d = β = 0
H_0, zero mean stationarity | d = β = α = 0


In order to test whether these regressions follow the test model of Eq. (3-18) with the selected restriction, the following test statistic is used:

$$w = \frac{1}{T^2 \sigma^2}\sum_{t=1}^{T} S_t^2 \quad (3\text{-}22)$$

which can be estimated by Eq. (3-23):

$$\hat{w} = \frac{1}{T^2 \hat{\sigma}^2}\sum_{t=1}^{T} S_t^2 \quad (3\text{-}23)$$

Here $\hat{\sigma}^2$ is an estimator of $\sigma^2$, which is known as the long run variance (Hobijn et al., 2004), $S_t = \sum_{i=1}^{t} e_i$ are partial sums of the residuals of a linear regression of Eq. (3-14), and the $e_i$ are the residuals for all active generations T. This methodology is applied to the COM series and its components, applying the test for level stationarity, in which Eq. (3-20) is utilized, since it is postulated that the source COM series should have constant mean. This test statistic is proven to converge to a known distribution under the different hypotheses, on which the KPSS significance test is based.

In general, a test statistic is a function of the sample data on which a conclusion to reject or fail to reject the null hypothesis can be determined. The significance level of a statistical test is the probability of Type I error. Type I error is the error of rejecting the null hypothesis when it is true (false positive). Typically this type of error can be set since the distribution under the null hypothesis is known. For example, if normality is being tested, a significance level of 5% will mean that 5% of the test statistics computed from a normal distribution will be identified as not normal. The Type II error is the error of failing to reject the null hypothesis when it is false (false negative). The lower the Type II error, the better the test is at identifying a true alternative hypothesis (higher power). The only way to guarantee zero Type I and Type II error would be to work from the entire population and not just a sample (Peck and Devore, 2005); this is often impractical or impossible. Typically an accepted significance is chosen, and then the power is evaluated, which is hopefully sufficient for samples from known distributions.


Significance Testing Procedure

If the COM series regression can be represented by the model of Eq. (3-18) with d = β = 0, then the partial sums of the residuals of the regression can be shown to converge to a Brownian bridge. As a result, this implies that

$$\hat{w}_\mu \Rightarrow \int_0^1 V(r)^2\,dr \quad (3\text{-}24)$$

where V(r) is a first level Brownian bridge. Upper tail critical values for the 10%, 5%, 2.5% and 1% significance levels are 0.348, 0.460, 0.580 and 0.754, respectively. These significance levels set the accepted Type I error. A summary of the critical values for all of the hypotheses is given in Table 3-2. Monte Carlo simulations of a first order autoregressive process, a first order moving average process, and a series with a deterministic component and a simple random walk show desirable statistical size and power properties (Hobijn et al., 2004).

Table 3-2. Upper tail critical values of ŵ_τ, ŵ_μ and ŵ_0
Level | 10% | 5% | 2.5% | 1%
H_τ | 0.119 | 0.148 | 0.178 | 0.219
H_μ | 0.348 | 0.460 | 0.580 | 0.754
H_0 | 1.195 | 1.656 | 2.114 | 2.759

The standard KPSS test is oversized (rejects the null hypothesis of stationarity too often when it is true) for highly autoregressive processes, because it employs a semiparametric heteroskedasticity and autocorrelation consistent (HAC) covariance estimator of the long run variance of the process with an important positive finite sample bias. (A sequence of random variables is heteroskedastic if the random variables have different variances.) The generalized form of the KPSS test suggests a more automatic selection of the long run variance estimator that reduces size distortion without suffering from inconsistency (Hobijn et al., 2004).


Long Run Variance Estimation

Studies have shown that the accuracy of inferences made using the ŵ test statistic crucially depends on the actual choice of estimator for the long run variance (Heidelberger and Welch, 1983; Hobijn et al., 2004; Schruben, 1982). The estimators employed frequently are heteroskedasticity and autocorrelation consistent (HAC) and are of the form given by Eq. (3-25), Eq. (3-26) and Eq. (3-27):

$$\hat{\sigma}^2 = \hat{\gamma}_0 + 2\sum_{j=1}^{T-1} k\!\left(\frac{j}{m}\right)\hat{\gamma}_j \quad (3\text{-}25)$$

$$\hat{\gamma}_j = \frac{1}{T}\sum_{t=j+1}^{T} e_t\, e_{t-j} \quad (3\text{-}26)$$

$$k(x) = \frac{25}{12\pi^2 x^2}\left[\frac{\sin(6\pi x/5)}{6\pi x/5} - \cos(6\pi x/5)\right] \quad (3\text{-}27)$$

Here $\hat{\gamma}_j$ is used as an estimate of the j-th order autocovariance of the residuals, and $k(\cdot)$ is a kernel function depending on a bandwidth parameter m (Hobijn et al., 2004). Several kernels are available; the original KPSS work utilizes the Bartlett kernel, while the generalized test utilizes the Quadratic Spectral kernel, Eq. (3-27), since it was shown to be superior to the Bartlett kernel (Andrews, 1991; Newey and West, 1994). In the LDA approach, which is very similar in nature to the KPSS testing procedure, an estimate of the long run variance is also necessary; estimation of this parameter was and still is an active area of research (Hobijn et al., 2004; Newey and West, 1994; Schruben, 1982).

Lastly, a bandwidth m must be chosen. Two bandwidth choice options are reviewed in Hobijn et al.: the deterministic method of the original KPSS test, and a data dependent procedure explored by Andrews and developed further by Newey and West (Andrews, 1991; Hobijn et al., 2004; Newey and West, 1994).


This procedure still depends on an a priori non-stochastic bandwidth parameter $n_T$, but it has been shown that the result depends much less on this parameter than on a directly chosen deterministic bandwidth (Hobijn et al., 2004; Newey and West, 1994). The bandwidth parameter $m_T$ is then chosen by the following procedure:

1. $n_T = o(T^{2/25})$
2. $\hat{s}^{(0)} = \hat{\gamma}_0 + 2\sum_{j=1}^{n_T}\hat{\gamma}_j$
3. $\hat{s}^{(1)} = 2\sum_{j=1}^{n_T} j\,\hat{\gamma}_j$
4. $\hat{\gamma} = 1.3221\left[\left(\hat{s}^{(1)}/\hat{s}^{(0)}\right)^2\right]^{1/5}$
5. $m_T = \hat{\gamma}\, T^{1/5}$

In this dissertation, T is the total number of active generations, as defined previously. Note that, as in Hobijn, $n_T = \mathrm{integer}\left[4\,(T/100)^{2/25}\right]$ is utilized.
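Putting the pieces together, the following sketch (an assumed illustration consistent with the procedure above, not the KENO V.a implementation) computes the level-stationarity statistic ŵ_μ of Eq. (3-23) with the Quadratic Spectral kernel of Eq. (3-27) and the automatic bandwidth just described; the result is compared against the H_μ critical values of Table 3-2.

```python
import numpy as np

def qs_kernel(x):
    # Quadratic Spectral kernel, Eq. (3-27)
    if x == 0.0:
        return 1.0
    z = 6.0 * np.pi * x / 5.0
    return 25.0 / (12.0 * np.pi ** 2 * x ** 2) * (np.sin(z) / z - np.cos(z))

def autocov(e, j):
    # j-th order autocovariance of the residuals, Eq. (3-26)
    return np.dot(e[j:], e[:len(e) - j]) / len(e)

def long_run_variance(e):
    # HAC estimate, Eq. (3-25), with the automatic bandwidth procedure
    T = len(e)
    n = int(4.0 * (T / 100.0) ** (2.0 / 25.0))
    g = np.array([autocov(e, j) for j in range(n + 1)])
    s0 = g[0] + 2.0 * g[1:].sum()
    s1 = 2.0 * np.dot(np.arange(1, n + 1), g[1:])
    m = 1.3221 * ((s1 / s0) ** 2) ** 0.2 * T ** 0.2
    return autocov(e, 0) + 2.0 * sum(
        qs_kernel(j / m) * autocov(e, j) for j in range(1, T))

def kpss_level(y):
    # Level-stationarity statistic, Eqs. (3-20), (3-22) and (3-23)
    e = y - np.mean(y)            # residuals of the level regression
    S = np.cumsum(e)              # partial sums of the residuals
    return np.sum(S ** 2) / (len(y) ** 2 * long_run_variance(e))

# e.g., kpss_level(com_series) > 0.460 rejects H_mu at the 5% level
```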


CHAPTER 4
IMPLEMENTATION AND TESTING OF STATIONARITY DIAGNOSTICS

Implementation of the Information Theory Methodology

In order to evaluate the ability of the KPSS methodology to diagnose stationarity, it was chosen to look at results not of the KPSS test alone, but also alongside results of another method for comparison. The information theory (entropy) approach was chosen for comparison due to its fairly developed nature at the time this work was undertaken. As reviewed earlier in Chapter 1, the sandwich approach is not useful enough, and the LDA approach is basically a less developed version of the KPSS test. The intergeneration correlation length method should also provide a good comparison. As a result, the information theory diagnostics have been implemented into the KENO V.a code. These diagnostics require a source meshing scheme; in KENO V.a they were coded into the logic of the already present fission matrix methodology, to utilize already existing coding to provide the meshing required. Two different meshing strategies were made available to the user. In both strategies, the entropy approach is utilized only if a KENO matrix keff calculation is run, and the bin structure chosen is that of the matrix keff calculation. The two strategies correspond to source bins created from the "matrix by array" or "matrix by unit" KENO V.a input options. If matrix by array is chosen, the source bin structure is the array structure for the matrix keff calculation; otherwise it is a bin structure by unit for a matrix by unit calculation. Consult the SCALE5 manual for more detailed information about matrix keff calculations (SCALE, 2005).

An output table is given corresponding to entropy test criteria 1 and 2, giving the generation number, entropy, relative entropy, and the left hand side of the inequality in Eq. (3-11). A second table gives msq and cmsq from entropy criterion 1 for different numbers of skipped cycles (corresponding to more skipped cycles than input) for informative purposes. Following this table is a summary table identifying the final result of each entropy test criterion.


Implementation of the COM

Determination of the source COM and all associated components has been implemented into the KENO V.a code. Output currently consists of a data table by generation for $x_g$, $y_g$, $z_g$ and the $\overline{COM}$ series. Note also that the skewness and kurtosis of the entire COM series are calculated for qualitative analysis.

Implementation of the KPSS Methodology

The generalized KPSS test statistic has been implemented into the KENO V.a code, and this test is automatically performed for $\overline{COM}$ and its components $x_g$, $y_g$ and $z_g$. During this study a procedure was developed in which the KPSS test is performed not only on the COM series, but also on its components, utilizing a combined approach: if any two of these series fail at the 5% significance level, or any single series fails at the 1% significance level, it is concluded that the source is not stationary. This procedure was necessary because original results confirmed the test to be slightly oversized (yielding a higher Type I error rate in many situations) for problems with a high dominance ratio and strong autocorrelation, in which the source behaves like an autoregressive process. Note that this was not observed in other problems, for which tests for independence and normality of the COM could be used as alternatives to the KPSS test. If the COM series is independent and drawn from a random normal distribution, the source should be stationary (but if these tests fail, this does not mean the source is not stationary). Output of KENO V.a has been modified such that the $\hat{w}$ statistic is given for $\overline{COM}$ and its components. This analysis is done without the need for any source meshing.
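The combined decision rule reduces to a few lines; the sketch below (an assumed illustration) applies it to the four ŵ statistics using the H_μ critical values from Table 3-2.

```python
# Combined stationarity decision: nonstationary if two or more of the
# four series (x, y, z, COM) fail at 5%, or any single series fails at 1%.
CRIT_5PCT, CRIT_1PCT = 0.460, 0.754          # H_mu values, Table 3-2

def nonstationary(w_stats):
    fails_5 = sum(w > CRIT_5PCT for w in w_stats)
    fails_1 = sum(w > CRIT_1PCT for w in w_stats)
    return fails_5 >= 2 or fails_1 >= 1

# e.g., nonstationary([0.492, 0.232, 0.202, 0.209]) -> False (warning only)
```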


Final Table Output Additions in KENO V.a

The final output table has been modified in KENO V.a to reflect the results of the stationarity tests. Any failure results in a printed message indicating as much. Also, the number of meshes utilized for the entropy tests is listed here.

Implementations into 1Dcrit

Implementations into 1Dcrit of all of the previously discussed methodologies have also been performed. Output is in the same format as with the KENO V.a implementations (except for the spatial dimension). The meshing structure for the source entropy analysis is different, however, and is not written into the logic of the fission matrix methodology. Instead, a virtual mesh structure is provided via user input.

Benchmark Problems Introduction and Analysis

Several benchmark problems are analyzed in this section, where the efficacy of the KPSS methodology is compared to that of the informatics approach. Many of these are problems with known source convergence issues. They range from simple geometric configurations to a large spent fuel storage facility, and cover both undersampling and loosely coupled scenarios as well as high DRs.

Stationarity Benchmark Problem 1

The first problem is a simple benchmark problem which is expected to have no source convergence issues (given standard acceptable code inputs). This problem is sample problem 1 from the SCALE 5 KENO V.a manual. It is a critical 2 x 2 x 2 array of uranium cylinders in void (SCALE, 2005; Thomas, 1973) and is shown schematically with pertinent material and geometric data in Figure 4-1.


Figure 4-1. Schematic of the benchmark 1 problem

Benchmark Problem 1 Analysis

Benchmark problem 1, a critical 2 x 2 x 2 array of uranium metal cylinders, is analyzed here. Fifty replications (i.e., the same case with different seeds) of 250 total generations, with 1000 particles per generation and 50 skipped generations, were undertaken. Six of the 50 replications are flagged with the KPSS testing procedure, while none are flagged as potential problems with the entropy diagnostics. Fifty more replications were then performed with 20000 particles per generation, twenty times the original particles per generation. Knowing that the source intensity per cylinder should be equivalent from symmetry, this equates to 2500 source particles per cylinder, whereas each cylinder is 1117.37 cm³, which implies that if the source were evenly distributed there would be just over two source neutrons per cubic centimeter. Even with this large number of source particles, eight replications are flagged as possible KPSS failures, with none for the entropy diagnostics. A summary of these results is given in Table 4-1, while for completeness the keff values obtained are given in Figure 4-2 for all benchmark problem 1 cases.


Table 4-1. Problem 1 stationarity diagnostic summary
Test | Histories per Generation | Number Pass
KPSS | 1000 | 44
KPSS | 20000 | 42
Entropy Test 1 | 1000 | n/a*
Entropy Test 1 | 20000 | n/a*
Entropy Test 2 | 1000 | 50
Entropy Test 2 | 20000 | 50
Entropy Test 3 | 1000 | 50
Entropy Test 3 | 20000 | 50
*Test not applicable for a uniform source such as this problem

Figure 4-2. Problem 1 keff results (first 50 values represent 1000 histories per generation, last 50 values represent 20000 histories per generation)

These results led to several questions. Are the KPSS failures Type I error? Could the source series really be not properly converged? If so, what is the cause of this nonconvergence? In order to narrow down the answers, the results needed to be looked at more closely.


In Figure 4-3, the source fractions for the 1st and 5th cylinders are shown, divided by their own average source value beyond the 50 skipped generations, for one of the failed replications (replication 1) of benchmark problem 1 with 1000 histories per generation.

Figure 4-3. Benchmark problem 1 source analysis showing the average source fraction in cylinders 1 and 5, normalized to their average beyond 50 skipped cycles, for replication 1 (failed replication) with 1000 histories per generation

Figure 4-3 shows that the source fractions depart significantly from their averages during the simulation and show some strong trends. Figure 4-4 is a similar figure for one of the failed replications (replication 9) of the 20000 histories per generation replications.


Figure 4-4. Benchmark problem 1 source analysis showing the source fraction in cylinders 6 and 8, normalized to their average beyond 50 skipped cycles, for replication 9 (failed replication) with 20000 histories per generation

From Figure 4-4, similar behavior is observed as with Figure 4-3, with source fractions significantly departing from their average behavior and some very strong trending. There is a significant reduction in the overall magnitude of oscillation, however. This is the subtlety of the KPSS test that must be pointed out: its aim is to determine a nonstationary series, but even if the departure from stationarity is insignificant on a physical level, it may still be statistically relevant. To illustrate this point, a purely Gaussian white noise series with theoretical mean 1.0 and theoretical standard deviation 0.1 was sampled for 500 data points. This series was then modified to include a very small deterministic trend with slope m = 0.0001. A plot of the pure Gaussian and trend-modified Gaussian series is shown in Figure 4-5.


Figure 4-5. Illustration of a white Gaussian series with mean 1.0 and standard deviation 0.1, with and without a small deterministic trend (m = 0.0001)

Figure 4-5 shows that, to the eye, the series are nearly identical, yet if these series are scrutinized with the generalized KPSS test for level stationarity, the test statistic value for the Gaussian white noise series is 0.131 and for the modified series is 0.5145. The first case easily passes the significance testing criteria, yet the second fails at 5%, and nearly at 2.5%, significance with only a very small trend. This is expected, yet if this same series were the keff series, it would look fine to the eye. Notwithstanding, if the overall scale of the series were mapped to a much smaller range, something much smaller than normal keff deviations about its average, say a Gaussian white noise series with mean 1.0 and standard deviation 0.0001 with a linear deterministic trend of m = 0.0000001, the expected result of the KPSS test would be identical to that before, even though the physical significance of changes so small in the eigenvalue should be inconsequential. Figure 4-6 illustrates this graphically; the same test statistic value as before was obtained for the case with a deterministic trend.
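This experiment is easy to reproduce with the kpss_level sketch given in Chapter 3 (assuming that function is in scope); the exact statistic values depend on the random number seed, so the numbers will not match 0.131 and 0.5145 exactly.

```python
import numpy as np

# White Gaussian series (mean 1.0, sigma 0.1, 500 points) with and
# without a small deterministic trend of slope 0.0001, scored with the
# kpss_level sketch from Chapter 3.
rng = np.random.default_rng(0)
noise = rng.normal(1.0, 0.1, 500)
trended = noise + 0.0001 * np.arange(500)

print(kpss_level(noise))    # typically well below the 5% value of 0.460
print(kpss_level(trended))  # typically near or above 0.460
# Rescaling the series (sigma 1e-4, slope 1e-7) leaves both statistics
# unchanged, since the trend-to-noise ratio is what matters.
```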


Figure 4-6. Illustration of a white Gaussian series with mean 1.0 and standard deviation 0.0001, with and without a small deterministic trend (m = 0.0000001)

Figure 4-6 shows clearly that although the scale has changed dramatically, the result of the KPSS testing is the same, since the same trend exists on a different scale. This will apply no matter how small a scale is obtained, because the change of the series relative to its standard deviation is constant, so the deterministic trend remains significant when compared to the variance. In practice, this may result in KPSS tests identifying nonstationary behavior where the physical differences are not significant. In this case, the analyst must observe the data and make a decision as to the reason for the KPSS failure. It should also be noted that even for those simulations where the series is stationary, the error estimates may not be correct when strong correlation is present, as stationarity only guarantees a well established mean and variance, not that the variance can be calculated as if no correlation were present.


This analysis shows that although the KPSS methodology may indicate nonstationary behavior, this may or may not imply a true nonstationarity issue in practice. Certainly, given the significance testing procedure, Type I error will be present; however, some of these nonstationary series may not be physically nonstationary enough to warrant any alarm. Looking back to the nonstationary diagnoses associated with Figure 4-3 and Figure 4-4, it appears that these series may indeed be nonstationary; however, the physical source data show only a minimal issue, which may not be much of a real problem. Another cause of such diagnoses can be that, for an autocorrelated series, trending can occur, for which a longer sample will be necessary in order to get a representative sample of the population; not having a large enough sample may result in high Type I error. The key point is that the methodologies implemented provide the user with all information regarding the source behavior: the KPSS results and the ability to plot the COM and its components, along with the skewness and kurtosis values of the source COM. In addition, through the entropy diagnostic implementation, the meshed source distribution is available to look at when any issues arise, as well as the entropy and relative entropy series. It should be noted that the KPSS test does have difficulties diagnosing series with strong alternating autocorrelation exhibiting seasonality (periodic fluctuations) (Hobijn et al., 2004).

Stationarity Benchmark Problem 2

The second benchmark problem is broken up into several variations, all based on the OECD/NEA source convergence benchmark 4 problem, an array of interacting spheres. The problem can be characterized by its loose coupling among source regions; in its original form, the most reactive region may even lose its source distribution when simulated with standard Monte Carlo codes.


In this dissertation, both a simplified model with an array of 3 x 3 x 1 highly enriched uranium metal spheres and the full geometry of 5 x 5 x 1 spheres are investigated, with and without vacuum boundary conditions (Wenner and Haghighat, 2008). Note that the true OECD/NEA benchmark 4 is the variation with 5 x 5 x 1 spheres with vacuum boundaries. In the simplified (3 x 3 x 1) model, all spheres are of radius 10 cm, while in the full problem all spheres are of radius 8.71 cm except the central sphere, which is 10 cm. The simplified problem is shown schematically in Figure 4-7 and the full problem in Figure 4-8, while the material compositions are listed in Table 4-2.

Figure 4-7. Schematic of the simplified benchmark 2 problem


Figure 4-8. Schematic of the full benchmark 2 problem

Table 4-2. Material properties for benchmark problem 2
Material | Composition (atoms/barn-cm)
U-238 (U metal) | 4.549E-2
U-235 (U metal) | 2.560E-3
N (in air) | 4.3250E-5
O (in air) | 1.0810E-5

Modified 3 x 3 Array of Spheres Analysis

Two cases of modified benchmark 2 are considered: i) case 1 with vacuum boundary conditions; ii) case 2 with reflective boundary conditions. For these cases, 50 replications are performed with a uniform initial source guess, 250 source generations, 5000 neutrons per generation and 50 skipped generations. Table 4-3 gives the overall comparison of the two cases over the 50 replications for the generalized KPSS test, the entropy tests, and their combination.


Table 4-3. Comparison of the number of replications with correct predictions for the entropy tests and generalized KPSS test
Test (Stationarity) | Case 1 Correct Diagnoses | Case 2 Correct Diagnoses
Entropy test 1 | 14 | 0*
Entropy test 2 | 39 | 50
Entropy test 3 | 0 | 50
Generalized KPSS | 46 | 46
Combined | 49 | 46
*Not applicable for a flat source distribution

The combined result given in Table 4-3 is achieved by applying a nonstationary diagnosis whenever either diagnostic approach fails. To illustrate just how significant an error can be made with the entropy diagnostics, the COM series is plotted in Figure 4-9 for replication one of both cases; replication one of case 1 passed all the stationarity diagnostics.

Figure 4-9. COM series plot for replication 1 for case 1 (vacuum boundary condition) and case 2 (reflective boundary condition) of the modified version of benchmark 2


Figure 4-9 shows that for case 1 the departure from stationarity is extreme. This can easily be proven, since both cases are symmetric and the expected distance of the average source from the geometric center of the model should be zero in both cases. This nicely agrees with case 2; however, case 1 is not only varying in a strange manner, it has significantly departed from zero, indicating that the source behavior is not stationary and has clearly not settled. As an aside, a single comparative MCNP5 (X-5 Monte Carlo Team, 2003) calculation with the same input parameters was made for both cases. MCNP5 results recommend at least 44 skipped generations for case 1; however, a tally of the fission source indicates that the source has not converged to a symmetric distribution as expected. This implies that the entropy stationarity diagnostics have failed. Note that the currently released version of MCNP5 utilizes a more basic test than the entropy tests given here: upon completion of the problem, MCNP will compute the average value of the entropy for the last half of the active generations, as well as the sample standard deviation of the entropy, and report the first generation found where the entropy falls within one standard deviation of its average for the last half of the generations, recommending at least that many generations be skipped.

Analysis of Misdiagnosed Case 1 Replication

For case 1, the combined test has one false positive, which is not correct. This false positive corresponds to replication 21. To look into this result, the individual COM component behavior associated with the source is shown as the $x_g$, $y_g$, $z_g$ and $\overline{COM}$ series. Note that 50 cycles were skipped and the testing procedure disregards these generations.


Figure 4-10. x, y, z and COM series for replication 21 for case 1 (vacuum boundary condition) of benchmark problem 2. A) x series, B) y series, C) z series, D) COM series

Visual inspection of Figure 4-10 indicates that the replication appears to be nonstationary; however, a visual inspection is not always correct or easy to make. It may be that if enough cycles were undertaken, the source would indeed be stationary with a high degree of autocorrelation. The $\hat{w}$ values for the $x_g$, $y_g$, $z_g$ and $\overline{COM}$ series are 0.492, 0.232, 0.202 and 0.209, respectively. Comparing with the values in Table 3-2, the $x_g$ series fails the KPSS test at 5% significance, but the chosen nonstationary diagnosis in this work requires two series to fail at 5% significance or one series at 1% significance. A warning is still given: data tables are available for plotting the source data, the $\hat{w}$ values are printed, and if any series fails at 5% a warning is issued. The reasons why the KPSS test may not be accurate here are many. First, the statistical nature of the test implies that some Type I error will be made, yet the data here suggest this case should easily fail on visual inspection.


Accordingly, it has been shown that identifying nonstationary behavior with this approach can be difficult for series with alternating correlation exhibiting seasonality (Hobijn et al., 2004). In addition, utilizing only 200 active generations may not be enough to provide the diagnosis definitively. In fact, the series in question may indeed be stationary, yet have a very high autocorrelation; significant tally bias will still be evident.

Analysis of a Misdiagnosed Case 2 Replication

For case 2, the combined test predicted four false positives. The behavior of the source COM gives an insight into the overall source variation; as a result, source data for one of the four false positives is shown in Figure 4-11.

Figure 4-11. $x_g$, $y_g$, $z_g$ and COM series plot for replication 29 for case 2 (reflective boundary condition). A) x series, B) y series, C) z series, D) COM series


Figure 4-11 shows that the source is not fluctuating too unreasonably, although at some points some small trending is evident through otherwise fairly random behavior. A strength of the KPSS methodology is in identifying such trends. The $\hat{w}$ values for the $x_g$, $y_g$, $z_g$ and $\overline{COM}$ series are 0.162, 0.639, 0.334 and 0.529, which implies that the $y_g$ and COM series fail the KPSS test. This analysis does show that replication 29 may be acceptable, but it would be the call of the analyst to check for further problems. Having this source information on hand visually allows the analyst to check any source issues that may arise, and then a judgment can be made as to whether the diagnosis of nonstationary behavior is serious or not.

Full 5 x 5 Array of Spheres Analysis

Two cases of the full benchmark 2 are considered, just as in the modified case: i) case 1 with vacuum boundary conditions; ii) case 2 with reflective boundary conditions. For these cases, fifty replications are performed with a uniform initial source guess, 250 source generations, 15000 neutrons per generation and 50 skipped generations. More source particles per generation are utilized since the system is significantly larger; in fact, 15000 may not be enough. Table 4-4 gives the overall comparison of the two cases over the 50 replications for the generalized KPSS test, the entropy tests, and their combination. In this benchmark, the central sphere is larger than the rest, causing a very large percentage of the source population to reside in the central sphere.

Table 4-4. Comparison of the number of stationary diagnoses for the full benchmark 2 problem for the information entropy tests and generalized KPSS test
Test (Stationarity) | Case 1 Stationary Diagnoses | Case 2 Stationary Diagnoses
Entropy test 1 | 50 | 1*
Entropy test 2 | 0 | 50
Entropy test 3 | 50 | 50
Generalized KPSS | 31 | 44
*Not applicable for a flat source distribution


Up to this point, it has been evident that the most reliable information theory based diagnostic is entropy test 2. It is then important to look at the source behavior for a replication with a discrepancy in diagnosis for case 1 (vacuum BC) of the full benchmark problem 2. As a result, Figure 4-12 shows a plot of the source parameters ($x_g$, $y_g$, $z_g$ and $\overline{COM}$ series) for one of these replications (replication 1).

Figure 4-12. $x_g$, $y_g$, $z_g$ and COM series plot for replication 1 for case 1 of the full benchmark 2 problem (vacuum boundary condition)

Figure 4-12 confirms that the full benchmark 2 problem physics is significantly different. Here, due to the central sphere influence, the source population (COM) never reaches 1.5 cm from the center of the problem; however, some strong seasonal behavior is seen. This behavior does pose some problems for the KPSS methodology when the autocorrelation coefficient would continuously alternate sign.


Nevertheless, a look at the $\hat{w}$ values for the $x_g$, $y_g$, $z_g$ and $\overline{COM}$ series (0.411, 0.068, 0.414 and 0.174, respectively) shows that although a failure is not given, the values are close to a failure at 5% significance. Also note that entropy test 2 is performed for every active generation, and only 21 generations failed the test. It appears that this simulation is near a converged state, but undersampling is still distorting the source behavior.

Figure 4-13 shows the $x_g$, $y_g$, $z_g$ and $\overline{COM}$ series for replication 5 of the full benchmark 2 problem for case 2 (reflective boundary conditions). This replication was identified as possibly nonstationary with the KPSS methodology.

Figure 4-13. $x_g$, $y_g$, $z_g$ and COM series plot for replication 5 for case 2 (reflective boundary condition). A) x series, B) y series, C) z series, D) COM series


Figure 4-13 shows that, for the most part, the data support no major problem. The series that failed the KPSS test are the $x_g$ and $z_g$ series. The $x_g$ series appears visually to have some small trend in the first and second halves, but it appears insignificant overall, as the overall variability is quite small. The $z_g$ series has a maximum variability of less than 0.25 cm and is not really very important in these simulations, since the lattice of spheres does not extend in the z direction; therefore, the possibility of a problem with the $z_g$ series is unlikely. Statistically, the KPSS test can be correctly identifying issues from undersampling that are hard to see visually, or the issues may be physically insignificant. Note that an autocorrelation analysis may also shed more light on whether a source convergence issue is present.

Stationarity Benchmark Problem 3

The third benchmark problem utilized is a real world type problem: OECD/NEA Source Convergence Benchmark 1, Checkerboard Storage of Assemblies. The model comprises a 24 x 3 array LWR fuel storage rack with fuel assemblies stored in alternate locations. The fuel assemblies are ~5.0% enriched by weight and are located within fully water flooded steel storage racks, surrounded by close fitting full concrete reflection on three sides, with water on the remaining side and water on the top and bottom. The fuel assemblies are formed from a 15 x 15 lattice of Zr-clad UO2 pins (Blomquist and Nouri, 2002). The geometry is given in detail in Figure 4-14, while the material specification is tabulated in Table 4-5. This problem has been studied extensively, and it is known that the most reactive source region is the upper left assembly in Figure 4-14 (assembly 49) due to the superior reflection near that area. For clarity, a numbering system has been utilized in Figure 4-14 for the 24 x 3 lattice, starting at 1 at the bottom left. This makes the three leftmost lattice positions numbered 1, 25 and 49, starting at the bottom and moving to the top.


Figure 4-14. Schematic of the benchmark 3 problem (all dimensions in cm; 15 x 15 lattice, water moderated, centrally located; pitch 1.4; fuel radius 0.44; water channels have the same exterior dimensions as fuel channels; cut view through the assemblies closest to the water reflected side)


Table 4-5. Material properties for benchmark problem 3, checkerboard storage of assemblies (atoms/barn-cm)
Fuel | U-238 2.2380E-02, O 4.6054E-02, U-235 8.2213E-04
Water | H 6.6706E-02, O 3.3353E-02
Iron | Fe 8.3770E-02
Concrete | H 5.5437E-03, C 6.9793E-03, Si 7.7106E-03, Ca 8.9591E-03, O 4.3383E-02
Zirconium | Zr 4.2910E-02

After extensive analysis in Source Convergence in Criticality Safety Analyses, Phase I: Results for Four Test Problems (Blomquist et al., 2006), it was suggested that with 5000 histories per generation at least 700 cycles are needed to reach a fully converged source. In order to investigate this, 50 replications of the problem were simulated with 10000 total cycles, 10000 histories per generation and 1000 skipped cycles. From the results of these simulations, the KPSS test indicates that the source may not be converged for all 50 cases. Interestingly, this is also true of the entropy tests as implemented (note that this wouldn't be the case using the entropy diagnostic currently utilized in the MCNP code), as the second of the three entropy tests fails for all cases, while tests 1 and 3 pass for all 50 cases. Figure 4-15 shows the $\overline{COM}$ series for replication 1 with 10000 histories per generation. From Figure 4-15 it is apparent that skipping 700 cycles is not adequate, as the source is still settling. It appears that skipping 2000 cycles is more appropriate.


Figure 4-15. Benchmark problem 3 COM plot for replication 1 with 10000 histories per generation

Since it appears that skipping 2000 cycles will definitely be adequate, 50 replications were performed with 2000 skipped generations, 10000 histories per generation and 5000 total generations to save calculation time, as simulation time can be quite lengthy for 50 replications with such large numbers of particles per generation and total generations. Results of these calculations yield 50 stationarity failures, exactly as before. Undersampling is suspected to be at the heart of the failures, as there is no doubt from the diagnostic results that the source is not converged properly. Two more calculations were then performed with increasing numbers of histories per generation, the first with 100000 histories per generation with 5000 skipped and 5000 active generations. Five thousand generations were skipped to be absolutely certain that an adequate number of cycles were skipped and that the skipped cycles are not adversely affecting convergence.


The second calculation is a very extreme trial, with 1 million histories per generation, 2000 skipped generations and 10000 total generations. Even with 100000 histories per generation, the KPSS test and entropy test 2 still failed, although entropy test 2 only failed for 277 cycles out of a total of 5000 possible cycles. Figure 4-16 shows the $\overline{COM}$ series for replication 1 with 10000 histories per generation, and the $\overline{COM}$ series for the single replication with 100000 histories per generation. Clearly, with 5000 skipped cycles for the 100000 histories per generation case, not skipping enough cycles is ruled out.

Figure 4-16. Benchmark problem 3 COM plot for replication 1 with 10000 histories per generation and the single replication case of 100000 histories per generation


To investigate the undersampling behavior further, Figure 4-17 shows the source fraction for assembly numbers 23 (farthest from the most reactive assembly in the bottom row), 21, 19, 17, 15, 13, 11 and 9.

Figure 4-17. Source fraction plot of problem 3 for assemblies 9, 11, 13, 15, 17, 19, 21 and 23 with 100000 histories per generation

From Figure 4-17 several things can be seen. Most striking is the fact that all of the assemblies, with the exception of assembly 9, have a significant number of cycles for which there are no source neutrons in that assembly, with several of the assemblies lacking source for nearly the entire calculation. This is due to their large distance from the most reactive area. It can also be seen that the source fraction in many assemblies oscillates due to some periodic influx of source from other assemblies. Clearly, undersampling is still occurring in these assemblies (and others not shown) due to most of the relative source strength being located in just a few assemblies.


Stratified source sampling may improve this problem, as may some of the other techniques previously mentioned in Chapter 1; however, the problem most likely cannot be eliminated, since there are simply not enough particles to properly describe the source. In the case with 1 million histories per generation, this time the entropy tests passed and no indication of nonstationary behavior is given, while, interestingly, the KPSS test still fails. A summary of the stationarity results for problem 3 is given in Table 4-6.

Table 4-6. Problem 3 stationarity diagnostic summary (number of replications passing)
Case | KPSS | Entropy Test 1 | Entropy Test 2 | Entropy Test 3 | Total Reps
1000 skip, 10000 hist/gen | 0 | 50 | 0 | 50 | 50
2000 skip, 10000 hist/gen | 0 | 50 | 0 | 50 | 50
5000 skip, 100000 hist/gen | 0 | 1 | 0 | 1 | 1
2000 skip, 1000000 hist/gen | 0 | 1 | 1 | 1 | 1

Figure 4-18 shows the results of the source fractions for assemblies 9, 11, 13 and 15. Recall that these assemblies were also included in Figure 4-17 with 100000 histories per generation, ten times fewer than the data used to generate Figure 4-18. The trend of reduced source per source mesh continues with higher assembly numbers, as they get increasingly farther from the most reactive source elements.


Figure 4-18. Source fraction plots of benchmark problem 3 with 1 million histories per generation. A) Assembly 9, B) Assembly 11, C) Assembly 13, D) Assembly 15. (Each panel plots source fraction versus cycle number for cycles 3000 through 10000.)


From Figure 4-18, looking at the source fractions for the different assemblies (9, 11, 13 and 15), what is happening becomes evident. Although assembly 9 now has a fairly stable source fraction, as does assembly 11, assembly 13 has many significant fluctuations, while assembly 15 has even more fluctuations and many generations without any source. It is safe to say that assemblies 17, 19, 21 and 23 are progressively worse. Although the addition of more particles stabilized more assembly locations, there are still many which are not stabilized, and undersampling is still an issue. These assemblies may have a negligible impact on the final keff, but localized tallies in these regions will be erroneous. keff for this case was 0.8870735 ± 0.0000089. A comprehensive plot of the keff values obtained for benchmark problem 3 for all replications is shown in Figure 4-19.

Figure 4-19. keff results for all benchmark problem 3 cases (replications with 1000 and 2000 skipped cycles at 10000 histories per generation, the 100000 histories per generation case, and the 1 million histories per generation case)


From Figure 4-19, it is clearly seen that the keff value from the 1 million histories per generation case is statistically different from several of the other replications at 10000 histories per generation, yet the results are not far off.

Conclusions of Stationarity Analyses

A new stationarity diagnostic for use with Monte Carlo eigenvalue problems, known as the generalized KPSS test, has been developed for stationarity detection in Monte Carlo eigenvalue problems. It has been compared to information theory based diagnostics for several benchmark problems. The first problem is a simple critical 2 x 2 x 2 lattice of uranium cylinders. Results of these simulations suggest that the KPSS test is comparable to the entropy testing procedure, and appears in some cases to improve diagnostic capability. Next, comparisons were done for the OECD/NEA Source Convergence Benchmark 4: Array of Interacting Spheres. Several variations were also simulated, considering only 9 spheres and using reflective boundary conditions. Again, results of these simulations showed the KPSS test to be comparable to, and at times better than, the information theory based diagnostics. Lastly, comparisons were made for the so-called OECD/NEA Source Convergence Benchmark 1: Checkerboard Storage of Assemblies Model. The model comprises a 24x3 array LWR fuel storage rack with fuel assemblies stored in alternate locations. Results of the simulations have shown that past researchers may have underestimated the total number of skipped cycles required for the warm-up period. Beyond this, both the KPSS test and entropy test 2 have shown that, even for up to 100000 histories per generation, the source behavior differs from a stationary one. Through thorough analysis of the actual source distribution per generation, it was shown that undersampling is the key issue. Finally, a case with a single replication of 1 million histories per generation was undertaken, with 2000 skipped and 10000 total cycles. The KPSS
test still correctly identifies nonstationary behavior, while the entropy tests do not. Thorough analysis of the source distribution itself shows that undersampling is still an issue, although to a lesser extent than before. Since the generalized KPSS test is applied to the COM series, a parameter highly sensitive to the source distribution, it is a very good indicator for or against source stationarity for many problems.

As expected, there is no foolproof diagnostic method. As shown here, comparing the effectiveness of the KPSS methodology to that of the implemented information theory diagnostics, it is clear that entropy test 2 and the KPSS test are the most robust by far; however, neither is perfect. Therefore, it is recommended that a complementary approach incorporating all of these testing procedures be used. These tests need not be exclusive. Other tests are mentioned in Chapter 1 and have not been studied extensively here, but having more tests available should only increase reliability. With these diagnostic tools available, an analyst has an abundance of information with which to determine whether a particular simulation should be scrutinized more carefully. Note that currently only the experimental version of KENOV.a and the 1Dcrit code include the test procedures utilized here (both the KPSS and information theoretic diagnostics). Lastly, note that a misdiagnosis of stationarity may result in only a minimal error in keff, but the error may still be significant; moreover, all localized tally information will be significantly less reliable. Since the KPSS test can be applied to any time series, it may be useful to apply it locally to individual source regions to ensure each region has a stationary source distribution, or perhaps the user can be given the flexibility to test a particular source region.
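For readers who wish to experiment with the idea, a standard KPSS test can be applied to a COM series with off-the-shelf tools (the generalized version implemented in this work includes further modifications). The following Python sketch is illustrative only and uses the statsmodels implementation; com_series is assumed to hold the generation-wise COM values after the skipped cycles.

    import numpy as np
    from statsmodels.tsa.stattools import kpss

    def com_is_stationary(com_series, alpha=0.05):
        """Level-stationarity KPSS test on a source COM series.
        Small p-values reject the null hypothesis of stationarity."""
        stat, p_value, n_lags, crit = kpss(np.asarray(com_series),
                                           regression="c", nlags="auto")
        return p_value > alpha  # True -> no evidence of nonstationarity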


CHAPTER 5
DEVELOPMENT AND IMPLEMENTATION OF NEW FISSION SOURCE ACCELERATION METHODOLOGIES

This chapter reviews and introduces methodologies that are devised to accelerate the solution of Monte Carlo eigenvalue problems. The methodologies reviewed are not exhaustive, but are necessary for the work involved in this dissertation. Fission matrix methods are discussed, as well as methods based on importance sampling, such as the extension of the Consistent Adjoint Driven Importance Sampling (CADIS) methodology to eigenvalue problems.

Extension of CADIS Methodology to Eigenvalue Problems

The CADIS methodology was developed and implemented by Wagner and Haghighat (Wagner and Haghighat, 1998). The goal of the CADIS methodology is to apply the concept of importance sampling in a consistent manner for use in a Monte Carlo algorithm. It accomplishes this goal by utilizing importance sampling concepts to provide source biasing, and also to apply transport biasing through the weight window technique. The major points are reviewed here.

Review of the CADIS Methodology

The review of the CADIS methodology starts with the essential concept of integration via Monte Carlo methods and how importance sampling can increase the integration efficiency. After that, source and transport biasing procedures are derived utilizing these concepts.

Integration by Monte Carlo

Utilizing the definition of expected values, it is easy to see how integrals can be estimated with the Monte Carlo method. Suppose the integral of a function g(x) is desired over the range [a,b]:

$I = \int_a^b g(x)\,dx$   (5-1)

Recall from Eq. (2-30), repeated here, that the definition of the expected value of g(X) is
$E[g(X)] = \int_a^b g(x)\,p(x)\,dx$   (5-2)

Then, given the random variable X drawn from p(x), the estimator is written

$G = \frac{1}{N}\sum_{i=1}^{N} g(X_i)$   (5-3)

Utilizing expectation values, it is shown that

$E[G] = E\left[\frac{1}{N}\sum_{i=1}^{N} g(X_i)\right]$   (5-4)

$E[G] = \frac{1}{N}\sum_{i=1}^{N} E[g(X_i)] = \frac{1}{N}\,N\,E[g(X)]$   (5-5)

$E[G] = E[g(X)]$   (5-6)

$E[G] = \int_a^b g(x)\,p(x)\,dx$   (5-7)

Since the integral sought is $I = \int_a^b g(x)\,dx$, p(x) is chosen to be a constant A such that

$\int_a^b A\,dx = 1$   (5-8)

then,

$A = \frac{1}{b-a}$   (5-9)

and substituting into Eq. (5-7) yields

$E[G] = \frac{1}{b-a}\int_a^b g(x)\,dx = \frac{1}{b-a}\,I$   (5-10)

Rearranging yields

$I = (b-a)\,E[G] \approx (b-a)\,\frac{1}{N}\sum_{i=1}^{N} g(X_i)$   (5-11)
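As a concrete illustration of the estimator in Eq. (5-11), the following Python sketch (illustrative only, not part of the codes described in this work) integrates an arbitrary g(x) by uniform sampling; the reported standard error shrinks as $1/\sqrt{N}$.

    import numpy as np

    def mc_integrate(g, a, b, n, seed=42):
        """Brute-force Monte Carlo integration of g over [a, b], Eq. (5-11)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(a, b, n)             # X_i ~ p(x) = 1/(b-a)
        vals = g(x)
        estimate = (b - a) * vals.mean()     # (b-a) * (1/N) * sum g(X_i)
        std_err = (b - a) * vals.std(ddof=1) / np.sqrt(n)
        return estimate, std_err

    # Example: the integral of x^2 over [0, 2] is 8/3 = 2.6667
    print(mc_integrate(lambda x: x**2, 0.0, 2.0, 100_000))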


The results of the previous derivations yield an estimator for the integral of a function g(x). It can be shown that this estimator converges to the actual integral at a rate of $O(1/\sqrt{N})$, and it converges due to the law of large numbers. This is the typical brute-force method of integration for Monte Carlo methods and can be extended to multiple dimensions. Recall that the $X_i$'s are sampled from a uniform distribution (p(x) was chosen to be constant). Also, this derivation assumed a flat p(x) over the range of the problem [a,b], yet typically random numbers are generated over the range [0,1]. If desired, a suitable mapping can be done to recast the integral over the range [0,1], in which case p(x) = 1 and (b-a) = 1.

Importance sampling

If one wishes to increase accuracy and speed up the convergence of E[G] to the desired integral, it is possible to utilize the concept of importance sampling. Importance sampling is choosing a good distribution from which to simulate one's random variables. By multiplying the integrand by 1 (= h(x)/h(x)), the integral is unchanged, but the sampling distribution can be modified to yield an expectation of a quantity that varies less than the original integrand over the region of integration. Recall again that

$E[g(X)] = \int_a^b g(x)\,p(x)\,dx$   (5-12)

Multiplying by (h(x)/h(x)) results in

$E[g(X)] = \int_a^b \frac{g(x)\,p(x)}{h(x)}\,h(x)\,dx = E[g^*(X)]$   (5-13)

since

$g^*(x) = \frac{g(x)\,p(x)}{h(x)}$   (5-14)


The two integrals are identical, implying identical expectation values; however, the sampling distribution has been modified. Given the random variable X drawn from h(x), the final integral is determined using the estimator

$G^* = \frac{1}{N}\sum_{i=1}^{N} g^*(X_i) = \frac{1}{N}\sum_{i=1}^{N} \frac{g(X_i)\,p(X_i)}{h(X_i)}$   (5-15)

It turns out that the closer h(x) is to g(x) in proportion (it must be a pdf), the smaller the variance, with the result of zero variance for $h(x) = c\,g(x)\,p(x)$, where c is a constant. This can be seen by substituting for h(x) in g*(x) as follows:

$E[g^*(X)] = \int_a^b \frac{g(x)\,p(x)}{c\,g(x)\,p(x)}\,h(x)\,dx$   (5-16)

Rearranging yields

$E[g^*(X)] = \frac{1}{c}\int_a^b h(x)\,dx = \frac{1}{c}$   (5-17)

Utilizing the original expectation, this shows that

$\frac{1}{c} = \int_a^b g(x)\,p(x)\,dx = E[g(X)]$   (5-18)

This implies that to have a zero variance solution, the original integral involving g(x) must already be computable, in which case there would be no need for Monte Carlo integration. The usefulness of this result is that the closer the chosen sampling pdf h(x) is to the integrand, the lower the variance and the faster the calculation of the integral will be.

Source biasing

Recall that if a detector cross section is utilized for the adjoint source, then the detector response can be written as

$R = \int q(P)\,\psi^\dagger(P)\,dP$   (5-19)

Utilizing importance sampling, the detector response can then be written as
$R = \int \frac{q(P)\,\psi^\dagger(P)}{\hat{q}(P)}\,\hat{q}(P)\,dP$   (5-20)

where $\hat{q}(P) > 0$ and $\int \hat{q}(P)\,dP = 1$. According to the previous section, a zero variance solution will be arrived at if

$\hat{q}(P) = \frac{q(P)\,\psi^\dagger(P)}{R}$   (5-21)

Utilizing the following equation to conserve the total number of particles, the biased source distribution from Eq. (5-21) can be applied during a Monte Carlo simulation as

$w(P) = w_0\,\frac{q(P)}{\hat{q}(P)}$   (5-22)

Substituting Eq. (5-21) into Eq. (5-22), given that $w_0 = 1$, yields

$w(P) = \frac{R}{\psi^\dagger(P)}$   (5-23)

Of course, since the true values of R and $\psi^\dagger$ are not known, their ratio is obtained approximately (often via deterministic methods). The better the approximation, the more effective the source biasing.

Transport biasing

Again, it is assumed that the adjoint source is set to a detector cross section of interest. Neglecting the external source, the transport equation can be written in integral form, similar to Eq. (2-14), rewritten again here as

$\psi(P) = \int K(P' \to P)\,\psi(P')\,dP'$   (5-24)

where $K(P' \to P)$ is the expected number of particles emerging at $\vec{r}$ within $d\vec{r}$, E within dE, with direction about $\hat{\Omega}$, due to an event at $P'$. Multiplying this equation by $\psi^\dagger(P)$ and replacing $d\vec{r}\,dE\,d\hat{\Omega}$ with dP for simplicity, Eq. (5-24) becomes
$\hat{\psi}(P) = \int K(P' \to P)\,\psi^\dagger(P)\,\psi(P')\,dP'$   (5-25)

where

$\hat{\psi}(P) = \psi(P)\,\psi^\dagger(P)$   (5-26)

Multiplying by $\left(\psi^\dagger(P')/\psi^\dagger(P')\right)$ results in

$\hat{\psi}(P) = \int K(P' \to P)\,\frac{\psi^\dagger(P)}{\psi^\dagger(P')}\,\psi^\dagger(P')\,\psi(P')\,dP'$   (5-27)

which is reduced to

$\hat{\psi}(P) = \int K(P' \to P)\,\frac{\psi^\dagger(P)}{\psi^\dagger(P')}\,\hat{\psi}(P')\,dP'$   (5-28)

or

$\hat{\psi}(P) = \int \hat{K}(P' \to P)\,\hat{\psi}(P')\,dP'$   (5-29)

where

$\hat{K}(P' \to P) = K(P' \to P)\,\frac{\psi^\dagger(P)}{\psi^\dagger(P')}$   (5-30)

Practically speaking, $\hat{K}(P' \to P)$ is not known. As a result, particle transport is performed in an analog manner, but when an event occurs at $P'$ (changing energy, position or direction), particles emerging from this event are biased according to $\psi^\dagger(P)/\psi^\dagger(P')$. This will then increase the number of particles in higher importance areas and lower the number of particles in lower importance regions. In order to account for this change in particle population, particle weight must be adjusted for particle conservation according to Eq. (5-31):

$w(P) = w(P')\,\frac{\psi^\dagger(P')}{\psi^\dagger(P)}$   (5-31)
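As an illustration only (not taken from the codes developed in this work), the following Python sketch shows one way the kernel biasing of Eq. (5-30) and the weight correction of Eq. (5-31) can be realized: particles emerging from an event are split or rouletted in proportion to the importance ratio, while their weights are divided by the same ratio so that total weight is preserved in expectation.

    import numpy as np

    rng = np.random.default_rng(1)

    def bias_emerging_particles(weight, psi_dag_old, psi_dag_new):
        """Bias particles emerging from an event at importance psi_dag_old
        toward importance psi_dag_new (Eqs. 5-30 and 5-31, sketch only)."""
        ratio = psi_dag_new / psi_dag_old           # expected number of copies
        w_new = weight * psi_dag_old / psi_dag_new  # Eq. (5-31) weight correction
        n = int(ratio)
        if rng.random() < ratio - n:                # sample the fractional part
            n += 1
        # expected emerging weight: ratio * w_new = weight (conserved)
        return [w_new] * n                          # empty list -> rouletted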


This methodology is utilized within the framework of the weight window technique and is known as Consistent Adjoint Driven Importance Sampling (CADIS). All importance function data is obtained via simplified deterministic models.

At this point, a note about zero-variance schemes is pertinent. Instead of utilizing the weight window technique, it is possible to use a biased transport kernel such that, during the transport and collision of particles, the free flight and collision probabilities are modified directly, resulting in a nonanalog game whereby weights are then modified accordingly (as in the weight window technique). This requires the direct computation of the kernels.

Extension of CADIS methodology to eigenvalue problems

Theoretically, an extension of the CADIS algorithm to eigenvalue problems hinges directly on the choice of the adjoint source. In a shielding problem, the adjoint source is chosen as the detector cross section in the region of interest. It is then straightforward to utilize the neutron production cross section, $\nu\Sigma_f$, as the adjoint source. The importance obtained will be relative to fission neutron production in the system.

Adjoint source

In order to obtain importance to neutron production, the adjoint source in equation (2-8), rewritten here as $H^\dagger \psi^\dagger = q^\dagger$, is set to

$q^\dagger = \nu\Sigma_f$   (5-32)

It is also important to note that the importance obtained from Eq. (2-8) is different from the importance that would be obtained from the adjoint eigenvalue equation. Importance from the eigenvalue equation would take into account production from a neutron including all its progeny, while Eq. (2-8) will only include the importance of a neutron to the direct production of next generation neutrons. Given the generation to generation process of the Monte Carlo calculation,
this is adequate, since neutron importance can be utilized to reduce variance in the next generation neutron source.

Other considerations

An added difficulty with the extension of CADIS to eigenvalue problems, called here Eigenvalue CADIS (E-CADIS), is the fact that since the fission cross section is non-zero in all fissionable material, the importance obtained is to that of ALL source regions. Efficiency may be severely affected, since source particles are born in fissionable material, which is distributed throughout the model, and the need for extreme biasing to a region of interest is drastically reduced. Next, since the source is unknown in an eigenvalue problem, source biasing as applied in shielding problems is not applicable.

Normalization of the generated weight window values is also more difficult. Several different choices are possible. The simplest choice is to normalize the most important particles to the center of (or somewhere within) the weight window. This is most likely not the best approach, since another region may have less importance but more overall source. To improve this method, one should utilize the fractional importance and start the source particles with the largest estimated contribution to the response (to production) inside the weight window. Another possibility is starting all particles within the weight window. This is accomplished by renormalizing the weight window values for every source region and energy group combination. Some of these source weight window initialization strategies concerning the lower weight normalization would result in some particles being started outside of the weight window. If this is the case, a procedure can be utilized to adjust source particle weight via splitting or Russian roulette before the particles are even transported for the first time, in order to bring them within the weight window bounds.
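A minimal sketch of this pre-transport adjustment is shown below (illustrative Python, not the 1Dcrit implementation); a source particle whose weight falls outside the window [w_low, w_up] is split or rouletted to a survival weight, assumed here to be the window center, before its first flight.

    import numpy as np

    def window_source_particle(weight, w_low, w_up, rng):
        """Split/roulette one source particle into the window [w_low, w_up].
        Returns the list of starting weights (possibly empty if rouletted)."""
        w_surv = 0.5 * (w_low + w_up)   # assumed survival weight (window center)
        if weight < w_low:              # Russian roulette
            return [w_surv] if rng.random() < weight / w_surv else []
        if weight > w_up:               # splitting
            n = int(np.ceil(weight / w_surv))
            return [weight / n] * n
        return [weight]                 # already inside the window

    rng = np.random.default_rng(7)
    print(window_source_particle(3.2, 0.5, 1.5, rng))  # -> [0.8, 0.8, 0.8, 0.8]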


With that said, transport biasing can still be performed in the same manner as was done for shielding problems. Alternatively, it is possible to derive a zero variance formulation extended to the eigenvalue problem utilizing the theory just discussed. In fact, if the integral form of the Boltzmann equation for the collision densities (as opposed to the flux) is written, a separate collision kernel and transport kernel can be obtained. Instead of utilizing the weight window technique, these kernels can be biased directly (as can the transport kernel already discussed) with a corresponding modification of particle weight for conservation. These ideas have been formulated with much rigor, and have been reviewed and tested (Dufek, 2009; Lux and Koblinger, 1991). Utilizing a separated collision and transport kernel was beneficial, because it was shown that biasing the transport kernel had a negative impact (slowed down the calculation), while biasing the collision kernel saw speedup of up to 70% in some test cases (Dufek, 2009).

Modified source-detector algorithm

To overcome the issue of source neutrons being born in important regions, the eigenvalue problem can be projected as a series of fixed source problems by decomposing the spatial domain via a new source iteration scheme. The difference between the standard solution algorithm and the modified algorithm is how the particles are transported. In the modified algorithm, the spatial domain is segmented as in the fission matrix algorithm (Kadotani et al., 1991; Morton, 1956). Each source region is then methodically transported to each other source region, one at a time, ignoring the contribution to fission from every other region. This process is repeated for each source region, and the keff is calculated from a combination of all the separate calculations within the generation. This method is referred to as Modified Eigenvalue CADIS (ME-CADIS). To illustrate, consider the 1-D, monoenergetic, two region problem shown in Figure 5-1.
Figure 5-1. Source discretization with source transfer procedure for the 2-region ME-CADIS problem (transfers 1: SA to A, 2: SA to B, 3: SB to A, 4: SB to B)

Transport of particles proceeds as follows:

1) Source particles born in region A are transported, utilizing an importance function specifically generated to minimize the variance of particle transport to region A.
   a) Any fission that occurs in the region of interest, region A, is tallied, and next generation neutrons are banked from this fission process.
   b) Any fission occurring outside the region of interest (region A) is not tallied, nor are any source particles stored for use in the next generation.
2) Source particles born in region A are transported, utilizing an importance function specifically generated to minimize the variance of particle transport to region B.
   a) Any fission that occurs in the region of interest, region B, is tallied, and next generation neutrons are banked from this fission process.
   b) Any fission occurring outside the region of interest (region B) is not tallied, nor are any source particles stored for use in the next generation.
3) Repeat Steps 1 and 2 for region B.

In this case, twice the amount of particle tracking is done, which obviously increases computation time if no biasing is performed. Although twice the number of histories has been simulated, only half are simulated in the Monte Carlo sense, since the contribution to each region
was obtained using the same source particles and only the contribution to the region of interest was kept. The number of theoretically simulated histories is given by

$N_{sim} = N_{PG} \times N_{reg}$   (5-33)

where $N_{PG}$ is the number of particles per generation and $N_{reg}$ is the number of regions. Given algorithm equality elsewhere, this algorithm would be a factor of $N_{reg}$ slower.

The division of the source into separate regions and the calculation of the contribution from each region to every other is similar to simulating multiple fixed source calculations within each generation. In order to obtain region specific importance data, $N_{reg}$ approximate importance calculations must be performed deterministically, such that for each calculation the adjoint source is set to be sensitive to fission in only one source region. Transport biasing through the CADIS approach can be utilized for each Monte Carlo segmented source region, reducing the penalty of more simulated particles. For simulations with undersampling issues or loose coupling, this methodology aims to help stabilize the source population in regions with little to no source (within reason). If the flux gradient is too large, however, the method may not be effective enough to mitigate undersampling.

Fission Matrix Methods

In the fission matrix acceleration methods, the fission matrix elements from equation (2-20), repeated here, are utilized to provide an approximate solution for the fundamental mode eigenvector:

$k_{eff}\,S(\vec{r}) = \int A(\vec{r}\,' \to \vec{r})\,S(\vec{r}\,')\,d\vec{r}\,'$   (5-34)


Standard Fission Matrix Method Review

Recall from Chapter 2 that $A(\vec{r}\,' \to \vec{r})$ is approximated by its discretized elements $A_{ij}$, the expected number of next generation fission neutrons produced in mesh i per fission neutron born in mesh j, and that if the matrix elements $A_{ij}$ are zeroed after every generation, the result is the cycle-wise fission matrix. Alternatively, if the elements $A_{ij}$ are accumulated over all desired generations, the cumulative fission matrix is obtained. Once obtained, the fundamental mode eigenvalue ($k_{eff}$) and eigenvector ($s_j$) are solved for numerically. Fission source acceleration methods assume that the source distribution represented by $s_j$ converges faster than the fission source in the standard Monte Carlo eigenvalue calculation; $s_j$ is then used to correct the fission source during the simulation. Various methodologies have been proposed for doing this, with the most common approach modifying the particle weights in the source regions according to

$w_{i,j}^{(n)} = w_{i,j}^{(n)}\,\frac{(s_{fm})_j}{(s_{mc})_j}$ for all i, j   (5-35)

where w = particle weight, i = particle number, n = generation number, j = source mesh (from the fission matrix), fm ~ fission matrix, and mc ~ Monte Carlo.

All attempts at using the cycle-wise fission matrix computed by Monte Carlo have met with limited success, due to the problematic convergence (often statistical noise) of $s_j$ (Dufek, 2007; Urbatsch, 1995). Using the cumulative fission matrix can also be problematic if generations with an unconverged source are included. In addition, noise may be reduced but not eliminated when utilizing the cumulative fission matrix (Dufek, 2007; Dufek, 2009; Urbatsch, 1995). The success that has been achieved has been the reduction in the number of skipped cycles
necessary, since $s_j$ does converge close to the true solution faster than the traditional power iteration algorithm.

Combined Fission Matrix Acceleration with CADIS

Fission matrix acceleration methods can be utilized while applying E-CADIS/ME-CADIS. This methodology takes advantage of the reduced variation in the fission matrix elements when utilizing the CADIS methodologies and is nothing more than a simultaneous use of both methods. Several variations of biasing are possible, including weighted percentages of the fission matrix eigenvector (source) with the standard Monte Carlo source.

Implementation into 1Dcrit

All of the methodologies discussed in this chapter have been implemented into the 1Dcrit code. No new input information is necessary in the main input file; however, a secondary input file (bias.dat) containing the importance is needed for the E-CADIS or ME-CADIS algorithms. This file contains the number of energy groups, the spatial bin structure, and the importance information for each region and group combination. The E-CADIS and ME-CADIS executables are actually separately compiled codes, due to the memory restructuring required in the modified iteration scheme for ME-CADIS.

To benchmark the new source iteration structure utilized in the ME-CADIS algorithm, a simple 3-region, 2-energy-group problem with two fissile regions of the same material sandwiching a non-fissile region was simulated, as shown in Figure 5-2. The first region is 20 cm thick, the second region is 10 cm thick, and the third region is 15 cm thick. Vacuum boundary conditions were considered. To benchmark 1Dcrit, reference calculations were performed with the PENTRAN code and the MCNP5 code in multigroup mode. 1Dcrit was utilized with the standard and modified (ME-CADIS) solution algorithms. Note that all simulations with PENTRAN utilize the DTW differencing scheme.


Figure 5-2. Geometry for the 3-region algorithm test problem (20 cm source slab of material 1, 10 cm slab of material 2, 15 cm source slab of material 1)

For the Monte Carlo analysis, 10000 particles per generation were utilized with 20 skipped cycles and 200 total cycles. Material properties for the 2-group cross sections are given in Table 5-1, while the results of a comparative simulation for keff are given in Table 5-2.

Table 5-1. Material properties for algorithm test problem

Material  Group  Σa (cm⁻¹)  νΣf (cm⁻¹)  Σt (cm⁻¹)  χ     Σs,g→1 (cm⁻¹)  Σs,g→2 (cm⁻¹)
1         1      0.025      0.025       0.045      0.70  0.005          0.015
1         2      0.030      0.050       0.060      0.30  0.000          0.030
2         1      0.025      0.000       0.060      0.00  0.010          0.025
2         2      0.020      0.000       0.040      0.00  0.000          0.020

Table 5-2. Comparison of keff predicted by different codes for algorithm test problem

Code/Algorithm             keff      Std. Dev.
MCNP/Multigroup            1.17932   0.00085
PENTRAN (DTW)*             1.17905   n/a
1-D/Standard               1.17921   0.00048
1-D/Modified (ME-CADIS)    1.17959   0.00055
*Based on a convergence criterion of 1x10⁻⁵

Table 5-2 shows that all of the algorithms essentially achieve the same result for keff. To further support this claim, the average source distribution over the active cycles is 0.593 for region 1 and 0.412 for region 3 for the standard source algorithm, while it is 0.592 for region 1 and 0.413 for region 3 for the ME-CADIS algorithm. This provides confidence that the modified source iteration scheme and the traditional scheme are consistent. More information about the input necessary to run 1Dcrit is given in Appendix A.
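For illustration, a schematic Python sketch of one generation of the modified source iteration benchmarked above is given below; transport and load_weight_window are hypothetical stand-ins for the tracking routine and the region-pair importance data, and the keff normalization shown is only notional, since each banked particle is transported once per detector region.

    def me_cadis_generation(bank_by_region, regions, transport, load_weight_window):
        """One ME-CADIS generation (sketch): transport every source region's
        particles once per detector region, keeping only fissions born there."""
        new_bank = {det: [] for det in regions}
        started = produced = 0.0
        for src in regions:                        # region whose bank is transported
            for det in regions:                    # region whose fissions are kept
                ww = load_weight_window(src, det)  # importance tailored to det
                for p in bank_by_region[src]:
                    sites = transport(p, weight_window=ww)
                    kept = [s for s in sites if s.region == det]
                    new_bank[det].extend(kept)
                    produced += sum(s.weight for s in kept)
            started += sum(p.weight for p in bank_by_region[src])
        return new_bank, produced / started        # notional k_eff estimate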


CHAPTER 6
TESTING OF NEW SOURCE ACCELERATION METHODOLOGIES

The selected methodologies detailed in Chapter 5 are tested in this chapter. The E-CADIS and ME-CADIS techniques are evaluated for their effectiveness, and their combinations with the fission matrix acceleration method are examined. In order to benefit from E-CADIS/ME-CADIS, an approximate adjoint flux (importance) is required. This importance is obtained utilizing the PENTRAN code. Shell scripts combined with FORTRAN processing codes have been written to semi-automate the process of running PENTRAN and 1Dcrit for a single problem.

PENTRAN Importance Calculations

To obtain the E-CADIS and ME-CADIS approximate importance information for the test problems, the PENTRAN code is utilized. For the E-CADIS approach, recall that the adjoint source in Eq. (2-11), repeated here as $H^\dagger \psi^\dagger = q^\dagger$, is set to the detector cross section, which, when adapted to the eigenvalue problem, is represented by Eq. (5-32). Note that the resulting importance relating to the adjoint source defined by Eq. (5-32) is then a global importance to production from all fissile source regions. For the ME-CADIS approach, a separate PENTRAN calculation is performed for each discretized source region i, such that the adjoint source of Eq. (5-32) is modified to be $q^\dagger_i = \nu\Sigma_{f,i}$, i.e., non-zero only in region i. Each resulting importance function is then the importance to production for only the region for which the adjoint source was defined. This provides the ability to optimize region to region transfers, as discussed in Chapter 5. Note that the importance utilized is both space and energy dependent, as a separate importance is obtained for every energy group and is utilized separately inside the weight window framework. Note also that since the goal of the work in this dissertation is to determine whether or not the E-CADIS/ME-CADIS approaches are viable, the importance is calculated
with high fidelity, to ensure that little effect should be seen relating to a poorly determined importance function. Nevertheless, a well defined importance distribution on too fine a mesh can result in poor performance, because locating particles within the weight window (in both space and energy in 1Dcrit) becomes too time consuming. Note also that the total source magnitude in PENTRAN utilized for the importance cases covers all source regions utilized in the particular importance calculation, with the energy spectrum (serg in PENTRAN) normalized to 1.0.

Source Acceleration Test Problems

Test problem 1 is a simple 3-energy-group (fast, epithermal, thermal), 3-source-region problem without upscatter. As shown in Figure 6-1, this problem has 19 distinct geometry regions with 4 different materials.

Figure 6-1. Schematic of source acceleration test problem 1 (UO2 fuel pins with He gap and Zr clad in H2O moderator)

Test problem 1 was devised to specifically test the effectiveness of the CADIS-like methodologies, since the fissile material is separated by large moderator regions whereby the importance should vary significantly. Relevant geometry and material information is located in Table 6-1. Boundary conditions are an albedo of 0.5 on the left and 1.0 on the right.

Table 6-1. Material component radii for test problems 1 and 2

Region                     Radius (cm)       Material
Fuel (3 w/o UO2)           0.3922            UO2 ~3 w/o
Gap (He)                   0.40005           He
Clad (Zr)                  0.4572            Zr
Moderator (H2O)            12.4754 (pitch)   H2O
Side moderator thickness   12.1727           H2O

The second problem is a 3-group (same as the previous), 3-region slab problem shown in Figure 6-2. The first (left) region is a 20 cm slab of fuel, the center region is a 35 cm slab of moderator, and the third region is a 20 cm fuel slab. The material compositions again follow those given in Table 6-1. This problem illustrates larger source regions separated by a large moderating material. Vacuum boundaries are set on both sides.

E-CADIS Test Problem Results

The E-CADIS implementation in 1Dcrit was optimized by testing several implementation options (as well as weight window normalizations). On average, the most efficient results are achieved based on the following considerations:

1. On-the-fly weight window normalization to the average relative response
2. Use of source weight biasing before any free flight (all generations)
3. Biasing only on collision and boundary crossing (biasing on mfp is too computationally expensive)

For the first item (1), since it is easy to estimate the response of a source population to production utilizing the fission matrix, an on-the-fly relative response is utilized to adjust the
weight window boundaries, so that for a given energy group and spatial source region the response utilized to normalize the weight window is the average response from that region and group combination, not the total response. The second item (2) refers to a form of Russian roulette and splitting. Since the contribution of each source region/group to the total detector response can be estimated, this ratio is used to split or coalesce the starting source particles and set the starting weights within the windows set from step 1. This is done before the particles start their normal history. Every particle in a particular weight window mesh is modified at once accordingly (and effectively split or coalesced). For the third item, due to the time cost of calling the weight window and checking particle location every mfp or any factor thereof (greater or less than a mfp), weight window checking is only performed during a boundary crossing. This is essentially the same conclusion as seen in the literature (Dufek, 2009; Lux and Koblinger, 1991): the direct biasing of the collision kernel is too expensive. All data given in this and subsequent chapters utilize these options unless otherwise stated.

A PENTRAN importance simulation pertaining to test problem 1 was performed to initialize the weight window parameters via the importance obtained from the simulation. The importance was also used to obtain the approximate response to neutron production for initialization of generation 1 in 1Dcrit. The neutron importance for this simulation is shown in Figure 6-3. 115 equally spaced meshes were utilized with an S8 quadrature. Additionally, P3 cross sections were utilized.


Figure 6-3. Benchmark problem 1 (3 source region) global importance to production (plotted at deterministic mesh center)

Figure 6-3 shows that the neutron importance is fairly flat near the fuel material for energy groups 1 and 2 (in which particles are born) and peaked for the thermal energy group, i.e., group 3. Near the boundaries the importance declines significantly, with a lower value on the left due to the different boundary conditions on the two ends. Ten replications of benchmark problem 1 were simulated for both the standard algorithm with no CADIS biasing and the E-CADIS algorithm. All simulations were performed with 3000 particles per generation, 3350 generations with 350 skipped generations, and a keff guess of 1.0. The results of these ten replications are given in Table 6-2.


Table 6-2. keff data for test problem 1

Replication  Standard Algorithm keff  Standard Std. Error  E-CADIS keff  E-CADIS Std. Error
1            8.080380E-1              2.565E-4             8.080470E-1   2.502E-4
2            8.081437E-1              2.551E-4             8.083985E-1   2.462E-4
3            8.083118E-1              2.622E-4             8.081054E-1   2.455E-4
4            8.082848E-1              2.600E-4             8.080216E-1   2.471E-4
5            8.077993E-1              2.543E-4             8.088277E-1   2.453E-4
6            8.081518E-1              2.605E-4             8.082642E-1   2.463E-4
7            8.085063E-1              2.567E-4             8.083868E-1   2.486E-4
8            8.079978E-1              2.583E-4             8.081582E-1   2.460E-4
9            8.079605E-1              2.593E-4             8.079581E-1   2.477E-4
10           8.081535E-1              2.638E-4             8.081221E-1   2.471E-4
Average      8.081348E-1              8.181E-5             8.082290E-1   8.180E-5

From Table 6-2, it can be seen that the final average values are within statistics at the one-sigma confidence level. The average speedup over the cases was just over 1.06, as defined by Eq. (2-58), repeated here as

$\text{speedup} = \frac{\mathrm{FOM}_{biased}}{\mathrm{FOM}_{standard}}$

This is a modest speed gain, as should be expected after observing the importance obtained from PENTRAN, since a flatter importance should correlate with a reduced E-CADIS speedup. While each problem is unique and significantly different speedup may be obtained for other problems, this problem illustrates well why eigenvalue problems are difficult for acceleration methods predicated on importance sampling: particles are born in fissionable material, where they are often most important (the energy spectrum complicates the matter somewhat). While these results for keff may be accurate, as keff convergence is much simpler than source convergence in high DR problems, it is prudent to analyze the source distribution. If any other information from the simulation is desired, the source must be properly converged, as tally confidence estimates will be unbiased only if the source appears random. Shown in Figure 6-4 is the source COM for replication 1 for both the standard algorithm and the E-CADIS algorithm.


Figure 6-4. Problem 1 standard and E-CADIS source COM by generation

Figure 6-4 shows that the source appears, for the most part, to be well behaved, and the source convergence diagnostics indicate as much. To confirm that the source behavior is converged, a Shapiro-Wilk normality test yields p-values of ~0.32 and ~0.86 for the standard and E-CADIS cases, respectively. If the p-value is less than the chosen significance level (typically from 1% to 10%), then the null hypothesis is rejected (i.e., one concludes the data are not from a normally distributed population). If the p-value is greater than the chosen significance level, then one does not reject the null hypothesis that the data came from a normally distributed population. The test suggests no reason to reject the hypothesis of normally distributed data at any typical significance level.
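For reference, the test applied above is available in SciPy; a minimal sketch (illustrative only, with com_active assumed to hold the COM series over the active generations) is:

    from scipy.stats import shapiro

    # com_active: source COM values over the active generations (assumed available)
    stat, p_value = shapiro(com_active)
    if p_value < 0.05:  # a typical significance level
        print("Reject normality: the COM series is suspect; scrutinize the source.")
    else:
        print("No evidence against normality of the COM series.")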


Next, ten replications of benchmark problem 2 were simulated for both the standard algorithm with no CADIS biasing and the E-CADIS algorithm. The importance obtained via PENTRAN for this problem is shown in Figure 6-5.

Figure 6-5. Problem 2 (2 source region) global importance to production (plotted at deterministic mesh center)

The PENTRAN simulation parameters were as follows: 118 equally spaced meshes, S8 level-symmetric quadrature, and P3 cross sections. All simulations were performed with 5000 particles per generation, 5350 generations with 350 skipped generations, and a keff guess of 1.0. Given in Table 6-3 are the keff values for all replications, along with the standard error.


Table 6-3. keff data for benchmark problem 2

Replication  Standard Algorithm keff  Standard Std. Error  E-CADIS keff  E-CADIS Std. Error
1            7.165342E-1              1.543E-4             7.165885E-1   1.492E-4
2            7.163229E-1              1.507E-4             7.160510E-1   1.488E-4
3            7.162166E-1              1.506E-4             7.163476E-1   1.496E-4
4            7.161542E-1              1.517E-4             7.161822E-1   1.505E-4
5            7.159475E-1              1.571E-4             7.160875E-1   1.506E-4
6            7.165154E-1              1.546E-4             7.162615E-1   1.504E-4
7            7.162641E-1              1.533E-4             7.163767E-1   1.526E-4
8            7.161802E-1              1.538E-4             7.161852E-1   1.500E-4
9            7.163624E-1              1.531E-4             7.163218E-1   1.511E-4
10           7.161230E-1              1.546E-4             7.160740E-1   1.508E-4
Average      7.162620E-1              4.851E-5             7.162476E-1   4.755E-5

The data in Table 6-3 indicate that the results are statistically consistent, as in test problem 1. The average speedup for this problem was 1.18, showing better performance, yet still a modest gain of 18%. Again the source distribution is analyzed. This time the stationarity diagnostics are not favorable for a converged source distribution: for the standard MC algorithm replications, the entropy tests all pass, but the KPSS test shows ten failures. Shown in Figure 6-6 is the source COM for replication 1 for both the standard algorithm and the E-CADIS results.


Figure 6-6. Problem 2 standard and E-CADIS source COM by generation

It is visibly apparent that the source shows behavior typical of a high DR problem. At the very least, very strong fission source autocorrelation is observed, as can be seen visually in the strong localized trending for both the standard and the E-CADIS results. This indicates that the source is strongly correlated with and without E-CADIS, and for this particular replication the correlation appears very similar between the E-CADIS and standard algorithms. To see this further and rule out undersampling, the source fractions for the left and right source slabs are plotted in Figure 6-7 for replication 1 with E-CADIS.


Figure 6-7. Problem 2 source fractions for the left and right slab

Figure 6-7 shows that the lowest source fraction is roughly 30%, which is about 1500 particles, which should rule out undersampling. These findings indicate that any localized tally information from this simulation will have biased confidence estimates. A better solution procedure is needed for high DR problems.

ME-CADIS Test Problem Results

Similarly to the E-CADIS technique, the ME-CADIS technique in 1Dcrit was optimized by testing several implementation options (including weight window normalizations). This time a clear choice of the best path forward is not as obvious. The original intention of ME-CADIS was to provide a source-detector relationship as in the original CADIS methodology. The difficulty with this approach is that while the source-detector relationship is made, much time may be spent on reducing variance in low probability transfers. In other words, variance is
reduced for the less probable region to region transfers, yet these transfers contribute less than the most important transfers to the final keff (and eigenvector). This procedure may not be very effective in reducing the keff variance, since the contribution is not a large percentage of the final result; however, since these transfers still contribute a significant amount to the final result, the overall accuracy of the simulation may be improved if the lower probability transfers are causing a bias, as in a loosely coupled system.

Alternatively, one can proceed by trying to optimize run time utilizing a source biasing scheme whereby the estimated response from a source region and energy group combination, utilizing Eq. (5-23), sets the weight window boundaries, thereby reducing the need to simulate particles which contribute minimally to the current objective. This then reduces the amount of sampling for the unimportant source-detector configurations, nullifying the original intent of decreasing variance for these configurations. All detailed testing of ME-CADIS presented here therefore utilizes a weight window normalization scheme such that all source particles are normalized to start within the weight window.

To begin the ME-CADIS testing, ten replications of test problem 1 were simulated for both the standard algorithm with no CADIS biasing and the selected ME-CADIS algorithm. The region-wise importance is obtained via PENTRAN and is shown in Figure 6-8. The same PENTRAN meshing, quadrature, and cross section Legendre order are utilized as with E-CADIS; however, recall that multiple importance calculations are performed, one for each distinct source region.


Figure 6-8. Benchmark problem 1 (3 source region) region-wise importance to production (plotted at deterministic mesh center)

From Figure 6-8, it can be seen that region to region transfers are fairly improbable for all groups, and that a transfer from group 3 of region 2 to region 3 is the most likely to occur. All replications were performed with 3000 particles per generation, 3350 generations with 350 skipped generations, and a keff guess of 1.0. Given in Table 6-4 are the keff values for all replications, along with the standard error and the total average with its standard error.


Table 6-4. keff replication data for benchmark problem 1 comparing ME-CADIS to the standard algorithm

Replication  Standard Algorithm keff  Standard Std. Error  ME-CADIS keff  ME-CADIS Std. Error
1            8.080380E-1              2.565E-4             8.082874E-1    2.739E-4
2            8.081437E-1              2.552E-4             8.082515E-1    2.690E-4
3            8.083118E-1              2.623E-4             8.081293E-1    2.748E-4
4            8.082848E-1              2.600E-4             8.085346E-1    2.752E-4
5            8.077993E-1              2.543E-4             8.081553E-1    2.703E-4
6            8.081518E-1              2.605E-4             8.082704E-1    2.614E-4
7            8.085063E-1              2.567E-4             8.080805E-1    2.724E-4
8            8.079978E-1              2.583E-4             8.077524E-1    2.778E-4
9            8.079605E-1              2.593E-4             8.081826E-1    2.747E-4
10           8.081535E-1              2.638E-4             8.079795E-1    2.707E-4
Average      8.081348E-1              8.181E-5             8.081624E-1    8.602E-5

The data in Table 6-4 show that the results are within statistics. The average speedup was significantly negative (-4.5). The overall error achieved for keff is comparable, slightly favoring the standard methodology, meaning a slightly more precise calculation of the eigenvalue is obtained per particle simulated. This can be attributed to the fact that the particular weight window normalization and window width chosen are not optimal for this problem. The chosen scheme should, however, reduce variance in the lower valued (representing less probable) fission matrix elements. To investigate whether this implementation of ME-CADIS can be useful for more precise fission matrix element determination, the fission matrix elements are compared when using E-CADIS vs. ME-CADIS. Given in Table 6-5 are the final average fission matrix elements corresponding to each element of the 3x3 fission matrix determined from the 3 source region problem, and their associated relative errors.

Table 6-5. Fission matrix elements and standard error for benchmark problem 1

Type                     A11    A12    A13     A21    A22    A23    A31     A32    A33
E-CADIS                  0.591  0.140  0.0154  0.140  0.589  0.142  0.0156  0.144  0.625
E-CADIS Rel. Error (%)   4.35   8.53   30.12   9.74   3.17   9.12   32.05   8.26   3.68
ME-CADIS                 0.590  0.140  0.0154  0.141  0.590  0.142  0.0157  0.144  0.625
ME-CADIS Rel. Error (%)  4.34   5.99   12.59   7.25   3.64   6.51   14.27   5.96   3.73


From Table 6-5, it can be seen that the standard deviations of the off-diagonal fission matrix elements generated by the ME-CADIS algorithm are consistently lower than those generated by the E-CADIS algorithm. This shows that the fission matrix elements describing the coupling between regions are calculated with less variance. This has little impact on the calculated eigenvalue, yet may significantly impact the eigenvector accuracy. The same analysis was repeated with ten times fewer particles per generation, with the final results showing similar behavior: the lower probability transfers had greater than 90% relative error for E-CADIS, while for the ME-CADIS algorithm they were between 35% and 45%.

Ten replications of benchmark problem 2 were then simulated for both the standard algorithm with no CADIS biasing and the ME-CADIS algorithm. The region-wise importance obtained via PENTRAN is shown in Figure 6-9 on a log scale. The PENTRAN quadrature, cross section Legendre order, and mesh size are the same as with the E-CADIS replications.

Figure 6-9. Benchmark problem 2 (2 source region) region-wise importance to production (plotted at deterministic mesh center)


From Figure 6-9, it is easy to see that the coupling between regions is minimal. Given in Table 6-6 are the keff values for all replications, along with the standard error.

Table 6-6. keff replication data for benchmark problem 2 comparing ME-CADIS to the standard algorithm

Replication  Standard Algorithm keff  Standard Std. Error  ME-CADIS keff  ME-CADIS Std. Error
1            7.165342E-1              1.543E-4             7.161984E-1    1.180E-4
2            7.163229E-1              1.507E-4             7.161952E-1    1.188E-4
3            7.162166E-1              1.507E-4             7.162365E-1    1.177E-4
4            7.161542E-1              1.517E-4             7.160200E-1    1.194E-4
5            7.159475E-1              1.571E-4             7.160991E-1    1.165E-4
6            7.165154E-1              1.546E-4             7.163848E-1    1.178E-4
7            7.162641E-1              1.533E-4             7.166461E-1    1.203E-4
8            7.161802E-1              1.538E-4             7.162801E-1    1.196E-4
9            7.163624E-1              1.531E-4             7.162927E-1    1.200E-4
10           7.161230E-1              1.546E-4             7.164100E-1    1.154E-4
Average      7.162620E-1              4.851E-5             7.162763E-1    3.744E-5

The average speedup was significantly negative (-16.1). The overall error achieved for keff for this set of replications favors the ME-CADIS methodology, meaning a more precise calculation of the eigenvalue is obtained per particle via ME-CADIS; however, each particle takes significantly longer to simulate. Table 6-7 gives the fission matrix elements corresponding to the 2x2 fission matrix determined from the two source region problem.

Table 6-7. Fission matrix elements and standard error for benchmark problem 2

Type                 A11        A12        A21        A22
E-CADIS              7.132E-01  3.485E-03  3.475E-03  7.129E-01
E-CADIS Rel. Error   1.59%      48.81%     49.04%     1.62%
ME-CADIS             7.126E-01  3.469E-03  3.476E-03  7.129E-01
ME-CADIS Rel. Error  1.65%      10.98%     11.16%     1.65%


139 Note that as in the E CADIS technique the ME CADIS simulations also failed the KPSS diagnostic Although the fission matrix element s were calculated with less variance, it seems that this was not enough to yield a conve r ged source. Shown in Figure 610 is the fission matrix element A12 for both E CADIS and ME CADIS. Figure 6 10. Benchmark problem 2 f ission m atrix e lement A12Figure for both E CADIS and ME CADIS 610 shows visually the reduction in the variance of the A12 610 fission matrix element Figure also shows that while this problem has exhibited strong source correlation indicative of high DR problems, it appears that the fission matrix element A12 does not appear correlated An autocorrelation plot of the A12611 fission matrix element for the E CADIS approach is given in Figure


Figure 6-11. Benchmark problem 2 fission matrix element A12 E-CADIS autocorrelation plot

Figure 6-11 confirms that the A12 fission matrix element for the E-CADIS approach is independent, due to the very small lag 1 autocorrelation (as well as higher lags). Other fission matrix coefficients should also be investigated and will be studied in more detail in Chapter 7. An autocorrelation plot of the A12 fission matrix element for the ME-CADIS technique is given in Figure 6-12.


Figure 6-12. Benchmark problem 2 fission matrix element A12 ME-CADIS autocorrelation plot

Similarly to the E-CADIS technique, Figure 6-12 confirms that the A12 fission matrix element for the ME-CADIS approach is independent, due to the very small lag 1 autocorrelation (as well as higher lags). A Shapiro-Wilk normality test indicates that the data may not come from a normal distribution. This means that the fission matrix elements are independent but not normal.

Since the fission matrix elements are not correlated, it should be possible to utilize the fission matrix method to obtain reliable estimates of all derived parameters associated with the converged fission matrix eigenvector (source) for high DR problems. Chapter 7 investigates this finding in much more detail and utilizes it in conjunction with a recently proposed fission matrix based Monte Carlo calculation (FMBMC) (Dufek, 2009).
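The lag-k autocorrelation estimates behind plots such as Figures 6-11 and 6-12 can be computed directly from the generation-wise element series; a small illustrative Python sketch (with a12 assumed to hold the cycle-wise A12 estimates) is:

    import numpy as np

    def autocorr(x, max_lag=50):
        """Sample autocorrelation of series x at lags 1..max_lag."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        denom = np.dot(x, x)
        return np.array([np.dot(x[:-k], x[k:]) / denom
                         for k in range(1, max_lag + 1)])

    # a12: cycle-wise estimates of fission matrix element A12 (assumed available)
    rho = autocorr(a12)
    band = 1.96 / np.sqrt(len(a12))  # approx. 95% band for an uncorrelated series
    print("lag-1 rho =", rho[0], "independent?", abs(rho[0]) < band)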


E-CADIS/ME-CADIS Final Remarks

Testing of different options (source biasing schemes with and without updated response normalization) was done for the same two benchmark problems. For benchmark problem 1 (3 source regions), ten replications were simulated for the standard algorithm and each of the different implemented ME-CADIS variations. The best performance in terms of speedup was between 2 and 3 times slower than the standard method. For test problem 2 (2 source regions), nearly the same FOM as the standard calculation was obtained, but still lower. This is the reason for the chosen ME-CADIS weight window implementation, since the use of ME-CADIS for acceleration in the traditional sense is ineffective, as discussed earlier. However, by applying ME-CADIS to normalize all starting particles to be within the weight window for each source location/energy region to each detector location/energy region (effectively meaning each source-detector relationship), all transfers will be nearly as accurate as possible for the given configuration. This may allow a more precise and accurate simulation of the fission matrix elements, since those elements whose probability is lower, yet still important, will be determined much more accurately. The final result may be a more robust and reliable solution. If speedup is desired and a biased result is unlikely, then E-CADIS should be chosen, while if bias is a concern, ME-CADIS is the better choice.

While the ME-CADIS algorithm was chosen to obtain the best possible accuracy, normalizing the weight window to include all starting particles will result in weight windows with extremely low values when particles are started in a region of low local importance. More studies should be done to obtain a more efficient methodology which can still take into account the significant importance increase between source and detector locations, such as the use of a hard lower weight limit, or a moderate increase in the starting weight window values. This will result in a less accurate result, yet the speed gain may prove extraordinary.


Fission Matrix Biasing Method with E-CADIS/ME-CADIS

In the interest of brevity, only some remarks are given for the combination of the E-CADIS/ME-CADIS techniques with traditional fission matrix biasing. While the source variations were reduced significantly, to the point of the source appearing much more random, stationarity and autocorrelation tests confirmed that the source is not converged, and bias was observed in the final keff and source. Several variants of biasing were also attempted, utilizing weighted percentages of the fission matrix eigenvector (source) with the standard Monte Carlo source, to no avail. Lastly, source correlation was reduced but not eliminated, and it appears that utilizing the fission matrix acceleration methods as a source feedback mechanism alone is not adequate for solving high DR and other difficult source convergence problems, yet it should be more thoroughly examined for other problems.


CHAPTER 7
DEVELOPMENT OF FMBMC FOR HIGH DOMINANCE RATIO PROBLEMS

Fission Matrix Based Monte Carlo Method

The benchmark problem 2 results in Chapter 6 indicated poor source convergence (at the very least, high DR behavior). In high DR problems, the source distribution will be highly correlated with itself, resulting in non-random source behavior. This chapter investigates a fission matrix based approach for high DR problems. Recently, a complete fission matrix based Monte Carlo calculation was proposed (Dufek, 2007; Dufek, 2009; Dufek and Gudowski, 2009). In this approach, only the result of a final converged fission matrix over all generations is utilized to calculate all desired information. It is shown that the spatial dependence of the calculated fission matrix elements is only an artifact of the discretization of the original eigenvalue equation. In the limit of very small meshes, the source distribution should be irrelevant to the calculation of the fission matrix elements, in which case no skipped cycles are necessary and all particles can contribute to the final result. In fact, the traditional power method solution procedure isn't required, and particles can be sampled randomly throughout the phase space if desired. Practically speaking, however, the mesh size required may make this approach intractable. Instead, this chapter will investigate the use of an initialized source distribution and coarser meshing to the level of information desired, and apply this methodology to high DR problems. Since, from Chapter 6, the fission matrix element autocorrelation appeared statistically insignificant, coarser meshing may be adequate and the source dependence may be weak with fission matrix mesh size. This issue will be examined here.
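As a sketch of the bookkeeping this approach requires (illustrative Python, not the 1Dcrit implementation), a cumulative fission matrix can be accumulated over all generations as the ratio of fission production tallied in mesh i to the source weight started in mesh j:

    import numpy as np

    class CumulativeFissionMatrix:
        """Accumulate A_ij = fission neutrons produced in mesh i per source
        neutron started in mesh j, over all generations (sketch only)."""

        def __init__(self, n_mesh):
            self.prod = np.zeros((n_mesh, n_mesh))  # produced weight, (i, j)
            self.src = np.zeros(n_mesh)             # started source weight, by j

        def start_particle(self, j_src, w):
            self.src[j_src] += w

        def tally_fission(self, j_src, i_prod, w):
            self.prod[i_prod, j_src] += w

        def matrix(self):
            safe = np.where(self.src > 0.0, self.src, 1.0)  # avoid divide-by-zero
            return self.prod / safe                          # column-wise division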


Test Problems for High Dominance Ratio Problems

Three test problems are studied in this chapter. The first test problem is a simple monoenergetic single slab problem with vacuum boundaries in a slightly supercritical configuration. It was chosen since it has been studied before and was determined to have a dominance ratio of ~0.991 (Urbatsch, 1995). The reference keff for this problem was given as 1.02082. The geometry and material properties are shown in Figure 7-1.

Figure 7-1. Geometry and material data for test problem 1 (60 cm slab; Σt = 1.0, Σs = 0.7, νΣf = 0.3071; DR ~ 0.991)

The second test problem is a simple infinitely long single fuel pincell. This problem is very similar in the physical sense to test problem 1 (a 1-D slab representation). A problem of this type was studied extensively recently, due to the observation of odd source behavior, typical of high DR problems, as the simulated length of the pincell increases (Dumonteil and Courau, 2008; L'Abbate et al., 2007). Utilizing reflective boundaries on the ends of a pincell, if all source material is identical, a completely flat source distribution along the length of the fuel pin is expected. Contrary to this expectation, however, as the simulated length (top and bottom boundaries) is increased, the usual convergence behavior breaks down (Dumonteil and Courau, 2008; L'Abbate et al., 2007). The simulated model utilizes homogenized one group cross sections with Σt = 1.13,
Σs = 0.85 and νΣf = 0.395, and isotropic scattering. The problem is simulated with two different lengths of L = 10 cm and L = 100 cm. Mathematically, for problem 2, the concept of the dominance ratio may not apply, due to the fact that there is only one solution, a flat source solution, meaning the DR does not exist. If one considers 1-group diffusion theory, however, a formulation for the DR can be obtained for a 1-D slab with vacuum boundaries. If the length of the slab approaches infinity, the DR tends to 1.0 asymptotically, regardless of the material properties (although the details of this asymptotic behavior do change); yet if reflective boundaries are applied, again the DR is not meaningful. This problem is analyzed due to the observed behavior only, and its possible mitigation with FMBMC.

The third and last test problem for the FMBMC method is test problem 2 from Chapter 6. This problem has 3 energy groups and 3 geometric regions, with the center region a moderating material and two identical source regions on either side.

Fission Matrix Based Monte Carlo Method (FMBMC) Solution Procedure

As discussed earlier, the 1Dcrit code includes the fission matrix algorithm. The fission matrix is calculated every generation, and a cumulative fission matrix is accumulated. In addition to the fission matrix method, a source initialization option is available utilizing the converged flux from a converged PENTRAN eigenvalue solution. Deterministic importance is also optionally obtained for use with ME-CADIS/E-CADIS while utilizing the FMBMC implementation, if desired. Processing codes were written to convert the PENTRAN eigenvalue calculation output to a source particle distribution for 1Dcrit to read as source input for the first generation, and also to convert PENTRAN importance distributions to a format usable by 1Dcrit. These different options are illustrated in Figure 7-2.


Figure 7-2. 1Dcrit input flow paths for E-CADIS/ME-CADIS and source initialization (dashed lines represent optional input flow paths). Flowchart nodes: E-CADIS or PENTRAN eigenvalue simulation; PENTRAN fixed-source input (ME-CADIS or E-CADIS); script (PENDATA plus source processing code); driver script for E-CADIS or multiple E-CADIS; PENTRAN importance; post-processed bias.dat file for 1Dcrit; 1dcrit.src file containing source particle information; 1Dcrit input file; 1Dcrit (standard, or standard and fission matrix)

The final results of the FMBMC method are obtained by numerically solving for the first eigenvalue and eigenvector of the final cumulative fission matrix within 1Dcrit, utilizing the power method. The power method as implemented in 1Dcrit may not be the most efficient solver, but was used for simplicity. The second eigenvalue and eigenvector are also calculated, including an estimate of the DR for the discretized fission matrix. Note that the dominance ratio estimated for the discretized matrix will differ from the true DR of the problem.
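A minimal sketch of this numerical step follows: the power method applied to a tallied fission matrix, with a simple deflation step to estimate the second eigenpair and hence the DR of the discretized matrix. This is illustrative only (1Dcrit is not written in Python), and the deflation uses the fundamental mode as an approximate left eigenvector, which is adequate only for a rough DR estimate:

```python
import numpy as np

def power_method(a, tol=1e-10, max_iter=100000):
    """Dominant eigenvalue/eigenvector of the fission matrix by power iteration."""
    n = a.shape[0]
    s = np.full(n, 1.0 / n)          # flat initial source guess, sum-normalized
    k_old = 0.0
    for _ in range(max_iter):
        s_new = a @ s
        k = s_new.sum()              # eigenvalue estimate for a sum-normalized source
        s = s_new / k
        if abs(k - k_old) < tol:
            break
        k_old = k
    return k, s

def first_two_modes(a):
    """Estimate (k1, s1), (k2, s2) and the discretized-matrix DR = k2/k1."""
    k1, s1 = power_method(a)
    # Deflate the fundamental mode; s1 stands in for the left eigenvector,
    # which is exact only for symmetric matrices (rough DR estimate).
    a_defl = a - k1 * np.outer(s1, s1 / (s1 @ s1))
    k2, s2 = power_method(a_defl)
    return (k1, s1), (k2, s2), k2 / k1
```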


Figure 7-3 shows this procedure within the framework of 1Dcrit at the end of a source generation.

Figure 7-3. Standard and fission matrix keff determination flowchart. Flowchart steps: simulate generation histories; calculate keff (total particles generated divided by total particles started); call fission matrix subroutine; update cumulative fission matrix; solve numerically for the fission matrix first eigenvalue and eigenvector, as well as the second eigenvalue and eigenvector

Confidence Estimation with FMBMC

Although it is easy to calculate the resulting keff and source eigenvector from the final converged fission matrix, determination of the final confidence on these values is not as straightforward. The variance of each fission matrix element is stored in 1Dcrit; however, rigorous propagation of the error through the numerical eigenvalue and eigenvector solver cannot currently be calculated (Dufek and Gudowski, 2009). Instead, the use of the plus and minus fission matrices was proposed to give bounding values, where the plus fission matrix corresponds to adding the variance estimate to each element of the fission matrix, and the minus fission matrix to subtracting it.
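The element-wise variance bookkeeping mentioned above can be sketched as follows; a running (Welford) update is one numerically stable way to tabulate the generation-to-generation mean and sample variance of each element. The class and attribute names are illustrative, not 1Dcrit's:

```python
import numpy as np

class FissionMatrixTally:
    """Accumulates per-generation fission matrices into running element-wise
    means and sample variances (Welford's online update)."""
    def __init__(self, n_mesh):
        self.n = 0
        self.mean = np.zeros((n_mesh, n_mesh))
        self._m2 = np.zeros((n_mesh, n_mesh))   # sums of squared deviations

    def update(self, a_gen):
        """a_gen: fission matrix tallied over one active generation."""
        self.n += 1
        delta = a_gen - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (a_gen - self.mean)

    @property
    def variance(self):
        """Element-wise sample variance over the accumulated generations
        (meaningful as an error estimate only if generations are independent)."""
        return self._m2 / (self.n - 1)
```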


The plus and minus fission matrices can be expressed in equation form as

    A^{+}_{ij} = \bar{A}_{ij} + s_{ij}, \qquad A^{-}_{ij} = \bar{A}_{ij} - s_{ij}, \qquad \forall\, i, j,     (7-1)

where \bar{A}_{ij} is the mean of fission matrix element (i, j) over the active generations and s_{ij} is its standard deviation estimate. The resultant eigenvalues and eigenvectors of the plus and minus fission matrices then form a very conservative confidence bound. In Fission Matrix Based Monte Carlo Calculations (Dufek and Gudowski, 2009), Dufek rigorously derives a formulation for the variance of the fission matrix elements. In this work, however, an estimate of the fission matrix element variance is obtained by tabulating the generation-to-generation sample variance for the cumulative fission matrix. Note this is only accurate if the fission matrix element distributions are independent.

Although a rigorously derived confidence bound cannot currently be obtained, an alternate confidence estimation procedure can be devised, since the process itself can be simulated utilizing the sample mean and sample standard deviation of the fission matrix elements. In this procedure, each fission matrix element is sampled utilizing the average and standard deviation of that element, assuming that the elements are independent and normally distributed. The eigenvalue and eigenvector are then calculated for the fission matrix generated from the sampling procedure. This procedure is repeated 10000 times. If the distribution being sampled is not very skewed, a Monte Carlo iterated confidence interval can be derived, with the ends of the interval following a Beta Type I distribution (Buckland, 1984). As a benchmark calculation, in an example where sampling was undertaken from a symmetric distribution with known mean and variance, the approximate 1σ and 2σ confidence intervals were found to actually be 67.2% to 69% and 94.6% to 95.4%, with 95% probability, for a random sample of 10000 trials. This procedure is utilized throughout the remainder of this dissertation and is referred to as the Monte Carlo iterated confidence interval.
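A minimal sketch of this resampling procedure is given below, together with the conservative plus/minus bound of Equation (7-1) for comparison. It assumes element-wise independence and normality as stated in the text and uses 10000 trials; the function names are illustrative, and this is not the 1Dcrit implementation:

```python
import numpy as np

def dominant_k(a):
    """Fundamental (largest real) eigenvalue of one fission matrix realization."""
    return np.max(np.linalg.eigvals(a).real)

def mc_iterated_interval(a_mean, a_std, n_trials=10000, conf=0.95):
    """Monte Carlo iterated confidence interval on keff: sample every element
    independently from N(mean, std) and take empirical percentiles of the
    resulting eigenvalue distribution."""
    rng = np.random.default_rng()
    ks = np.array([dominant_k(rng.normal(a_mean, a_std))
                   for _ in range(n_trials)])
    tail = 100.0 * (1.0 - conf) / 2.0
    return np.percentile(ks, [tail, 100.0 - tail])

def plus_minus_bound(a_mean, a_std):
    """Conservative bounding eigenvalues from the plus/minus matrices, Eq. (7-1)."""
    return dominant_k(a_mean - a_std), dominant_k(a_mean + a_std)
```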


FMBMC Test Problem Results

Test problem 1 analysis

Importance related to neutron production was calculated with PENTRAN for test problem 1. P3 cross sections were utilized with S8 level-symmetric quadrature and a spatial discretization of 60 equally spaced meshes. The resulting importance to production is shown in Figure 7-4.

Figure 7-4. Importance to production for FMBMC test problem 1 (importance to production versus x, 0 to 60 cm)

From Figure 7-4, it is apparent that the importance to production for this problem is very flat, except near the problem boundary. With this in mind, E-CADIS will most likely be ineffective at increasing speedup, with nearly all particles being of the same importance. The FMBMC method was performed with E-CADIS for a 5 fission matrix region (12 cm source mesh) case and a 12 fission matrix region (5 cm source mesh) case. In total, ten replications of test problem 1 were simulated for the standard algorithm and E-CADIS concurrently, with 1, 5 and 12


fission matrix source regions. Note that E-CADIS and FMBMC results are obtained within the same simulation; however, the E-CADIS weight window meshing depends on the input mesh structure of the weight window information obtained from PENTRAN and is independent of the fission matrix meshing. 50000 histories per generation were chosen for all simulations for the following reasons: simulation time per history is low, undersampling is avoided, and comparison with the known keff (Urbatsch, 1995) is facilitated. Additionally, a total of 350 skipped generations with 5350 total generations, a uniformly sampled random source, and a keff guess of 1.0 were chosen. keff results are given in Table 7-1 for the standard algorithm and E-CADIS with 1 source region.

Table 7-1. keff comparison for FMBMC test problem 1 with the standard and E-CADIS* algorithm with 1 source region
Rep   Std. Alg. keff   Std. Alg. Std. Error   E-CADIS keff   E-CADIS Std. Error
1     1.0207053        2.75E-5                1.0206392      3.36E-5
2     1.0206988        2.77E-5                1.0207201      3.31E-5
3     1.0206975        2.74E-5                1.0206185      3.31E-5
4     1.0206774        2.70E-5                1.0206007      3.34E-5
5     1.0207263        2.74E-5                1.0207101      3.29E-5
6     1.0207022        2.74E-5                1.0207367      3.37E-5
7     1.0207248        2.78E-5                1.0206784      3.35E-5
8     1.0207339        2.69E-5                1.0206894      3.33E-5
9     1.0207144        2.72E-5                1.0206841      3.35E-5
10    1.0207258        2.69E-5                1.0206472      3.32E-5
*1 region E-CADIS

Data in Table 7-1 show that the standard algorithm and E-CADIS (1 source region) are consistent. Table 7-2 gives keff results for E-CADIS with 1, 5 and 12 source regions.


Table 7-2. keff comparison for FMBMC test problem 1 with the E-CADIS algorithm with 1, 5 and 12 source regions
      1 region                  5 region                  12 region
Rep   keff        Std. Error    keff        Std. Error    keff        Std. Error
1     1.0206392   3.36E-5       1.0207330   3.28E-5       1.0207007   3.35E-5
2     1.0207201   3.31E-5       1.0207060   3.42E-5       1.0207600   3.39E-5
3     1.0206185   3.31E-5       1.0207317   3.36E-5       1.0206580   3.43E-5
4     1.0206007   3.34E-5       1.0206709   3.34E-5       1.0206943   3.38E-5
5     1.0207101   3.29E-5       1.0206665   3.38E-5       1.0206538   3.40E-5
6     1.0207367   3.37E-5       1.0207057   3.35E-5       1.0207137   3.39E-5
7     1.0206784   3.35E-5       1.0207022   3.36E-5       1.0206937   3.34E-5
8     1.0206894   3.33E-5       1.0207095   3.37E-5       1.0206899   3.42E-5
9     1.0206841   3.35E-5       1.0207291   3.38E-5       1.0207499   3.30E-5
10    1.0206472   3.32E-5       1.0207309   3.37E-5       1.0206978   3.44E-5

Data in Table 7-2 show that E-CADIS results with differing numbers of fission matrix regions are indeed comparable, as expected. The error values indicate similar performance, in which case FOM analysis will be a better indicator of performance. Table 7-3 gives keff results for E-CADIS with 5 source regions and the FMBMC method with 5 source regions, for both the conservative and Monte Carlo based confidence intervals.

Table 7-3. keff comparison for FMBMC test problem 1 with the E-CADIS* algorithm (5 matrix regions) and the FMBMC algorithm with 5 fission matrix regions (conservative and Monte Carlo iterated confidence intervals)
Rep   E-CADIS keff   E-CADIS Std. Error   FMBMC keff (5 reg)   5 reg err (+, cons)   5 reg err (-, cons)   5 reg err (+, MC)   5 reg err (-, MC)
1     1.0207330      3.28E-5              1.0207378            1.3378E-4             1.3379E-4             4.012E-5            4.021E-5
2     1.0207060      3.42E-5              1.0207167            1.3462E-4             1.3463E-4             4.073E-5            4.058E-5
3     1.0207317      3.36E-5              1.0207351            1.3363E-4             1.3363E-4             3.974E-5            4.073E-5
4     1.0206709      3.34E-5              1.0206747            1.3586E-4             1.3587E-4             4.040E-5            4.040E-5
5     1.0206665      3.38E-5              1.0206751            1.3452E-4             1.3459E-4             4.168E-5            4.059E-5
6     1.0207057      3.35E-5              1.0207087            1.3728E-4             1.3729E-4             4.130E-5            4.134E-5
7     1.0207022      3.36E-5              1.0207074            1.3423E-4             1.3424E-4             4.076E-5            4.050E-5
8     1.0207095      3.37E-5              1.0207114            1.3437E-4             1.3438E-4             3.992E-5            4.034E-5
9     1.0207291      3.38E-5              1.0207283            1.3494E-4             1.3495E-4             4.092E-5            4.013E-5
10    1.0207309      3.37E-5              1.0207388            1.3300E-4             1.3301E-4             3.979E-5            3.995E-5
*5 region E-CADIS

Data in Table 7-3 show that the keff values resulting from the FMBMC method are in fact consistent with the E-CADIS results (and with the standard methodology as well).


More interesting, however, is the magnitude of the resulting confidence intervals for the FMBMC method, for both the conservative and Monte Carlo based approaches. Clearly the Monte Carlo based interval is far superior. Table 7-4 gives keff results for E-CADIS with 12 source regions and the FMBMC method with 12 source regions, for both the conservative and Monte Carlo based confidence intervals.

Table 7-4. keff comparison for FMBMC test problem 1 with the E-CADIS* algorithm (12 fission matrix regions) and the FMBMC algorithm with 12 fission matrix regions (conservative and MC confidence intervals)
Rep   E-CADIS keff   E-CADIS Std. Error   FMBMC keff (12 reg)   12 reg err (+, cons)   12 reg err (-, cons)   12 reg err (+, MC)   12 reg err (-, MC)
1     1.0207007      3.35E-5              1.0207133             2.4131E-4              2.4141E-4              4.257E-5             4.246E-5
2     1.0207600      3.39E-5              1.0207519             2.4227E-4              2.4236E-4              4.250E-5             4.303E-5
3     1.0206580      3.43E-5              1.0206534             2.4232E-4              2.4241E-4              4.299E-5             4.282E-5
4     1.0206943      3.38E-5              1.0206976             2.4270E-4              2.4279E-4              4.262E-5             4.341E-5
5     1.0206538      3.40E-5              1.0206731             2.4185E-4              2.4194E-4              4.273E-5             4.387E-5
6     1.0207137      3.39E-5              1.0207199             2.4238E-4              2.4247E-4              4.187E-5             4.343E-5
7     1.0206937      3.34E-5              1.0206955             2.4193E-4              2.4201E-4              4.316E-5             4.306E-5
8     1.0206899      3.42E-5              1.0207061             2.4124E-4              2.4133E-4              4.213E-5             4.338E-5
9     1.0207499      3.30E-5              1.0207564             2.4099E-4              2.4107E-4              4.226E-5             4.311E-5
10    1.0206978      3.44E-5              1.0207105             2.4227E-4              2.4236E-4              4.186E-5             4.248E-5
*12 region E-CADIS

Data in Table 7-4 show that the keff values resulting from the FMBMC method are again consistent with the E-CADIS results (and with the standard methodology as well). Again the Monte Carlo generated confidence interval is far superior to the conservative one; however, the increase from 5 to 12 source regions did have an impact on the interval width (it increased). Also, note that the keff results obtained from both PENTRAN and 1Dcrit were consistent, yet slightly lower than the 1.02082 reference value. PENTRAN yielded a keff of 1.020691, more consistent with the results shown in this work.

If the standard and FMBMC methods are comparable in eigenvalue determination, the motivation to utilize the FMBMC must come from elsewhere. Table 7-5 gives the final average results for speedup analysis for FMBMC test problem 1.


To calculate speedup for FMBMC, the error interval is divided by two and utilized as the relative error in Equation (2-57). All speedup is calculated relative to the completely separate code without E-CADIS. Note that the standard algorithm may be biased.

Table 7-5. Speedup (relative to the standard algorithm) for FMBMC test problem 1 with the E-CADIS and FMBMC algorithms with 1, 5 and 12 fission matrix regions
Algorithm       1 region   5 region   12 region
E-CADIS         0.48       0.40       0.34
FMBMC (Cons.)   0*         0.025      0.008
FMBMC (MC)      0*         0.27       0.21
*Not determined for 1 source region

Data in Table 7-5 demonstrate several things. First, if accurate, the standard algorithm is more effective for this problem than E-CADIS with FMBMC. This is not unexpected, given the almost constant importance: any overhead associated with the E-CADIS algorithm becomes evident as wasted time; yet 0.48 seems too low. Investigating the cause of this difference in speedup yielded two findings. First, particle information bank variables were allocated to unnecessarily large sizes, and their use within the framework of 1Dcrit caused unnecessary calculation. Second, the weight window width is slightly different than in the standard algorithm, causing some differences. Improving the memory management and adjusting the weight window values for this problem results in speedups between 0.9 and 1.0, which is more sensible given that some overhead is still expected from the weight window location searches performed at each collision. Note that all the results in this dissertation are with the unimproved memory management. Continuing the analysis of the data in Table 7-5, the Monte Carlo based confidence interval is significantly more efficient than the conservative plus and minus fission matrix based interval. Comparing the relative speedup of the Monte Carlo based confidence interval with the E-CADIS results gives a ratio of about 0.6 to 0.7.
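Assuming the conventional Monte Carlo figure of merit, FOM = 1/(R^2 T), which is taken here to correspond to Equation (2-57), the speedup entries above amount to a ratio of FOMs; a minimal sketch with illustrative names:

```python
def speedup(r_ref, t_ref, r_new, t_new):
    """Speedup of a method relative to a reference as a ratio of figures of
    merit, FOM = 1 / (R^2 * T), with R the relative error (for FMBMC, half
    the confidence interval width) and T the run time."""
    fom_ref = 1.0 / (r_ref ** 2 * t_ref)
    fom_new = 1.0 / (r_new ** 2 * t_new)
    return fom_new / fom_ref
```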


Table 7-6 gives the source convergence diagnostic results for FMBMC test problem 1.

Table 7-6. Source convergence diagnostic results for FMBMC test problem 1 with the E-CADIS and FMBMC algorithms with 1, 5 and 12 fission matrix regions
Src. Conv. Diag.        1 region   5 region   12 region
KPSS (Fail)             9          9          9
Entropy Test 1 (Fail)   0          0          0
Entropy Test 2 (Fail)   0          0          0
Entropy Test 3 (Fail)   0          0          0

Data from Table 7-6 indicate, through the KPSS diagnostic, that the source has not converged. To show the source behavior and poor convergence of this problem, the COM behavior is shown in Figure 7-5 for replication 5 (chosen at random) with E-CADIS and 5 fission matrix regions.

Figure 7-5. Replication 5 FMBMC COM behavior


Figure 7-5 clearly shows the source is not behaving as a typical converged stochastic source. To investigate the FMBMC effectiveness, the mesh based source fraction convergence obtained for the same replication (12 cm meshes) is shown in Figure 7-6 for the first source mesh.

Figure 7-6. FMBMC test problem one replication source mesh one convergence (source fraction versus generation number, with a zoomed view of the second half of the simulation)

From Figure 7-6 it can be seen that although there is significant initial variation, the source fractions begin to converge relatively quickly. Note that until the skipped generations are finished, the cumulative fission matrix is not utilized, which results in more random behavior at the beginning of the simulation. A zoomed view is also shown in Figure 7-6 for the second half of the simulation, for clarity. From generation 3500 to the simulation end, a maximum total change of 0.05% is observed. The final converged fission matrix source fractions for all replications are given in Table 7-7.


Table 7-7. FMBMC test problem 1 final converged source fractions for the 5 source region case
Replication   Mesh 1   Mesh 2   Mesh 3   Mesh 4   Mesh 5
1             0.1020   0.2484   0.3038   0.2459   0.0998
2             0.1038   0.2502   0.3018   0.2441   0.1001
3             0.1037   0.2503   0.3032   0.2438   0.0991
4             0.1007   0.2456   0.3031   0.2486   0.1020
5             0.1035   0.2495   0.3022   0.2446   0.1001
6             0.1026   0.2490   0.3025   0.2450   0.1009
7             0.1031   0.2492   0.3023   0.2449   0.1006
8             0.1003   0.2459   0.3036   0.2482   0.1021
9             0.1011   0.2462   0.3024   0.2486   0.1019
10            0.1001   0.2453   0.3030   0.2492   0.1023

Data in Table 7-7 show the consistency with which the FMBMC final converged source fractions are obtained. Maximum variations are on the order of 0.5% for any source mesh. Final combined source fractions, with the combined standard deviation over the replications, are given in Table 7-8.

Table 7-8. FMBMC test problem 1 combined converged sources with associated combined standard deviation
       FMBMC                        Standard
Mesh   Src. Fraction   Std. Dev.    Src. Fraction   Std. Dev.
1      1.02091E-1      7.26E-5      1.02270E-1      3.32E-5
2      2.47953E-1      1.03E-4      2.48083E-1      4.79E-5
3      3.02792E-1      5.94E-5      3.02654E-1      3.68E-5
4      2.46275E-1      1.03E-4      2.46114E-1      4.77E-5
5      1.00890E-1      7.29E-5      1.00880E-1      3.34E-5

Table 7-8 shows that while the source is calculated with high precision, the symmetric values are not within statistics. This indicates that both the standard method and the FMBMC method are biased, most likely because the fission matrix elements are correlated with this mesh structure. Table 7-9 shows a comparison of meshes 1 and 3 for the final converged source for the standard algorithm (average source in mesh) and FMBMC.


Table 7-9. FMBMC test problem 1 final converged source fraction comparison for source meshes 1 and 3 with the standard algorithm and FMBMC (5 total source meshes)
              Standard            FMBMC
Replication   Mesh 1   Mesh 3    Mesh 1   Mesh 3
1             0.1004   0.3029    0.1020   0.3038
2             0.1018   0.3030    0.1038   0.3018
3             0.1024   0.3033    0.1037   0.3032
4             0.1029   0.3025    0.1007   0.3031
5             0.1022   0.3030    0.1035   0.3022
6             0.1019   0.3018    0.1026   0.3025
7             0.0993   0.3027    0.1031   0.3023
8             0.1019   0.3021    0.1003   0.3036
9             0.0987   0.3032    0.1011   0.3024
10            0.1029   0.3018    0.1001   0.3030

Data in Table 7-9 indicate that for this problem, the final converged sources for the 5 source mesh case (12 cm meshes) are similar for both the standard and FMBMC methods. This may lead one to believe that the mesh based source is acceptable when utilizing the standard method. While this may be true, a confidence estimate on the source fractions should not be considered reliable with the standard method, due to source autocorrelation, or with the FMBMC method, if the fission matrix elements are not independent. Data for the 12 mesh case (5 cm meshes) are presented in Table 7-10 for source meshes 1, 6 and 12.

Table 7-10. FMBMC test problem 1 final converged source fraction comparison for source meshes 1, 6 and 12 with the standard algorithm and FMBMC (12 total source meshes)
              Standard                      FMBMC
Replication   Mesh 1   Mesh 6   Mesh 12    Mesh 1   Mesh 6   Mesh 12
1             0.0206   0.0513   0.0789     0.0211   0.0526   0.0810
2             0.0208   0.0521   0.0802     0.0205   0.0512   0.0789
3             0.0209   0.0525   0.0801     0.0209   0.0522   0.0801
4             0.0211   0.0527   0.0803     0.0205   0.0515   0.0792
5             0.0209   0.0523   0.0804     0.0207   0.0518   0.0795
6             0.0208   0.0522   0.0801     0.0206   0.0517   0.0795
7             0.0203   0.0508   0.0783     0.0209   0.0524   0.0803
8             0.0208   0.0521   0.0800     0.0208   0.0520   0.0799
9             0.0201   0.0505   0.0781     0.0207   0.0519   0.0797
10            0.0211   0.0526   0.0808     0.0209   0.0522   0.0798


Data in Table 7-10 suggest that for all of the source meshes shown, the standard methodology source fractions and FMBMC source fractions appear fairly consistent with each other. Looking back at Figure 7-5, it can be seen that although the source oscillates with significant trending for all cases, the overall magnitude is only moderately large, and for the replication shown, much of the variation averages out over the replication. Even so, autocorrelation in the source should have an effect on the accuracy of the standard deviation used for confidence estimates.

Since the E-CADIS and FMBMC results are obtained utilizing the same simulation, the source distributions obtained with each are now analyzed for the cases shown in Table 7-11. The E-CADIS methodology in these cases represents the accuracy of the standard methodology: it has already been shown that fission matrix element variation can be reduced with E-CADIS, while the source distribution itself is similar to that of the standard method.

Table 7-11. Case input parameter summary for E-CADIS and FMBMC
Cases   Source Meshes   Skipped Gens.   Neutrons per Gen.   Total Gens.
1-10    5               350             50k                 5350
11-20   12              350             50k                 5350
21-30   12              350             100k                5350
31-40   24              350             100k                5350
41-50   24              350             100k                20350

Although the purpose of this methodology is to provide accurate localized data, keff is shown for comparison for all cases in Figure 7-7. Note that several of these cases are the same replications already analyzed.


Figure 7-7. FMBMC test problem 1 keff summary for cases 1-50 (keff versus case number, E-CADIS and FMBMC)

Figure 7-7 shows that the keff values are fairly consistent. At 1σ, both the E-CADIS method and the FMBMC method yield similar results, yet the error bars for the FMBMC method are slightly wider. This could be due in part to the effect of the high dominance ratio: some small correlation in keff may be present, biasing the standard keff confidence. To continue the analysis where the source is concerned, the source distributions for cases 1, 11, and 31 are shown in Figure 7-8.


Figure 7-8. Converged source bins for cases 1, 11 and 31 (mesh source percentage versus location in x, for the 5, 12 and 24 mesh E-CADIS and FMBMC cases)

Figure 7-8 shows visually that the results appear symmetric, and therefore some measure of accuracy is expected. Other replications yield similar results. Contrasting this result, however, is the fact that the KPSS source convergence diagnostic indicates possible source convergence difficulty in 44 of the 50 replications. Recall also the source behavior seen in Figure 7-5, which is representative of all cases. In order to observe the impact of the source behavior on the tallied results, the source fraction in mesh 1 is plotted for both the E-CADIS and FMBMC methods for all cases. Cases 1 through 10 are shown in Figure 7-9.


Figure 7-9. Mesh 1 source fraction for cases 1-10

Cases 1-10 represent the coarsest source meshing, with 12 cm source meshes. Figure 7-9 shows that the final obtained source fractions from mesh 1 for the ten replications are inconsistent even at 2σ. While the overall variation in the data is small, the calculated uncertainties for both the E-CADIS and FMBMC methods are completely erroneous. Note that without such detailed analysis, current methodologies would not indicate that any problem exists, and with the small uncertainties obtained, it would be easy to improperly accept the solution as accurately converged. Cases 11-20 are shown in Figure 7-10.


Figure 7-10. Mesh 1 source fraction for cases 11-20

Cases 11-20 represent an increase from 5 to 12 source meshes compared to cases 1-10. Figure 7-10 shows that the final obtained source fractions from mesh 1 for the ten cases are again inconsistent for several cases, even at 2σ. There appears to be an improvement in the calculation of the uncertainty with the FMBMC methodology, as can be seen from the wider intervals obtained. Cases 21-30 are shown in Figure 7-11.


Figure 7-11. Mesh 1 source fraction for cases 21-30

Cases 21-30 represent an increase in particles per generation compared to cases 11-20. Figure 7-11 shows that the final obtained source fractions from mesh 1 for the ten replications are again inconsistent for the E-CADIS method. Consistency has improved even more for the FMBMC results, yet the results still appear inconsistent at 1σ. It appears, however, that the changing source meshing is having a significant impact on the FMBMC final results. Cases 31-40 are shown in Figure 7-12.


Figure 7-12. Mesh 1 source fraction for cases 31-40

Cases 31-40 represent an increase in the number of fission matrix meshes to 24. Figure 7-12 shows that the final obtained source fractions from mesh 1 for the ten replications are again inconsistent for the E-CADIS method, yet the FMBMC results now appear acceptable. This is a very significant finding. To further reduce the size of the error bars for the FMBMC results, cases 41-50 are shown in Figure 7-13, for which the total active generations were increased from 5000 to 20000.


Figure 7-13. Mesh 1 source fraction for cases 41-50

Figure 7-13 shows that the final obtained source fractions from mesh 1 for the ten replications are still inconsistent for the standard method. The FMBMC results still appear acceptable for all of the FMBMC cases, however, as was seen in Figure 7-12. Again, this is a very significant finding. Since independence of the fission matrix elements is required in order to accurately calculate the eigenvalue and eigenvector, the autocorrelation of the A12 element is shown for cases 1, 11, 21, 31 and 41 in Figure 7-14.


Figure 7-14. Autocorrelation of the A12 fission matrix element for representative cases 1, 11, 21, 31 and 41

Figure 7-14 illustrates the effect of source discretization and density on the A12 fission matrix element. Elements one off of the diagonal often have the highest correlation, which is why this element was chosen for illustration. As the mesh size is decreased, the correlation is reduced. Note that for cases 11 and 21, the same size fission matrix is utilized with different particles per generation, yet the fission matrix autocorrelation is not changed significantly. Cases 31 and 41 indicate that increasing the number of generations has little effect on the autocorrelation.
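The curves in Figure 7-14 are ordinary sample autocorrelations of the per-generation element series; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation of a generation-wise series x at lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                 # remove the series mean
    var = float(np.dot(x, x))        # lag-0 (unnormalized) variance
    return np.array([np.dot(x[:-k], x[k:]) / var
                     for k in range(1, max_lag + 1)])
```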


Recall that in order to form accurate confidence intervals, the elements need to be uncorrelated, and their average values must be representable by a normal distribution. With such large sample sizes, however, normality tests are often unreliable, as they identify insignificant deviations from the normal distribution: with a large sample size, normality tests have enough data at hand to identify very small deviations from normality. Even if the element being sampled is drawn from a normal distribution, the sample itself may at times appear slightly non-normal. Fortunately, for large sample sizes, so long as the fission matrix correlation is small, the CLT should apply and the fission matrix element average values should tend toward a normal distribution and be accurately quantified. A Shapiro-Wilk normality test was nevertheless applied to each element; it is even more informative for smaller sample sizes.

Using the final converged fission matrix, it is possible to estimate the fraction of the final keff obtained from non-independent fission matrix elements (at a chosen significance). If the lag 1 autocorrelation is too high (as compared to a standard normal curve with the same mean and standard deviation), the element is considered correlated and its contribution to the final keff is estimated by Ai,j*S (S ~ fractional source). A similar contribution from non-normal fission matrix elements can be calculated (at a chosen significance) utilizing the Shapiro-Wilk (or another) normality test. For case 1, it was determined that all of the fission matrix elements but one were not independent. This corresponded to an approximate 100% contribution from correlated elements. Similarly, the keff contribution from non-normal meshes was determined to be ~13%. Note that if a fission matrix element series is correlated, the normality test for that mesh is not valid. Comparatively, for case 41, the approximate non-independent percentage was 17% while the non-normal percentage was 26%.
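This bookkeeping can be sketched as follows, under the approximation stated above that a flagged element's share of keff is A_ij*S_j. All names are illustrative, and constant (for example all-zero) element series are left untested:

```python
import numpy as np
from scipy.stats import shapiro

def flagged_contributions(fm_series, s_vec, lag1_threshold):
    """Fraction of keff from elements flagged as non-independent (lag-1
    autocorrelation above `lag1_threshold`) and, separately, as non-normal
    (Shapiro-Wilk at 5% significance).
    fm_series: (G, N, N) element series over G active generations;
    s_vec: converged fractional source (length N)."""
    g, n, _ = fm_series.shape
    a_mean = fm_series.mean(axis=0)
    contrib = a_mean * s_vec[np.newaxis, :]     # element share ~ A_ij * S_j
    contrib /= contrib.sum()
    dev = fm_series - a_mean
    den = np.maximum((dev ** 2).sum(axis=0), 1e-300)
    lag1 = (dev[1:] * dev[:-1]).sum(axis=0) / den
    corr_frac = contrib[np.abs(lag1) > lag1_threshold].sum()
    nonnormal = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            x = fm_series[:, i, j]
            if np.ptp(x) == 0.0:        # constant series cannot be tested
                continue
            nonnormal[i, j] = shapiro(x).pvalue < 0.05
    return corr_frac, contrib[nonnormal].sum()
```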


Test problem 2 analysis

Ten replications of test problem 2 were simulated. A total of 350 skipped generations with 5350 total generations, a uniform random source, and a keff guess of 1.0 were chosen. For test problem 2, the E-CADIS algorithm was utilized with constant importance (as in the standard methodology, where all particles are of equal importance). Both the 10 cm and 100 cm reflected pincells were initially simulated with 5 source zones (meshes of 2 cm and 20 cm, respectively).

Figure 7-15. FMBMC test problem 2 keff summary (5 source meshes)

Figure 7-15 shows keff results for all cases across the 10 replications. Error bars are purposely omitted, since their inclusion inhibits the ability to distinguish between simulations. The standard error is between approximately 4e-5 and 6e-5 for all simulations except for the conservative FMBMC intervals, which are much wider (standard error ~2.6e-4). From Figure 7-15 it is observed that all keff values appear relatively consistent: most would be within one standard deviation, and all within two standard deviations. As in FMBMC problem 1, no indication of high DR behavior is observed with keff analysis alone. Note that for the E-CADIS/FMBMC final resulting keff values (10 cm and 100 cm) for the same replication, the FMBMC value appears slightly higher (<1e-5) on average.


Table 7-12 gives the final average speedup results for FMBMC test problem 2, with the KPSS and entropy source convergence information following in Table 7-13. Speedups for the FMBMC cases are calculated as before, relative to the standard methodology.

Table 7-12. Speedup* data for FMBMC test problem 2 with the E-CADIS and FMBMC algorithms (10 cm and 100 cm model length)
Algorithm       10 cm   100 cm
E-CADIS         0.59    0.57
FMBMC (Cons.)   0.024   0.041
FMBMC (MC)      0.44    0.43
*All speedup is relative to the 10 cm standard case

Data from Table 7-12 indicate that the differences in speedup between the 10 cm and 100 cm cases are marginal if not inconsequential, with the exception of the conservative confidence interval case, which is much lower. Table 7-12 also shows that the standard method is superior to the E-CADIS method modified to mimic the standard method. To investigate this source of discrepancy, the fission matrix was turned off and the same simulations performed with only E-CADIS. Similar speedup was obtained, indicating that the algorithm is not as efficient without importance data and that the FMBMC method itself has minimal impact. There is no limitation preventing FMBMC from being included in the standard algorithm, in which case this loss in efficiency would be avoided. Table 7-13 gives the source convergence diagnostic results for FMBMC test problem 2.

Table 7-13. Source convergence diagnostic results for FMBMC test problem 2 with the E-CADIS and FMBMC algorithms (10 cm and 100 cm model length)
Src. Conv. Diag.        10 cm       100 cm
KPSS (Fail)             2           10
Entropy Test 1 (Fail)   Not valid   Not valid
Entropy Test 2 (Fail)   0           0
Entropy Test 3 (Fail)   0           0


The KPSS failures in Table 7-13 for the 100 cm replications show that the simulated physics may be in a different regime than that of the 10 cm replications. A COM plot for replication 1 for both the 10 cm and 100 cm cases is shown in Figure 7-16, with the 10 cm COM renormalized (multiplied by a factor of 10) to show both data series on the same figure.

Figure 7-16. FMBMC benchmark problem 2 replication 1 COM for the 10 cm (renormalized for plot) and 100 cm cases

Figure 7-16 shows that the 10 cm replication COM behaves more randomly than the 100 cm replication. Since the source migrates during the course of the simulation toward the upper half of the fuel rod, and often significantly so, the final source fractions obtained from the standard methodology will be biased. While mathematically the DR may not have true meaning for this problem, it is the simulated length that appears to be related to the


source convergence behavior. To investigate the FMBMC effectiveness, the mesh based source fraction for replication 1, mesh 1 of the 100 cm pincell is shown in Figure 7-17.

Figure 7-17. FMBMC test problem 2 source mesh 1 convergence, 50000 particles per generation (source fraction versus generation number)

Figure 7-17 shows that the mesh 1 FMBMC source fraction appears not yet converged. In fact, the fission matrix element dependence on the source may be stronger than is acceptable for an accurate FMBMC determination at this mesh size; the analysis of the correlation of the fission matrix elements should shed some light on the overall source behavior and will be investigated. The final converged fission matrix source fractions for all replications are given in Table 7-14.


Table 7-14. FMBMC test problem 2 final converged source fractions for source meshes 1, 3 and 5 for both the 10 cm and 100 cm pincells
              10 cm                       100 cm
Replication   Mesh 1   Mesh 3   Mesh 5   Mesh 1   Mesh 3   Mesh 5
1             0.2000   0.2001   0.2000   0.1700   0.1917   0.2079
2             0.2001   0.2001   0.2000   0.1767   0.1818   0.1902
3             0.1999   0.1999   0.2000   0.2121   0.2085   0.2009
4             0.2001   0.2000   0.1999   0.1793   0.1821   0.1955
5             0.1999   0.2000   0.2000   0.1890   0.1911   0.1995
6             0.2001   0.2000   0.2000   0.2270   0.2122   0.1933
7             0.2001   0.2001   0.2001   0.2292   0.2155   0.1960
8             0.2001   0.2000   0.1999   0.1994   0.2025   0.1999
9             0.2001   0.2001   0.2001   0.2026   0.2071   0.1987
10            0.1999   0.1999   0.2001   0.1803   0.1871   0.2030

Data from Table 7-14 indicate that the 10 cm FMBMC case is well converged to the expected 20% source per source mesh. The 100 cm FMBMC case, while not as well converged, fluctuates around the true average. Longer simulation time should tighten convergence without any biasing, provided the fission matrix elements are not correlated. A comparison of the final converged source for meshes 1, 3 and 5 for the standard algorithm and the FMBMC source for the 100 cm pincell is given in Table 7-15.

Table 7-15. FMBMC test problem 2 final converged source fraction comparison for source meshes 1, 3 and 5 with the standard algorithm and FMBMC (5 total source meshes) for the 100 cm pincell
              Standard                    FMBMC
Replication   Mesh 1   Mesh 3   Mesh 5   Mesh 1   Mesh 3   Mesh 5
1             0.1647   0.1772   0.2009   0.1700   0.1917   0.2079
2             0.1938   0.1946   0.2004   0.1767   0.1818   0.1902
3             0.2196   0.2096   0.1919   0.2121   0.2085   0.2009
4             0.2201   0.2190   0.2023   0.1793   0.1821   0.1955
5             0.1838   0.1948   0.1994   0.1890   0.1911   0.1995
6             0.1933   0.1923   0.1973   0.2270   0.2122   0.1933
7             0.1857   0.1915   0.2035   0.2292   0.2155   0.1960
8             0.1994   0.1979   0.1974   0.1994   0.2025   0.1999
9             0.2080   0.2014   0.1950   0.2026   0.2071   0.1987
10            0.1959   0.1998   0.2015   0.1803   0.1871   0.2030

Data in Table 7-15 indicate similar behavior for the final converged source fractions for both the standard method and FMBMC. Neither the FMBMC nor the standard methodology should


be considered converged. Figures 7-16 and 7-17 confirm this, as the source for the standard methodology fluctuates wildly, while the FMBMC source appears somewhat unstable. To investigate dependence in the fission matrix source, an autocorrelation plot for fission matrix element A12 is shown in Figure 7-18.

Figure 7-18. FMBMC test problem 2 replication 1 A12 fission matrix element autocorrelation plot

Figure 7-18 shows that the A12 fission matrix element autocorrelation for this case is moderately high at lag 1 and decreases slowly with increasing lag. In order to investigate the cause of the correlation, which is present in varying degrees in other matrix elements as well, the effect of source meshing and particle population per generation was investigated. Table 7-16 gives the different subcases investigated, delineated by input.


Table 7-16. FMBMC test problem 2 subcase input parameters (E-CADIS, no importance)
Subcase   Fission Matrix Meshes   Replications   Skipped Generations   Histories per Generation   Total Generations
1*        5                       10             350                   50000                      5350
2         5                       1              350                   500000                     5350
3         20                      10             350                   50000                      5350
4         20                      1              350                   500000                     5350
5         50                      1              350                   50000                      5350
6         50                      1              350                   500000                     5350
7         100                     1              350                   50000                      5350
8         100                     1              350                   500000                     5350
*This subcase comprises the original case replications.

In subcase 2, an increase of particles per generation by a factor of 10 was utilized. This was done since the 100 cm pincell is 10 times larger by volume, so the source density should be equivalent to that of the original 10 cm subcase. Final source fractions for subcase 2, replication 1, with no biasing (the standard method was not run, since the E-CADIS method without biasing behaves the same as the standard method concerning source convergence), corresponding to source meshes 1 through 5, were 0.1996, 0.1994, 0.2005, 0.2013 and 0.1994. The corresponding results for the FMBMC method were 0.1996, 0.1996, 0.2008, 0.2011 and 0.1989. By simply comparing these data with the expected result, both methods appear much better converged. A COM plot of subcase 2 is shown in Figure 7-19. Additionally, to look at the cumulative behavior of the FMBMC source, the FMBMC mesh based source fraction for subcase 2, mesh 1 is shown in Figure 7-20.


Figure 7-19. FMBMC test problem 2 standard algorithm COM for subcase 2, replication 1 (x COM in cm versus generation number)

Figure 7-20. FMBMC test problem 2 subcase 2, replication 1, mesh 1 source fraction evolution


While it appears that the increase in particles per generation dampened the level of fluctuation more for the FMBMC methodology than for the standard methodology, correlation is still evident in the COM shown in Figure 7-19. Figure 7-20 shows that the accuracy of the mesh 1 FMBMC based source fraction increases slowly with total simulated particles, yet the fraction still appears somewhat erratic. More generations may be necessary to see long term fluctuations, if present. To investigate autocorrelation in the FMBMC method for subcase 2, an autocorrelation plot of fission matrix element A12 is shown in Figure 7-21.

Figure 7-21. FMBMC test problem 2 replication 1, subcase 2, A12 fission matrix element autocorrelation plot

Figure 7-21 shows that although increasing particles per generation reduces overall variations, significant correlation is still evident, and for this element it actually increased. Note that fission matrix element A13 has insignificant autocorrelation, while A11 has significant


but reduced autocorrelation. To further investigate this issue, subcases 3 and 4, with 20 fission matrix meshes, are examined. Each source mesh is expected to contain 5% of the overall source. Figure 7-22 shows a COM plot for subcase 4.

Figure 7-22. FMBMC test problem 2 standard algorithm COM for subcase 4, replication 1 (x COM in cm versus generation number)

Figure 7-22 shows the same type of source behavior for this subcase as was seen in Figure 7-19, which is a strong source correlation. Figure 7-23 shows the source fraction behavior for mesh 1, subcase 4.


Figure 7-23. FMBMC test problem 2 subcase 4, replication 1, mesh 1 source fraction evolution

Figure 7-23 shows that the mesh 1 source fraction exhibits increasing accuracy with particles simulated. This is to be expected, as the number of terms utilized to calculate the fission matrix elements increases with each generation. Figure 7-24 gives the final source fractions for both the standard and FMBMC methods for subcase 3 (replication 1) and subcase 4.


Figure 7-24. FMBMC test problem 2 subcase 3 and 4 source fractions

Figure 7-24 shows that the subcase 3 and 4 source fractions exhibit behavior similar to that of subcases 1 and 2, respectively. Subcase 4 results show an improvement over subcase 3, as expected. A conclusion drawn here is that increasing particles per generation yields less variation, as expected with more simulated particles. The difficulty is that for a problem with a high DR, convergence, assuming no undersampling and adequate skipped cycles, may still not be apparent due to the high source autocorrelation. Figure 7-25 shows an autocorrelation plot of element A12 for subcase 4.


Figure 7-25. FMBMC test problem 2 subcase 4 A12 fission matrix element autocorrelation plot

Figure 7-25 shows that the autocorrelation of fission matrix element A12 has reduced significantly, to under 0.2 at lag 1. Better convergence is indicated with the increased mesh density but constant particles per generation and total generations. This supports the fact that as mesh size is reduced, the fission matrix elements should become independent of the source distribution; it also shows that the fission matrix dependence on mesh size, for larger meshes, may be quantifiable in terms of fission matrix element autocorrelation. Continued analysis of subcases 5 and 6, with 50 source meshes, shows more of the same trend as was seen for 20 meshes. A source fraction plot is shown in Figure 7-26.


Figure 7-26. FMBMC test problem 2 subcase 5 and 6 source fractions

Figure 7-26 shows that subcase 6 varies less about the expected source fraction of 0.0200. An autocorrelation plot for fission matrix element A12 is shown in Figure 7-27.

Figure 7-27. FMBMC test problem 2 subcase 6 A12 fission matrix element autocorrelation plot


Clearly, subcase 6 with more source particles outperforms subcase 5 as expected, while the significant autocorrelation is reduced even further, as seen from the autocorrelation plot in Figure 7-27. This indicates that at 50 meshes, confidence estimates derived from the final converged fission matrix should be more accurate using the procedure described earlier in this chapter. Still of interest is the fact that reducing the mesh size also appears to increase the accuracy of the standard methodology.

Normality of fission matrix elements A11, A12, A13 and A14 was tested with the Cramer-von Mises (CVM) and Kolmogorov-Smirnov (KS) normality tests. The Shapiro-Wilk normality test was not utilized, since the CVM and KS tests have more desirable properties at large sample sizes (5000 in this case); as noted before, however, such large sample sizes are problematic in general where normality tests are concerned. Elements A11 and A12 passed both tests at the 10 percent significance level, while the A13 element failed the CVM test and the A14 element failed both. This implies that utilizing more source particles impacts the distribution of the fission matrix elements not only in their autocorrelation, but also increases their convergence to a normal distribution. It appears that the lower the fission matrix element value (representing the probability of fission from cell j to cell i), the more source particles are necessary per generation to allow the element to converge to a normal distribution. Fission matrix element A16 for subcase 6 is the first element which contains a significant number of zero values, representing zero probability of neutrons being generated by that fission matrix transfer during the generation in which the zero value occurred. Continuing to farther off-diagonal fission matrix elements, the fraction of zero values increases until all zeroes are observed.
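A sketch of such normality testing follows, using scipy's implementations with the sample mean and standard deviation plugged in. The p-values are therefore approximate, since the parameters are estimated from the same data, and the function and variable names are illustrative:

```python
import numpy as np
from scipy import stats

def normality_flags(series, alpha=0.10):
    """Apply the Cramer-von Mises and Kolmogorov-Smirnov tests to one
    fission matrix element series, against a normal distribution with
    the sample mean and standard deviation."""
    x = np.asarray(series, dtype=float)
    m, s = x.mean(), x.std(ddof=1)
    cvm = stats.cramervonmises(x, 'norm', args=(m, s))
    ks = stats.kstest(x, 'norm', args=(m, s))
    return {'cvm_pass': cvm.pvalue > alpha, 'ks_pass': ks.pvalue > alpha}
```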


Subcases 7 and 8 have a further increase in fission matrix meshes, to 100. This means an expected 1% of the source will be in each source mesh. Similar behavior was observed for these subcases as was observed for subcases 5 and 6. Note that with increasing mesh density, fewer particles contribute to each average value achieved, yet an increase in reliability is seen. This is most likely a result of the loss of dependence of the fission matrix elements on the source distribution. Consequently, however, as the grid density increases further, more particles per generation are necessary. More analysis of this problem will follow in the development of diagnostics to evaluate the FMBMC mesh density and particle population size, where fission matrix element independence and normality are investigated more thoroughly.

Test problem 3 analysis

FMBMC test problem 3 is the same as E-CADIS/ME-CADIS test problem 2. A detailed keff, source convergence, and fission matrix element analysis for the standard, E-CADIS and ME-CADIS algorithms was given in Chapter 6. Subcases utilizing 10, 20, 40 and 80 equally spaced fission matrix meshes were simulated to provide an analysis similar to that done for test problems 1 and 2. Utilizing these cases, it was determined that for this problem, no significant correlation was present in any fission matrix element. Although correlation is insignificant for all subcases, it appears that the simulations may be suffering from undersampling within each generation. This conclusion was drawn because a large number of the fission matrix elements often resulted in zero values for a significant percentage of generations, yet were nonzero for most when the mesh grid density increased (more, smaller meshes). To test whether this is the case, the particle population was increased for all subcases beyond 20 meshes, and the fission matrix elements remained uncorrelated with better sampling, as expected. Note that since the importance sharply decreases near the problem boundary, the poorest sampled regions are those nearest to the boundaries. Although the final converged source distribution is unknown, symmetry is expected.


To compare the standard and FMBMC methods, source fractions are given in Table 7-17 for the subcase with 10 fission matrix meshes and 5000 particles per generation, for the standard and FMBMC methods for meshes 1, 2 and 3.

Table 7-17. FMBMC test problem 3 final converged source for source meshes 1-3
              Standard Source Fraction     FMBMC Source Fraction
Replication   Mesh 1   Mesh 2   Mesh 3    Mesh 1   Mesh 2   Mesh 3
1             0.0312   0.0623   0.0889    0.0315   0.0629   0.0898
2             0.0304   0.0607   0.0862    0.0284   0.0565   0.0805
3             0.0296   0.0591   0.0841    0.0294   0.0587   0.0837
4             0.0288   0.0574   0.0819    0.0292   0.0581   0.0829
5             0.0296   0.0591   0.0842    0.0285   0.0571   0.0816
6             0.0296   0.0590   0.0841    0.0302   0.0602   0.0858
7             0.0305   0.0608   0.0864    0.0304   0.0607   0.0867
8             0.0294   0.0587   0.0834    0.0295   0.0589   0.0838
9             0.0296   0.0592   0.0845    0.0304   0.0607   0.0865
10            0.0293   0.0584   0.0833    0.0281   0.0562   0.0799

Table 7-17 shows that similar final source fractions are obtained for both the standard and FMBMC sources, even when sampling may be poor. Note that it was also observed that although symmetric values are similar, they are not close to being within statistics. Since symmetry is expected, another simple metric for the accuracy of the final result is the difference between symmetric meshes (as well as the consistency of a particular mesh across replications, as in test problem 1). In the 40 fission matrix mesh subcase, a maximum difference of 1.73% was observed between any two reflected meshes for replication 1, which again shows the increasing accuracy as fission matrix density is increased. FMBMC test problem 3 results indicate that although the standard method had trouble with source correlation, only a minimal number of fission matrix meshes was required to provide an accurate solution and results within statistics. Essentially, the same behavior as in problem 1 is observed, and therefore detailed analysis is not presented here for brevity.
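The reflected-mesh consistency metric quoted above can be computed as in the following sketch, using one reasonable definition of the percent difference between mirror-image meshes (the function name is illustrative):

```python
import numpy as np

def max_symmetry_difference(source_fractions):
    """Maximum relative difference (percent) between mirror-image source
    meshes of a geometrically symmetric problem."""
    s = np.asarray(source_fractions, dtype=float)
    mirrored = s[::-1]
    return 100.0 * np.max(np.abs(s - mirrored) / (0.5 * (s + mirrored)))
```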


Cyclewise Fission Matrix Analysis

While the cycle-wise fission matrix is not utilized in this work, it was implemented in the 1Dcrit code. During the course of the analysis of the FMBMC method, it was observed that for some problems the cycle-wise fission matrix produced significantly biased results. A note is made here since the reason for this bias should be made clear. The cause is the fluctuation in fission matrix elements having a non-uniform effect on the outcome, and it is the same reason the plus and minus fission matrices do not yield a symmetric confidence interval. Utilizing the final converged fission matrix yields a much more accurate result, since the elements are much closer to their true values and individual element variation has less of an overall effect.

Effect of Initialized Source Distribution

1Dcrit allows for the utilization of an initialized source distribution, as discussed. Two major factors affect the effectiveness of an initialized source distribution for the FMBMC method. The first is the only moderate dependence of the fission matrix elements on the source, which decreases with fission matrix mesh size. This is not just an additional benefit but a requirement if the FMBMC method is going to yield accurate results. As a result, the dependence on the source distribution needed to yield uncorrelated fission matrix elements is vastly reduced for the FMBMC method. The second is the fact that for high dominance ratio problems, the actual generation-to-generation Monte Carlo source distribution is highly irregular if tallied, resulting in a source which is not converged in the normal sense. This does not altogether eliminate the effect of an initialized source, but its impact is reduced. In large high dominance ratio problems where the initial source must migrate to a particularly reactive region, if source particles are not started in the highly reactive region, or are started in limited numbers, it may take many generations for the source to gather in the highly reactive portion of the problem phase space.


For the analysis of such a problem, the result may be skewed by poor sampling; an initialized source will remedy this problem. For this reason, an initialized source can still be very beneficial here, and is very beneficial for all other problems. Note that undersampling effects may still materialize and must be accounted for.

Automatic Independence Method with Minimum Sampling (AIMS)

Since it is critical that independence of the fission matrix elements be maintained, an automatic procedure to test for fission matrix element independence internally is desirable. To develop such a procedure, FMBMC test problem 2 was re-evaluated with another set of cases. These cases are not as lengthy (in total generations), as 1000 active generations appears adequate to provide the necessary information. A more systematic analysis concerning fission matrix convergence is performed. The cases simulated are shown in Table 7-18.

Table 7-18. FMBMC test problem 2 AIMS cases input
Case   Meshes   Skip   Total   Histories per Generation   Expected Particles per Source Mesh
1      100      50     1050    20 million                 200000
1a     100      50     1050    2 million                  20000
1b     100      50     1050    200000                     2000
2      50       50     1050    10 million                 200000
2a     50       50     1050    1 million                  20000
2b     50       50     1050    100000                     2000
3      20       50     1050    20 million                 1 million
3a     20       50     1050    4 million                  200000
3b     20       50     1050    400000                     20000
3c     20       50     1050    40000                      2000
4      5        50     1050    5 million                  1 million
4a     5        50     1050    1 million                  200000
4b     5        50     1050    100000                     20000
4c     5        50     1050    10000                      2000
4d     5        50     1050    50 million                 10 million
5      200      50     1050    10 million                 50000

In an a posteriori manner, each simulation was evaluated utilizing two major criteria. These criteria are the expected contribution to the final keff from non-independent fission matrix elements, and the expected contribution to the final keff from non-normally distributed fission


matrix elements. Non-independence is determined by calculating the lag 1 autocorrelation for every fission matrix element, flagging any mesh whose correlation is greater than the maximum lag autocorrelation (0.058) that would be obtained, with 95% confidence, from a randomly distributed uncorrelated normal random variable with 1000 values. Non-normally distributed elements are determined using a Shapiro-Wilk normality test at 0.05 significance. These two metrics should give an indication of the dependence of a final converged result on mesh size and particle population. Results of the cases concerning these metrics are given in Table 7-19.

Table 7-19. FMBMC test problem 2 AIMS cases results
Case   Meshes   Histories per Generation   Expected Particles per Source Mesh   Non-Independent %   Non-Normal %
1      100      20 million                 200000                               7.29                14.10
1a     100      2 million                  20000                                6.08                65.69
1b     100      200000                     2000                                 18.16               100.00
2      50       10 million                 200000                               7.75                7.92
2a     50       1 million                  20000                                11.19               35.22
2b     50       100000                     2000                                 18.49               95.76
3      20       20 million                 1 million                            22.04               5.42
3a     20       4 million                  200000                               22.32               7.48
3b     20       400000                     20000                                46.81               18.74
3c     20       40000                      2000                                 29.59               79.04
4      5        5 million                  1 million                            22.77               0.47
4a     5        1 million                  200000                               41.01               19.77
4b     5        100000                     20000                                41.29               2.85
4c     5        10000                      2000                                 3.91                59.50
4d     5        50 million                 10 million                           22.94               0
5      200      10 million                 50000                                7.47                65.40

In general, Table 7-19 shows that as the particle population for a set meshing structure increases, the non-independent fission matrix contribution tends to decrease. This is seen for all cases except cases 1, 3b, 4c and 4d. Case 4d has nearly the same non-independent percentage as case 4 (an increase in particle density from case 4 to 4d), in which case this behavior is nothing out of the ordinary. For the other cases, within each set of cases with the same meshing (same case number, different or no letter), the behavior is complex.


With low particle density, a false sense of independence may be observed if source meshes appear to be independent of each other due to undersampling (lack of coupling with neighbors). As this density increases, transfers between meshes which were not well sampled eventually become better sampled, resulting in a better estimate of the non-independence fraction, unless the mesh size is small enough to negate the source spatial dependence. Further increases in particle density should reduce undersampling to the point that the meshes are no longer undersampled. At this point the non-independent percentage should stabilize if the fission matrix mesh size is not small enough, or reduce to an inconsequential percentage if the mesh density is large enough. For cases 4c and 3b, a large increase in non-independent percentage is seen as the phase space is better sampled, stabilizing to about 22 percent as particle density increases. Case 4c is particularly interesting, as a false sense of independence is observed. This is due to the fact that with only 2000 source particles for 20 cm meshes, the off-diagonal fission matrix elements contribute minimally, whereas their true contribution, if sampled properly, is higher. A check of the total contribution to keff from the diagonal elements can be utilized to check for undersampling. For case 1, only a small increase in non-independence is observed with the decreasing non-normal percentage. Note also that these are statistically based metrics, based on 95% confidences, meaning some fission matrix elements are expected to be mislabeled as non-independent or non-normal, and vice versa.

Note that the non-normal percentage reduces as particle density is increased with constant mesh size for all cases except those with 5 source meshes. This is due to the aforementioned undersampling seen with 5 source meshes. Also note that although it is desired to obtain


normally distributed fission matrix elements in principle, as long as the fission matrix elements are independent and well sampled, the CLT can be applied to obtain proper confidence estimates of the fission matrix elements. The non-normal metric is important, however, as an aid in understanding the problem behavior.

Further analysis with a single processor was limited by memory. Case 1 is at the maximum practical particle density, since 1Dcrit was structured as a learning tool and therefore stores most information for the entire calculation. When increasing mesh density and particle population beyond these levels, the memory requirements precluded further study. Note that with parallel processing these cases could be studied further, as the overall memory required is shared among processors; yet due to different initial seeds, an extra level of randomness will be obtained from parallel processing (adding two separately generated autocorrelated series for the same element).

In order to utilize the metrics described previously, a procedure was developed to incorporate their usage on-the-fly. A schematic of the procedure is shown in Figure 7-28; it is known as the Automatic Independence Method and Minimum Sampling testing method (AIMS).


Figure 7-28. AIMS flowchart. (Summary of the flowchart logic: after normal execution to nskip generations plus a 100-generation test block, the decision nodes are: non-independent contribution > 10%? first pass? correlated fission matrix coefficients > 10%? principal diagonal contribution > 66%? meshes < 1 mfp? The resulting actions are to double the particles per generation or double the fission matrix density, resetting nskip, or else to continue normally to problem end.)


In words, the procedure is as follows. A sample of the total solution must be used to decide whether the simulation is acceptable; it was determined that 100 active generations of data would be used for the statistics-related testing.

- Metric 1: over the 100 active generations, the lag-1 autocorrelation is calculated for each fission matrix element series; the maximum lag-1 autocorrelation that would be obtained from an uncorrelated, normally distributed random variable of this length is approximately 0.2.
- Metric 2: as before, the Shapiro-Wilk normality test is applied to each fission matrix element series.
- Each tolerance is initially set to 10%.
- When metric 1 fails for the first time, the fission matrix density is doubled. For successive failures of metric 1, the particle population is increased until there is less than a 5% relative decrease in the fractional contribution from non-independent fission matrix elements, at which point the fission matrix density is again doubled.
- When only metric 2 fails, the particle density is doubled.
- Two further criteria were added. The first evaluates the contribution from the principal-diagonal fission matrix elements: if it exceeds 66%, the fission matrix mesh density is automatically doubled even when metrics 1 and 2 pass, since such a large diagonal contribution indicates that the fission matrix meshes are too large and may lead to a false convergence. The second is a minimum mesh size of 1 mfp.

Note that doubling the meshing requires reallocation of many variables, which 1Dcrit handles internally. Doubling the particle source poses the same reallocation problem, since bank sizes and other storage requirements are functions of the particle population per generation; this too is handled internally by 1Dcrit. A further difficulty with increasing the particle density is deciding where the extra particles come from. In this case (doubling), each particle location is used twice, with half weight, to conserve the total particle weight.


In other words, if N particles are initially required, the total starting source weight for the generation is N, yet when the source population is doubled, a total weight of 2N is needed to describe the total neutron source weight. At the beginning of a generation, all particles are loaded with the same weight; here, each particle's weight is effectively doubled and the particle is split in two. The source distribution is unchanged, yet the particle density is doubled.

The AIMS technique was tested with benchmark problem 2 in both the 10 cm and 100 cm configurations, with 5 equally spaced fission matrix meshes as the initial meshing structure. The other input parameters were: 50 skipped generations; 2000 and 20000 particles per generation for the 10 cm and 100 cm cases, respectively; 1050 total generations; and a keff guess of 1.0. The 10 cm case required a change to 10 fission matrix meshes and 25600 particles per generation before passing all criteria; the 100 cm case required 40 fission matrix meshes and 256000 particles per generation. The AIMS methodology shows promising potential and must be investigated more thoroughly. Note that for the 10 cm case, the upgrade to 10 meshes should not have been necessary: while AIMS appears to guide the solution toward independent and normally distributed fission matrix elements, the elements need not be truly normal as long as they are independent and well sampled. Future studies should focus on increasing AIMS efficiency in choosing the most effective particle population and mesh size.
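To make the procedure concrete, a minimal sketch of the decision logic and of the weight-preserving doubling is given below. This is an illustrative Python paraphrase of Figure 7-28 and the description above, not the 1Dcrit Fortran; the helper names and the `stats` dictionary are hypothetical stand-ins for the corresponding internal tallies.

```python
import numpy as np

def aims_action(stats, first_metric1_failure, mesh_size_mfp):
    """One AIMS decision after a 100-generation test block.

    stats holds the fractions described in the text:
      'nonindependent' - fraction of keff from non-independent elements
      'nonnormal'      - fraction of keff from non-normal elements
      'diagonal'       - fraction of keff from principal-diagonal elements
      'rel_decrease'   - relative decrease of the non-independent fraction
                         since the previous particle doubling
    """
    refinable = mesh_size_mfp / 2.0 >= 1.0        # enforce the 1-mfp minimum
    if stats['diagonal'] > 0.66 and refinable:
        return 'double_fm_density'                # meshes too large
    if stats['nonindependent'] > 0.10:            # metric 1 fails
        if first_metric1_failure and refinable:
            return 'double_fm_density'
        if stats['rel_decrease'] < 0.05 and refinable:
            return 'double_fm_density'            # extra particles stopped helping
        return 'double_particles'
    if stats['nonnormal'] > 0.10:                 # only metric 2 fails
        return 'double_particles'
    return 'continue'

def double_population(positions, weights):
    """Weight-preserving particle doubling: each particle's weight is doubled
    and the particle split into two identical half-weight copies, so the
    source distribution is unchanged, the total starting weight goes from N
    to 2N, and the particle density is doubled."""
    pos2 = np.repeat(np.asarray(positions, dtype=float), 2)
    w2 = np.repeat(np.asarray(weights, dtype=float), 2)  # 2w x 1/2 = w per copy
    return pos2, w2
```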


Extension to Parallel Computing Environment

Although the fission matrix method differs from the standard methodology, the final results still depend on the tracking of independent particles, making the FMBMC method, with or without AIMS, extremely suitable for implementation in a parallel computing environment. To show its viability, an algorithm was developed without the use of advanced message passing with MPI or PVM. Instead, a driver code was written to start independent simulations through rsh or ssh, without any change to the input file. The driver code requires as input only the number of processors to request and a file containing the host names of the other processors (currently hardwired for convenience). Independent simulations are started and, when complete, the final results are combined statistically and summarized in a final output file.

Parallel implementation of the AIMS algorithm is slightly more complicated, since there is no guarantee that the same fission matrix mesh structure will result from each independent calculation; it has not yet been implemented. One strategy for parallel implementation of the AIMS methodology is to let one calculation serve as the master: all other simulations are held until the mesh size and particle density of the master are set, after which the slave simulations are started and locked to the same fission matrix structure and particle density. This has been left for future development.

Testing of the parallel processing was performed with FMBMC benchmark problem 2 using the following input parameters: 350 skipped generations, an initial randomly sampled source, 5350 total generations (5000 active), 50 equally spaced fission matrix meshes, and 500000 particles per generation. The 350 skipped generations were used to show their impact on parallel performance.
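Before turning to the measured speedups, the statistical combination step in the driver can be stated compactly. The sketch below is an illustrative Python version (the actual driver is Fortran); it assumes equally sized, independently seeded runs, so the combined keff is the simple average and the independent run variances add.

```python
import numpy as np

def combine_runs(keff, sigma):
    """Combine N independent, equally sized runs of the same problem.

    keff:  per-run multiplication factor estimates
    sigma: per-run one-standard-deviation uncertainties
    Independence across seeds means the variances add, so the average of N
    runs carries sigma_c = sqrt(sum(sigma_i**2)) / N.
    """
    keff = np.asarray(keff, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return keff.mean(), np.sqrt(np.sum(sigma**2)) / len(keff)

# Example with three hypothetical runs:
k_comb, s_comb = combine_runs([0.9987, 1.0012, 0.9995],
                              [0.0008, 0.0009, 0.0008])
```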


The speedup obtained for 3 and 5 processors was observed to be 2.57 and 3.71, respectively. Since 350 skipped cycles were used, each processor also performs 350 skipped cycles, and a penalty in parallel speedup is taken as a result. If an initialized source is used (which is the default for this problem), skipped cycles are unnecessary, or minimally necessary. The estimated extra time spent on skipped cycles in each parallel case was therefore calculated and removed from the total time to approximate the ideal speedup without the skipped-cycle loss. The adjusted ideal speedups were 2.91 and 4.68 for the 3- and 5-processor cases, respectively, as shown in Figure 7-29.

Figure 7-29. Parallel speedup with FMBMC for 3 and 5 processors (measured speedup, adjusted speedup, and the ideal line versus number of processors)

Note that in the 5-processor case, the individual processor times were 2230, 2147, 2152, 2150, and 2148 seconds, with a total time (including processing of results) of 2235 seconds. This means 4 of the 5 processors finished 78 seconds before the slowest. Had the slowest processor finished at 2152 seconds (the second-slowest time), the 5-processor adjusted speedup would have been 4.85.
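The load-imbalance arithmetic above is simple bookkeeping; a short sketch follows (illustrative Python; the single-processor reference time is not quoted in this excerpt and is left as an input).

```python
def speedup(t_serial, t_wall):
    """Classical parallel speedup: serial time over parallel wall time."""
    return t_serial / t_wall

# 5-processor case from the text: the job is gated by the slowest run.
times = [2230.0, 2147.0, 2152.0, 2150.0, 2148.0]  # per-processor seconds
t_total = 2235.0                  # slowest run plus processing of results
penalty = max(times) - sorted(times)[-2]          # 78 s of load imbalance
t_balanced = t_total - penalty                    # 2157 s with a packed finish
# With the skipped-cycle time also removed, this shorter wall time raises
# the adjusted speedup from 4.68 to about 4.85, as quoted in the text.
```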


CHAPTER 8
CONCLUSIONS AND FUTURE WORK

Conclusions

This work has introduced new methodologies for Monte Carlo eigenvalue problems. First, a new stationarity detection methodology, the KPSS test for stationarity, has been developed for use in Monte Carlo eigenvalue problems. This methodology has the following features:

- Use of the particle population center of mass as a single metric describing overall source movement (also a good stand-alone visual diagnostic)
- No need for a meshing structure, as required by other tests such as the information theoretic tests
- Use of an adapted form of the KPSS statistical stationarity test

The methodology was incorporated into the KENOV.a code in the SCALE 5 package. Also added to KENOV.a were three information theoretic (Shannon entropy) based diagnostics for stationarity/undersampling detection. Important features of this implementation are:

- No new user input required (the user must turn on the fission matrix option)
- The center of mass (COM) determined and printed for the system as an output table
- A posterior KPSS test automatically implemented in the KENOV.a code
- The meshing structure for the information theoretic diagnostics automatically generated via the fission matrix option
- A final output table summarizing the stationarity diagnostic results

In addition to the stationarity detection methods, a new solution algorithm was developed to utilize multiple importance functions with a modified CADIS methodology. Features of this solution algorithm include:


- Application of multiple importance calculations to provide reduced variance for particular region-to-region transfers
- Reuse of source particles to calculate more exactly the source transfer probability from region to region
- Coupling with well-known fission matrix variance reduction methods (both utilized at the same time)

Furthermore, a new variation of a fission matrix based solution algorithm was developed for use with high dominance ratio problems. Traditional tally estimators are biased for high dominance ratio problems; this methodology should reduce, and possibly eliminate, this bias. It has the following features:

- Solution based on the final converged fission matrix only
- Posterior evaluation based on two metrics:
  - the fraction of keff obtained from non-independent fission matrix elements
  - the fraction of keff obtained from non-normal fission matrix elements
- On-the-fly particle population and fission matrix mesh increases (beta version), based on evaluation of the previous two metrics during the simulation (Automatic Independence Method and Minimum Sampling testing, AIMS)
- Confidence estimates based on a Monte Carlo iterated confidence approach

Moreover, the new solution algorithm and the FMBMC approach were tested in a 1-dimensional multigroup Monte Carlo eigenvalue code developed for this purpose. The code was written in FORTRAN90/95 and has both a traditional power iteration solver and a fission matrix solver. The traditional CADIS approach was incorporated, as well as the modified solution algorithm, and traditional fission matrix biasing is included as an option for source biasing. A version of the code contains the AIMS algorithm as well as the posterior diagnostic test. Processing codes were developed to format PENTRAN deterministic Sn calculation output into a starting source distribution, and to utilize importance information for the CADIS


methodologies tested. It was determined that the FMBMC approach, with or without AIMS, is a viable solution approach for obtaining an unbiased eigenvalue (multiplication factor) and eigenvector (discretized source), whereas the standard approach yields biased confidence estimates. The algorithm was also extended in parallel with high efficiency.

Testing of the KPSS diagnostic test on both simple and difficult source convergence benchmark problems showed its ability to identify source convergence problems where other tests fail. While false positives are observed, the test is fairly capable of diagnosing source non-stationarity. For high dominance ratio (DR) problems, both the KPSS and entropy tests have some difficulty; the KPSS test appears to identify more false positives when there is strong source correlation. In cases of false positive identification, the source COM, entropy, and relative entropy series are available for visual inspection, as are the skewness and kurtosis of the source COM distribution.

The variance reduction methods tested with 1Dcrit included the two extensions of the CADIS methodology to eigenvalue problems. A straightforward extension, E-CADIS, utilized the neutron production cross section as the adjoint source globally; modest speedup, up to about 20 percent, was obtained for the sample problems. This algorithm was not optimized to the fullest extent, and some implementation overhead is most likely unnecessary, so the speedup can be increased further. The modified source algorithm was used in conjunction with region-wise importance calculations. This methodology, called ME-CADIS, uses importance sampling to reduce variance in low-probability source regions, thereby increasing coupling in loosely coupled source regions. The standard deviation of the off-diagonal fission matrix elements was significantly reduced with ME-CADIS versus E-CADIS, at the expense of computer time.


The modified FMBMC method was successfully applied to high dominance ratio problems, where it was shown that a proper choice of particle density and source mesh size can eliminate autocorrelation in the fission matrix elements, facilitating the use of the Monte Carlo generated confidence estimates for the source eigenvalue and eigenvector developed for the fission matrix algorithm.

Future Work

There are many avenues along which this work can be continued. Concerning the stationarity diagnostics, other versions of the KPSS test became available as this work was being completed and can be evaluated, and the robustness of the test can be assessed through further testing and analysis. The COM visual diagnostic can be made into a plot for viewing source movement throughout the simulation. Continuing in the source diagnostics area, the lag-1 autocorrelation coefficient can be used as an indicator of independence; although independence is not guaranteed by uncorrelatedness, it appears to be a strong indicator for Monte Carlo eigenvalue problems. The source COM can be tested for autocorrelation and, if autocorrelation is present, other stationarity diagnostics (such as the information theoretic and KPSS diagnostics) can attempt to determine whether stationarity has been achieved.

The Monte Carlo solution algorithm with ME-CADIS should be effective for solving loosely coupled problems, so the ME-CADIS algorithm should be extended to 3-D, and a detailed analysis of problems of this type, in the form of benchmark studies, should be undertaken. Improvements can also be made to ME-CADIS to mitigate the effect of the extremely low weights that result from very low-valued weight windows when particles starting in locally unimportant regions actually reach the detector. The modified solution algorithm with the modified CADIS approach, although slower than the traditional approach for many problems, provides strong coupling between loosely


coupled regions. Problems of this nature should be studied with this approach to assess the method's effectiveness for loosely coupled systems. The fission matrix based Monte Carlo (FMBMC) approach developed here should be implemented in a full 3-D Monte Carlo code with implicit parallel capability for the analysis of high dominance ratio problems. The Monte Carlo iterated confidence approach can be improved through a bootstrapping methodology, which corrects the method for skewed sampling distributions. Another simple possibility for reducing the skip-generation time needed for problems with slow convergence is to increase the particle population on the fly: it was noticed during testing of the particle doubling feature of 1Dcrit that the source converges loosely to the final result in approximately the same number of generations regardless of the source population size. If this holds generally, a smaller number of particles per generation can be started initially and increased after a period of time.
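As one concrete form of the bootstrap suggestion, the sketch below resamples generation-wise fission matrix tallies, re-solves the dominant eigenpair of each resampled average matrix, and takes percentile confidence limits. This is an assumed construction for illustration (Python, with the hypothetical input `fm_by_generation`), not the iterated-confidence code used in this work.

```python
import numpy as np

def dominant_eigpair(F):
    """Dominant eigenvalue (keff) and normalized source eigenvector of a
    fission matrix F."""
    vals, vecs = np.linalg.eig(F)
    i = np.argmax(vals.real)
    v = np.abs(vecs[:, i].real)
    return vals[i].real, v / v.sum()

def bootstrap_keff_ci(fm_by_generation, n_boot=1000, alpha=0.05, rng=None):
    """Percentile-bootstrap confidence interval for keff.

    fm_by_generation: array of shape (n_gen, n, n) holding the
    per-generation fission matrix tallies (assumed independent, as the
    AIMS checks are meant to ensure).
    """
    rng = np.random.default_rng() if rng is None else rng
    fm = np.asarray(fm_by_generation, dtype=float)
    n_gen = fm.shape[0]
    ks = np.empty(n_boot)
    for b in range(n_boot):
        pick = rng.integers(0, n_gen, size=n_gen)   # resample generations
        ks[b], _ = dominant_eigpair(fm[pick].mean(axis=0))
    lo, hi = np.quantile(ks, [alpha / 2.0, 1.0 - alpha / 2.0])
    return lo, hi
```

Because the percentile interval is taken from the empirical distribution of the resampled eigenvalues, it adapts to skewed sampling distributions without assuming normality, which is the correction alluded to above.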


APPENDIX A
1DCRIT INPUT DESCRIPTION

Appendix A contains the input required for 1Dcrit, in the form of a user guide, for all versions. 1Dcrit is invoked at the command prompt with two arguments: argument 1 is the input file name and argument 2 is the output file name. The output file is overwritten if it already exists.

Standard Input File Format

The standard input file contains all the data needed to run 1Dcrit without any CADIS biasing. Input is required in a particular order, and each new input variable should start on a new line; the data within each variable, however, can be freely formatted. The slash (/) character serves as a line input terminator: anything after a slash on a line is treated as a comment. The following cards describe the input.

1. Replication - an integer giving the total number of replications of the same problem to be run. 1Dcrit has built-in seeds for repeatability.

2. Regions or Seed Identifier - an integer giving the total number of regions in the problem, or alternatively the keyword yes. If the keyword yes is present, the next line (card 2a) should contain a user-defined random seed. All other input proceeds in order as below regardless.

3. Materials - an integer giving the total number of materials used in the problem.

4. Groups - an integer giving the number of energy groups used in the problem.

5. Legendre Order - currently always set to 0 for isotropic scattering; this card would be used if 1Dcrit were adapted to higher-order Legendre scattering.


6. Position Bounds - an array of double precision data of length regions + 1 giving the position bounds for each region, increasing from most negative to most positive.

7. Material IDs - an array of integers of length regions indicating the material associated with each region, from region 1 (most negative) to region regions (most positive).

8. Group Energy Boundaries - an array containing the energy group boundaries; the boundaries themselves are not used by the code and are currently placeholders only.

9. Cross Sections - the cross section data. Each line reads "abs, nusigfis, sigmaT, chi, sigmas," where sigmas is the up- and downscatter matrix in the following format (5-group example):

   sigs1-1 sigs1-2 sigs1-3 sigs1-4 sigs1-5
   sigs2-1 sigs2-2 sigs2-3 sigs2-4 sigs2-5
   sigs3-1 sigs3-2 sigs3-3 sigs3-4 sigs3-5
   sigs4-1 sigs4-2 sigs4-3 sigs4-4 sigs4-5
   sigs5-1 sigs5-2 sigs5-3 sigs5-4 sigs5-5

10. Parameters - the initialization parameters, in order: nskip, particles per generation, total generations, and the keff guess.

11. Source Input Flag - 0 indicates random sampling of the source in all fissionable regions for the first generation; 1 indicates reading the source from a file named 1dcrit.src.

12. Boundary Conditions - the left and right albedo conditions; each can be any number.

13. Source Meshes - an integer giving the number of source meshes in the virtual mesh used for the information theoretic source convergence diagnostics.


14. Source Mesh Boundaries - the final card; an array of double precision data of length source meshes + 1 giving the position bounds for each virtual mesh source region, increasing from most negative to most positive.

Standard Input File Example

An example input file is given below:

/replications
10
/regions
19
/materials
4
/groups
3
/legendre order/
0
/position bounds/
-25.88969994 -13.71699995 -13.65984997 -13.65199995 -12.86759996 -12.85974994
-12.80259997 -0.457199991 -0.400050014 -0.392199993 0.392199993 0.400050014
0.457199991 12.80259997 12.85974994 12.86759996 13.65199995 13.65984997
13.71699995 25.88969994
/mat id's
4 3 2 1 2 3 4 3 2 1 2 3 4 3 2 1 2 3 4
/group energy boundaries/
20 1.01 6.25e-07 0


/uo2
1.2262E-02 2.9049E-02 2.7252E-01 6.93819E-01 2.1031E-01 5.0390E-02 6.2494E-15
3.7030E-02 1.8967E-02 4.8784E-01 3.06181E-01 0.0000E+00 4.4966E-01 1.1621E-03
2.6086E-01 4.7299E-01 6.7343E-01 2.12628E-10 0.0000E+00 0.0000E+00 4.1258E-01
/he
0.0000E+00 0.0000E+00 4.8645E-02 0.0000E+00 3.2952E-02 1.5693E-02 0.0000E+00
0.0000E+00 0.0000E+00 1.4335E-02 0.0000E+00 0.0000E+00 1.4196E-02 1.3857E-04
0.0000E+00 0.0000E+00 9.3942E-03 0.0000E+00 0.0000E+00 0.0000E+00 9.3942E-03
/zr
3.5382E-04 0.0000E+00 2.0041E-01 0.0000E+00 1.7752E-01 2.2616E-02 0.0000E+00
1.8635E-03 0.0000E+00 3.4054E-01 0.0000E+00 0.0000E+00 3.3829E-01 3.8932E-04
4.1000E-03 0.0000E+00 2.7463E-01 0.0000E+00 0.0000E+00 0.0000E+00 2.7053E-01
/h2o
3.4248E-04 0.0000E+00 1.7133E-01 0.0000E+00 9.3562E-02 7.7425E-02 2.2913E-08
2.4692E-04 0.0000E+00 7.4306E-01 0.0000E+00 0.0000E+00 7.0800E-01 3.4832E-02
7.8111E-03 0.0000E+00 1.7017E+00 0.0000E+00 0.0000E+00 0.0000E+00 1.6938E+00
/nskip, Particles per generation, total generations, kguess
50, 500, 5050, 1
/Source Type
0
/Boundary conditions
0.57 1.0
/source meshes
5
/source mesh locations
-13.65199995041 -12.867599964142 -0.392199993134 0.392199993134 12.867599964142 13.65199995041

1Dcrit.src Input File Structure

The file 1dcrit.src is required for a user-input source. This file contains a minimal amount of data: each line should hold the energy group of a particle and its location. 1Dcrit then determines the region in which the particle resides.

Bias.dat Input File Structure

When utilizing E-CADIS or ME-CADIS, importance information is necessary and is stored in a file named bias.dat. This file has the following format:

1. Number of importance regions - the first line contains the number of importance regions (meshes).


2. Biasing Position Bounds - an array of double precision data of length number of importance regions + 1 giving the position bounds for each importance region, increasing from most negative to most positive.

3. Energy Groups - the number of energy groups.

4. Energy Group Boundaries - the energy group boundaries (not used, but a necessary placeholder).

5. Importance Data - importance data for each region, for each group: first the group 1 data for each mesh, then the group 2 data for each mesh, and so on until all groups are input.

Seedwrapper Parallel Driver Information

Parallel execution of 1Dcrit is implemented only in the versions containing the fission matrix algorithm. Execution is performed through a separate FORTRAN driver code, which initializes the independent runs and collects the data on exit. The input file structure is the same as above, except that the seed option on cards 2 and 2a is required: the keyword yes is looked for and, if found, a user-defined seed is read; if not, the calculation fails for parallel execution. The driver code (seedwrapper) takes the following command line arguments:

1. Number of processors

2. Name of a file containing the seed for each processor, one per line

3. Input file name

4. Hosts file containing the locations of the other processors (currently not necessary, since this was hardwired internally; all parallel analysis was done on a single cluster)


The final output is hardcoded to the file output.out for the standard output and combined.out for the combined fission matrix output data.
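For readers writing pre- or post-processing scripts, the free-format, slash-terminated card style described in this appendix is easy to parse. The following is a minimal, hypothetical Python reader for the first few cards of the standard (no user seed) input variant; 1Dcrit itself parses the file in Fortran.

```python
def read_tokens(path):
    """Yield whitespace- or comma-separated tokens from a 1Dcrit-style
    input file, honoring '/' as a line terminator: everything after a
    slash, including pure comment lines such as '/regions', is ignored."""
    with open(path) as f:
        for line in f:
            data = line.split('/', 1)[0]          # drop trailing comment
            for tok in data.replace(',', ' ').split():
                yield tok

def read_header(path):
    """Read cards 1-6 for the standard (no user seed) input variant."""
    toks = read_tokens(path)
    replications = int(next(toks))   # card 1
    regions = int(next(toks))        # card 2 (integer form)
    materials = int(next(toks))      # card 3
    groups = int(next(toks))         # card 4
    legendre = int(next(toks))       # card 5: always 0 (isotropic)
    bounds = [float(next(toks)) for _ in range(regions + 1)]  # card 6
    return replications, regions, materials, groups, legendre, bounds
```

Run against the example file above, this returns 10 replications, 19 regions, 4 materials, 3 groups, Legendre order 0, and the 20 position bounds.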


LIST OF REFERENCES

Alexeev, B. V., 2004. Generalized Boltzmann Physical Kinetics. Elsevier Science Publishing.

Andrews, D. W. K., 1991. Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59, 817-858.

Atkinson, K. E., 1988. An Introduction to Numerical Analysis, 2nd Edition. John Wiley and Sons.

Bell, G. I., Glasstone, S., 1985. Nuclear Reactor Theory. Robert E. Krieger Publishing, Malabar, FL.

Billingsley, P., 1968. Convergence of Probability Measures. John Wiley and Sons.

Blomquist, R. N., 2000. Monte Carlo source convergence and the Whitesides problem. In: Proc. PHYSOR 2000.

Blomquist, R. N., Armishaw, M., Hanlon, D., Smith, N., Naito, Y., Yang, J., Mioshi, Y., Yamamoto, T., Jacquet, O., Miss, J., 2006. Source convergence in criticality safety analyses, Phase I: Results for four test problems. Tech. Rep. 5431, OECD/NEA.

Blomquist, R. N., Gelbard, E. M., 2000. Fission source algorithms and Monte Carlo variances. In: Trans. Am. Nucl. Soc.

Blomquist, R. N., Nouri, A., 2002. The OECD/NEA source convergence benchmark program. In: Trans. Am. Nucl. Soc.

Brissenden, R. J., Garlick, A. R., 1986. Biases in the estimations of keff and its errors by Monte Carlo methods. Ann. Nucl. Energy 13 (2), 63.

Brown, F. B., 2007. Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations. In: M&C+SNA 2007. No. LA-UR-07-1194.

Buckland, S. T., 1984. Monte Carlo confidence intervals. Biometrics 40, 811-817.

Carlson, B. G., 1953. Solution of the transport equation by the Sn approximations. Tech. Rep. LA-1599, Los Alamos National Laboratory.

Carlson, B. G., Lee, C. E., 1961. Mechanical quadrature and the transport equation. Tech. Rep. LA-2573, Los Alamos Scientific Laboratory.

Cercignani, C., 1975. Theory and Application of the Boltzmann Equation. Elsevier Science Publishing, New York.


Christoforou, S., Hoogenboom, J. E., 2006. A zero-variance-based scheme for variance reduction in Monte Carlo criticality. In: Proc. PHYSOR 2006.

Cover, T. M., Thomas, J. A., 1991. Elements of Information Theory. John Wiley and Sons, New York, NY.

Devore, J., 2000. Probability and Statistics for Engineering and the Sciences. Duxbury, California.

Dionne, B., 2007. Automated variance reduction technique for 3-D Monte Carlo coupled electron-photon-positron simulation using deterministic importance functions. Ph.D. thesis, University of Florida.

Dufek, J., 2007. Accelerated Monte Carlo eigenvalue calculations. In: Proceedings of Reactor Physics Calculations in the Nordic Countries.

Dufek, J., 2009. Development of new Monte Carlo methods in reactor physics: criticality, non-linear steady state and burnup problems. Ph.D. thesis, Royal Institute of Technology, Stockholm, Sweden.

Dufek, J., Gudowski, W., 2009. Fission matrix based Monte Carlo criticality calculations. Ann. Nucl. Energy 36 (8), 1270-1275.

Dumonteil, E., Courau, T., 2008. A new method to compute the dominance ratio in Monte Carlo simulations: application to a simple pin cell with the 3-D Monte Carlo code TRIPOLI4. In: Proceedings of ICAPP08.

Finch, J. P., 2006. Vacation matrix method for fission source convergence in Monte Carlo criticality calculations. Ph.D. thesis, Purdue University.

Gelbard, E. M., 1990. Computations of standard deviations in eigenvalue calculations. Prog. Nucl. Energy 24, 237.

Gelbard, E. M., 1991. Monte Carlo eigenvalue biases: Generalization beyond the absorption estimate. In: Trans. Am. Nucl. Soc. p. 302.

Gelbard, E. M., Gu, A. G., 1994. Biases in Monte Carlo eigenvalue calculations. Nucl. Sci. Eng. 117, 1.

Gelbard, E. M., Prael, R. E., 1974. Monte Carlo work at Argonne National Laboratory. Tech. Rep. ANL-752 (NEACRP-L-118), Argonne National Laboratory.

Gelbard, E. M., Roussel, B., 1995. Proposed solution to the keff of the world problem. In: Trans. Am. Nucl. Soc.


Heidelberger, P., Welch, P. D., 1981. A spectral method for confidence interval generation and run length control in simulations. Communications of the Association for Computing Machinery 30, 233-245.

Heidelberger, P., Welch, P. D., 1983. Simulation run length control in the presence of an initial transient. Operations Research 31 (6), 1109-1144.

Hobijn, B., Franses, P. H., Ooms, M., 2004. Generalizations of the KPSS-test for stationarity. Statistica Neerlandica 58, 483-502.

Jenkins, G. M., Watts, D. G., 1968. Spectral Analysis and Its Applications. Holden-Day.

Kadotani, H., Hariyama, Y., Shiota, M., Takeda, T., 1991. Acceleration of fission distribution convergence using eigenvectors from matrix K calculations in the KENO code. In: Proc. ICNC '91.

Kalos, M. H., Whitlock, P. A., 1986. Monte Carlo Methods Volume I: Basics. John Wiley and Sons.

Kaplan, E. L., 1958. Monte Carlo methods for equilibrium solutions in neutron multiplication. Tech. Rep., Lawrence Livermore National Laboratory.

Kwiatkowski, D., Phillips, P. C. B., Schmidt, P., Shin, Y., 1992. Testing the null hypothesis of stationarity against the alternative of a unit root. Journal of Econometrics 54, 159-178.

L'Abbate, A., Courau, T., Dumonteil, E., 2007. Monte Carlo criticality calculations: Source convergence and dominance ratio in an infinite lattice using MCNP and TRIPOLI4. In: First International Conference on Physics and Technology of Reactors and Applications.

Lathrop, K. D., Carlson, B. G., 1965. Discrete ordinates angular quadrature of the neutron transport equation. Tech. Rep. LA-3186, Los Alamos National Laboratory.

Lewis, E. E., Miller, W. F., 1993. Computational Methods of Neutron Transport. American Nuclear Society, La Grange Park, IL.

Longoni, G., 2004. Advanced quadrature sets, acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments. Ph.D. thesis, University of Florida.

Longoni, G., Haghighat, A., 2001. Development of new quadrature sets with the ordinate splitting technique. In: Proc. Int. Conf. on Mathematical Methods and Supercomputing for Nuclear Applications.


Longoni, G., Haghighat, A., 2002. Development of the regional angular refinement and its application to the CT scan device. In: Trans. Am. Nucl. Soc. Vol. 86. p. 242.

Lux, I., Koblinger, L., 1991. Monte Carlo Particle Transport Methods: Neutron and Photon Calculations. CRC Press.

MacMillan, D. B., 1973. Monte Carlo confidence limits for iterated source calculations. Nucl. Sci. Eng. 50, 73.

Metropolis, N., Ulam, S., 1949. The Monte Carlo method. Journal of the American Statistical Association 44, 335-341.

Morton, K. W., 1956. Criticality calculations by Monte Carlo methods. Tech. Rep. AERE T/R 1903, Great Britain Atomic Energy Research Establishment.

Newey, W. K., West, K. D., 1994. Automatic lag selection in covariance matrix estimation. Review of Economic Studies 61, 631-653.

Peck, R., Devore, J., 2005. Statistics: The Exploration and Analysis of Data. Thomson Brooks/Cole.

Richet, Y., Jacquet, O., Bay, X., 2003. Automated suppression of the initial transient in Monte Carlo calculations based on stationarity detection using the Brownian Bridge theory. In: Proceedings of the Seventh International Conference on Nuclear Criticality Safety.

SCALE, 2005. SCALE: A Modular Code System for Performing Standardized Computer Analyses for Licensing Evaluations, ORNL/TM-2005/39, Version 5, Vols. I-III. Oak Ridge National Laboratory.

Schruben, L. W., 1982. Detecting initialization bias in simulation output. Operations Research 30, 569-590.

Shim, H. J., Kim, C. H., 2007. Stopping criteria of inactive cycle Monte Carlo calculations. Nucl. Sci. Eng. 157 (2), 132-141.

Sjoden, G., 2007. An efficient exponential directional iterative difference scheme for 3-D Sn computations in XYZ geometry. Nucl. Sci. Eng. 155, 179-189.

Sjoden, G. E., 1997. PENTRAN: A parallel 3-D Sn transport code with complete phase space decomposition, adaptive differencing, and iterative solution methods. Ph.D. thesis, The Pennsylvania State University.

Sjoden, G. E., Haghighat, A., 2004. PENTRAN: Parallel Environment Neutral-particle TRANsport Sn in 3-D Cartesian Geometry, User's Guide to Version 9.30c. H&S Advanced Computing Technology.


Thomas, J. T., 1973. Critical three-dimensional arrays of U(93.2) metal cylinders. Nucl. Sci. Eng. 52, 350.

Ueki, T., 2005. Information theory and undersampling diagnostics for Monte Carlo simulation of nuclear criticality. Nucl. Sci. Eng. 151, 283-292.

Ueki, T., Brown, F. B., 2002. Stationarity diagnostics using Shannon entropy in Monte Carlo criticality calculation I: F test. In: Trans. Am. Nucl. Soc.

Ueki, T., Brown, F. B., 2003. Informatics approach to stationarity diagnostics of the Monte Carlo fission source distribution. In: Trans. Am. Nucl. Soc.

Ueki, T., Brown, F. B., Parsons, K. D., Kornreich, D. E., 2003. Autocorrelation and dominance ratio in Monte Carlo criticality calculations. Nucl. Sci. Eng. 145, 279-290.

Ueki, T., Mori, T., Nakagawa, M., 1997. Error estimations and their biases in Monte Carlo calculations. Nucl. Sci. Eng. 125, 1.

Urbatsch, T. J., 1995. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations. Ph.D. thesis, Los Alamos National Laboratory (University of Michigan).

Wagner, J. C., 1997. Acceleration of Monte Carlo shielding calculations with an automated variance reduction technique and parallel processing. Ph.D. thesis, The Pennsylvania State University.

Wagner, J. C., Blakeman, E. D., Peplow, D. E., 2007. Forward-weighted CADIS method for global variance reduction. In: Trans. Am. Nucl. Soc.

Wagner, J. C., Haghighat, A., 1998. Automated variance reduction of Monte Carlo shielding calculations using the discrete ordinates adjoint function. Nucl. Sci. Eng. 128, 186-208.

Wenner, M., Haghighat, A., 2007. Study of methods of stationarity detection for Monte Carlo criticality analysis with KENOV.a. In: Trans. Am. Nucl. Soc.

Wenner, M., Haghighat, A., 2008. A generalized KPSS test for stationarity detection in Monte Carlo eigenvalue problems. In: Proc. PHYSOR 2008.

Wenner, M. T., Haghighat, A., Gardner, S., 2000. Source initialization of the Monte Carlo criticality calculation via discrete ordinates (Sn) methods. In: Trans. Am. Nucl. Soc.

Whitesides, G. E., 1971. A difficulty in computing the k-effective of the world. In: Trans. Am. Nucl. Soc. Vol. 14. p. 680.


X-5 Monte Carlo Team, 2003. MCNP - A General Monte Carlo N-Particle Transport Code, Version 5, Volume II: User's Guide. Los Alamos National Laboratory.

Yamamoto, T., Miyoshi, Y., 2004. Reliable method for fission source convergence of Monte Carlo criticality calculation with Wielandt's method. Journal of Nuclear Science and Technology 41 (2), 99-107.

Yang, J., Naito, Y., 2004. The sandwich method for determining source convergence in Monte Carlo calculation. Journal of Nuclear Science and Technology 41 (5), 559-568.


BIOGRAPHICAL SKETCH

Michael Todd Wenner was born in 1978 in Wilkes-Barre, Pennsylvania. He grew up in St. Johns, Pennsylvania, the youngest of four children in a blended family. Michael graduated from Hazleton Area High School in 1996 and enrolled at The Pennsylvania State University. Four years later, in May 2000, Michael graduated as the College of Engineering Nuclear Engineering Student Marshal and continued his education at Penn State, receiving his master's degree in August 2003. Before graduation, Michael had already enrolled at the University of Florida, and he finished his studies at Penn State while at the University of Florida, where he continued his education in its Ph.D. program. While at the University of Florida, Michael interned at Oak Ridge National Laboratory in Oak Ridge, TN, in 2005. He also served as a consultant for NUCSAFE, Inc. in Oak Ridge, TN, was an instructor for reactor physics related continuing-education training classes for power plant operators at the Crystal River Nuclear Training Center in 2007, and was a co-instructor for an undergraduate class during the spring 2009 semester. Michael left the University of Florida on a full-time basis in the spring of 2010, moved to western Pennsylvania with Jennifer Fiddner, and took a position at Westinghouse Electric Company, where he currently works in the Research and Technology Unit.