
Permanent Link: http://ufdc.ufl.edu/UF00054387/00001
 Material Information
Title: Random noise techniques in nuclear reactor systems
Physical Description: x, 490 p. : illus. ; 24 cm.
Language: English
Creator: Uhrig, Robert E., 1928-
U.S. Atomic Energy Commission
Publisher: Ronald Press
Place of Publication: New York
Publication Date: 1970
Subject: Nuclear reactors -- Noise   ( lcsh )
Genre: bibliography   ( marcgt )
non-fiction   ( marcgt )
Bibliography: Includes bibliographies.
Statement of Responsibility: by Robert E. Uhrig.
General Note: "Prepared under the auspices of the United States Atomic Energy Commission."
 Record Information
Bibliographic ID: UF00054387
Volume ID: VID00001
Source Institution: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: oclc - 00086614
lccn - 71110558









ROBERT E. UHRIG, Ph.D., Iowa State University, is Dean
of the College of Engineering and Director of the Engineer-
ing and Industrial Experiment Station at the University of
Florida, where he has served as Chairman of the Depart-
ment of Nuclear Engineering. Dr. Uhrig previously held
academic positions at Iowa State University and the United
States Military Academy. As Deputy Assistant Director
for Research for the Department of Defense (1967), he was
concerned with the management of the fundamental
research program in the physical sciences and engineering.





Prepared under the auspices of the
United States Atomic Energy Commission


Copyright © 1970

All Rights Reserved

No part of this book may be reproduced in
any form without permission in writing from
the publisher.

This copyright has been assigned to and is
held by the General Manager of the United
States Atomic Energy Commission. All
royalties from the sale of this book accrue
to the United States Government.


Library of Congress Catalog Card Number: 71-110558

To Paula


This book is an outgrowth of several years' experience in teaching a
course dealing with the application of random noise theory to nuclear
reactor systems, as well as research and industrial consulting work in the
field. It is designed to serve the dual purpose of supplementing a course
in nuclear reactor noise at the first-year graduate level and as a reference
book for practicing engineers and scientists interested in applying random
noise techniques to nuclear reactor systems. The first five chapters
provide background for those not familiar with random noise theory. It
is presumed that the reader has a working knowledge of nuclear reactor
theory and transform mathematics.
There has been a deliberate attempt to make the book as self-contained
as possible. Much of the material has been drawn from scattered sources
dealing with both the science and the technology of the field. An effort
has also been made to integrate the wide spectrum of subjects that are
important to persons working in the area of nuclear reactor noise, but the
problem of retaining the conventional nomenclature that exists in the
various fields and of avoiding confusing duplication could not be com-
pletely resolved.
The random fluctuations of the measured variables of the system
(fission rate, temperature, pressure, flow rate, displacement, etc.) are
related to the dynamic characteristics of the nuclear systems. However,
only Chapters 3, 5, and 11 deal specifically with nuclear processes. Most
of the material in the other nine chapters is concerned with basic relations
of random noise theory and the techniques and instrumentation for
acquisition, transmission, recording, and processing of data from random
noise experiments. Hence, it has application to a broad range of physical
and engineering systems, and it is my hope that the material in this book
will also be useful to persons working in such fields as random vibration,
oceanography, medicine, communications, and information sciences.
In preparing this manuscript I have drawn from many sources, includ-
ing the Proceedings of the 1963 and 1966 University of Florida Symposia
dealing with nuclear reactor noise, Noise Analysis in Nuclear Systems and
Neutron Noise, Waves, and Pulse Propagation (USAEC Reports TID-7679
and CONF-660206, respectively), and have made a conscientious effort
to give proper credit for all such material. It has been my privilege to


know most of the scientists and engineers working in nuclear noise, and
I sincerely regret that it has not been possible to include all of the excellent
work that has been carried out in this interesting and stimulating field.
I am indebted to a large number of people who contributed generously
of their time in reviewing and discussing the manuscript. Appreciation
is expressed to those who served as reviewers of the manuscript in draft
form, specifically to Robert Albrecht, Alan Jacobs, G. Robert Keepin,
Edward Kenney, M. N. Moore, Philip Pluta, Andrew Sage, M. A. Schultz,
and Joseph Thie. Special recognition should be given to Nicola Pacilio,
who provided original material for Chapter 3 and reviewed it in the final
form; to Bruno Bars, who devoted much time to an extensive review and
criticism of the manuscript; and to Robert Albrecht and James Sheff for
the original developments presented in Chapter 5. I am also indebted
to Julius S. Bendat not only for his review of the manuscript but also for
many helpful discussions about the original techniques developed by him
and his associates at the Measurement Analysis Corporation.
The Atomic Energy Commission supported the preparation of the
manuscript, and I am indeed grateful to the AEC, particularly personnel
of the Division of Technical Information: James D. Cape initiated
preparation of the book, and John Inglima and Robert F. Pigeon admin-
istered its preparation. Editorial work was done by Charles Carroll,
Jean Smith, and Margaret Givens of the Division of Technical Informa-
tion Extension, Oak Ridge; their meticulous care has contributed very
significantly to the internal consistency and readability of the manu-
script. Credit is also due members of the Graphics Art Branch who are
responsible for the excellence of the art work.
The herculean task of typing this manuscript, parts of it as many as
four times, was ably carried out by Joan Boley. Finally, I am particularly
appreciative of the understanding of my wife, Paula, during the writing
and preparation of the manuscript, without which this book would not
have been possible.
Gainesville, Florida
April, 1970


1 Introduction 3
1-1 Random Processes in Nuclear Reactor Systems, 3
1-2 Motivation for Random Noise Techniques in Measurements on
Nuclear Reactors, 5
1-3 Random Processes and Variables, 7
1-4 Stationary and Ergodic Processes, 9

2 Statistics for Random Noise Analysis 13
2-1 Introduction, 13
2-2 Elementary Probability Theory, 13
2-3 Mean Value, Variance, and Standard Deviation, 17
2-4 Probability, Probability Density, and Probability
Distribution Functions, 19
2-5 Average Values and Probability Moments, 24
2-6 Probability Distributions in Radioactive Decay, 26
2-7 Special Probability Densities and Distributions, 32
2-8 Parameter Estimation, 41
2-9 Correlation Functions, 45

3 Neutron-Counting Techniques in Nuclear Reactor Systems 50
3-1 Introduction, 50
3-2 Probability Distribution of Fission Neutrons, 51
3-3 Rossi-Alpha Technique, 54
3-4 Variance-to-Mean (Feynman) Method, 60
3-5 Bennett Variance Method, 65
3-6 Count Probability Methods, 66
3-7 Interval Distribution (Babala) Method, 69
3-8 Dead-Time (Srinivasan) Method, 73
3-9 Correlation Analysis Techniques, 74
3-10 Covariance Measurements, 76
3-11 Endogenous-Pulsed-Source Technique, 78

4 Basic Relations of Random Noise Theory 83
4-1 Introduction, 83
4-2 Autocorrelation Function, 83



4-3 Autocovariance Function, 85
4-4 Power Spectral Density, 86
4-5 Special Autocorrelation Functions and Power
Spectral Densities, 89
4-6 Cross-Correlation Function, 96
4-7 Cross-Covariance Function, 98
4-8 Cross Spectral Density, 99
4-9 Input-Output Relations, 100
4-10 Practical Considerations, 104
4-11 One-Sided Spectral Densities, 105
4-12 Influence of Mean Value on Correlation Functions
and Spectral Densities, 110
4-13 Coherence Functions, 113
4-14 Two-Detector Correlation and Spectral-Density
Measurements, 114
4-15 Multiple-Input Linear Systems, 119

5 Reactor Noise Theory 130

5-1 Introduction, 130
5-2 Noise-Equivalent Source, 130
5-3 Langevin Procedure: Lumped-Parameter Model, 134
5-4 Space-Dependent Reactor Noise, 137
5-5 Space-Dependent Noise in an Infinite Medium, 144
5-6 Effect of Boundaries on Correlation, 151
5-7 Space-Dependent Noise in an Unreflected Parallelepiped, 153
5-8 Conclusions, 158

6 Noise Measurement Techniques 161

6-1 Introduction, 161
6-2 Correlation Measurements, 163
6-3 Spectral-Density Measurements, 165
6-4 Measurement of Transfer Functions, 168
6-5 Direct Harmonic Analysis, 170
6-6 Finite Length of Record, 173
6-7 Lag and Spectral Windows, 175
6-8 Spectral-Density Analyses, 178
6-9 Statistical Degrees of Freedom, 180
6-10 Influence of Uncorrelated Noise on Transfer-Function
Measurements, 184
6-11 Precision of Transfer-Function Measurements, 188

7 Noise Instrumentation and Measurement Techniques 197

7-1 Instrumentation for Reactor-Noise Measurements, 197
7-2 Analog-Computer Techniques for Continuous Data Analysis, 198


7-3 Probability Density Measurement, 204
7-4 Measurement of Correlation Functions, 206
7-5 Spectral-Density Measurements, 208
7-6 Filtering Techniques in Spectral-Density Measurements, 213
7-7 Spacing of Spectral-Density Estimates, 223
7-8 Periodic Data Analysis, 225
7-9 Transient Spectrum Analysis, 227

8 Acquisition, Transmission, and Recording of Data 231

8-1 Introduction, 231
8-2 Acquisition of Data, 232
8-3 Measurement Transducers, 234
8-4 Data Transmission, 244
8-5 Analog Data Recording, 257
8-6 Analog-to-Digital Conversion, 265
8-7 Multiplexing: Time Sharing of Equipment, 272
8-8 Digital Data-Acquisition Systems, 273

9 Pseudorandom Noise Techniques 277

9-1 Introduction, 277
9-2 Input Variables for Cross Correlation, 279
9-3 Maximum-Length Linear-Shift-Register Sequence
(m Sequence), 286
9-4 Residue of the Square Pseudorandom Variable, 300
9-5 Multifrequency Binary Input Signals, 302
9-6 Inverse-Repeat Pseudorandom Binary Variable, 305
9-7 Use of Pseudorandom Variables as a Substitute
for Random Noise, 307
9-8 Cross Correlation with Pseudorandom Binary Signals, 311
9-9 Use of Pseudorandom Ternary Variables
for Nonlinear Systems, 318

10 Digital Processing of Data 324

10-1 Introduction, 324
10-2 Trend Removal, 324
10-3 Digital Processing of Periodic Data, 327
10-4 Digital Filtering, 330
10-5 Statistical Analysis, 345
10-6 Fourier-Series Representation (Classical Procedure), 349
10-7 Correlation Functions and Spectral Densities, 350
10-8 Transfer Functions and Coherence Functions, 355
10-9 Parameter Selection for Reactor Tests, 356
10-10 Fast Fourier Transforms, 359


11 Experimental Noise Measurements in Nuclear Reactor Systems 365
11-1 Introduction, 365
11-2 Neutron-Pulse Counting Experiments, 365
11-3 Noise Measurements in Critical Reactors, 383
11-4 Reactivity Measurements, 394
11-5 Noise Measurements in Power Reactors, 418

12 Special Noise Techniques and Applications in Nuclear Systems 441
12-1 Introduction, 441
12-2 Optical Demonstration of Correlation, 441
12-3 Reactor Noise Analysis Using Polarity Correlation, 443
12-4 In-Core Flow-Velocity and Vibration Measurements
Made by Use of Electrodes and Cross Correlation, 447
12-5 Use of Exponential Cosine Autocorrelation Functions
in Processing Nuclear-System Test Data, 449
12-6 Noise Analysis of Nuclear Reactors by Use
of Gamma Radiation, 451
12-7 Acoustical Noise Measurements in Nuclear Reactors, 456
12-8 Pseudorandom Noise Measurement of Neutron Cross Sections, 457

Appendix: Deterministic Variables 465

Index 475




The first experiment in which energy was released from nuclear fission
in a controlled self-sustaining chain reaction was achieved by Enrico Fermi
and his associates in December 1942 in a squash court beneath Stagg
Field at the University of Chicago with a "nuclear pile" of graphite
and uranium. Fermi had predicted that this particular configuration of
materials would achieve criticality from calculations based on proba-
bilities (called neutron cross sections) of interaction between neutrons
and constituent materials. The probabilities of the various types of
interaction, i.e., scattering, radiative capture, and fission, had been
measured in a series of experiments carried out in the preceding months.
Hence, even in the very earliest days of the nuclear era, the essentially
probabilistic nature of the fundamental processes involved was recognized
and used effectively in the calculations and experiments that led to the
world's first self-sustaining nuclear chain reaction.
In the approach to criticality of a nuclear-reactor system, the fluctua-
tions that take place in the power level can be observed from recordings
of the output of the neutron-detector instrumentation system. In a
typical subcritical reactor, an artificial neutron source randomly supplies
the neutrons necessary to initiate the chain reaction. Often this source
consists of plutonium and beryllium or polonium and beryllium in which
the decay of the alpha-emitting polonium or plutonium occurs randomly;
i.e., each disintegration is an event that is not dependent on the preceding
or following disintegrations. Hence the neutrons produced by the
alpha-neutron reactions are generated randomly. Although we often
speak in terms of the average number of neutrons emitted per unit time
from such a source, the number emitted in successive time intervals is a
randomly varying quantity. These neutrons travel throughout the
nuclear system where various types of interactions take place. Typically,
a fission neutron undergoes a number of scattering collisions with
moderating or coolant materials before eventually being absorbed or


escaping from the reactor. Each step in the life of a neutron (which is
strongly influenced by the amount, nuclear cross section, and geometrical
arrangement of the materials present) can be dealt with in a probabilistic
manner. In cases where nuclear fission occurs, the number of neutrons
released is again a probabilistic quantity, varying between zero and six,
with a mean value of about 2.5 for 235U fission.
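The statistical character of the source described above can be illustrated with a small numerical sketch. The parameters here (a mean emission rate of 100 neutrons per counting interval, 1000 intervals) are hypothetical, chosen only to show that successive counts from a randomly emitting source fluctuate about the mean, with the variance of the counts approximately equal to the mean, as expected for a Poisson process:

```python
import math
import random

random.seed(1)

def poisson(mean):
    # Knuth's method for sampling a Poisson-distributed random variable:
    # multiply uniform variates until the product falls below exp(-mean).
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Counts registered in 1000 successive (hypothetical) intervals from a
# source emitting, on average, 100 neutrons per interval. Each interval
# is independent of the preceding and following ones.
counts = [poisson(100.0) for _ in range(1000)]

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, var)  # variance is approximately equal to the mean
```

The near equality of sample mean and sample variance is the signature of independent random emission that the counting techniques of Chapter 3 exploit.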
Although microscopic details of the interactions govern the behavior of
a nuclear reactor system, practical observations are usually made on a
macroscopic basis. When we view the processes taking place in a
subcritical nuclear reactor from the macroscopic viewpoint, we find that
the system is being disturbed by a random phenomenon, the emission of
individual neutrons from the extraneous neutron source. Such neutrons
may start a very long fission chain, but ultimately the chain must die out
if the reactor is subcritical. However, each of the chains initiated by
external neutrons contributes to the neutron population in the reactor,
which is directly related to the power level. With a large number of
individual chain reactions, each initiated by the independently emitted
neutrons from the extraneous source, taking place simultaneously in the
reactor, it is clear that the neutron population is going to increase and
decrease in a stochastic, or random, manner. When the reactor is
highly subcritical, the neutron chains are quite short and the fluctuations
are relatively small. However, as the reactor approaches criticality, the
chains increase in average length. For instance, when the reactor
reaches an effective multiplication factor of 0.98, each neutron injected
into the system from an outside extraneous source generates, on the
average, 50 additional neutrons before the chain dies out. In a typical
uranium system, this means that about 20 fissions are required to produce
these 50 neutrons. Since some chains are relatively short, it is necessary
that others be very long to sustain this average.
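The figures quoted above follow from the standard subcritical-multiplication relation: a source neutron produces, on average, k/(1 - k) additional neutrons before its chain dies out. A quick check with the numbers in the text:

```python
# Subcritical multiplication check, using the values given in the text.
k = 0.98            # effective multiplication factor
nu = 2.5            # mean neutrons per 235U fission

# Additional neutrons generated, on average, per injected source neutron
# before the chain dies out:
extra_neutrons = k / (1.0 - k)      # about 50

# Fissions required to produce those neutrons:
fissions = extra_neutrons / nu      # about 20

print(round(extra_neutrons), round(fissions))  # prints 49 20
```

This reproduces the "about 50 additional neutrons" and "about 20 fissions" cited in the text.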
In many situations the geometrical arrangement, amount, or effective
cross section of the materials will be changed as a function of time, and
this change will result in a variation of the neutron population. The
classical pile oscillator experiment is an example in which the neutron
absorber in the rotor is moved from one position to another in a prescribed
manner, thereby changing the geometrical arrangement of the neutron
absorber material, as well as its effective cross section due to self-shielding
effects. In such a case the input (movement of material and change of
effective cross section) may not necessarily be random. In fact, it often
is made to be deterministic; typically, reactivity is changed in a sinusoidal
manner, at least to a first approximation, by the movement of one
absorber with respect to another. Hence the input perturbation is
periodic and deterministic rather than random. However, the output
may be influenced so strongly by the statistical processes in the reactor


that the deterministic (sinusoidal in this case) component may be
completely obscured by the random component.
The dynamic characteristics of a system can be studied by an analysis
of its output variables as a function of its input variables and time. In
some situations the phenomena involved may be random in the sense
that the observed fluctuations arise as a result of internal or external
random stimulation that cannot be controlled. In others the phenome-
non itself involves probabilities that cause the observed variables to
fluctuate in a randomlike manner, even though the input is deterministic.
Often an experimenter may externally stimulate a system with either a
random or a deterministic input to produce a variation of the output
that can be analyzed alone or in conjunction with the input. In many
situations more than one of these conditions may exist simultaneously;
e.g., a subcritical nuclear system may be stimulated by both an internal
indigenous neutron source and a neutron generator whose output is
controlled in a programmed manner.


The reasons for utilizing random noise techniques in measurements on
nuclear reactor systems may be one or more of the following:

1. To measure the dynamic behavior or monitor the status of a nuclear
system with a minimum of perturbation or interference with normal
operation.
2. To take advantage of the naturally occurring fluctuations of neutron
population to evaluate system parameters.
3. To utilize special techniques or special equipment that facilitate the
experiment and/or its data acquisition and processing.
4. To better describe and explain the nature of the phenomena producing
the fluctuations.
5. To use the theory of fluctuation to evaluate the errors in measurements.

The author does not view random noise techniques as a panacea for
all reactor-dynamics investigations. Rather, noise techniques supple-
ment the classical dynamic procedures such as reactor oscillator experi-
ments, burst-type excursions, pulsed neutron experiments, and other
more-or-less conventional procedures used in measuring parameters of
nuclear systems.
1-2.1 Microscopic Noise Techniques. Noise studies in nuclear
reactor systems can be carried out either on the microscopic level or on
the macroscopic level. On the microscopic level the occurrence of
counts in a detector, triggered by the individual chains that occur in a


nuclear reactor, is studied by statistical techniques. The early theoretical
work in this field was carried out by Feynman,1,2 Fermi,2 and de
Hoffman,1-4 at Los Alamos about 1947 and led to the Rossi-alpha
experiments on fast critical assemblies described later by Orndoff.5
Various other microscopic techniques have been developed by Feynman,1,2
Mogilner and Zolotukhin,6 Bennett,7 Pal,8,9 Pacilio,10 and others.11-14
Several of these techniques involve describing a statistical distribution
and its deviation from a Gaussian distribution, and others deal directly
with the probabilities of detecting an event. In all cases the nature of
the mathematical treatment is influenced by the type of equipment used
for the measurement and by the fact that a detected event involves the
removal of a neutron from the system.
1-2.2 Macroscopic Noise Measurements. The macroscopic
approach to reactor noise measurements was introduced by Moore15,16
and verified experimentally by Cohn17,18 about 10 years after microscopic
noise work was initiated. The Langevin formulation of reactor noise by
Moore15 is based on early work in Brownian motion in which the noise
in a system is considered to be the response of the system to a random, or
stochastic, driving function; i.e., the noise is the response of the system to
an input representing the statistical nature of the underlying process.
If the dynamic characteristics of the system are known, it is possible to
relate the correlation or spectral-density measurements (both defined
later) to the parameters of the system.
The driving functions may be random fluctuations either in one of the
variables or in one of the parameters of the system. For example, the
driving function that produces fluctuations in the neutron density in a
subcritical nuclear reactor may be the fluctuations in the rate at which
neutrons are emitted from an extraneous neutron source, one of the
variables of the system. On the other hand, the driving force that
produces fluctuations in a zero-power reactor may be the variations in
the delayed-neutron fractions, the number of neutrons released per
fission, and the effective neutron lifetime, all of which are parameters of
the system. Many driving functions of both types may be present in a
particular system, and all must be taken into account. However, in
many practical situations one or two particular driving functions may
predominate and all others can be neglected. For instance, a variable,
such as reactivity, can be deliberately perturbed in a random manner with
a root-mean-square amplitude 10 to 100 times as large as that of the next
most significant driving function.
The usefulness of both these techniques is dependent on a good under-
standing of the dynamic behavior of the system under investigation;
i.e., the processes involved can be adequately represented by mathe-
matical models. The degree of sophistication of the model will vary


from case to case, depending on the goals of the investigator. Often a
transfer function based on a lumped-parameter one-speed-neutron
representation in which delayed neutrons are ignored is adequate. In
other cases the use of a model based on the three-dimensional time-
dependent Boltzmann neutron-transport equation, approximated in 100
regions and 30 energy groups, is not adequate. Feedback effects and
other nonlinearities may be introduced into the mathematical model and
linearized since the root-mean-square amplitude of the random noise
driving functions is usually small enough that linear approximations
are justifiable.
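The Langevin picture sketched above can be illustrated numerically. Assuming (purely for illustration) a lumped-parameter, one-speed model without delayed neutrons, the reactor acts as a first-order filter with prompt decay constant α = (1 − k)/Λ, so a white driving function emerges with a Lorentzian output spectrum; the numerical values of Λ, k, and S0 below are assumptions, not values from the text:

```python
# Lorentzian noise spectrum from a first-order reactor transfer function.
# H(jw) = 1 / (Lambda * (jw + alpha)),  alpha = (1 - k) / Lambda.
# Output PSD = |H(jw)|^2 * S0 for a white driving function of level S0.
Lambda_ = 1.0e-4   # neutron generation time, s (assumed for illustration)
k = 0.98           # effective multiplication factor
S0 = 1.0           # white driving-function spectral density (arbitrary units)

alpha = (1.0 - k) / Lambda_   # prompt decay constant, 200 rad/s here

def output_psd(omega):
    """|H(jw)|^2 * S0: flat below alpha, falling as 1/w^2 above it."""
    return S0 / (Lambda_**2 * (omega**2 + alpha**2))

# At omega = alpha the spectrum has dropped to half its plateau value --
# the break frequency from which alpha can be read off a measured spectrum.
print(output_psd(alpha) / output_psd(0.0))  # 0.5
```

Fitting a measured spectral density to this Lorentzian shape and reading off the break frequency is, in essence, one standard way the decay constant is extracted from macroscopic noise data.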


Throughout this text we will be dealing with phenomena that occur in
nuclear systems. However, we can observe the behavior of the system
only by measuring certain of its "observables" (pressures, temperatures,
power level, etc.). These properties are measured with sensors or
transducers that convert the quantity being measured into a physical
quantity (electrical current, mechanical displacement, etc.). These can
be readily interpreted by the experimenter or recorded by his data
acquisition system; they are time variables that represent the phenome-
non being studied. Therefore it is appropriate to refer to the inputs and
outputs of a system as being variables and to designate them as random
or deterministic in accordance with their nature. In general, phenomena
are classified as random if their behavior can be described only in terms
of statistical quantities.
Let us consider the time-history records of a system (such as the power
of a base-load nuclear power generating plant in a metropolitan area with
a complex industrial and domestic load) as shown in Fig. 1-1. These
individual records (from which the steady components have been
removed) might represent the load pattern for several (not necessarily
consecutive) 24-hr periods. Such an array of records is called an
ensemble, and each time history is called a sample record. The collection
of all possible sample records produced by the random phenomenon under
consideration is called a stochastic process. The term "process," in a
strict sense, means a collection of sample records sufficiently large to
unambiguously establish the statistical properties of the quantity being
measured.
Such an ensemble of records as that shown in Fig. 1-1 can be obtained
by taking many individual measurements or by dividing a single record
into an arbitrary number of pieces. When the latter procedure is used,
there is, for most practical purposes, little difference in meaning between
the terms "process" and "variable." In this text, "process" will be


[Figure: sample records x1(t), x2(t), . . . , xi(t), . . . , xN(t) plotted as
functions of time, with observation times t1, t2, . . . , ti, . . . , tN marked
on the common time axis.]
Fig. 1-1. An ensemble of time records.

used when an ensemble of sample records is involved. Since a large part
of random noise theory is based on the assumption of ergodicity or, at
least, stationarity (both defined later), which can be shown only if an
ensemble of sample records is available, the term "process" is more prop-
erly used. However, in practical situations it is usually necessary to
proceed with the analysis of data with assurance only of self-stationarity,
which involves only a single sample record. Hence the term "variable"
can also be properly applied to this situation. An effort is made to
retain the distinction between these two terms throughout this text,
although there are situations where the choice of the term to use is
completely arbitrary.
The classification of variables and processes as being either deter-
ministic or random is generally straightforward. If the variable is


reproducible or its future behavior predictable (i.e., if it can be repre-
sented with reasonable accuracy by explicit mathematical relations),
it is classified as deterministic. For example, the reactivity of a nuclear
reactor with a sinusoidal pile oscillator in operation is a variable that can
be described mathematically as a function of time. On the other hand,
the position of an individual neutron as it moves throughout its lifetime
inside a reactor is not predictable and therefore must be classified as a
random variable. At best, we can evaluate the average distance
traveled by all the neutrons in the reactor. In general, the future
behavior of random variables is described only in terms of probabilities
and statistical quantities rather than by explicit mathematical relations.
If one were to take an extreme position, he might argue that there is no
such thing as a deterministic variable; i.e., on a "microscopic-enough"
scale, every phenomenon yields observables that must be classified as
random variables. It can also be argued that many random variables
could be described by a mathematical relation and their future behavior
predicted if the phenomena involved were sufficiently well understood.
While conceding the possibilities of these extreme interpretations, we
can readily differentiate between deterministic and random variables in
most practical situations. For situations in which this differentiation is
not possible, methods of mathematical determination are described later.


1-4.1 Stationarity. A function, x(t), is said to be a random variable
if its value at any instant of time can be described only in terms of its
statistical properties. The principal classification of random variables
is that of stationarity or of nonstationarity. A random variable is said
to be stationary if its statistical characteristics do not change with time.
An assumption of stationarity is usually justified for systems in which
the basic mechanisms giving rise to the fluctuations are invariant over a
reasonable period of time.
The matter of the particular statistical properties that must remain
constant as a function of time to demonstrate stationarity in a process
or variable is an integral part of the definition of the stationary process or
variable. Some authors have (erroneously) indicated that it is sufficient
to determine that the ensemble mean and ensemble mean square value
remain constant as a function of time to establish stationarity. Others
(e.g., Bendat and Piersol19) contend that it is necessary to show that the
ensemble mean value and the autocorrelation function (of which, as we
will see later, the mean square value is a special case) must be constant as
a function of time to demonstrate weak stationarity, or stationarity in a
general sense. They further indicate that an infinite collection of higher


order moments and joint moments for the random process is necessary
to establish the complete family of probability distribution functions
describing the process and that, for the special case where all possible
moments and joint moments are time invariant, the random process can
be said to be strongly stationary, or stationary in a strict sense. They
also indicate, however, that for many practical applications verification
of weak stationarity justifies an assumption of strong stationarity.
Clearly it is not practical to demonstrate strong stationarity even under
the most ideal situations. Even demonstrating weak stationarity is
difficult; therefore a range of values must be established for the mean
value and mean square value, or autocorrelation function, which will be
acceptable for a finite-length record.
1-4.2 Ergodic Processes. All stationary processes can be further
defined as being ergodic or nonergodic. This property can be demon-
strated by referring to the ensemble of records in Fig. 1-1. Let us
consider an ensemble average of the array of records at any given time,
t1. The ensemble average, denoted ⟨x(t1)⟩, is calculated by

    \langle x(t_1) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x_i(t_1) \qquad (1-1)
Ensemble averages at other times (t2, t3, t4, etc.) can be calculated in a
similar manner. If the process is stationary, each of these ensemble
averages should be the same; i.e., the ensemble averages remain constant
regardless of time.
Now let us consider the time average of a single sample record, xi(t).

    \overline{x_i(t)} = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x_i(t)\, dt \qquad (1-2)

If the sample records of Fig. 1-1 are of the same stationary process, the
time averages should be the same. If the process is ergodic, the common
value for the ensemble average at any time (x(t)) must be equal to the
common value for the time average of any record xi(t). Again, obtaining
identical numerical values in a given experimental situation is impossible,
and therefore acceptable tolerances for these values must be established.
It is also necessary that the autocorrelation function and other properties
based on time averages be equal to the corresponding characteristics
based on ensemble averages.
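These two kinds of averaging are easy to compare numerically. The sketch below builds a synthetic stationary ensemble (a random-phase sinusoid plus noise, a process assumed only for illustration) and checks that the ensemble average of Eq. 1-1 and the time average of Eq. 1-2 estimate the same mean:

```python
import math
import random

# Ensemble average (Eq. 1-1) vs. time average (Eq. 1-2) for a synthetic
# ergodic process: a sinusoid with a random phase per record, plus noise.
random.seed(1)
N, T = 1000, 1000   # number of sample records, samples per record

def sample_record():
    phase = random.uniform(0.0, 2.0 * math.pi)
    return [math.sin(0.3 * t + phase) + random.gauss(0.0, 0.1)
            for t in range(T)]

ensemble = [sample_record() for _ in range(N)]

t1 = 10
ensemble_avg = sum(rec[t1] for rec in ensemble) / N   # <x(t1)>, Eq. 1-1
time_avg = sum(ensemble[0]) / T                       # time average, Eq. 1-2

# For this ergodic process both estimates lie near the true mean of zero,
# differing only by statistical sampling error.
print(abs(ensemble_avg) < 0.15 and abs(time_avg) < 0.15)
```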
Ergodic random processes are clearly an important class since all their
properties can be determined by performing the time average over a
single record. Fortunately, in actual practice, random variables
representing stationary physical phenomena are often ergodic. For this


reason the properties of the stationary random phenomena can be
measured quite satisfactorily from a single observed time-sample record.
1-4.3 Self-Stationarity. Individual time records of a random varia-
ble are sometimes said to be stationary. This means that the properties
computed over short intervals of time within a single time record do not
vary significantly from one interval to the next. However, these varia-
tions are usually no greater than would normally be expected owing to
the normal statistical sampling variation. This type of stationarity is
sometimes called self-stationarity to avoid confusion with the more-
classical definition.
The sample record obtained from an ergodic random process is self-
stationary. Furthermore, sample records for most physically interesting
nonstationary random processes are self-nonstationary. Bendat and
Piersol19 have indicated that if an ergodic assumption is justified, as it
is for most stationary physical phenomena, verification of self-station-
arity for a single sample record effectively justifies an assumption of
ergodicity for a random process from which the sample record is obtained.
Therefore we will proceed with the development of a theory which,
strictly speaking, is valid only for ergodic processes but which can
be applied to variables and processes that have been shown to be
self-stationary.
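A minimal self-stationarity check of the kind just described can be sketched as follows (the record, the number of intervals, and the acceptance tolerances are all assumptions for illustration, not values from the text):

```python
import random

# Self-stationarity test: divide a single record into intervals and verify
# that the interval means and mean-square values stay within a tolerance.
random.seed(7)
record = [random.gauss(0.0, 1.0) for _ in range(8000)]  # stationary by construction

def interval_stats(x, pieces):
    n = len(x) // pieces
    chunks = [x[i * n:(i + 1) * n] for i in range(pieces)]
    means = [sum(c) / n for c in chunks]
    mean_squares = [sum(v * v for v in c) / n for c in chunks]
    return means, mean_squares

means, mean_squares = interval_stats(record, 8)
self_stationary = (max(means) - min(means) < 0.3 and
                   max(mean_squares) - min(mean_squares) < 0.3)
print(self_stationary)
```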


1. R. FEYNMAN, F. DE HOFFMAN, and R. SERBER, J. Nucl. Energy, 3: 64 (1956).
2. E. FERMI, R. P. FEYNMAN, and F. DE HOFFMAN, Theory of the Criticality of
the Water Boiler and the Determination of the Number of Delayed Neutrons,
USAEC Report MDDC-383(LADC-269), Los Alamos Scientific Laboratory,
December 1944.
3. F. DE HOFFMAN, Intensity Fluctuations of a Neutron Chain Reactor, USAEC
Report MDDC-382(LADC-256), Los Alamos Scientific Laboratory, October
4. F. DE HOFFMAN, Statistical Aspects of Pile Theory, in The Science and
Engineering of Nuclear Power, C.D. GOODMAN (Ed.), Vol. II, p. 116, Addison-
Wesley Publishing Company, Inc., Reading, Mass., 1949.
5. J. D. ORNDOFF, Prompt Neutron Periods of Metal Critical Assemblies, Nucl.
Sci. Eng., 2: 450-460 (1957).
6. A. I. MOGILNER and V. G. ZOLOTUKHIN, Measuring the Characteristics of
Kinetics of a Reactor by the Statistical p-Method, At. Energ. (USSR),
10(4): 377-379 (1961).
7. E. F. BENNETT, The Rice Formulation of Pile Noise, Nucl. Sci. Eng., 8: 53-61
8. L. PAL, Determination of the Prompt Neutron Period from the Fluctuations
of the Number of Neutrons in a Reactor, Central Research Institute of
Physics, Hungarian Academy of Sciences, Budapest, 1962.


9. L. PAL, Statistical Fluctuations of Neutron Multiplication, in Proceedings of
the Second United Nations International Conference on the Peaceful Uses of
Atomic Energy, Geneva, 1958, Vol. 16, p. 687, United Nations, New York,
10. N. PACILIO, Short Time Variance Method for Prompt Neutron Lifetime
Measurements, Nucl. Sci. Eng., 2: 266 (1965).
11. W. MATTHES, Statistical Fluctuations and Their Correlation in Reactor
Neutron Distribution, Nukleonik, 4: 213 (1962).
12. D. R. HARRIS, The Sampling Estimate of the Parameter Variance/Mean
in Reactor Fluctuation Measurements, USAEC Report WAPD-TM-157,
Westinghouse Electric Corp., Bettis Plant, August 1958.
13. D. H. BRYCE, Measurement of Reactivity and Power Through Neutron
Detection Probabilities, in Noise Analysis in Nuclear Systems, Gainesville,
Fla., Nov. 4-6, 1963, Robert E. Uhrig (Coordinator), AEC Symposium
Series, No. 4 (TID-7679), 1964.
14. A. FURUHASHI and S. IZUMI, A Proposal on Data Treatment in the Feynman
Alpha Experiment, J. Nucl. Sci. Tech. (Tokyo), 4: 99 (1967).
15. M. N. MOORE, The Determination of Reactor Transfer Functions from
Measurements at Steady Operation, Nucl. Sci. Eng., 3: 387-394 (1958).
16. M. N. MOORE, The Power Noise Transfer Function of a Reactor, Nucl. Sci.
Eng., 6: 448-452 (1959).
17. C. E. COHN, Determination of Reactor Kinetic Parameters by Pile Noise
Analysis, Nucl. Sci. Eng., 5: 331-335 (1959).
18. C. E. COHN, A Simplified Theory of Pile Noise, Nucl. Sci. Eng., 7: 472 (1960).
19. J. BENDAT and A. PIERSOL, Measurement and Analysis of Random Data,
John Wiley & Sons, Inc., New York, 1966.


Statistics for Random Noise Analysis

Random noise analysis has its basis in statistics; indeed, an under-
standing of the fundamental concepts of statistical techniques is essential
to the understanding of how random noise theory is used to analyze the
behavior of dynamic systems. Since there is extensive literature
available for statistics, this chapter includes only the concepts directly
related to random noise analysis and those needed to establish nomen-
clature for future work. No attempt is made to be rigorous in derivations.

2-2.1 Simple Probability. The simplest case of probability in which
all events are equally likely to occur is considered. If an event can
happen in n ways, of which m are favorable to the occurrence of a par-
ticular event, then the probability, p, of its occurrence in a single trial is

    p = \frac{m}{n} \qquad (2-1)
Elementary probability is often associated with the throwing of dice.
For instance, the probability that a four will appear when a die is thrown
is 1/6 since the total number of ways a die can fall is six and of these ways only one
is favorable to the occurrence of a four. It is, of course, presumed that
the die is not "loaded" and will fall in any one of the six possible ways
with equal probability. If this is not the case, the appropriate probability
must be determined experimentally. It is obvious that, if an event is
certain to happen, then the probability of its occurrence is unity; if the
event is certain not to happen, the probability is zero.
Events are said to be mutually exclusive if the occurrence of one of them
precludes the occurrence of the others. In throwing a die, the occurrence
of a four certainly precludes the occurrence of any other number. In
the case of n mutually exclusive events, the probability that any one of
m events will occur is m/n; i.e., the sum of the probabilities of the
individual events. The probability of throwing either a 2 or a 5 with a
die is 2/6, or 1/3, since the probability of either event is 1/6.
Events are said to be independent if the occurrence of one of them does
not influence the occurrence of the other. When two dice are thrown, the
occurrence of a particular number with one die does not influence the
number that comes up with the other die.
When events are independent, the probability of several of them
occurring as a group, i.e., the joint probability, is the product of the
probabilities of each event occurring independently. When two dice are
thrown, the probability of getting two fives is 1/6 × 1/6 = 1/36 since the two
events are independent. This procedure can be extended to evaluate the
probability of obtaining any given sum (between 2 and 12) when two
dice are thrown. For example, a sum of 6 can be obtained from the
following combinations: 5-1, 4-2, 3-3, 2-4, and 1-5. Since the occur-
rence of any one of these pairs would preclude the occurrence of the other
four combinations listed, these five possibilities are mutually exclusive,
and the probability of one of these combinations occurring in a single
throw is 5/36, i.e., the sum of the individual probabilities. The proba-
bility of each of the 11 possible sums occurring in a single throw of two
dice is tabulated in Table 2-1. Note the sum of all probabilities is unity
since one of the combinations must occur.
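The tabulation just described is easy to reproduce by enumerating the 36 equally likely outcomes (a sketch using exact fractions):

```python
from fractions import Fraction
from itertools import product

# Tally the probability of each sum of two dice over the 36 equally
# likely outcomes, reproducing Table 2-1.
p = {}
for a, b in product(range(1, 7), repeat=2):
    p[a + b] = p.get(a + b, Fraction(0)) + Fraction(1, 36)

print(p[6])             # 5/36, from the five combinations listed above
print(sum(p.values()))  # 1, since some sum must occur
```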

Table 2-1
Individual Probabilities

Sum of        Number of Ways
Two Dice      Sum Can Occur      Probability

    1               0                 0
    2               1                1/36
    3               2                2/36
    4               3                3/36
    5               4                4/36
    6               5                5/36
    7               6                6/36
    8               5                5/36
    9               4                4/36
   10               3                3/36
   11               2                2/36
   12               1                1/36


These data are presented in the form of a bar graph in Fig. 2-1, giving
a display of the probabilities. The height of each bar represents the
probability of that particular number resulting from the throwing of two
dice. The total length of all bars will be equal to unity since it is certain
that one of these sums will be obtained when two dice are thrown.
Curves such as those in Fig. 2-1 are called discrete probability curves
because the probability function p(xi) (i.e., the probability that any
particular quantity x will be xi) is plotted against the discrete variable x.
If one considers the probability that the sum of dice will be equal to or
less than a certain number, the result is the sum of the probabilities of all

[Figure: bar graph of the probability p(xi) vs. the sum xi = 2, 3, . . . , 12.]
Fig. 2-1. Probability curve for sum of two dice.

of the possibilities up to and including that number. For example, the
probability that the sum of two dice will be equal to or less than 5 is
1/36 + 2/36 + 3/36 + 4/36 = 10/36 (see Table 2-1), which is called the cumulative
probability. The cumulative probability for each possible result when two
dice are thrown is given in Table 2-2. The probability of obtaining one
as a sum is zero since it is impossible. The cumulative probability is
plotted vs. the sum of two dice in Fig. 2-2; this plot is called the probability
distribution curve. It is apparent that Fig. 2-2 is the integral of the curve
in Fig. 2-1. This relationship will be discussed later.
2-2.2 Conditional Probabilities. If events are not independent or
mutually exclusive, the joint probability of the various events is still the
product of the probabilities of the individual events provided that the cor-
rect probabilities are used. For instance, the probability of drawing two
aces in successive draws from a deck of cards is dependent on whether the
first card is replaced before the second card is drawn. Thus it is necessary
to introduce the concept of conditional probability, i.e., the probability of
event B happening if event A has occurred.


Table 2-2
Cumulative Probabilities

Sum of        Cumulative
Two Dice      Probability

    1               0
    2              1/36
    3              3/36
    4              6/36
    5             10/36
    6             15/36
    7             21/36
    8             26/36
    9             30/36
   10             33/36
   11             35/36
   12               1

Let us consider an experiment in which n mutually exclusive events can
occur of which mA are favorable to event A, mB are favorable to event B,
and mAB are favorable to the occurrence of events A and B. The
corresponding probabilities are p(A), p(B), and p(A,B), respectively.
Now let us define the conditional probability p(A|B), the probability

[Figure: staircase plot of the cumulative probability vs. the sum of two dice.]
Fig. 2-2. Probability distribution curve for sum of two dice.


that event A will occur if event B has occurred previously, as

    p(A|B) = \frac{m_{AB}}{m_B} = \frac{m_{AB}/n}{m_B/n} = \frac{p(A,B)}{p(B)} \qquad (2-2)

Similarly, the conditional probability p(B|A), the probability that
event B will occur if event A has occurred previously, is

    p(B|A) = \frac{m_{AB}}{m_A} = \frac{m_{AB}/n}{m_A/n} = \frac{p(A,B)}{p(A)} \qquad (2-3)

Therefore we can rearrange these equations to give

    p(A,B) = p(A)\, p(B|A) = p(B)\, p(A|B) \qquad (2-4)
i.e., the joint probability of events A and B occurring is the product of the
unconditional probability of the occurrence of one event and the con-
ditional probability that the other event will occur if the first event has
occurred previously.
Let us consider the original problem of drawing two aces from a deck
of cards in two successive drawings. We will define event A as the
occurrence of an ace in the first drawing and event B as the occurrence
of an ace in the second drawing.
Case 1: The two drawings are both from shuffled decks with all cards
in place, i.e., the two events are independent. Hence
p(A) = p(B) = p(A|B) = p(B|A) = 4/52 = 1/13
p(A,B) = p(A) p(B) = 1/13 × 1/13 = 1/169
Case 2: The second drawing is made from the same deck without
returning the first card. The probability of the first card being an ace is
the same as in case 1, i.e., p(A) = 1/13. If the first card is an ace, then
the probability of the second card being an ace is
p(B|A) = 3/51 = 1/17
since there are now 51 cards left, of which only 3 are aces. Therefore
p(A,B) = p(A) p(B|A) = 1/13 × 1/17 = 1/221
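Both cases follow directly from Eq. 2-4, as a short computation with exact fractions confirms:

```python
from fractions import Fraction

# Joint probability of drawing two aces, via Eq. 2-4:
# p(A,B) = p(A) * p(B|A).
p_A = Fraction(4, 52)            # first card is an ace

# Case 1: independent draws (card replaced, deck reshuffled)
case1 = p_A * Fraction(4, 52)    # = 1/169

# Case 2: second draw from the same deck, first card not returned
p_B_given_A = Fraction(3, 51)    # 3 aces remain among 51 cards
case2 = p_A * p_B_given_A        # = 1/221

print(case1, case2)  # 1/169 1/221
```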

Data from a test or a series of tests are often generalized in the sense
that they are considered typical of situations of a similar kind. However,
repeated tests do not give exactly the same results, and statistical methods
are needed to interpret the results.
There are two kinds of information in most sets of data: evidence of


uniformity and of variability. Uniformity is represented by the average
value or the root-mean-square value, and variability is usually represented
by an index of precision such as standard deviation or variance.
Let us consider the stationary process x(t) represented by the infinite
ensemble in Fig. 1-1. The average or mean value of the infinite ensemble
of records at time t1 is the average of the values x1(t1), x2(t1), x3(t1), . . . ,
xi(t1), . . . , xN(t1); i.e.,

    \langle x(t_1) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x_i(t_1) \qquad (2-5)
The mean-square value of the ensemble of records at time t1 is

    \langle x^2(t_1) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x_i^2(t_1) \qquad (2-6)
The variance of the ensemble of records at time t1 is given by
    \sigma_{x(t_1)}^2 = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} [x_i(t_1) - \langle x(t_1) \rangle]^2
        = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x_i^2(t_1) - 2 \langle x(t_1) \rangle \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x_i(t_1) + \langle x(t_1) \rangle^2 \qquad (2-7)

The first term is seen to be the mean-square value of the ensemble at
time t1 as defined by Eq. 2-6, and the other two terms involve the mean
value as defined by Eq. 2-5. Hence the variance is

    \sigma_{x(t_1)}^2 = \langle x^2(t_1) \rangle - 2 \langle x(t_1) \rangle^2 + \langle x(t_1) \rangle^2
        = \langle x^2(t_1) \rangle - \langle x(t_1) \rangle^2 \qquad (2-8)

and the standard deviation is

    \sigma_{x(t_1)} = [\langle x^2(t_1) \rangle - \langle x(t_1) \rangle^2]^{1/2} \qquad (2-9)

When the average value of the ensemble is equal to zero, the variance
and standard deviation are, respectively, the mean-square and root-
mean-square values, i.e.,

    \sigma_{x(t_1)}^2 = \langle x^2(t_1) \rangle \qquad (2-10)

and

    \sigma_{x(t_1)} = [\langle x^2(t_1) \rangle]^{1/2} \qquad (2-11)
Similar expressions can be written for all the statistical properties of the
ensemble of records at other times t2, t3, t4, . . . , tN. If the process is


stationary, the ensemble mean and mean-square values, indeed all the
statistical characteristics, of the ensemble will remain constant for all
values of time.
Now let us consider a single infinitely long sample record xi(t). The
temporal mean and mean-square values are given by

    \overline{x_i(t)} = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x_i(t)\, dt = \mu_{x_i} \qquad (2-12)

    \overline{x_i^2(t)} = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x_i^2(t)\, dt = \psi_{x_i}^2 \qquad (2-13)

respectively. The bar over the symbols indicates temporal averaging
over infinitely long records. The symbols \mu_{x_i} and \psi_{x_i}^2 are commonly used
to represent true temporal mean and mean-square values. By the same
procedure used for the ensemble properties, we can derive the temporal
values of the variance and standard deviation of the infinite record to
be, respectively,

    \sigma_{x_i}^2 = \overline{x_i^2(t)} - [\overline{x_i(t)}]^2 = \psi_{x_i}^2 - \mu_{x_i}^2 \qquad (2-14)

    \sigma_{x_i} = \{\overline{x_i^2(t)} - [\overline{x_i(t)}]^2\}^{1/2} = [\psi_{x_i}^2 - \mu_{x_i}^2]^{1/2} \qquad (2-15)

If the mean value is equal to zero, the variance and standard deviation
of the infinite record again become equal to the mean-square and root-
mean-square values, i.e., respectively,

    \sigma_{x_i}^2 = \overline{x_i^2(t)} = \psi_{x_i}^2 \qquad (2-16)

    \sigma_{x_i} = [\overline{x_i^2(t)}]^{1/2} = [\psi_{x_i}^2]^{1/2} \qquad (2-17)
Since xi represents any of the infinitely long records of the stationary
process, all statistical characteristics of each sample record are the same.
If the process is ergodic, the temporal statistical properties will be equal
to the corresponding ensemble properties, and we can use either type of
statistical properties.
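The identity behind Eqs. 2-8 and 2-14, variance = mean-square value minus the squared mean, can be checked numerically on any record (a sketch on synthetic data with a deliberately nonzero mean):

```python
import random

# Check that the temporal variance equals the mean-square value minus the
# square of the mean (Eq. 2-14), on a synthetic stationary record.
random.seed(3)
x = [2.0 + random.gauss(0.0, 0.5) for _ in range(10000)]

mu = sum(x) / len(x)                        # temporal mean (Eq. 2-12)
psi2 = sum(v * v for v in x) / len(x)       # temporal mean square (Eq. 2-13)
var_direct = sum((v - mu) ** 2 for v in x) / len(x)

print(abs(var_direct - (psi2 - mu ** 2)) < 1e-6)  # True: the two forms agree
```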


2-4.1 Discrete Random Variables. When the random variable can
assume only a finite number of values in any finite interval, it is a discrete
random variable. In the case of the sum of two dice, only 11 discrete
(and integral) values are possible. Figure 2-1 is a graph of the probability
function p(xi) of this discrete random variable, i.e., the probability of x
assuming each value xi is plotted against xi.


The probability distribution function P(x < X) can be defined as the
probability that x will assume some value equal to or less than X, a
specifically designated value. We can express this relation for a discrete
variable by
    P(x \le X) = \sum_{x_i \le X} p(x_i)    (2-18)

Obviously, when X = +\infty, P(x \le X) = 1. The probability distribution
function for the discrete random variable of Fig. 2-1 is given in Fig. 2-2.
The probability distribution function can also be given by

    P(x \le X) = \int_{-\infty}^{X} p(x_i)\,dx    (2-19)

In either case, it is apparent that P(x < X) will have discontinuities at
the discrete values of xi.
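The two-dice example can be sketched directly in code: the probability function p(x_i) of Fig. 2-1 is built by enumerating the 36 equally likely outcomes, and the distribution function of Eq. 2-18 is the running sum over values at or below X. This is an illustrative reconstruction, not from the text.

```python
from fractions import Fraction
from itertools import product

# Probability function p(x_i) for the sum of two dice, by enumeration
p = {}
for a, b in product(range(1, 7), repeat=2):
    s = a + b
    p[s] = p.get(s, Fraction(0)) + Fraction(1, 36)

def P(X):
    """Eq. 2-18: P(x <= X) = sum of p(x_i) over all discrete x_i <= X."""
    return sum(prob for xi, prob in p.items() if xi <= X)

assert p[7] == Fraction(6, 36)   # the most probable sum
assert P(12) == 1                # P(x <= +infinity) = 1
assert P(1) == 0                 # no sum below 2 is possible
```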
The two-dimensional joint probability function and joint probability
density function can be readily demonstrated by flipping two coins.
We can reduce the problem to numerical terms by assigning the following
numerical values: heads = 1; tails = 2. Since the probability of either
state on a coin is 1/2, the probability of each of the four possible combina-
tions of two coins (1-1, 1-2, 2-1, and 2-2) is 1/4 since the two events are
independent. This relation is shown in the joint probability function
graph of Fig. 2-3(a).
The joint probability distribution function P(x \le X, y \le Y) of this random
function is shown in Fig. 2-3(b). It is found from the expression

    P(x \le X, y \le Y) = \sum_{x_i \le X} \sum_{y_k \le Y} p(x_i, y_k)    (2-20)

Here again this equation can be expressed in the form of an integral:

    P(x \le X, y \le Y) = \int_{-\infty}^{Y} \int_{-\infty}^{X} p(x_i, y_k)\,dx\,dy    (2-21)
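The two-coin case of Eq. 2-20 is small enough to enumerate completely. The sketch below (illustrative, not from the text) builds the joint probability function of Fig. 2-3(a) and sums it to obtain the joint distribution function of Fig. 2-3(b).

```python
from fractions import Fraction

# Joint probability function for two coins (H = 1, T = 2); each of the
# four independent outcomes has probability 1/4
p_joint = {(x, y): Fraction(1, 4) for x in (1, 2) for y in (1, 2)}

def P_joint(X, Y):
    """Eq. 2-20: sum p(x_i, y_k) over all x_i <= X and y_k <= Y."""
    return sum(pr for (x, y), pr in p_joint.items() if x <= X and y <= Y)

assert P_joint(1, 1) == Fraction(1, 4)   # both coins show heads
assert P_joint(2, 1) == Fraction(1, 2)   # second coin heads, first anything
assert P_joint(2, 2) == 1                # all outcomes included
```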

2-4.2 Continuous Random Variables. The concepts developed in
the preceding section can be applied to continuous random variables also.
Let us consider one of the continuous random sample records, xi(t), from
the ensemble of Fig. 1-1, where the amplitude can (theoretically) vary
continuously between -\infty and +\infty. The probability distribution
function P(x < X), as for the discrete random variable, is defined as the
probability that x will assume a value equal to or less than X. We can
further define the probability density function p(x) of a continuous ran-
dom variable to be the rate of change of P(x < X), i.e.,

    p(x) = \frac{d[P(x \le X)]}{dX}    (2-22)


Fig. 2-3. A two-dimensional joint discrete probability function and joint
discrete probability distribution function for the tossing of two coins (H = 1,
T = 2). (a) Joint probability function. (b) Joint probability distribution.

The inverse relation is also very useful in dealing with continuous random
variables:

    P(x \le X) = \int_{-\infty}^{X} p(x)\,dx    (2-23)

We can obtain the relation for the probability that x is greater than a


and less than or equal to b, where a and b are arbitrary values, to be

    P(a < x \le b) = \int_{a}^{b} p(x)\,dx    (2-24)

Since all of x lies between -\infty and +\infty,

    P(-\infty < x < \infty) = \int_{-\infty}^{\infty} p(x)\,dx = 1    (2-25)

Furthermore, it is apparent from the definition of the probability dis-
tribution function that it is a nondecreasing function, and hence we can
see from Eq. 2-22 that the probability density function p(x) is always
non-negative. Figures 2-4 and 2-5 show probability density and proba-
bility distribution curves, respectively, for a continuous variable.
Fig. 2-4. Probability density curve for a continuous variable.

Fig. 2-5. Probability distribution curve for a continuous variable.

To visualize the physical meaning of the probability density function, let
us use the definition of a derivative as a limit to give

    p(X) = \lim_{\Delta X \to 0} \frac{P[x \le (X + \Delta X)] - P(x \le X)}{\Delta X}
         = \lim_{\Delta X \to 0} \frac{P[X < x \le (X + \Delta X)]}{\Delta X}    (2-26)
In differential form Eq. 2-26 becomes
    p(X)\,dX = P[X < x \le (X + dX)]    (2-27)

where p(X) dX represents the probability that the random variable falls
in the interval X < x \le (X + dX). This is shown graphically in Fig. 2-4.
We can extend the concept of probability density to the multidimen-
sional case by defining the joint probability density function p(x,y) as
    p(x,y) = \frac{\partial^2 [P(x \le X, y \le Y)]}{\partial X\,\partial Y}    (2-28)

The corresponding reciprocal relation is

    P(x \le X, y \le Y) = \int_{-\infty}^{Y} \int_{-\infty}^{X} p(x,y)\,dx\,dy    (2-29)

Again we can use the limiting process for the partial derivative of Eq.
2-28 to give the differential equation

    p(X,Y)\,dX\,dY = P[X < x \le (X + dX),\; Y < y \le (Y + dY)]    (2-30)
where p(X,Y) dX dY represents the probability that a sample point falls
in the incremental area dX dY about the point (X,Y).
By analogy with Eq. 2-25, we can write

    P(-\infty < x < \infty,\; -\infty < y < \infty) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p(x,y)\,dx\,dy = 1    (2-31)

If individually we allow only one of the upper limits to go to infinity, the
results are

    \int_{-\infty}^{\infty} \int_{-\infty}^{X} p(x,y)\,dx\,dy = P(x \le X,\; y \le \infty) = P(x \le X)    (2-32)

    \int_{-\infty}^{Y} \int_{-\infty}^{\infty} p(x,y)\,dx\,dy = P(x \le \infty,\; y \le Y) = P(y \le Y)    (2-33)

The probability that the random variable y < Y, subject to the
hypothesis that a second random variable x = X, can be called the
conditional probability distribution function P(y \le Y | X). Now we can


define the conditional probability density function as
    p(Y|X) = \frac{d[P(y \le Y | X)]}{dY}    (2-34)
The corresponding reciprocal relation is

    P(y \le Y | X) = \int_{-\infty}^{Y} p(y|X)\,dy    (2-35)

If we differentiate Eq. 2-35 with respect to Y, we get

    p(Y|X) = \frac{p(X,Y)}{p(X)}    (2-36)

or

    p(X,Y) = p(Y|X)\,p(X)    (2-37)

This indicates that the joint probability of a random variable f(x,y) being
equal to f(X,Y) is the product of the conditional probability p(Y|X)
and an elementary probability p(X).
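The product rule of Eq. 2-37 can be verified point by point for a discrete example (it holds equally for densities). In this illustrative sketch, not taken from the text, x is the value of the first of two dice and y is the sum of the two.

```python
from fractions import Fraction
from itertools import product

# Joint probability p(x, y): x = first die, y = sum of both dice
joint = {}
for a, b in product(range(1, 7), repeat=2):
    joint[(a, a + b)] = joint.get((a, a + b), Fraction(0)) + Fraction(1, 36)

# Elementary probability p(x) for the first die
p_x = {a: Fraction(1, 6) for a in range(1, 7)}

def p_y_given_x(y, x):
    """Eq. 2-36: conditional probability p(y|x) = p(x, y) / p(x)."""
    return joint.get((x, y), Fraction(0)) / p_x[x]

# Eq. 2-37 holds at every point of the sample space
for (x, y), pxy in joint.items():
    assert pxy == p_y_given_x(y, x) * p_x[x]
```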

The term "average value" is usually used to represent the mean value, or
first moment of the probability density function. However, it can
be used to represent other average values, such as the mean-square value
(second moment of the probability density function) or other weighted
functions of the probability density function, e.g., the characteristic
function, which has an exponential weighting of the probability density function.
For a discrete random variable, the first and second moments (mean
and mean-square values), respectively, of the probability density function
are

    \mu_x = \frac{\sum_i x_i\,p(x_i)}{\sum_i p(x_i)} = \sum_{i=0}^{N} x_i\,p(x_i) = E(x)    (2-38)

    \psi_x^2 = \frac{\sum_i x_i^2\,p(x_i)}{\sum_i p(x_i)} = \sum_{i=0}^{N} x_i^2\,p(x_i) = E(x^2)    (2-39)

where E(x) and E(x2) are the expectation values of x and x2, respectively.
The denominator is equal to unity when N is the total number of events
in the discrete random process. Similar expressions can be written for
the various higher moments, x^3, x^4, x^5, etc. These relations for \mu_x and \psi_x^2
(as well as for the higher moments) are valid only for large values of
N; i.e., the statistical average or expectation value is reached only as
N \to \infty.
Similar expressions for the mean and mean-square values of a con-
tinuous random variable are, respectively,

    \mu_x = \frac{\int_{-\infty}^{\infty} x\,p(x)\,dx}{\int_{-\infty}^{\infty} p(x)\,dx} = \int_{-\infty}^{\infty} x\,p(x)\,dx = E(x)    (2-40)

    \psi_x^2 = \frac{\int_{-\infty}^{\infty} x^2\,p(x)\,dx}{\int_{-\infty}^{\infty} p(x)\,dx} = \int_{-\infty}^{\infty} x^2\,p(x)\,dx = E(x^2)    (2-41)

Values for the root mean square (rms), variance (\sigma_x^2), and standard
deviation (\sigma_x) can now be obtained by using the relations of Sec. 2-3, i.e.,

    \mathrm{rms} = (\psi_x^2)^{1/2}    (2-42)

    \sigma_x^2 = \psi_x^2 - \mu_x^2    (2-43)

    \sigma_x = (\psi_x^2 - \mu_x^2)^{1/2}    (2-44)
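Equations 2-40 through 2-44 can be exercised numerically for a concrete density. The sketch below (an illustrative choice, not from the text) uses the uniform density on (0, 1), whose exact values are mu = 1/2, psi^2 = 1/3, and sigma^2 = 1/12, with the integrals approximated by the midpoint rule.

```python
import math

def p(x):
    """Uniform probability density on the interval (0, 1)."""
    return 1.0 if 0.0 < x < 1.0 else 0.0

# Midpoint-rule grid over the support of p(x)
N, A, B = 100000, 0.0, 1.0
dx = (B - A) / N
xs = [A + (k + 0.5) * dx for k in range(N)]

mu = sum(x * p(x) for x in xs) * dx           # Eq. 2-40: first moment
psi2 = sum(x * x * p(x) for x in xs) * dx     # Eq. 2-41: second moment
rms = math.sqrt(psi2)                         # Eq. 2-42
var = psi2 - mu ** 2                          # Eq. 2-43
sigma = math.sqrt(var)                        # Eq. 2-44

assert abs(mu - 0.5) < 1e-6
assert abs(psi2 - 1.0 / 3.0) < 1e-6
assert abs(var - 1.0 / 12.0) < 1e-6
```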

Another statistical average that is useful in random noise theory is the
characteristic function Mx(jv), which is a complex exponential weighting
of the probability density function of a continuous random variable:

    M_x(jv) = \frac{\int_{-\infty}^{\infty} e^{jvx}\,p(x)\,dx}{\int_{-\infty}^{\infty} p(x)\,dx} = \int_{-\infty}^{\infty} e^{jvx}\,p(x)\,dx    (2-45)

where v is real. Since Eq. 2-45 has the general form of a Fourier inte-
gral,* we can, under proper circumstances, use the inverse Fourier rela-
tion to obtain the probability density

    p(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} M_x(jv)\,e^{-jvx}\,dv    (2-46)
When x is a discrete random variable, Eq. 2-45 becomes

    M_x(jv) = \sum_i p(x_i)\,e^{jvx_i}    (2-47)
* In this case we can obtain the usual forms of the Fourier integral transform
pair by letting \gamma equal jv and \omega equal jx.



If we take the derivative of the characteristic function with respect to v,

    \frac{d[M_x(jv)]}{dv} = j \int_{-\infty}^{\infty} x\,e^{jvx}\,p(x)\,dx    (2-48)

and evaluate both sides at v = 0, the integral becomes the mean value

    \mu_x = -j \left. \frac{d[M_x(jv)]}{dv} \right|_{v=0}    (2-49)

We see that the first moment of the random variable x can be obtained by
differentiating the characteristic function with respect to v and evaluating
the result at v = 0. The higher moments of a random variable can be
found by taking successive derivatives of the characteristic function with
respect to v and evaluating the result at v = 0:

    \overline{x^n} = (-j)^n \left. \frac{d^n[M_x(jv)]}{dv^n} \right|_{v=0}    (2-50)

Such a process is generally called moment generation.
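Moment generation can be demonstrated numerically. The sketch below uses an assumed example density, the exponential p(x) = lam*exp(-lam*x) for x >= 0, whose exact moments are E(x) = 1/lam and E(x^2) = 2/lam^2; the characteristic function of Eq. 2-45 is integrated by the midpoint rule and differentiated at v = 0 by central differences. The value of lam, the grid, and the step h are all illustrative choices, not from the text.

```python
import cmath

LAM = 2.0  # decay parameter of the assumed exponential density

def M(v, n=100000, xmax=20.0):
    """Eq. 2-45: M_x(jv) = integral of e^{jvx} p(x) dx, by the midpoint rule."""
    dx = xmax / n
    total = 0.0 + 0.0j
    for k in range(n):
        x = (k + 0.5) * dx
        total += cmath.exp(1j * v * x) * LAM * cmath.exp(-LAM * x)
    return total * dx

h = 1e-3
m_minus, m_zero, m_plus = M(-h), M(0.0), M(h)

first = (-1j) * (m_plus - m_minus) / (2 * h)        # Eq. 2-49: mu_x = E(x)
second = -(m_plus - 2 * m_zero + m_minus) / h ** 2  # Eq. 2-50 with n = 2: E(x^2)

assert abs(first.real - 1.0 / LAM) < 1e-3
assert abs(second.real - 2.0 / LAM ** 2) < 1e-3
```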
When two random variables are involved, we can define a joint char-
acteristic function of the joint probability distribution of the continuous
random variables x and y:

    M(jv_1, jv_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{j(v_1 x + v_2 y)}\,p(x,y)\,dx\,dy    (2-51)

In a manner analogous to the one-dimensional case, we can use the two-
dimensional Fourier transform to obtain the joint probability density
functions of a pair of random variables when we know their joint char-
acteristic function M(jv_1, jv_2); i.e.,

    p(x,y) = \left(\frac{1}{2\pi}\right)^2 \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} M(jv_1, jv_2)\,e^{-j(v_1 x + v_2 y)}\,dv_1\,dv_2    (2-52)


2-6.1 Binomial Distribution. The phenomenon of radioactive decay
is amenable to analysis by elementary probability theory. Radioactive
decay also offers an opportunity to demonstrate the binomial, Poisson,
and Gaussian (normal) probability distributions.1
If there are a large number No of radioactive atoms with a probability
of decay p, the probability of m atoms disintegrating in time t can be
evaluated. For the moment, consider only m of the No atoms. The
probability that the first of these m atoms will decay is p; that the first
and second will decay is p^2; that the first, second, and third will decay is
p^3; etc. The probability that all m of the atoms will decay is p^m. If


exactly m of these atoms are to decay, the remaining (N_0 - m) atoms
must not decay. This probability is (1 - p)^{N_0 - m} since the probability of
not decaying is 1 - p. Hence, for a particular group of m atoms, the
probability of exactly m disintegrations in time t is p^m (1 - p)^{N_0 - m}.
However, this particular group of m atoms is not the only group of atoms
that can decay. The first of the m atoms might be any one of the N_0 atoms;
the second might be any one of N_0 - 1 atoms; etc.; the mth atom might
be any one of N_0 - m + 1 atoms. The product of these terms,

    N_0(N_0 - 1)(N_0 - 2) \cdots (N_0 - m + 1) = \prod_{i=0}^{m-1} (N_0 - i) = \frac{N_0!}{(N_0 - m)!}    (2-53)
is the total number of arrangements in which m atoms of No can dis-
integrate in time t. Since this product also includes the order of selection
of the m atoms, it is necessary to divide by the number of permutations
of m atoms, which is m! Therefore the probability p(m) that m atoms
out of No atoms will disintegrate in time t is

    p(m) = \left[ \frac{N_0!}{(N_0 - m)!\,m!} \right] p^m (1 - p)^{N_0 - m}    (2-54)

This expression for p(m) is usually called the binomial probability dis-
tribution (even though the proper name for p(m) is the binomial proba-
bility density function) because the coefficient in brackets is the coefficient
of the x^m term in the binomial expansion of (1 + x)^{N_0}.
The probability 1 p that an atom will not decay in time t is given
by the ratio of the number of atoms N that survive the time interval t to
the initial number of atoms No:

    \frac{N}{N_0} = 1 - p = q    (2-55)
where q is defined as the probability that an atom will not decay in time t.
The rate at which nuclei disintegrate at time t is proportional to the
number of nuclei N remaining:

    \frac{dN}{dt} = -\lambda N    (2-56)

where \lambda, the constant of proportionality, is the characteristic decay con-
stant for the radioactive material. The solution of Eq. 2-56 is

    \frac{N}{N_0} = e^{-\lambda t}    (2-57)


We can combine Eqs. 2-54, 2-55, and 2-57 to obtain

    p = 1 - \frac{N}{N_0} = 1 - e^{-\lambda t} = 1 - q    (2-58)

    p(m) = \left[ \frac{N_0!}{(N_0 - m)!\,m!} \right] (1 - e^{-\lambda t})^m (e^{-\lambda t})^{N_0 - m}
         = \left[ \frac{N_0!}{(N_0 - m)!\,m!} \right] p^m q^{N_0 - m}    (2-59)
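The binomial law of Eq. 2-59 is easy to evaluate directly. The sketch below checks that the probabilities sum to unity and that the mean agrees with mu_m = N_0*p (Eq. 2-64, derived next); the values of N_0 and lambda*t are illustrative choices, not from the text.

```python
from math import comb, exp

def p_binomial(m, n0, lam_t):
    """Eq. 2-59: probability that exactly m of n0 atoms decay in time t,
    with p = 1 - exp(-lam_t) from Eq. 2-58."""
    p = 1.0 - exp(-lam_t)
    return comb(n0, m) * p ** m * (1.0 - p) ** (n0 - m)

N0, LAM_T = 50, 0.1
total = sum(p_binomial(m, N0, LAM_T) for m in range(N0 + 1))
mean = sum(m * p_binomial(m, N0, LAM_T) for m in range(N0 + 1))

assert abs(total - 1.0) < 1e-12                      # probabilities sum to 1
assert abs(mean - N0 * (1.0 - exp(-LAM_T))) < 1e-9   # Eq. 2-64: mu_m = N0*p
```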
(a) Average Disintegration Rate. The expected average disintegration
rate of a radioactive material can be obtained from the application of the
binomial distribution law. Substituting Eq. 2-59 into Eq. 2-38 gives
the mean value of m, the average number of disintegrations in time t:
    \mu_m = \sum_{m=0}^{N_0} m\,p(m) = \sum_{m=0}^{N_0} m \left[ \frac{N_0!}{(N_0 - m)!\,m!} \right] p^m q^{N_0 - m}    (2-60)
This expression can be evaluated from the binomial expansion of
(px + q)No:

    (px + q)^{N_0} = \sum_{m=0}^{N_0} \frac{N_0!}{(N_0 - m)!\,m!}\,p^m q^{N_0 - m} x^m = \sum_{m=0}^{N_0} x^m\,p(m)    (2-61)

Differentiating with respect to x gives

    N_0 p (px + q)^{N_0 - 1} = \sum_{m=0}^{N_0} m x^{m-1}\,p(m)    (2-62)

For x = 1, which makes Eq. 2-61 an expansion of unity,

    N_0 p (p + q)^{N_0 - 1} = N_0 p = \sum_{m=0}^{N_0} m\,p(m) = \mu_m    (2-63)
Substituting Eq. 2-58 gives the average number of disintegrations in a time
t to be

    \mu_m = N_0 p = N_0 (1 - e^{-\lambda t})    (2-64)

For observation times that are short compared to the half-life of the
radioactive material, the approximation

    e^{-\lambda t} \approx 1 - \lambda t    (2-65)

can be used to give

    \mu_m = N_0 \lambda t    (2-66)

For observation times greater than approximately one-hundredth of the
half-life, the expression in Eq. 2-64 should be used.
(b) Standard Deviation of Counting Measurements. The standard
deviation and variance of the number of disintegrations in a time t can


be obtained from the binomial expansion of Eq. 2-61 by taking the second
derivative with respect to x:

    N_0(N_0 - 1) p^2 (px + q)^{N_0 - 2} = \sum_{m=0}^{N_0} m(m - 1) x^{m-2}\,p(m)    (2-67)

which for x = 1 reduces to

    N_0(N_0 - 1) p^2 = \sum_{m=0}^{N_0} m(m - 1)\,p(m)
                     = \sum_{m=0}^{N_0} m^2\,p(m) - \sum_{m=0}^{N_0} m\,p(m)    (2-68)

With the use of Eqs. 2-38 and 2-39, the preceding expression is further
reduced to

    N_0(N_0 - 1) p^2 = \psi_m^2 - \mu_m    (2-69)

The variance is given by Eq. 2-43 to be

    \sigma_m^2 = \psi_m^2 - \mu_m^2    (2-70)

which can be combined with Eqs. 2-64 and 2-69 to obtain

    \sigma_m^2 = N_0(N_0 - 1) p^2 + \mu_m - \mu_m^2
               = N_0 p (1 - p) = N_0 p q = \mu_m (1 - p) = \mu_m q    (2-71)

For radioactive decay, where p is given by Eq. 2-58, Eq. 2-71 reduces to

    \sigma_m^2 = \mu_m e^{-\lambda t}    (2-72)

If the time of observation is short compared to the half-life, i.e., \lambda t is
small, Eq. 2-72 can be reduced to

    \sigma_m^2 = \mu_m    (2-73)

    \sigma_m = \sqrt{\mu_m}    (2-74)

i.e., the standard deviation of the number of disintegrations in a time t is
the square root of the average number of disintegrations that occur in that
interval of time.
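The counting statistics of Eqs. 2-71 and 2-74 can be checked by Monte Carlo. In this sketch (an illustration, not from the text) counts are drawn from the binomial law of Eq. 2-59 by inverse-transform sampling; N_0, p, and the number of trials are arbitrary choices giving mu_m = N_0*p = 100.

```python
import bisect
import itertools
import math
import random
import statistics

random.seed(7)
N0, P, TRIALS = 2000, 0.05, 5000

def log_p(m):
    """Log of the binomial probability p(m) of Eq. 2-59, computed stably
    with lgamma to avoid overflow in the factorials."""
    return (math.lgamma(N0 + 1) - math.lgamma(m + 1) - math.lgamma(N0 - m + 1)
            + m * math.log(P) + (N0 - m) * math.log(1 - P))

# Cumulative distribution of counts, then inverse-transform sampling
probs = [math.exp(log_p(m)) for m in range(N0 + 1)]
cdf = list(itertools.accumulate(probs))
counts = [bisect.bisect_left(cdf, random.random()) for _ in range(TRIALS)]

mu = statistics.fmean(counts)
var = statistics.pvariance(counts, mu)

assert abs(mu - N0 * P) < 1.0                          # Eq. 2-64: mu_m = N0*p
assert abs(var - N0 * P * (1 - P)) / (N0 * P) < 0.1    # Eq. 2-71: sigma^2 = N0*p*q
```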
2-6.2 Poisson Distribution. The binomial distribution of Eq. 2-59
can be simplified if the limitations

    m \ll N_0    (2-75)

    N_0 \gg 1    (2-76)

    \lambda t \ll 1    (2-77)

and the approximations

    e^{\lambda t} \approx 1 + \lambda t    (2-78)

    x! \approx (2\pi x)^{1/2} e^{-x} x^x    (Stirling's approximation)    (2-79)

    \lim_{N_0 \to \infty} \left(1 - \frac{\mu_m}{N_0}\right)^{N_0} = e^{-\mu_m}    (2-80)

    \mu_m = N_0 (1 - e^{-\lambda t}) \approx N_0 \lambda t    (2-81)

are imposed. The result is

    p(m) = \frac{\mu_m^m\,e^{-\mu_m}}{m!}    (2-82)

which is known as the Poisson distribution and is valid for N_0 as low
as 200 and \lambda t as large as 0.01. It is nearly symmetrical about \mu_m if values
of m far from \mu_m are excluded and tends to become more symmetrical as
\mu_m increases. The principal advantage of the Poisson distribution is that
it can be completely defined by a single parameter, \mu_m.
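The quality of the Poisson approximation under the stated limitations can be checked term by term. In this sketch (illustrative values of N_0 and lambda*t, not from the text) the exact binomial p(m) of Eq. 2-59 is compared with Eq. 2-82.

```python
import math

N0, LAM_T = 1000, 0.005
p = 1.0 - math.exp(-LAM_T)     # Eq. 2-58
mu_m = N0 * p                  # Eq. 2-64

def binom(m):
    """Exact binomial probability, Eq. 2-59."""
    return math.comb(N0, m) * p ** m * (1 - p) ** (N0 - m)

def poisson(m):
    """Poisson approximation, Eq. 2-82."""
    return mu_m ** m * math.exp(-mu_m) / math.factorial(m)

# With m << N0 and lam*t << 1 the two agree closely for every m
for m in range(15):
    assert abs(binom(m) - poisson(m)) < 2e-3
```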
2-6.3 Gaussian, or Normal, Distribution. If the additional limitations

    \mu_m > 200    (2-83)

    |m - \mu_m| \ll \mu_m    (2-84)

and the approximation

    \ln \frac{m}{\mu_m} = \ln \left[1 + \frac{m - \mu_m}{\mu_m}\right] \approx \frac{m - \mu_m}{\mu_m} - \frac{(m - \mu_m)^2}{2\mu_m^2}    (2-85)

are imposed, Eq. 2-82 for the Poisson distribution reduces to

    p(m) = \frac{1}{(2\pi\mu_m)^{1/2}} \exp \left[ -\frac{(m - \mu_m)^2}{2\mu_m} \right]    (2-86)

This expression is called the Gaussian, or normal, distribution and is sym-
metrical about the mean value \mu_m.
(a) Central Limit Theorem. The importance of the normal distribu-
tion in many physical problems is directly related to the use of the central
limit theorem, which states that the sum of independent random variables
under fairly general conditions is approximately normally distributed
regardless of the underlying distributions. Since many physically
observed phenomena represent the result of numerous contributing
variables, the normal distribution constitutes a good approximation to
many commonly occurring distribution functions. This theorem is
extremely useful in many practical applications; e.g., in a nuclear reac-
tor, the resultant neutron density at a particular point may be made up
of neutrons whose origins are in chains that are virtually uncorrelated.
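The central limit theorem is easy to see empirically. The sketch below (an illustration, not from the text) sums 12 independent uniform variables, a decidedly non-Gaussian underlying distribution, and checks that the sums behave like a normal variable; the number of terms and the sample size are arbitrary choices.

```python
import random
import statistics

random.seed(9)
N_TERMS, TRIALS = 12, 50000

# Each sum of 12 uniforms has mean 6 and variance 12*(1/12) = 1
sums = [sum(random.random() for _ in range(N_TERMS)) for _ in range(TRIALS)]

mu = statistics.fmean(sums)
sigma = statistics.pstdev(sums, mu)

# Fraction of samples within one standard deviation of the mean;
# for a normal distribution this is about 0.6827
frac = sum(1 for s in sums if abs(s - mu) <= sigma) / TRIALS

assert abs(mu - 6.0) < 0.05
assert abs(sigma - 1.0) < 0.05
assert abs(frac - 0.6827) < 0.02
```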
(b) Standard Deviation. For large values of \mu_m, the standard deviation
is the same as that in Eq. 2-74 for the binomial distribution:

    \sigma_m = \sqrt{\mu_m}    (2-87)

Substituting Eq. 2-87 into Eq. 2-86 gives the more common form for the
normal distribution:

    p(m) = \frac{1}{\sigma_m \sqrt{2\pi}} \exp \left[ -\frac{(m - \mu_m)^2}{2\sigma_m^2} \right]    (2-88)

A normal distribution curve is completely defined by the average value
\mu_m and the standard deviation \sigma_m of the random variable m. Normal
distribution curves for large and small values of variance are shown in
Fig. 2-6. It should be borne in mind that the area under the probability
density function curve is unity, regardless of the value of variance. If
the average value \mu_m is zero, the normal distribution curves of Fig. 2-6

Fig. 2-6. Gaussian (normal) probability distribution.

are symmetrical about m = 0. The integral of the probability density
function from \mu_m - a to \mu_m + a gives the probability that m will be
within |a| of \mu_m and is represented by the cross-hatched area under the
curves in Fig. 2-6. The value of a that makes the integral

    \int_{\mu_m - a}^{\mu_m + a} p(m)\,dm    (2-89)

equal to one-half is the probable error of m; i.e., half of the experimental
data are expected to fall within the interval of plus and minus one prob-
able error of the mean value. It can be shown that for a normal distri-


bution the probable error and the standard deviation are related by

    \text{Probable error} = 0.6745\,\sigma_m    (2-90)

and that 68.27% of the data lie within \pm\sigma_m of the average value \mu_m.
The integral of p(m), which gives the probability distribution function
P(m) for a normal distribution, is not readily integrable in closed form.
However, substituting

    m - \mu_m = \sqrt{2}\,\sigma_m u    (2-91)

transforms the integral into the error function, defined as

    \operatorname{erf} u = \frac{2}{\sqrt{\pi}} \int_0^u e^{-u^2}\,du    (2-92)

which can be evaluated from tables of mathematical functions.2
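Where the text points to tables, the error function is now available directly in standard libraries, and the quoted normal-distribution facts can be checked with it. In this sketch, the probability of falling within ±a of the mean of a normal variable is erf(a / (sqrt(2)*sigma)), so a = 0.6745*sigma should give probability 1/2 (Eq. 2-90) and a = sigma should give 68.27%.

```python
import math

def prob_within(a, sigma=1.0):
    """Probability that a normal variable lies within +-a of its mean,
    via the error function of Eq. 2-92."""
    return math.erf(a / (math.sqrt(2.0) * sigma))

assert abs(prob_within(0.6745) - 0.5) < 1e-4      # probable error, Eq. 2-90
assert abs(prob_within(1.0) - 0.6827) < 1e-4      # one standard deviation
```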


There are a number of special probability densities and corresponding
probability distributions which occur in noise analysis of nuclear systems.
They are briefly discussed here, and the expressions and plots of p(x)
and P(x < X) are presented in Table 2-3.
2-7.1 Discrete Distribution. The discrete distribution occurs when
a variable can assume only a finite number of discrete values. In a
typical situation where such a distribution exists, the variable can assume
only two or three values, thereby producing discrete binary or ternary
distributions, respectively. A useful example in the nuclear field of a dis-
crete binary variable is the output of a "flip-flop" whose change of state
is triggered by the interaction of a nuclear particle with a detector.
2-7.2 Uniform (Rectangular) Distribution. Another probability dis-
tribution of interest in nuclear work is the uniform, or rectangular, dis-
tribution that occurs when the random variable is limited to a given
range but has a uniform probability of assuming any value within this
range, including the end points.
2-7.3 Sine-Wave Distribution. A sine wave described by the equation

    x(t) = A \sin(\omega_0 t + \theta)    (2-93)

where \omega_0 is a fixed frequency and A is a fixed amplitude, is normally
considered to be a deterministic variable. However, if the initial phase
angle for each test or sample function is a random variable, the sine wave
can be described in probabilistic terms. If the phase angle \theta has a uniform


Table 2-3
Probability Density and Distribution Functions

Discrete Binary Variable (states a and b, with b > a):

    p(a) = p(b) = \frac{1}{2}

    p(x) = \frac{1}{2}\,\delta(x - a) + \frac{1}{2}\,\delta(x - b)

    P(x \le X) = \begin{cases} 0 & (X < a) \\ \frac{1}{2} & (a \le X < b) \\ 1 & (X \ge b) \end{cases}

Uniform (Rectangular) Distribution:

    p(x) = \begin{cases} \frac{1}{b - a} & (a \le x \le b) \\ 0 & \text{(otherwise)} \end{cases}

    P(x \le X) = \begin{cases} 0 & (X < a) \\ \frac{X - a}{b - a} & (a \le X \le b) \\ 1 & (X > b) \end{cases}

Sine-Wave Distribution with Random Phase Angle:

    x(t) = A \sin(\omega_0 t + \theta)

    p(\theta) = \begin{cases} \frac{1}{2\pi} & (0 \le \theta \le 2\pi) \\ 0 & \text{(otherwise)} \end{cases}

    p(x) = \begin{cases} \frac{1}{\pi (A^2 - x^2)^{1/2}} & (|x| < A) \\ 0 & (|x| > A) \end{cases}

    P(x \le X) = \frac{1}{2}\left[1 + \frac{2}{\pi} \sin^{-1}\frac{X}{A}\right] \qquad (|X| \le A)

Sine-Wave Distribution plus Gaussian Noise:

    x(t) = A \sin(\omega_0 t + \theta) + n(t), \qquad n(t) = \text{Gaussian noise}

    p(\theta) = \begin{cases} \frac{1}{2\pi} & (0 \le \theta \le 2\pi) \\ 0 & \text{(otherwise)} \end{cases}

    (curves of p(x) and P(x \le X) are plotted over -A \le x \le A)

Rayleigh Distribution:

    p(x) = \begin{cases} 0 & (x < 0) \\ \frac{x}{c^2}\,e^{-x^2/2c^2} & (x \ge 0) \end{cases}

    P(x \le X) = \begin{cases} 1 - e^{-X^2/2c^2} & (X \ge 0) \\ 0 & \text{(otherwise)} \end{cases}

    \mu_x = \sqrt{\frac{\pi}{2}}\,c \approx 1.25c, \qquad \psi_x^2 = 2c^2, \qquad \sigma_x^2 = \left(2 - \frac{\pi}{2}\right)c^2 \approx 0.43c^2

Chi-Square Distribution with n Degrees of Freedom:

    p(\chi_n^2) = \frac{(\chi^2)^{(n/2)-1}\,e^{-\chi^2/2}}{2^{n/2}\,\Gamma(n/2)} \qquad (\chi^2 > 0)

    (curves shown for n = 1 and n = 10)

Student's t Distribution with n Degrees of Freedom:

    p(t) = \frac{\Gamma[(n+1)/2]}{\sqrt{\pi n}\,\Gamma(n/2)}\left[1 + \frac{t^2}{n}\right]^{-(n+1)/2}

    (curves shown for n = 1 and n = 10)

"F" Distribution (y_1(k) and y_2(k) are independent random variables with
chi-square distributions having n_1 and n_2 degrees of freedom, respectively):

    F_{n_1,n_2} = \frac{y_1(k)/n_1}{y_2(k)/n_2} = \frac{n_2\,y_1(k)}{n_1\,y_2(k)}

    p(F) = \frac{\Gamma[(n_1 + n_2)/2]\,(n_1/n_2)^{n_1/2}\,F^{(n_1/2)-1}}{\Gamma(n_1/2)\,\Gamma(n_2/2)\,[1 + (n_1/n_2)F]^{(n_1+n_2)/2}} \qquad (F \ge 0)

    (curves shown for n_1 = 20 with n_2 = 25 and n_2 = 10)

probability density p(\theta) over the range from 0 to 2\pi, the probability
density is

    p(\theta) = \begin{cases} \frac{1}{2\pi} & (0 \le \theta \le 2\pi) \\ 0 & \text{(otherwise)} \end{cases}    (2-94)

The relation between p(\theta) and p(x) has been worked out for the general
case in which it was assumed that the inverse function \theta(x) is an n-valued
function of x, where n is an integer. For the case in which dx/d\theta is not
equal to zero, the result is

    p(x) = \frac{n\,p(\theta)}{|dx/d\theta|}    (2-95)

Application of this expression to the sine wave of Eq. 2-93, in which the
direct function x(\theta) is single valued but the inverse function \theta(x) is
double valued, gives p(x). It is apparent from Table 2-3 that the prob-


ability density function for x = \pm A approaches infinity; however, the
area under the curve between -A and +A is still unity.
The unique shape of the sine-wave probability density graph shows
up readily even when the sine wave is accompanied by other fluctuations.
For example, let us consider the sum of a sinusoid and a random fluctua-
tion. The probability density function of the composite wave retains the
characteristic dual peaks at \pm A, but they are finite in magnitude. How-
ever, the area under the p(x) curve is still unity.
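The sine-wave distribution of Table 2-3 can be recovered by sampling. In this Monte Carlo sketch (illustrative amplitude and sample size, not from the text), x = A*sin(theta) is drawn with theta uniform on (0, 2*pi) and the empirical distribution is compared with P(x <= X) = 1/2 + (1/pi)*arcsin(X/A).

```python
import math
import random

random.seed(3)
A, N = 2.0, 200000
samples = [A * math.sin(random.uniform(0.0, 2.0 * math.pi)) for _ in range(N)]

def P_analytic(X):
    """Sine-wave distribution function from Table 2-3, |X| <= A."""
    return 0.5 + math.asin(X / A) / math.pi

# The empirical fraction of samples at or below X matches the analytic form
for X in (-1.5, -0.5, 0.0, 0.5, 1.5):
    empirical = sum(1 for x in samples if x <= X) / N
    assert abs(empirical - P_analytic(X)) < 0.01
```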
2-7.4 Rayleigh Distribution. The Rayleigh distribution, a density
function that is restricted to non-negative values, is commonly used to
describe the probability density function of the envelope of a fluctuating
signal which has a large sinusoidal component of a single frequency.
Such a variable is commonly obtained when a random noise is passed
through a very narrow bandpass filter, or it might also be obtained in
the output of a system that exhibits a resonance at a particular fre-
quency. For instance, a boiling-water reactor exhibits a resonance peak
at a frequency characteristic of the bubble formation and collapse time
in the coolant for the particular combination of pressure and temperature.
2-7.5 Distribution of Amplitude-Limited Variable. An interesting
case arises when a variable with any given distribution is amplitude lim-
ited; i.e., the variable is passed through a "clipping" device that restricts
the lower and upper amplitudes to values a and b, respectively. The
resultant probability density function over the range a < x < b, is iden-
tical to the original probability density function of the variable. How-
ever, at x = a and b, the probability density function consists of Dirac-
delta functions, with amplitudes A and B, respectively.
2-7.6 Chi-Square Distribution. The chi-square distribution arises
when the squares of several independent random variables z_i, each of
which has a normal distribution, zero mean, and unity variance, are added
together. The resultant random variable chi-square for n independent
random variables is

    \chi^2 = z_1^2 + z_2^2 + z_3^2 + \cdots + z_n^2    (2-96)

The new random variable chi-square has n degrees of freedom, which
represent the number of independent, or "free," squares entering into
the expression. The probability density function for chi-square is given by

    p(\chi^2) = \frac{(\chi^2)^{(n/2)-1}\,e^{-\chi^2/2}}{2^{n/2}\,\Gamma(n/2)} \qquad (\chi^2 > 0)    (2-97)

where r(n/2) is the Gamma function of n/2. This distribution is called
the chi-square distribution with n degrees of freedom, and it approaches
a normal distribution as the number of degrees of freedom increases.
Furthermore, the square root of the chi-square distribution with two
degrees of freedom gives the Rayleigh distribution function, and the
square root of the chi-square distribution with three degrees of freedom
gives the Maxwellian distribution function. The mean value and vari-
ance for the chi-square distribution are, respectively,

    \mu_{\chi^2} = n    (2-98)

    \sigma_{\chi^2}^2 = 2n    (2-99)
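The construction of Eq. 2-96 and the moments of Eqs. 2-98 and 2-99 can be checked by simulation. In this sketch (illustrative n and trial count, not from the text) the squares of n unit normals are summed and the sample mean and variance compared with n and 2n.

```python
import random
import statistics

random.seed(5)
N_DOF, TRIALS = 10, 20000

# Eq. 2-96: chi-square as a sum of n squared unit normals
chi2 = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(N_DOF))
        for _ in range(TRIALS)]

mu = statistics.fmean(chi2)
var = statistics.pvariance(chi2, mu)

assert abs(mu - N_DOF) < 0.2        # Eq. 2-98: mean = n
assert abs(var - 2 * N_DOF) < 2.0   # Eq. 2-99: variance = 2n
```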
2-7.7 Student t Distribution. The Student t distribution occurs in
many situations where experimental data are being analyzed. Let y(k)
and z(k) be independent random variables such that y(k) has a chi-square
distribution and z(k) has a normal distribution function with zero mean
value and unity variance. We can now define a new random variable as
    t_n = \frac{z(k)}{[y(k)/n]^{1/2}}    (2-100)

where t_n is the Student t variable with n degrees of freedom. The prob-
ability density function for t_n is given by

    p(t) = \frac{\Gamma[(n+1)/2]}{\sqrt{\pi n}\,\Gamma(n/2)}\left[1 + \frac{t^2}{n}\right]^{-(n+1)/2}    (2-101)
The mean value and variance of the t, variable are, respectively,

    \mu_t = 0 \qquad (n > 1)    (2-102)

    \sigma_t^2 = \frac{n}{n - 2} \qquad (n > 2)    (2-103)

It should be noted that the Student t distribution approaches a standard-
ized normal distribution as the number of degrees of freedom becomes
large.
2-7.8 F Distribution. Another probability distribution that arises in
evaluating errors in measurements is the F distribution. For these
measurements, let y_1(k) and y_2(k) be independent random variables such
that y_1(k) has a chi-square distribution with n_1 degrees of freedom and
y_2(k) has a chi-square distribution with n_2 degrees of freedom. Now let
us define a new random variable, F_{n_1,n_2}, such that

    F_{n_1,n_2} = \frac{y_1(k)/n_1}{y_2(k)/n_2} = \frac{n_2\,y_1(k)}{n_1\,y_2(k)}    (2-104)

The probability density function for F_{n_1,n_2} is given by

    p(F) = \frac{\Gamma[(n_1 + n_2)/2]\,(n_1/n_2)^{n_1/2}\,F^{(n_1/2)-1}}{\Gamma(n_1/2)\,\Gamma(n_2/2)\,[1 + (n_1/n_2)F]^{(n_1+n_2)/2}}    (2-105)


The mean value and the variance for F_{n_1,n_2} are

    \mu_F = \frac{n_2}{n_2 - 2} \qquad (n_2 > 2)    (2-106)

    \sigma_F^2 = \frac{2 n_2^2 (n_1 + n_2 - 2)}{n_1 (n_2 - 2)^2 (n_2 - 4)} \qquad (n_2 > 4)    (2-107)

It should be noted that t_n^2, the square of the Student t variable, has an
F distribution with n_1 = 1 and n_2 = n degrees of freedom.
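The construction of Eq. 2-104 and the moments of Eqs. 2-106 and 2-107 can be checked by simulation. In this sketch (illustrative n_1, n_2, and trial count, not from the text) F is built from ratios of chi-square variables and its sample mean compared with n_2/(n_2 - 2).

```python
import random
import statistics

random.seed(11)
N1, N2, TRIALS = 6, 20, 40000

def chi2(n):
    """Chi-square variable with n degrees of freedom (Eq. 2-96)."""
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n))

# Eq. 2-104: F as the ratio of normalized chi-square variables
F = [(chi2(N1) / N1) / (chi2(N2) / N2) for _ in range(TRIALS)]

mu_exact = N2 / (N2 - 2)                                                  # Eq. 2-106
var_exact = (2 * N2 ** 2 * (N1 + N2 - 2)) / (N1 * (N2 - 2) ** 2 * (N2 - 4))  # Eq. 2-107

assert abs(statistics.fmean(F) - mu_exact) < 0.05
assert abs(statistics.pvariance(F) - var_exact) < 0.2
```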


The objective of most experiments is to observe the phenomena taking
place and to quantitatively evaluate certain parameters associated with
the phenomena. The conditions associated with an experiment, in
general, determine the quality of the experimental data. In any given
test there are a large number of sources of error, which may deteriorate
the quality of the measurement. Many types of errors are associated
with the care taken by the investigator in setting up and carrying out
the experiment, e.g., errors associated with calibration of the instru-
ments, proper ranging of the instrumentation, proper protection of
instrumentation from the influence of extraneous noise; in fact, care
must be taken by the investigator to assure that he is measuring the
phenomena he thinks he is measuring. Many of these sources of error
are dependent on the investigator and might be classified as gross oper-
ational errors. As such, they are generally not subject to a quantitative
analysis, and the success of the experiment generally depends on the
elimination of all operational errors. Failure to do so generally renders
the whole experiment invalid. In certain instances, operational errors
may be introduced which can be corrected, e.g., consistent mislocation
of a decimal point or use of the wrong scale factors, but these are the
exception rather than the rule.
2-8.1 Finite Ensemble of Records. It is clear that actual measure-
ments must be limited to a finite period of time and number of records.
Hence the statistical properties measured must be estimates of the true
values. Furthermore, these estimates of the ensemble properties are
not necessarily equal to the estimates of the corresponding temporal
properties.
When the conditions for self-stationarity as discussed in Chap. 1 can
be met, the procedure is to use a single sample record to determine esti-
mates of the properties of a process.
When we speak of a mean value, mean-square value, standard devia-
tion, and variance of a process, we use the symbols \mu_x, \psi_x^2, \sigma_x, and \sigma_x^2,
respectively, to represent the true parameters of the process and \langle x \rangle, \langle x^2 \rangle,
s_x, and s_x^2 for the measured parameters of a sample record. The choice
of symbols depends on whether we are considering the parameters of a
process or the statistics of a sample record.
2-8.2 Estimators. Almost all physical phenomena show fluctuations
of some magnitude if sufficient resolution is attained in the measurement.
Hence the result of a single measurement, or of several measurements, is
not necessarily the true value of the variable, if indeed such a value
exists. Thus the result of a measurement is actually an estimation of the
true value, and the errors associated with this process are known as
estimation errors, sometimes called statistical errors. There is a field of
statistics that deals with the evaluation of errors associated with experi-
mental measurements, but it is beyond the scope of this text to review the
whole field. Therefore we will confine this discussion to some of the
concepts that are useful in determining the precision of measurements
carried out on nuclear reactor systems using both analog and digital
The expected value of any real single-valued continuous function f(x)
of the variable x(t) is given by

E[f(x)] = f f(x) p(x) dx (2-108)

where p(x) is the probability density function of x(t). The symbols
E[ ] and E( ) are used to denote the expectation operator, which is a
linear operator (and which therefore may be treated as a linear process).
It has the property that the expected value of a constant is the constant.
Estimators are usually mathematical expressions for a particular
parameter that indicate how it is obtained from the measured quantities.
For instance, the mean-square value of a set of quantities x_1, x_2, x_3, ...,
x_N is often given to be

    \langle x^2 \rangle = \hat{\psi}_x^2 = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x_i^2 = E(x^2)    (2-109)

The hat ( ^ ) over \psi indicates that it is being used as an estimator for the
mean-square value of the process represented by x. Equation 2-109 is
not the only estimator that can be used to evaluate the mean-square
value but only one of several possibilities. Estimators are never right
or wrong but, rather, are classified as "good" or "better than others."
The quality of an estimator is generally determined by the following
criteria:
1. The estimator should be unbiased; i.e., the expected value of the estimator
should be equal to the parameter being measured.
2. The estimator should be consistent; i.e., it should approach the parameter
being estimated with a probability approaching unity as the sample size
becomes large.
3. The estimator should be more efficient than any other possible estimator;
i.e., the mean-square error of the estimator should be less than that of any
other estimator.
2-8.3 Bias of an Estimator. To demonstrate the bias of an estimator,
let us consider the variance. The variance of a sample record is

    s_x^2 = \langle x^2 \rangle - \langle x \rangle^2    (2-110)

and the variance of a process is

    \sigma_x^2 = \psi_x^2 - \mu_x^2    (2-111)

For an individual sample record, \langle x_i \rangle may be different from \mu_x, the mean
value for the process. If we use s_x^2 as an estimator for \sigma_x^2, i.e.,

    s_x^2 = \hat{\sigma}_x^2 = \langle x_i^2 \rangle - \langle x_i \rangle^2
          = [\langle x_i^2 \rangle - \mu_x^2] + [\mu_x^2 - \langle x_i \rangle^2]    (2-112)

the first term in square brackets is the variance of the measurement
and the last term is the bias \mu_x^2 - \langle x_i \rangle^2, which is not zero unless

    \mu_x = \langle x_i \rangle    (2-113)

However, as the number of records used to compute \langle x_i \rangle becomes greater
or the record length becomes longer, the bias will be less since Eq. 2-113
is more nearly true.
In a given measurement the record of x(t) during the interval over
which the measurement is being taken represents a unique set of circum-
stances which is not likely to be duplicated at any other time. Hence
the measured values of X, where X represents any parameter, computed
for different sample records vary randomly, and the measured quantity
is the estimator \hat{X}, which is a random variable.
Let us apply the criteria described. If the estimator is unbiased, then
the expected value of the estimator is the true value, i.e.,

    E[\hat{X}] = X    (2-114)

If this is not true, then a bias error exists so that

    b[\hat{X}] = E[\hat{X}] - X = E[\hat{X}] - E[X] = E[\hat{X} - X]    (2-115)

i.e., the bias error is the expected deviation of the estimator from the
true value. Obviously, for unbiased estimates

    b[\hat{X}] = 0    (2-116)


For a measurement over a finite period of time T, the fact that \hat{X} may
be unbiased does not mean that the estimate \hat{X} is equal, or even close,
to the true value X. Indeed, there may be significant deviations from
the true value for any single measurement, even though the estimator is unbiased.
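The bias discussed above can be demonstrated by simulation. In this sketch (illustrative record length and trial count, not from the text), the sample variance s^2 = <x^2> - <x>^2 of Eq. 2-110 is computed from many short records of unit-variance Gaussian noise; its expected value is sigma^2*(N-1)/N rather than sigma^2, and the bias shrinks as the record length N grows.

```python
import random
import statistics as st

random.seed(2)
N, TRIALS, SIGMA2 = 5, 50000, 1.0

def s2_of_record():
    """Eq. 2-110 applied to one short record of N samples."""
    xs = [random.gauss(0.0, 1.0) for _ in range(N)]
    mean = sum(xs) / N
    mean_sq = sum(x * x for x in xs) / N
    return mean_sq - mean ** 2

avg_s2 = st.fmean(s2_of_record() for _ in range(TRIALS))

# The estimator is biased low: its expected value is sigma^2*(N-1)/N
assert abs(avg_s2 - SIGMA2 * (N - 1) / N) < 0.02
assert avg_s2 < SIGMA2
```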
2-8.4 Consistent Estimators. The following example cited by Bendat
and Piersol3 is very illustrative. Let us consider the mean-square error
(MSE) to be defined as the expected value of the square of the deviation
of the estimator from the true value, i.e.,

MSE = E[(X̂ − X)²]  (2-117)

As indicated previously, if the estimator is to be consistent, this mean-
square error should approach zero as T becomes large. Hence, for a
large value of T, a consistent estimate would necessarily tend to closely
approximate the true value X. The estimator is consistent if

lim_{T→∞} E[(X̂ − X)²] = 0  (2-118)

i.e., if the mean-square error approaches zero with time. The mean-
square error can be expanded as

E[(X̂ − X)²] = E{[X̂ − E(X̂) + E(X̂) − X]²}
  = E{[X̂ − E(X̂)]²} + 2E{[X̂ − E(X̂)][E(X̂) − X]}
  + E{[E(X̂) − X]²}  (2-119)

Since

E[X̂ − E(X̂)] = E[X̂] − E[X̂] = 0  (2-120)

the middle term of Eq. 2-119 is equal to zero, and the result is

E[(X̂ − X)²] = E{[X̂ − E(X̂)]²} + E{[E(X̂) − X]²}  (2-121)
In words, Eq. 2-121 states that the expected mean-square deviation about
the true value equals the expected mean-square deviation about the
expected value plus the squared deviation of the expected value from the
true value. Thus the mean-square error is the sum of two parts. The first
part is the variance of the estimate given by

Var[X̂] = σ²[X̂] = E{[X̂ − E(X̂)]²}
  = E[X̂²] − {E[X̂]}²  (2-122)

The second part is the square of the bias of the estimate as given by

b²[X̂] = {E[X̂] − X}² = E{[E(X̂) − X]²}  (2-123)

In general, compromises may be required to ensure that both variance
and bias will approach zero as T becomes large. In terms of the variance
and the square of the bias, the mean-square error is

MSE = E[(X̂ − X)²] = σ²[X̂] + b²[X̂]  (2-124)
2-8.5 Most Efficient Estimator. The most efficient estimator mini-
mizes the mean-square error as expressed in Eq. 2-124. Since σ²[X̂] and
b²[X̂] are both positive, as seen from Eqs. 2-122 and 2-123, the most
efficient estimator is found by reducing variance and bias to a minimum.
Since the variance is a property of the data, and not of the computational or
measurement procedures, reducing the bias to the absolute minimum assures
that the most efficient estimator has been found.
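The two-part structure of Eq. 2-124 can be verified numerically. The
following sketch (all parameter values are invented for illustration; this
example is not from the text) applies the biased variance estimator of
Eq. 2-110 to many short Gaussian records:

```python
import random
import statistics

random.seed(1)
MU, SIGMA2 = 0.0, 4.0        # assumed process mean and variance
N, TRIALS = 10, 20000        # points per record, number of records

# Eq. 2-110 estimator: s^2 = <x_i^2> - <x_i>^2, computed for each record
estimates = []
for _ in range(TRIALS):
    xs = [random.gauss(MU, SIGMA2 ** 0.5) for _ in range(N)]
    mean = sum(xs) / N
    estimates.append(sum(x * x for x in xs) / N - mean * mean)

mse = sum((s2 - SIGMA2) ** 2 for s2 in estimates) / TRIALS   # Eq. 2-117
var = statistics.pvariance(estimates)                        # Eq. 2-122
bias = sum(estimates) / TRIALS - SIGMA2                      # Eq. 2-115

# Eq. 2-124: MSE = variance of the estimator + (bias)^2
print(mse, var + bias ** 2)
```

For this estimator the bias is negative and shrinks as the record length N
grows, in line with the discussion following Eq. 2-113.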


2-9 Correlation

Correlation is one of the most important concepts in random noise
analysis. Correlation is a quantitative and/or qualitative evaluation
of the relation of the variable to itself, to another variable, or to several
other variables as a function of time or time displacement. It is being
introduced at this point with some of the statistical relations developed
earlier in this chapter to illustrate the underlying statistical basis.
Let us consider the degree of dependence between two real random
variables x and y. If we plot a scatter diagram for sample values xi and
yj of the random variables such as those in Fig. 2-7, we can use a least-
squares technique to fit these data points to a straight line. If all the
data points fall on this straight line, we can say that the random variables
x and y are linearly dependent or completely correlated. If the data
points are so widely scattered that they do not support any particular
straight line, the variables x and y probably are independent or uncorre-
lated. In the case shown in Fig. 2-7, where the data appear to support
the straight line in spite of a great deal of scatter, x and y are partially
dependent or partially correlated.
Let us consider a least-squares fitting of the data points to the straight line

y_p = a + bx  (2-125)

where y_p is the predicted value of y and the constants a and b are the y
intercept and slope, respectively. We can define the mean-square error
E_p as

E_p = E[(y − y_p)²] = E{[y − (a + bx)]²}  (2-126)

Differentiating with respect to a and b and equating the results to zero give

∂E_p/∂a = −2E(y) + 2a + 2b E(x) = 0  (2-127)




[Fig. 2-7. Scatter diagram of sample values of the random variables x
and y, with the fitted regression lines.]

∂E_p/∂b = −2E(xy) + 2a E(x) + 2b E(x²) = 0  (2-128)

from which we can obtain

b = [E(xy) − E(x) E(y)] / {E(x²) − [E(x)]²}  (2-129)

a = [E(y) E(x²) − E(x) E(xy)] / {E(x²) − [E(x)]²}  (2-130)

Equation 2-125 is used to provide the regression line of y on x. It is
equally valid to consider the regression line of x on y by fitting the data
points to the straight line
xp = a' + b'y (2-131)

where x_p is the predicted value of x and where a' and b' are, respectively,
the x intercept and the slope (with respect to the y-axis). We can obtain
the constants a' and b' with the equations

b' = [E(xy) − E(x) E(y)] / {E(y²) − [E(y)]²}  (2-132)

a' = [E(x) E(y²) − E(y) E(xy)] / {E(y²) − [E(y)]²}  (2-133)

If x and y are perfectly correlated, the regression obtained by fitting x
on y and y on x would be identical, i.e., the two lines of Fig. 2-7 would
coincide. Hence we have the relations

a = −a'/b'  or  ab' = −a'  (2-134)

b = 1/b'  or  bb' = 1  (2-135)

2-9.1 Normalized Correlation Coefficient. If x and y are not
perfectly correlated, we can determine the extent of correlation by the
deviation from Eq. 2-135. Let us define the square root of the product
of the two slopes, b and b', to be the normalized correlation coefficient

ρ = [bb']^(1/2) = {[E(xy) − E(x) E(y)]² / ({E(x²) − [E(x)]²}{E(y²) − [E(y)]²})}^(1/2)

  = [E(xy) − E(x) E(y)] / (σ_x σ_y)  (2-136)

Using Schwartz's inequality, we can show that

|E(xy) − E(x) E(y)| ≤ σ_x σ_y  (2-137)
For the case where x and y are uncorrelated (linearly independent)
random variables,
E(xy) = E(x) E(y) (2-138)

and hence ρ = 0. We see from these relations that the absolute value
of the normalized correlation coefficient varies from zero for uncorre-
lated variables to unity for perfectly correlated variables, i.e.,

0 ≤ |ρ| ≤ 1  (2-139)
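The moment formulas of Eqs. 2-129, 2-132, and 2-136 translate directly
into a short computation. In this sketch the data-generating model
(y equal to 2x plus unit Gaussian noise) is invented for illustration:

```python
import random

random.seed(7)
N = 4000
xs, ys = [], []
for _ in range(N):
    x = random.gauss(0.0, 1.0)
    xs.append(x)
    ys.append(2.0 * x + random.gauss(0.0, 1.0))   # partially correlated with x

# Sample moments standing in for the expectations E(.)
Ex = sum(xs) / N
Ey = sum(ys) / N
Exy = sum(x * y for x, y in zip(xs, ys)) / N
Ex2 = sum(x * x for x in xs) / N
Ey2 = sum(y * y for y in ys) / N

b = (Exy - Ex * Ey) / (Ex2 - Ex ** 2)         # slope of y on x, Eq. 2-129
b_prime = (Exy - Ex * Ey) / (Ey2 - Ey ** 2)   # slope of x on y, Eq. 2-132
rho = (b * b_prime) ** 0.5                    # Eq. 2-136
```

For this model the theoretical value is ρ = 2/√5 ≈ 0.894; perfectly
correlated data would give bb' = 1 (Eq. 2-135) and hence ρ = 1.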

2-9.2 Covariance Function. Now let us define the covariance e_xy
between x and y as the numerator of Eq. 2-136, i.e.,

e_xy = E(xy) − E(x) E(y)  (2-140)

Algebraically manipulating Eq. 2-140 gives

e_xy = E[(x − μ_x)(y − μ_y)]

  = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − μ_x)(y − μ_y) p(x,y) dx dy  (2-141)

For the special case of a single variable where x = y,

e_xx = E[(x − μ_x)²] = σ_x²  (2-142)
The concepts of linearly independent variables and uncorrelated vari-
ables are not identical. Independent random variables have e_xy = 0 and
ρ_xy = 0 and hence are uncorrelated. The converse statement, i.e., that
uncorrelated variables are independent, is true only for the special (but
quite common) physical situations where the variables are all normally
(Gaussian) distributed random variables.
In general, the mean values of the sample random variables x and y
are not constant with time and must be evaluated at various times. At
times t₁ and t₂, where t₁ = t and t₂ = t + τ, the covariance of x(t₁) and
y(t₂) is

e_xy(t₁,t₂) = e_xy(t, t + τ) = e_xy(τ)
  = E{[x(t) − μ_x(t)][y(t + τ) − μ_y(t + τ)]}  (2-143)

Similar expressions can be written for e_xx(t, t + τ) and e_yy(t, t + τ). For
the case where τ = 0, Eq. 2-143 becomes the same as Eq. 2-141.
2-9.3 Correlation Functions. We can now define the cross-correlation
function φ_xy(τ) as

φ_xy(τ) = E[x(t) y(t + τ)]  (2-144)

A comparison of Eq. 2-144 with Eq. 2-143 shows that the covariance is a
special case of the cross-correlation function where the mean values
have been removed. For stationary processes, Eq. 2-143 becomes

e_xy(τ) = E[x(t) y(t + τ)] − μ_x μ_y
  = φ_xy(τ) − μ_x μ_y  (2-145)

For a single variable where x = y, we obtain the autocorrelation function:

φ_xx(τ) = E[x(t) x(t + τ)]  (2-146)

We can also express correlation functions in terms of the joint prob-
ability density functions:

φ_xy(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x(t₁) y(t₂) p[x(t₁), y(t₂)] dx dy  (2-147)

For the special case where τ = 0,

φ_xy(0) = E[x(t) y(t)]  (2-148)
φ_xx(0) = E{[x(t)]²} = ψ_x²  (2-149)

By again using Schwartz's inequality, we can show that

|φ_xy(τ)|² ≤ φ_xx(0) φ_yy(0)  (2-150)
|e_xy(τ)|² ≤ e_xx(0) e_yy(0)  (2-151)
|φ_xx(τ)| ≤ φ_xx(0) = ψ_x²  (2-152)
|e_xx(τ)| ≤ e_xx(0) = σ_x²  (2-153)
We can now use Eq. 2-136 to redefine the normalized cross-correlation
function (normalized cross-covariance function) as

ρ_xy(τ) = e_xy(τ) / [e_xx(0) e_yy(0)]^(1/2)  (2-154)

which satisfies the condition

|ρ_xy(τ)| ≤ 1  (2-155)

The function ρ_xy(τ) indicates the degree of linear dependence between
x(t) and y(t) for a time displacement of τ.
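As a numerical illustration of Eqs. 2-144 and 2-154, the sketch below
(the signals and the 5-step delay are invented for the example) estimates
the cross-correlation of a noise record with a delayed, noisier copy of
itself; the estimate peaks at the delay:

```python
import random

random.seed(3)
N, DELAY, MAXLAG = 5000, 5, 20
raw = [random.gauss(0.0, 1.0) for _ in range(N + DELAY)]
x = raw[DELAY:]                                          # x(t) = n(t + DELAY)
y = [v + 0.3 * random.gauss(0.0, 1.0) for v in raw[:N]]  # y(t) = n(t) + noise

M = N - MAXLAG      # usable record length for lags 0 .. MAXLAG-1

def phi(a, b, tau):
    """Sample estimate of the correlation function, Eq. 2-144."""
    return sum(a[t] * b[t + tau] for t in range(M)) / M

corr = [phi(x, y, tau) for tau in range(MAXLAG)]
# The signals are zero-mean, so e_xy = phi_xy and Eq. 2-154 reads:
rho = [c / (phi(x, x, 0) * phi(y, y, 0)) ** 0.5 for c in corr]
best = max(range(MAXLAG), key=lambda tau: corr[tau])   # lag of the peak
```

The peak of ρ_xy(τ) locates the time displacement at which x(t) and y(t)
are most nearly linearly dependent, here the built-in 5-step delay.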

1. R. D. EVANS, The Atomic Nucleus, McGraw-Hill Book Company, Inc., New
York, 1955.
2. E. JAHNKE and F. EMDE, Tables of Functions, 4th ed., Dover Publications,
New York, 1945.
3. J. S. BENDAT and A. G. PIERSOL, Measurement and Analysis of Random Data,
John Wiley & Sons, Inc., New York, 1966.


Neutron-Counting Techniques in

Nuclear Reactor Systems

Noise techniques can generally be divided into microscopic techniques
(those based on the statistics of the neutron-population variation) and
macroscopic techniques (those based on the composite behavior of the
system). In this chapter we shall deal primarily with microscopic
techniques, which may involve the probability of detecting a neutron, the
deviation of a probability density from Poisson or Gaussian distributions,
the variance-to-mean ratio, the distribution of the time intervals between
counts, and other similar phenomena.
Most of the statistical techniques have been developed for zero-power
critical reactors. Recent work has allowed some techniques to be
extended to power reactors and subcritical nuclear systems. Certain
techniques are more useful for thermal reactors, whereas others are more
useful for fast reactors. Sometimes the instrumentation required (or
available) is a determining factor in the choice of techniques. These
factors are discussed with the description of the technique.
The chain-reaction nature of nuclear processes in a reactor gives rise
to a non-normal distribution of the detected counts because the individual
counts are dependent on the other neutrons in the chain. Hence the
statistical properties of the count sequence are dependent on the dynamic
characteristics of the nuclear system.
There are several experimental methods based on neutron counting by
which the prompt-neutron decay constant (the Rossi-alpha, defined later),
the detector efficiency, and the reactor power can be determined. One of the
first experimental techniques to be employed for this purpose was the
Rossi-alpha method,1 consisting of measurements of the conditional prob-
ability of a count in a time interval Δ at a time t following a count at
t = 0. The relative variance of neutron counts registered in a certain
time interval was studied by Feynman et al.2 Another method of
determining β/l, the zero-probability method suggested by Mogilner and
Zolotukhin,3 consists of measurements of the probability of no count in a
certain time interval. All these methods have been reviewed by Thie4
and are presented in abbreviated form later in this chapter.
A recent study by Babala5 indicates that most of these techniques can
be derived from Kolmogorov's theory of branching processes.6 Courant
and Wallace7 studied the fluctuations of the number of neutrons in a
reactor on the basis of the Fokker-Planck equation, obtained from prob-
ability-balance considerations, and derived the formula for the variance of
neutron counts. Pal8 used the first-collision technique to derive expres-
sions for the zero probability, the variance, and the correlation function,
which is closely related to the conditional probability of the classical
Rossi-alpha method.
In this chapter the lumped-parameter model of the nuclear-reactor
system is assumed unless otherwise specified. Such an assumption is
usually valid for reactor dynamics if the physical dimensions of the
core do not exceed a few migration lengths for the particular reactor.
In nuclear systems that are critical at zero power or are slightly sub-
critical, one of the most important parameters is the prompt-neutron
decay constant, known as the Rossi-alpha and defined* as

α = [1 − k(1 − β)]/l = (1 − k_p)/l = (β − ρ)/Λ  (3-1)

where all symbols have the definitions commonly accepted in reactor
theory.9,10 For a delayed-critical system, this equation becomes

α_c = β/l = β/Λ  (3-2)

since l and Λ are then equal. Hence we can express α in terms of α_c:

α = (β − ρ)/Λ = α_c(β − ρ)/β = α_c[1 − ρ($)]  (3-3)

where ρ($) is the reactivity expressed in dollars.


The basic cause of the statistical fluctuation of the neutron population
in most zero-power nuclear systems is the variation in the number of neu-
trons produced in each fission. The yield of neutrons per fission is based
on probabilities that in turn are related to the competing processes
involved in fission. For example, let us consider the neutron yield from
the fission of 235U. The probability of yielding ν_p neutrons, where ν_p
is an integer between zero and six, and the associated probability distribu-
tion function are given in Table 3-1. The plots of the probability distribu-

* Rossi's original definition was actually the negative of this expression, but the
given definition is more commonly used today.

Table 3-1
Probability of Yielding ν Neutrons in 235U Fission

ν_p    p(ν_p)   P(ν_p)   ν_p p(ν_p)   ν_p² p(ν_p)
0      0.03     0.03     0            0
1      0.16     0.19     0.16         0.16
2      0.33     0.52     0.66         1.32
3      0.30     0.82     0.90         2.70
4      0.15     0.96     0.60         2.40
5      0.03     1.00     0.15         0.75
6      ~0       1.00     0            0

       1.00              ⟨ν_p⟩ = 2.47  ⟨ν_p²⟩ = 7.33

tion and probability distribution function are shown in Fig. 3-1. It is
apparent from Fig. 3-1(a) that the probability distribution for ν_p is not
a Poisson distribution, even though the envelope of the discrete values
has a bell shape. Indeed, the deviation of the ν_p distribution from a
Poisson (or binomial) distribution is one of its distinguishing and useful
characteristics.
The relative width D of a probability distribution is defined as

D = (⟨x²⟩ − ⟨x⟩)/⟨x⟩² = (σ_x² + ⟨x⟩² − ⟨x⟩)/⟨x⟩²  (3-4)

Diven et al.11 have indicated that the relative width D_ν, sometimes called
Diven's parameter, is an appropriate normalized average for the number
of prompt neutrons per fission:

D_ν = (⟨ν_p²⟩ − ⟨ν_p⟩)/⟨ν_p⟩² = ⟨ν_p(ν_p − 1)⟩/⟨ν_p⟩²  (3-5)

The notation of the last term is commonly used in the literature. For
235U, we can use the values of Table 3-1 to obtain

D_ν = (⟨ν_p²⟩ − ⟨ν_p⟩)/⟨ν_p⟩² = (7.33 − 2.47)/(2.47)² = 0.796

This compares favorably with the value of 0.795 ± 0.007 given by Diven
et al.11 Values given for other fissionable isotopes are: 233U, 0.786 ± 0.013;




[Fig. 3-1. Probability distribution (a) and probability distribution func-
tion (b) of the number of fast neutrons per fission in 235U.]

239Pu, 0.815 ± 0.017; and 240Pu, 0.807 ± 0.008. These values deviate
significantly from unity, the value of D_ν for binomial, Poisson,
and Gaussian distributions. This is readily shown by substituting Eq.
2-74 into Eq. 3-4 to get D = 1.
An alternate form of the relative width can be seen by evaluating the
numerator using the discrete probabilities:

⟨ν_p²⟩ − ⟨ν_p⟩ = Σ_i p_i ν_i² − Σ_i p_i ν_i = Σ_i p_i ν_i(ν_i − 1) = ⟨ν_p(ν_p − 1)⟩  (3-6)

where p_i is the probability that precisely ν_i neutrons are liberated in a
fission and ν_i assumes integral values between 0 and 6, representing the
number of prompt neutrons emitted in a particular fission. Hence Eq.
3-5 becomes

D_ν = ⟨ν_p(ν_p − 1)⟩/⟨ν_p⟩²  (3-7)
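The equivalence of Eqs. 3-5 and 3-7, and the numerical value quoted
above, can be checked directly from the entries of Table 3-1:

```python
# Prompt-neutron multiplicity probabilities for 235U from Table 3-1
p = {0: 0.03, 1: 0.16, 2: 0.33, 3: 0.30, 4: 0.15, 5: 0.03, 6: 0.0}

nu_bar = sum(v * pv for v, pv in p.items())       # <nu_p>   = 2.47
nu2_bar = sum(v * v * pv for v, pv in p.items())  # <nu_p^2> = 7.33

D_a = (nu2_bar - nu_bar) / nu_bar ** 2                            # Eq. 3-5
D_b = sum(v * (v - 1) * pv for v, pv in p.items()) / nu_bar ** 2  # Eq. 3-7

print(D_a)
```

Both forms give D_ν ≈ 0.796, matching the value quoted in the text to the
precision shown.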


The Rossi-alpha technique was first suggested by Rossi, and the sta-
tistical theory of neutron chains was heuristically developed by Feyn-
man, de Hoffman, and Serber.2 Their derivation will be followed in
this section. More rigorous mathematical derivations have been carried
out by Matthes,12 Borgwaldt and Stegemann,13 Babala,5 and Iijima.14
This technique was originally developed for fast-reactor systems where
the number of neutron chains existing in the nuclear system at any
instant is not large and the decay of the neutron chain is very fast
because the neutron lifetime is very short. Recent modifications of the
technique, with other instrumentation, have permitted it to be used for
thermal-reactor systems where the chains overlap considerably and their
decay is slower because of the longer neutron lifetime.
In the original Rossi-alpha experiments, a coincidence counting system
such as that shown in Fig. 3-2 was used by Orndoff.1 The system can be
operated with a single detector providing both inputs 1 and 2. Alter-
nately, separate detectors can be used, since the theoretical development
depends only on the chain-related detection occurring. Using two detec-
tors makes the timing problems of the instrumentation less critical. The
principle is to determine the probability that a neutron will be detected in
the time interval Δ at t following a neutron detection at t = 0 when the
original fission occurred at t₀. When we consider the subcritical multi-
plication relation given by Murray9 for prompt neutrons only, the prompt-
neutron population is

n = Sl/(1 − k_p) = S/α  (3-8)





[Fig. 3-2. Block diagram of Orndoff's analyzer. From J. D. Orndoff,
Prompt Neutron Periods of Metal Critical Assemblies, Nuclear Science and
Engineering, 2: 450 (1957).]

where S is the strength of the neutron source in the reactor. The prompt-
neutron population, and hence the number of neutron chains in the system,
is inversely related to α for a given source strength S. With a very
weak neutron source S in a fast assembly, it is quite possible for all the
neutrons in a near-critical system to be members of a single neutron
chain. Hence a sensitive detector can frequently detect two or more
neutrons from the same chain.
3-3.1 Theoretical Considerations. When the first neutron count
from a given chain occurs at a time designated at t = 0, there is a certain
probability that the detector will, at a time t later, detect either a random


neutron (i.e., one from some other chain) or a chain-related neutron
(i.e., one from the same chain that produced the count at t = 0).
The probability of detecting a random neutron is AΔ, where A is the
average counting rate of the detector and Δ is the time interval of meas-
urement, i.e., the time width of a single channel of the analyzer. Since
the prompt-neutron population on the average must decay exponentially,
the probability of detecting a chain-related event decreases according to
e^(−αt). Hence the total probability of detecting a neutron (either random
or chain related) in the time interval Δ is

p(t)Δ = AΔ + Be^(−αt)Δ  (3-9)

where the coefficient B has been derived by Feynman, de Hoffman, and
Serber,2 and by Orndoff1 in the following manner. The probability that
a fission will occur at t₀ in Δ₀ or dt₀ is

p(t₀) dt₀ = F dt₀  (3-10)

where F is the average fission rate of the system. Next, the probability
of a detection count in Δ₁ at t₁, where t₁ > t₀, due to the fission at t₀ is

p(t₁)Δ₁ = εν_p vΣ_f e^(−α(t₁−t₀)) Δ₁  (3-11)

where ε = detector efficiency in counts per fission
  ν_p = actual number of prompt neutrons emitted per fission at t₀
  v = velocity of thermal neutrons
  Σ_f = macroscopic fission cross section
  vΣ_f = average fission rate per unit neutron density

In a similar manner the probability of a chain-related count in Δ₂ at t₂,
where t₂ > t₁, following a count at t₁ is

p(t₂)Δ₂ = ε(ν_p − 1) vΣ_f e^(−α(t₂−t₀)) Δ₂  (3-12)

where ν_p − 1 takes into account the fact that the neutron detected at
time t₁ was lost to the fission chain. The three probabilities F dt₀,
p(t₁)Δ₁, and p(t₂)Δ₂ are independent and can be multiplied to give the
joint probability of the occurrence of a fission at t₀ followed by a count
within Δ₁ at t₁ and another count within Δ₂ at t₂, where the neutrons
detected are part of the chain initiated by the fission at t₀. Hence the
total probability of the preceding sequence of events occurring and
producing chain-related counts is the integral of the product of the three
probabilities over all time t₀ (from −∞ to t₁) available for occurrence of
the first fission; i.e.,

p_c(t₁,t₂)Δ₁Δ₂ = ∫_{−∞}^{t₁} p(t₁)Δ₁ p(t₂)Δ₂ F dt₀

  = ∫_{−∞}^{t₁} Fε²(vΣ_f)² ⟨ν_p(ν_p − 1)⟩ e^(−α(t₁+t₂−2t₀)) Δ₁Δ₂ dt₀

  = Fε² ⟨ν_p(ν_p − 1)⟩ [(vΣ_f)²/2α] e^(−α(t₂−t₁)) Δ₁Δ₂  (3-13)

Note that ⟨ν_p(ν_p − 1)⟩ indicates a suitable averaging over the distribution
of prompt neutrons emitted per fission, as given in Eq. 3-6. We can
write Eq. 3-13 in a more familiar form by substituting the identity

vΣ_f = k_p/(⟨ν_p⟩l)  (3-14)

and the definition of α from Eq. 3-1 to give

p_c(t₁,t₂)Δ₁Δ₂ = Fε² [⟨ν_p(ν_p − 1)⟩ k_p² / (2⟨ν_p⟩²(1 − k_p)l)] e^(−α(t₂−t₁)) Δ₁Δ₂  (3-15)
The probability of a random pair of counts in Δ₁ and Δ₂ is given as

p_R(t₁,t₂)Δ₁Δ₂ = F²ε²Δ₁Δ₂  (3-16)

Thus the total probability of a pair of counts in Δ₁ and Δ₂ is the sum of the
random and chain-related probabilities:

p(t₁,t₂)Δ₁Δ₂ = F²ε²Δ₁Δ₂ + Fε² [⟨ν_p(ν_p − 1)⟩ k_p² / (2⟨ν_p⟩²(1 − k_p)l)] e^(−α(t₂−t₁)) Δ₁Δ₂

  = FεΔ₁ [FεΔ₂ + εD_ν k_p²/(2(1 − k_p)l) e^(−α(t₂−t₁)) Δ₂]  (3-17)

where FεΔ is the probability that a count occurs in the interval Δ and D_ν
is Diven's parameter given by Eq. 3-7. If we set FεΔ₁ equal to 1, thereby
requiring that a count occur at t₁, then FεΔ₂ is the probability of a random
count in the interval Δ₂, and the second term in the brackets in Eq. 3-17
is the probability of a chain-related count at t₂ following a count at t₁.
This can be generalized so that the probability of a chain-related count
at time t following a count at t = 0 is

p_c(t)Δ = [εD_ν k_p²/(2(1 − k_p)l)] e^(−αt) Δ  (3-18)

Orndoff1 has shown that this expression must be corrected for the prob-
ability of a count being introduced at t as a consequence of the fission
and detection process at t = 0 by replacing ⟨ν_p(ν_p − 1)⟩ with

⟨ν_p(ν_p − 1)⟩ + 2⟨ν_p⟩(1 − k_p)δ/k_p

where δ is the effective number of neutrons resulting from the fission and
detection process at t = 0. Since δ is dependent on the detector charac-
teristics and location, it must be evaluated for each experimental setup.
Generally, this correction is small, about 1%, and is often neglected.
The total probability of a count at time t in interval Δ following a count
at t = 0 is

p(t)Δ = p_R(t)Δ + p_c(t)Δ

  = FεΔ + {ε[⟨ν_p(ν_p − 1)⟩ + 2⟨ν_p⟩(1 − k_p)δ/k_p] k_p² / (2⟨ν_p⟩²(1 − k_p)l)} e^(−αt)Δ  (3-19)
which has the form of Eq. 3-9:

p(t)Δ = AΔ + Be^(−αt)Δ  (3-20)

where

A = Fε  (3-21)

is the average counting rate and

B = ε[⟨ν_p(ν_p − 1)⟩ + 2⟨ν_p⟩(1 − k_p)δ/k_p] k_p² / (2⟨ν_p⟩²(1 − k_p)l)
  ≈ εD_ν k_p² / (2(1 − k_p)l)  (3-22)

Equation 3-20 is the result obtained from Rossi-alpha experiments, where
AΔ represents the background due to uncorrelated counts and can be
removed to leave a single exponential term from which the decay con-
stant α can be evaluated. Note that the uncorrelated term depends on
the fission rate (i.e., the power level in a critical reactor or the source level
in a subcritical system) whereas the chain-related, or correlated, term is
independent of power level. Thus lowering the fission rate will increase the
signal-to-noise ratio of the measurement.
3-3.2 Experimental Measurements. Regardless of whether one or
two detectors are used, the instrumentation of Fig. 3-2 serves primarily
as a clock that measures the time interval between the trigger and sub-
sequent pulses. If sufficient delays and coincidence channels are pro-
vided, several neutrons may be detected after each trigger pulse. Since
this type of instrumentation is expensive, it is desirable to utilize com-
mercially available multichannel analyzers. Several different modes of
operation have been used, depending on the time resolution required


for the experiments, i.e., whether the neutron lifetime is a fraction of a
microsecond, a few microseconds, or several hundred microseconds.
A procedure that is similar to the one used by Orndoff involves using
a multichannel analyzer as a multiscaler. The first pulse starts the
internal clock, and detector pulses are registered in the appropriate time
channel. Commercially available multiscalers provide channel widths
of less than 10 μsec with a few microseconds' dead time after each recorded
pulse. A special system designed by Diaz and Uhrig15 uses a digital
computer as a special-purpose multiscaler to provide ~3-μsec channels
and an alternating input system to eliminate the dead time (i.e., one
input channel collects the counts while the other stores the counts
collected in the previous time increment).
If the counting rate is low (i.e., <1000 counts/sec), a multichannel
analyzer can be used in the time-of-flight mode to provide channel widths
down to 0.1 μsec, but each pulse is followed by the dead time of the
analyzer (typically 10 μsec). Special equipment using several channels
of buffer memory to temporarily store pulses until the end of the cycle
provides very narrow channel widths (i.e., down to 0.01 μsec) without
dead time.
A slightly modified technique used by Brunson et al.16 uses a multi-
scaler system in which the pulse from the first detector starts the clock
and the pulse from the second system stops it, stores the pulse
in the appropriate memory location, and resets the analyzer. Such a
procedure actually measures the time between detected events but pref-
erentially measures the shorter time intervals. Brunson et al. indicate
that the correct probability of detecting a neutron in the nth channel is

p_n = c_n / (c₀ + Σ_{i=n}^{N} c_i)  (3-23)

where c_i and c_n are the number of counts in the ith and nth channels,
respectively, c₀ is the number of cycles during which no event is recorded,
and N is the total number of channels being used in the analyzer. This
procedure is discussed further in Sec. 3-7.
Mihalczo17 modified the technique used by Brunson et al. by inserting
a variable time delay between detector 2 and the analyzer. Hence pulses
in detector 2 which preceded the trigger pulse in detector 1 are collected.
The probability p(t) of Eq. 3-20 now becomes

p(t')Δ = AΔ + Be^(−α|t'|)Δ  (3-24)

where t' = t − t_d can be negative. The term t_d is the delay time, and the
other terms are the same as in Eq. 3-9. This procedure, which is analogous
to the correlation of the pulse sequences, yields two measurements of α
from a single run.
The results of experimental Rossi-alpha measurements are fitted to
Eq. 3-9 by using a least-squares technique, and the parameters A, B, and
α are evaluated. Then the equations

A = Fε  (3-25)

B = ε⟨ν_p(ν_p − 1)⟩k_p² / (2⟨ν_p⟩²(1 − k_p)l) = εD_ν k_p²/(2αl²)  (3-26)

can be used to obtain any two of the five quantities F, ε, D_ν, k_p, or l if
the other three are known.
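The fitting step can be sketched as follows. All numerical values (A, B,
α, and the channel width) are invented for illustration; the background A
is estimated from the late channels, where the exponential has decayed
away, and α then follows from a log-linear least-squares fit:

```python
import math

# Hypothetical values: background rate A, correlated amplitude B,
# prompt decay constant alpha (sec^-1), and 1-usec channel width
A, B, ALPHA, DT = 200.0, 900.0, 2.0e5, 1.0e-6

t = [i * DT for i in range(40)]
p = [A + B * math.exp(-ALPHA * ti) for ti in t]   # Eq. 3-20 per unit width

# Step 1: estimate the uncorrelated background A from the late channels,
# where alpha*t >> 1 and only the flat term survives
A_est = sum(p[-5:]) / 5.0

# Step 2: subtract the background and fit ln[p(t) - A] = ln B - alpha*t
# by linear least squares over the early channels
xs = t[:20]
ys = [math.log(p[i] - A_est) for i in range(20)]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(v * v for v in xs)
sxy = sum(u * v for u, v in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
alpha_est, B_est = -slope, math.exp((sy - slope * sx) / n)
```

With noise-free synthetic data the recovered A, B, and α agree with the
assumed values to within the small bias introduced by the background
estimate; with counting statistics a weighted fit would be preferred.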
Karam18 has pointed out that in many cases, particularly when a
reflector is present, the experimental data support an equation of the form

p(t)Δ = AΔ + Be^(−αt)Δ + B'e^(−α't)Δ  (3-27)

where α' > α, except for the case of fast reactors with moderating
reflectors. This matter is discussed extensively by Suwalski,19 but no
completely satisfactory explanation has been put forth. Some spatial
effects have also been observed, although they have not been studied
in detail.
The relation between α and ρ expressed in Eq. 3-1, the definition of α,
does not hold for fast reflected assemblies. Cohn20 has suggested that
these difficulties are related to the physical meaning of α and α'. Clearly,
the lumped-parameter model is not adequate for reflected systems, par-
ticularly when the reflector differs significantly in composition from the
core.

3-4.1 Theoretical Considerations. Another statistical method
closely related to the Rossi-alpha method is the Feynman technique2 of
relating the ratio of the variance to the mean of the number of counts
collected in a fixed time interval. If we repeatedly measure the number
of counts occurring in a given time interval in a nuclear system, we can
relate the parameters of the nuclear system to the variance-to-mean
ratio (s²/c̄) of the number of counts; i.e.,

s²/c̄ = (⟨c²⟩ − c̄²)/c̄  (3-28)

where c̄ represents the average number of counts in the interval T. The
number of pairs of counts expected in this interval is given by

c!/[2!(c − 2)!] = c(c − 1)/2  (3-29)

since the number of combinations of a set of c events taken two at a time
is c!/[2!(c − 2)!]. Hence the average or expected number of pairs of
counts in the interval T is

⟨c(c − 1)⟩/2 = ∫_{t₂=0}^{T} ∫_{t₁=0}^{t₂} p(t₁,t₂) dt₁ dt₂  (3-30)

where p(t₁,t₂) is the total probability of a pair of counts in dt₁ and dt₂.
Using the differential form of Eq. 3-17 for p(t₁,t₂) gives

⟨c(c − 1)⟩/2 = ∫₀^T ∫₀^{t₂} [F²ε² + Fε²D_ν k_p²/(2(1 − k_p)l) e^(−α(t₂−t₁))] dt₁ dt₂

  = F²ε²T²/2 + [Fε²D_ν k_p² T/(2(1 − k_p)²)][1 − (1 − e^(−αT))/(αT)]  (3-31)
Since

c̄ = FεT  (3-32)

we can rearrange Eq. 3-31 to obtain

s²/c̄ = 1 + [εD_ν k_p²/(1 − k_p)²][1 − (1 − e^(−αT))/(αT)] = 1 + Y  (3-33)

where

Y = (εD_ν/ρ_p²)[1 − (1 − e^(−αT))/(αT)]  (3-34)

and ρ_p is the "prompt reactivity"* defined by

ρ_p = (k_p − 1)/k_p  (3-35)

Equation 3-33 can be put in the form

s²/c̄ − s_P²/c̄ = Y  (3-36)

* The prompt reactivity ρ_p is a grouping of terms in a form analogous to the defi-
nition of reactivity ρ. The two "reactivities" are related by
ρ_p = (ρ − β)/(1 − β)


where s_P² is the variance of the Poisson distribution. Hence Y can be
interpreted as the difference between the relative (or reduced) variances
s²/c̄ of the chain-related variable and a Poisson random variable. Since
the quantity Y is equal to zero for random Poisson fluctuations, it is a
measure of the additional fluctuations (in excess of random) that exist
when chain-related events occur. This technique was originally used
by Feynman et al.2 to obtain the dispersion in the number of neutrons
per 235U thermal fission by measuring Y for T ≫ 1/α so that the term in
brackets in Eq. 3-33 approaches unity. If the counter efficiency and the
prompt multiplication factor are known, ⟨ν_p(ν_p − 1)⟩ can be determined.
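The behavior of Eq. 3-33 as a function of the gate width T is easy to
tabulate. The parameter values below (ε, D_ν, k_p, l) are hypothetical
and chosen only to illustrate the shape of the curve:

```python
import math

# Hypothetical parameters: detector efficiency, Diven's parameter,
# prompt multiplication factor, and prompt-neutron lifetime (sec)
EPS, D_NU, KP, L = 1.0e-3, 0.80, 0.995, 1.0e-4

alpha = (1.0 - KP) / L            # prompt decay constant, Eq. 3-1
rho_p = (KP - 1.0) / KP           # prompt reactivity, Eq. 3-35
Y_inf = EPS * D_NU / rho_p ** 2   # saturation value of Y for alpha*T >> 1

def Y(T):
    """Feynman Y of Eq. 3-34 for a counting-gate width T (seconds)."""
    return Y_inf * (1.0 - (1.0 - math.exp(-alpha * T)) / (alpha * T))

# Variance-to-mean ratio of Eq. 3-33 for several gate widths
ratios = {T: 1.0 + Y(T) for T in (1e-4, 1e-3, 1e-2, 1e-1, 1.0)}
```

Y rises from zero for short gates, where the counts look Poisson, and
saturates at εD_ν/ρ_p² for αT ≫ 1, which is the regime Feynman et al.
exploited.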
In many reactor applications it is a good approximation to ignore the
delayed neutrons because they are virtually constant over the time inter-
vals used in the experiments. However, for thermal and some inter-
mediate systems, it is necessary to include the influence of the delayed
neutrons. Bennett21 has derived an expression including the effect of
delayed neutrons. When delayed neutrons are included, Eq. 3-33
C2 +2 2A ( 1 e-iT
= 1 + ED, Ho(a) 1- (3-37)
c i=,i ajiT
where A, and ai are defined in terms of the zero-power transfer function
6 a
j- Pk 7
k=1 Xk + jW A
Ho(w) 6 L(3-38)
jw(l + k X -4jw) p i=1

Bennett" gave the values of A,, aj, and Ho(ao) for Ipl < 3/10 and 1 < 5
X 10-4 sec to be the values given in Table 3-2 for critical or slightly
subcritical systems. The delayed neutrons also have another undesirable
effect. As pointed out by Pal,8 the successive measured time intervals
are correlated, and Eq. 3-33 has to be corrected also for this correlation.
Pal8 suggested a waiting time θ between the successive measured time
intervals to reduce this correlation but did not give any formula for the
correction term. Babala5 indicated that the effect of this correlation
becomes small as the number of observations increases. Pacilio,22 how-
ever, has indicated that he could not find experimental evidence of this
effect.
3-4.2 Experimental Procedures. The experimental procedure for
the variance-to-mean technique is fairly simple: one measures the number
of counts in a large number of time intervals of length T and calculates
the variance. The procedure is repeated for other time intervals T of


different lengths. From the plot of the reduced variance vs. T, one can
determine α from a least-squares fit of the data to Eq. 3-33. A gated
scaler, which is controlled by a precision timer, is usually used to count
the events detected in the interval T; and the output is printed or punched
on tape or cards. The output operation actually represents an interrup-
tion of the experiment and introduces a dead time between consecutive
observations. The error due to dead time is minimized by the use of a
modern multichannel analyzer as a multiscaler where the dead time can
be as short as 10 to 20 μsec and as many as 1000 to 4000 channels may be
available. Although special equipment could be built to allow the collec-
tion and storage of data simultaneously and thereby eliminate the dead
time, the alternate procedure described subsequently is more commonly
used today.
In addition to the dead time problem, the preceding procedure requires
the collection of a large amount of data. In an alternate procedure first
suggested by Stegemann,23 a multichannel analyzer is used in which the
detector counts advance the channel address. At the end of an interval
T, a single count is added to the analyzer memory at the final address
and the channel address is reset simultaneously. (For instance, if there
are 341 events detected in a time interval T, a single count is inserted
into memory position 342. The final address is always one greater than
the number of counts since the memory address is reset to an address of
one.) This procedure then gives the discrete probability function, and
we can modify Eqs. 2-38 and 2-39 to calculate the mean and mean-
square values and therefore the variance and variance-to-mean ratio:

c̄ = Σ_{i=1}^{M} (i − 1)N_i / Σ_{i=1}^{M} N_i  (3-39)

⟨c²⟩ = Σ_{i=1}^{M} (i − 1)²N_i / Σ_{i=1}^{M} N_i  (3-40)

where M is the number of channels (memory positions) in the analyzer
and N_i is the number of counts stored in the ith channel. Care must be
taken to see that the number of counts in a time interval does not exceed
the number of channels available in the analyzer or that some special
arrangement, such as an auxiliary printout system, is used when this
occurs.
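Given the stored channel contents, Eqs. 3-39 and 3-40 reduce to a few
sums. The channel contents below are invented for illustration; indexing
from zero absorbs the i − 1 offset that arises because the stored address
is one greater than the number of counts:

```python
# Hypothetical analyzer memory: cycles[k] = number of cycles in which
# exactly k counts were detected (stored at channel address k + 1)
cycles = [0, 5, 12, 18, 22, 18, 12, 8, 3, 2]

total = sum(cycles)
mean = sum(k * n for k, n in enumerate(cycles)) / total          # Eq. 3-39
mean_sq = sum(k * k * n for k, n in enumerate(cycles)) / total   # Eq. 3-40
variance = mean_sq - mean ** 2
v_to_m = variance / mean       # variance-to-mean ratio of Eq. 3-28
```

The quantity v_to_m − 1 is then the Y of Eq. 3-34 for the gate width used
in the measurement.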
Both procedures are subject to the limitations associated with the
stationarity of the system being studied. Hence it is common procedure


to record a sufficiently long record of the output of a detector on magnetic
tape and to process this time record repeatedly until the necessary infor-
mation is obtained (see Albrecht24 and Johnson25). This recording also
allows the results of the different methods to be compared with each
other. An alternate procedure used by Turkcan and Dragt26 is to use a
very short basic time interval so that the successive samples can be
added to form longer time intervals that are multiples of the basic interval.
3-4.3 Parameter Measurements. It is apparent from Eqs. 3-33
and 3-37 that there are several parameters that can be evaluated by
variance-to-mean measurements (e.g., the prompt decay constant a, the
dispersion of the number of neutrons emitted per fission, the reactivity
of a subcritical system, and the power level of a critical system). Obvi-
ously, not all of these can be evaluated independently. Furthermore,
the type of system (fast, intermediate, or thermal) being studied also
determines which parameters can be evaluated. Pacilio27 has expressed
the limitations on the use of Eq. 3-33 in terms of α_2T; i.e., it can be used
until the inequality
α_2T << 1   (3-41)

is no longer valid. Physically, this means that the interval T is suffi-
ciently short that delayed neutron effects are not significant, i.e., T <
50 msec for critical or near-critical systems. However, the effects of
delayed neutrons become less important as the reactor becomes more subcritical.
Pacilio28 points out that the number of intervals counted, N, influences
the precision of the measurements, even though it does not appear in
Eq. 3-33 or 3-37. He also derived the relation for the relative standard
deviation, where successive samples are regarded as uncorrelated, to be

σ_Y/Y = N^{-1/2} [(4 + 2/c̄)(1/Y) + (2 + 1/c̄)(1/Y²) + 1]^{1/2}   (3-42)

where Y is defined by Eq. 3-34. He carried out a parametric study of
Eqs. 3-37 and 3-42 and concluded that:

1. A large number of short test intervals is preferable to a small number of
long intervals.
2. The dependence of Y on α occurs for αT < 1 but vanishes as T becomes large.
3. The requirements for a Feynman variance-to-mean measurement are (a)
very low power, (b) high detector efficiency (10⁻³ to 10⁻⁴ for uranium
systems), and (c) a large number of short measurements.
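The qualitative behavior behind these conclusions can be checked with a one-line model of the correlated excess; the function below is the Eq. 3-33 bracket, with y_inf standing for εD_ν/ρ², and the numerical values are illustrative only.

```python
import math

def feynman_y(T, alpha, y_inf):
    """Correlated excess Y(T) of the variance-to-mean ratio (Eq. 3-33 form);
    y_inf plays the role of the asymptotic value eps*D_nu/rho**2."""
    x = alpha * T
    return y_inf * (1.0 - (1.0 - math.exp(-x)) / x)
```

For αT << 1, Y grows like y_inf·αT/2, so short gates carry the dependence on α; for αT >> 1, Y saturates at y_inf and long gates carry no information about α.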


If we restrict T to the range
1/α_1 << T << 1/α_2   (3-43)

Eq. 3-33 becomes

σ_c²/c̄ = 1 + εD_ν/ρ² = 1 + Y   (for subcritical systems)   (3-44)

σ_c²/c̄ = 1 + εD_ν(1 - β)²/β² = 1 + Y_crit   (for critical systems)   (3-45)
The conditions of Eq. 3-43 cannot be met in graphite or heavy-water
systems. Even so, these expressions have been used by Feynman et al.2
and Kurusyna29 to measure D_ν, by McCulloch30 to measure β for a
plutonium system, and by Lindeman and Ruby31 to measure subcriticality.
The subcriticality measurements are based on the relation
Y_crit/Y = [εD_ν(1 - β)²/β²] / [εD_ν/ρ²] = ρ²(1 - β)²/β² = [ρ($)]²(1 - β)²   (3-46)
This method does not require that the generation time remain constant for
changes of reactivity, but it does require that the detector efficiency
remain constant. The reported results have been in good agreement
with pulsed neutron experiments down to $3.5 subcritical.32 The effi-
ciency ε can be calculated from Eq. 3-28 if the reactivity has been deter-
mined from the measurements of α and α_c and if β is determined by a
calculation. The absolute fission rate F in the system is given by

F = A/ε   (3-47)

where A is the average counting rate in the experiment.
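A minimal sketch of the reactivity inference, assuming the Eq. 3-46 ratio in the form Y_crit/Y = [ρ($)]²(1 - β)² as reconstructed above (the function name and the numbers in the example are illustrative, not from the text):

```python
import math

def reactivity_dollars(y_crit, y_sub, beta=0.0064):
    """Subcritical reactivity in dollars from the measured correlated excess
    at delayed critical (y_crit) and in the subcritical state (y_sub),
    assuming the detector efficiency is unchanged between the two runs."""
    return -math.sqrt(y_crit / y_sub) / (1.0 - beta)
```

Note the method needs only the ratio of the two Y values, which is why a constant generation time is not required but a constant efficiency is.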


As delayed criticality is approached, the reduced variance calcu-
lated from Eq. 3-36 diverges (α_7 approaches zero since α_7 = -ρ/11.6).
To circumvent this difficulty, Bennett21 suggested an alternate method
which does not diverge at delayed critical, namely, measurements of the
second moment of differences of counts in subsequent time intervals
(differential method). From the point of view of neutron statistics, the
reactor then behaves as a subcritical system. Bennett has derived the relation

⟨(c_{k+1} - c_k)²⟩ / 2⟨c_k⟩ = 1 + εD_ν Σ_{i=1}^{7} (2Λ/A_i) Ho(α_i) [1 - (3 - 4e^{-α_iT} + e^{-2α_iT}) / (2α_iT)]   (3-48)


where ck is the number of counts in the kth time increment of length T
and the other symbols have their previous meanings. The ensemble
averaging is carried out over N time increments. If the condition of
Eq. 3-41 is valid, Eq. 3-48 reduces to

⟨(c_{k+1} - c_k)²⟩ / 2⟨c_k⟩ = 1 + (εD_ν/ρ²) [1 - (3 - 4e^{-αT} + e^{-2αT}) / (2αT)] = 1 + W   (3-49)

where

W = (εD_ν/ρ²) [1 - (3 - 4e^{-αT} + e^{-2αT}) / (2αT)]   (3-50)
In a way analogous to the Y of the Feynman variance-to-mean experi-
ment, W represents the increase in fluctuations due to the correlated
events of the neutron chains over the fluctuations that would have
occurred had they been random normal. However, W is smaller than Y,
indicating that the correlation between the differences in the number of
counts in successive intervals is less than the correlation between the
number of counts in successive intervals. As T becomes short, both W
and Y approach zero. Similarly, as T becomes long, W and Y approach
asymptotic values of εD_ν/ρ² in a similar but not identical manner.
In experiments the gated circuits used for the Feynman experiments
can also be used, but the procedure for analyzing the data is different.
Dead time between runs, particularly for the very short time intervals,
is as important as for the Feynman method. The Stegemann probabil-
ity analyzer used for the Feynman experiments cannot be used for this
technique. Therefore a large amount of data is necessary, and the experi-
mental error is larger than in variance measurements.
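The relative sizes of W and Y quoted above can be verified numerically from the single-mode brackets of Eqs. 3-33 and 3-50 as given here (y_inf again stands for εD_ν/ρ²; all numerical values are illustrative):

```python
import math

def feynman_y(T, alpha, y_inf):
    """Correlated excess of the Feynman variance-to-mean experiment."""
    x = alpha * T
    return y_inf * (1.0 - (1.0 - math.exp(-x)) / x)

def bennett_w(T, alpha, y_inf):
    """Correlated excess W of the differential (difference-of-counts)
    method, Eq. 3-50 form."""
    x = alpha * T
    return y_inf * (1.0 - (3.0 - 4.0 * math.exp(-x) + math.exp(-2.0 * x)) / (2.0 * x))
```

At any finite gate width W < Y; both vanish as T becomes short and both approach y_inf as T becomes long.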


There are several methods of measuring parameters of nuclear reactor
systems which are based on the relation of p_i(A), the probability of count-
ing i pulses in a time interval A. When chain-related counts are present,
p_i(A) is a function of c̄, the average number of counts in the interval A,
and the correlation term Y, the measure of additional fluctuations in
excess of random which occur when chain-related events occur. For
uncorrelated random events, p_i(A) is only a function of c̄.
Experimental measurements involve measuring the frequencies fi(A),
or frequency distribution, and comparing them with pi(A), probability
distribution. The probabilities thus obtained are then used to evaluate
the variance-to-mean ratio, from which the parameters can be evaluated
by using the Feynman method (i.e., by using Eq. 3-33). Alternately, the


probability pi(A) can be expressed in terms of the parameters of the
nuclear system.
3-6.1 Zero Probability (Mogilner) Method. The use of the zero
probability method was first suggested by Mogilner and Zolotukhin3 in
1961. The average fraction of empty channels (i.e., zero counts during
interval A) in an analyzer containing M channels is measured for a series
of tests in which A is varied over a wide range.
Mogilner and Zolotukhin use probability generating functions to calcu-
late the probability distribution for a discrete random variable as defined
by Eq. 2-47 because of the ease of computing probabilities and moments.
However, their original derivation was based on the assumed negative
binomial distribution F(A,z) of neutron counts, where

F(A,z) = Σ_{i=0}^{∞} e^{-zi} p_i(A) = [1 + (1 - e^{-z})Y]^{-c̄/Y}   (3-51)

z is an auxiliary variable, c̄ is the average number of counts in time
interval A, and Y is the correlation parameter defined by Eq. 3-34. As
z approaches infinity, only the i = 0 term of the sum survives, and the
zero probability is given as

ln p_0(A) = ln F(A,∞) = -(c̄/Y) ln (1 + Y)   (3-52)

p_0(A) = (1 + Y)^{-c̄/Y}   (3-53)
From experimental values of p_0(A), we can obtain Y and hence α.
Pál8 has given a theoretical basis for the zero probability using a more
exact theory and gives the expression

ln p_0(A) = -[2c̄/(γ + 1)] {1 + [2/((γ - 1)αA)] ln [((γ + 1)² - (γ - 1)² e^{-γαA}) / 4γ]}   (3-54)

where

γ = (1 + 2εD_ν/ρ²)^{1/2}   (3-55)

and all other terms have been defined previously in this chapter.
Pál33 indicates that the first two terms of Eq. 3-54 expanded in a power
series in εD_ν/ρ² are the same as the corresponding terms for ln p_0(A) given
in Eq. 3-52. Such a power expansion is possible only for εD_ν/ρ² < 1, which
means that the variance of the counts is hardly different from that of a


Poisson distribution. However, α is much more easily determined for
εD_ν/ρ² >> 1; i.e., the variance of the counts is quite different from that for a
Poisson distribution. Pál has recommended that the more exact expres-
sion in Eq. 3-54 be used since his work indicates the approximations used
by Mogilner are valid only for A < 3 msec. Babala5 has derived Eq. 3-54
using a three-interval probability generating function and concurs with
the recommendation of Pál. However, Pacilio34 indicates that the exper-
imental agreement between the results using Eqs. 3-53 and 3-54 is
consistent over a range that is wider than expected.
The experimental equipment used for this type of experiment is the
probability analyzer described in Sec. 3-4, which gives the discrete prob-
abilities p_i(A) as an output. Only p_0(A) and c̄ are needed for this experi-
ment, where p_0(A) is given by

p_0(A) = N_0/N   (3-56)

where N_0 is the number of counts in the first channel (zero counts during
A) and N is the total number of counts collected in all channels. The
average number of counts c̄ can be obtained from a monitoring scaler. A
least-squares fitting of p_0(A) vs. A will give α and εD_ν/ρ². Pacilio34 has
used this technique to measure absolute power level, and Lindeman and
Ruby31 have used it to measure subcritical reactivity.
The zero probability method is usually applied to thermal reactors at
very low power since there must be a substantial number of intervals
with no counts if the method is to be useful.
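Given a measured p_0 and a monitored c̄, Eq. 3-53 can be inverted numerically for Y; the bisection sketch below is one way to do it (the bracket and iteration count are arbitrary choices):

```python
import math

def y_from_zero_probability(p0, cbar):
    """Solve p0 = (1 + Y)**(-cbar/Y) (Eq. 3-53) for Y by bisection.
    Requires p0 > exp(-cbar), the Poisson (Y -> 0) value."""
    def f(y):                      # monotonically increasing in y; root at Y
        return -(cbar / y) * math.log1p(y) - math.log(p0)
    lo, hi = 1e-9, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Repeating the inversion for a series of gate widths A and fitting the resulting Y(A) then yields α, as described above.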
3-6.2 Polya-Model Method. The Polya-model method is an exten-
sion of the Mogilner method in which all values of pi(A), as approximated
by the probability profile of Polya,35 are compared with the frequency
distribution of counts, i.e., the ensemble of fractions fi of the channels
with i counts. The distribution of the Polya model is actually the nega-
tive binomial distribution. The expression for pi(A) has been derived by
successive differentiation of a probability distribution generating func-
tion. The result is a recursive relation:
p_i(A) = {[c̄ + (i - 1)Y] / [i(1 + Y)]} p_{i-1}(A)   (3-57)

where the starting term of the recursion

p_0(A) = (1 + Y)^{-c̄/Y}   (3-58)
is the zero probability of the Mogilner method. The recursive relation
is an approximation of a more rigorous but complicated analytical expres-
sion derived by Pál8 and Mogilner and Zolotukhin.3
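The recursion is easy to exercise; the sketch below (parameters illustrative) also shows that it reproduces the negative binomial mean and variance-to-mean ratio.

```python
def polya_probabilities(cbar, Y, imax):
    """Count probabilities p_0 ... p_imax from Eqs. 3-57 and 3-58."""
    p = [(1.0 + Y) ** (-cbar / Y)]                             # Eq. 3-58
    for i in range(1, imax + 1):
        p.append((cbar + (i - 1) * Y) / (i * (1.0 + Y)) * p[-1])   # Eq. 3-57
    return p
```

Summing i·p_i recovers c̄, and the variance-to-mean ratio of the generated distribution is 1 + Y, consistent with the Feynman definition of Y.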


The experimental procedure is to use a probability analyzer such as
that described in Sec. 3-4 to determine the frequency distribution of
counts for various values of time interval A. The problem of dead time
for short time intervals is substantially the same as for the Feynman
method. A least-squares fitting procedure is then used to obtain opti-
mum values of c and Y. The approach recommended by Mogilner and
Zolotukhin3 involves the minimization of the quantity χ², where

χ² = Σ_{i=0} (c_i - c_{p,i})² / c_{p,i}   (3-59)

where c_i is the actual number of counts collected in the ith channel and
c_{p,i} is the number expected on the basis of the theoretical probability
distribution relations of Eqs. 3-57 and 3-58. Pacilio34 suggests an alter-
nate method in which the quantity to be minimized is

χ² = Σ_i w_i (p̂_i - b_i)²   (3-60)

where w_i is the weighting function, usually taken to be unity, and b_i and
p̂_i are defined by

b_i = p_i/p_{i-1} = [c̄ + (i - 1)Y] / [i(1 + Y)]   (3-61)

p̂_i = c_i/c_{i-1}   (3-62)

If the value of c̄ is obtained from a monitor scaler, the variance-to-mean
ratio can be expressed directly in terms of the measured ratios p̂_i and of
M, the number of values of i used in the summations (Eq. 3-63). This
technique of processing data has shown good agreement with the Feyn-
man variance method.
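A sketch of the Eq. 3-59 statistic against the Polya expectation follows; the synthetic channel contents in the example are illustrative only.

```python
def chi_square(channel_counts, cbar, Y):
    """Eq. 3-59: channel_counts[i] is the observed number of sweeps with i
    counts; the expectation comes from the recursion of Eqs. 3-57 and 3-58."""
    total = sum(channel_counts)
    p = (1.0 + Y) ** (-cbar / Y)          # p_0, Eq. 3-58
    chi2 = 0.0
    for i, c_i in enumerate(channel_counts):
        if i > 0:
            p *= (cbar + (i - 1) * Y) / (i * (1.0 + Y))   # Eq. 3-57
        expected = total * p
        chi2 += (c_i - expected) ** 2 / expected
    return chi2
```

Scanning chi_square over trial values of c̄ and Y and taking the minimum is the least-squares fit described above.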


Recent work by Babala5 using the distribution of the lengths of inter-
vals between counts seems to offer a number of advantages over some
of the other counting techniques. In the case of a sequence of counts
with Poisson (uncorrelated) statistics, the interval distribution is given by the


probability of no count in a time interval t, multiplied by the probability
of a count in an infinitely small time interval dt, immediately following
p(t) dt = p_0(t) p_c(dt) = e^{-Fεt} Fε dt   (3-64)
Since in this case the counts are independent of each other, Eq. 3-64
represents the probability distribution of time intervals between counts.
More rigorously, one should write

p(t) dt = [p_c(dt') p_0(t) p_c(dt)] / p_c(dt')   (3-65)

where dt' is an infinitely small time interval immediately preceding the
interval t. The denominator in Eq. 3-65 is required to satisfy the
normalization condition

∫₀^∞ p(t) dt = 1
Equation 3-64 gives the probability that, after a time origin t = 0
chosen at random, the first count arrives in the time interval dt at t, and
Eq. 3-65 gives the probability that, after a count at t = 0, the next count
comes in dt at t. For correlated sequences of counts, these probabilities
are different from each other. Therefore we shall adopt the nomen-
clature of Babala and refer to the expressions of Eqs. 3-64 and 3-65 as
the random-origin (RO) interval distribution and the count-to-count
(CC) interval distribution, respectively, and designate the corresponding
probabilities to be pRo(t) and pcc(t).
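For a purely Poisson pulse train the two distributions coincide, as Eqs. 3-64 and 3-65 indicate; a small simulation (rate and sample sizes are arbitrary choices) illustrates this:

```python
import bisect
import random

random.seed(1)
rate = 5.0                                  # assumed count rate F*eps
times, t = [], 0.0
for _ in range(100_000):
    t += random.expovariate(rate)           # Poisson process: exponential gaps
    times.append(t)

# Count-to-count (CC) intervals: every count to the next count.
cc = [b - a for a, b in zip(times, times[1:])]

# Random-origin (RO) intervals: random origin to the first following count.
ro = []
for _ in range(20_000):
    origin = random.uniform(0.0, times[-1] * 0.9)
    j = bisect.bisect_right(times, origin)
    ro.append(times[j] - origin)

mean_cc = sum(cc) / len(cc)
mean_ro = sum(ro) / len(ro)
```

Both means come out near 1/rate; for a correlated (chain-related) pulse train the two histograms would differ, which is exactly the distinction the RO and CC distributions capture.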
3-7.1 Count-to-Count Interval Distribution. Babala5 has derived an
expression for the count-to-count interval distribution using a three-
interval probability generating function and letting the first and third
intervals go to zero. The result is
p_cc(t) dt = C_1(t) dt + C_2(t) e^{-σ̃t} dt   (3-66)

C_1(t) = 4Fε p_0(t) [(γ + 1) + (γ - 1) e^{-γαt}]² / [(γ + 1)² - (γ - 1)² e^{-γαt}]²   (3-67)

C_2(t) = 8F²ε² γ² p_0(t) / {σ̃ [(γ + 1)² - (γ - 1)² e^{-γαt}]²}   (3-68)

where σ̃, the equivalent neutron source strength (Eq. 3-69), is propor-
tional to S/D_ν for a subcritical system and to F/(αD_ν) for a critical
system. The parameter γ is given by Eq. 3-55, and p_0(t), the probability
of no counts in the interval from 0 to t, is given by Eq. 3-54 with A = t.
S is the


neutron source strength when the system is subcritical. The two forms
of Eq. 3-69 are for subcritical and critical nuclear systems, respectively.
Equation 3-66 has certain features that are of interest. If we let
ε << ρ², i.e., γ ≈ 1, and increase the source S so that C_2(t) → 0, the result is

p_cc(t) dt = Fε e^{-Fεt} dt   (3-70)
which is identical to Eq. 3-64 for a Poissonian distribution of counts;
i.e., the process is uncorrelated.
The probability pcc(t) is dependent on both the power (or source)
level and the detector efficiency but does offer advantages over other
statistical techniques. At high power levels where the Rossi-alpha tech-
nique is useless for parameter measurements, C_2 → 0, and thus we have

p_cc(t) dt = C_1(t) dt   (3-71)
which can be used for parameter measurements.
If the efficiency is very low (i.e., ε << ρ² and γ → 1), all efficiency-
limited techniques are useless. However, under the condition 2εD_ν/ρ² <<
1, Eq. 3-66 becomes

p_cc(t) = e^{-Fεt} [Fε + (εD_ν/2Λρ) e^{-αt}] = e^{-Fεt} (A + Be^{-αt})   (3-72)

which, except for the factor e^{-Fεt}, is comparable to the Rossi-alpha expres-
sion. For fast reactors where efficiencies are low, the counting rate is
low, and the time intervals are short, the exponential term e^{-Fεt} is approxi-
mately unity, and Eq. 3-72 becomes

p_cc(t) dt = (A + Be^{-αt}) dt   (3-73)
which is identical to Eq. 3-20 for the Rossi-alpha experiment. This is
the explanation for the success of the Rossi-alpha technique used by
Brunson et al.16 (see Sec. 3-3) when they were actually measuring the
count-to-count times.
The experimental procedure has been described in Sec. 3-3. The first
pulse triggers the analyzer in which the channels are advanced by a
precision timer. The second count stops the analyzer, a count is inserted
into the memory position corresponding to the channel where the analyzer
was stopped, and the system is reset to wait for the next count. Such an
arrangement records only half the data, i.e., the time interval between
every other pulse. If the analyzer is automatically triggered by the
stop-and-reset action, all the data can be recorded. However, this
procedure does shorten the measured time interval by an amount equal
to the dead time (time required to stop the analyzer, store the count, and

reset and start the analyzer). With a modern analyzer this dead time
can be made quite short; however, an appropriate correction should be
made routinely.
3-7.2 Random-Origin Interval Distribution Method. Closely related
to the count-to-count interval distribution method is the random-origin
interval distribution method. The primary difference is that the origin
of the interval is randomly chosen by a process that is uncorrelated with
the nuclear phenomenon being studied. Babala5 has derived the expres-
sion for the probability distribution for random-origin intervals to be
p_RO(t) dt = 2Fε p_0(t) [(γ + 1) + (γ - 1) e^{-γαt}] / [(γ + 1)² - (γ - 1)² e^{-γαt}] dt   (3-74)

where γ and p_0(t) are defined by Eqs. 3-55 and 3-54, respectively. As in
the case of the count-to-count interval distribution, the process becomes
Poissonian and

p_RO(t) dt = Fε e^{-Fεt} dt   (3-75)

when γ → 1 because of decreased efficiency ε or a very subcritical system.
Pacilio32 has pointed out that ∫₀ᵗ p_RO(t') dt' represents the probability
that, after a time t = 0 chosen at random, the first pulse will arrive
between 0 and t. Since p_0(t) is the probability that the same event occurs
between t and infinity, we obtain

p_0(t) + ∫₀ᵗ p_RO(t') dt' = 1   (3-76)

from which

p_RO(t) = -∂p_0(t)/∂t   (3-77)
We can also consider p_RO(t) dt to be the probability that a pulse arrives
just after the random origin and is followed by an interval of length t
containing no count. This probability can be expressed as the product
of the probability of one count between 0 and dt and the probability of
the next count occurring at any time greater than t. Hence

p_RO(t) dt = (Fε dt) [1 - ∫₀ᵗ p_cc(t') dt']   (3-78)

where the integral is the probability that after a count at time t = 0
the next count arrives between 0 and t. Hence

p_cc(t) = -(1/Fε) ∂p_RO(t)/∂t = (1/Fε) ∂²p_0(t)/∂t²   (3-79)
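The chain of relations in Eqs. 3-76 to 3-79 is easy to verify numerically in the uncorrelated limit, where p_0(t) = e^{-Fεt}; in the sketch below finite differences stand in for the derivatives (the rate and step sizes are arbitrary):

```python
import math

A = 5.0                                    # assumed count rate F*eps
p0 = lambda t: math.exp(-A * t)            # zero-count probability, Poisson case

def p_ro(t, h=1e-5):
    """Eq. 3-77: random-origin density = -d p0 / dt (central difference)."""
    return -(p0(t + h) - p0(t - h)) / (2.0 * h)

def p_cc(t, h=1e-4):
    """Eq. 3-79: count-to-count density = (1/(F*eps)) d^2 p0 / dt^2."""
    return (p0(t + h) - 2.0 * p0(t) + p0(t - h)) / (A * h * h)
```

Both reduce to A·e^{-At}, i.e., to Eqs. 3-64 and 3-75, as they must for uncorrelated counts.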
The experimental procedure is substantially the same as that used
for the count-to-count interval distribution procedure except that the


analyzer is triggered by a randomly occurring pulse after the analyzer
is reset.
This procedure is effective for thermal-reactor systems but is efficiency
limited and cannot effectively be used for fast-reactor systems. Austin
et al.36 have used this procedure, which they called the "waiting-time-
alpha method," and obtained good agreement with Rossi-alpha and
pulsed-neutron measurements.


An alternate method of measuring a has been recently suggested by
Srinivasan.37 It is based on the fact that by introducing an artificial
variable dead time into the measuring instrument, one influences the
correlation between counts. We shall discuss this influence for the case
of a paralyzable instrument, defined in the following way by Srinivasan:
Suppose a sequence of input pulses (true counts) from a neutron detector
is fed into an instrument that yields a sequence of output pulses (output
counts). If the instrument transmits a true count to the output, it is
unable to provide a second output count unless there is a time interval
of at least d (dead time) between two successive true counts. Thus this
instrument registers a number of output counts equal to the number of intervals longer than d between true counts.
For uncorrelated counts the relation between the count rate Cd on the
output of a paralyzable instrument and the true count rate C is given by

C_d = Ce^{-Cd}   (3-80)

where the exponential function is simply the probability that an interval
between two true counts is longer than d. The variance of output
counts in a time A of such a system for a process having a Poisson
distribution of input pulses was derived by Srinivasan (Eqs. 3-81 and
3-82); the resulting variance-to-mean ratio is less than unity by an amount
that depends on the dead-time losses, and the mean number of output
counts is

c̄ = C_d A   (3-83)
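Eq. 3-80 itself is easy to check by simulating a paralyzable counter driven by a Poisson train; the rate, dead time, and sample size below are arbitrary choices.

```python
import math
import random

random.seed(2)
C, d, n_true = 100.0, 0.002, 200_000      # true rate, dead time, sample size
times, t = [], 0.0
for _ in range(n_true):
    t += random.expovariate(C)
    times.append(t)

# Paralyzable response: a true count reaches the output only when the
# preceding true count lies at least d earlier (every count retriggers
# the dead time, so closely spaced bursts are lost entirely).
n_out = 1 + sum(1 for a, b in zip(times, times[1:]) if b - a > d)
cd_measured = n_out / times[-1]
cd_theory = C * math.exp(-C * d)          # Eq. 3-80
```

The measured output rate agrees with Ce^{-Cd} because, for exponential gaps, the probability that an interval exceeds d is exactly e^{-Cd}.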

For correlated counts such as those which occur in a zero-power nuclear
reactor, Srinivasan gives the variance in counts for a paralyzable instru-
ment (Eq. 3-84). Rearranging Eq. 3-84 gives the variance-to-mean ratio
(Eq. 3-86): the correlated contribution adds, to the dead-time-modified
Poisson ratio, a term proportional to

(εD_ν/ρ²) p_co(d) B e^{-αd} [(A - d)/A] [1 - (1 - e^{-α(A-d)}) / (α(A - d))]

where B, defined by Eq. 3-85, is approximately unity for short dead
times, and p_co(d), the probability that an interval between two counts is
greater than d, is given by

p_co(d) = 2p_0(d) [(γ + 1) + (γ - 1) e^{-γαd}] / [(γ + 1)² - (γ - 1)² e^{-γαd}]   (3-87)

where p_0(d) is given by Eq. 3-54 with A = d and all other terms are as
defined previously.
Equation 3-86 is perhaps too complicated for the purpose of practical
determination of α by varying the dead time d. It can be used, however,
for estimating the effect that the dead time of an instrument has on the
variance. It is readily seen, for example, that the dead-time effect can
be neglected if C_d d << 1. A similar analysis of a nonparalyzable instrument appears to be
much more difficult and will not be attempted here.

The cross-correlation function φ_xy(τ), of which the autocorrelation
function φ_xx(τ) is a special case where x = y, of a stationary process has
been defined by Eqs. 2-144 and 2-147 to be

φ_xy(τ) = E[x(t) y(t + τ)] = ∫₋∞^∞ ∫₋∞^∞ x(t₁) y(t₂) p[x(t₁),y(t₂)] dx dy   (3-88)

where p[x(t₁),y(t₂)] is the joint probability function that event x occurs
at time t₁ and event y occurs at time t₂, and τ is defined by

τ = t₂ - t₁   (3-89)

If we let x be the detection of a neutron by detector 1 and y be the detec-
tion of a neutron by detector 2 (or by 1 for the special case where x = y),
then the correlation function is readily seen to be the probability of a pair
of counts occurring in A₁ at t₁ and in A₂ at t₂; i.e., they occur at an interval
τ apart. This is the same quantity studied in Sec. 3-3 in the discussion
of the Rossi-alpha. Hence

φ_xy(τ) = p(t₁,t₂) = p_C(t₁,t₂) + p_R(t₁,t₂)   (3-90)

where p(t₁,t₂) is the probability of a count at t₁ followed by a count at t₂
and the subscripts C and R refer to correlated and random events. If
one detector is used, then Eq. 3-90 becomes equal to Eq. 3-17, except
for the presence of a Dirac delta term at τ = 0:

φ_xx(τ) = F²ε² + Fε² [D_ν k_p² / 2(1 - k_p)l] e^{-ατ} + Fε δ(τ) = A² + ABe^{-ατ} + A δ(τ)
       = A(A + Be^{-ατ}) + A δ(τ)   (3-91)

where A and B have been defined by Eqs. 3-25 and 3-26. This is known
as the autocorrelation analysis technique. From a theoretical point of
view, it is substantially the same as the Rossi-alpha technique, but the
measurement technique is entirely different. Note that the random or
background term is dependent on the square of the fission rate F (or
power), whereas the amplitude of the exponential term is dependent only
on F. Hence such a technique is limited to very low fission rates. The
Dirac delta term does not occur in the Rossi-alpha measurements due to
the delay located in front of the first coincidence channel (see Fig. 3-2).
If two detectors with the same efficiencies, ε, are used, the random counts
collected are independent and hence uncorrelated since the neutrons are
detected by absorption. Equation 3-91 now becomes

φ_xy(τ) = F²ε² + Fε² [D_ν k_p² / 2(1 - k_p)l] e^{-ατ}
        = A² + ABe^{-ατ} = A(A + Be^{-ατ})   (3-92)


and is known as the cross-correlation analysis technique. The elimination
of the Dirac delta term in Eq. 3-92 is the principal difference when the
two-detector cross-correlation technique is used in the time domain. As
we will see later, this corresponds to the elimination of the constant back-
ground term in the frequency domain and allows measurements to be
taken with relatively low efficiency detectors. Since the preceding
derivation is based on a lumped-parameter model, the detectors are
usually located reasonably close to each other; the reactor must be small
enough that spatial effects are not significant.
The typical correlation experiment with pulses from a detector is
carried out by recording the pulses from one or two detectors and replay-
ing the record for each value of r. Typically, the number of counts x and
y in small time increments A is taken as the counting rates over the time
interval A, and the data are processed according to the relation

φ_xy(kA) = [1/(N - k)] Σ_{i=1}^{N-k} x[t + iA] y[t + (i + k)A]   (3-93)

where the time lag is an integral number of time increments A:

τ = kA   (3-94)

The calculations associated with Eq. 3-93 are time consuming and
usually require a digital computer. Often it is more convenient to use
the Rossi-alpha procedure than to carry out an autocorrelation measure-
ment. Sometimes the pulses are converted to an analog variable, or
ionization-chamber-type detectors are used to provide an analog variable
that can be correlated with analog-type correlators. The relation of
Eq. 3-92 is valid only for reactors small enough to be represented by a
lumped-parameter model. Spatial effects can distort the results if
they are not properly taken into account or are not recognized.
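A direct transcription of Eq. 3-93 shows why the computation is laborious: the whole record is traversed once per lag value (the binned counts x, y are whatever the replayed record provides).

```python
def correlation(x, y, k):
    """Estimate of phi_xy at lag k*A per Eq. 3-93; x[i] and y[i] are the
    counts of the two detectors in the ith increment of width A.  With
    y = x this is the autocorrelation estimate."""
    n = len(x) - k
    return sum(x[i] * y[i + k] for i in range(n)) / n
```

Evaluating this for K lag values over an N-point record costs on the order of N·K multiplications, which is the digital-computer burden referred to above.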


The covariance is defined by Eq. 2-140 to be

cov(x,y) = E(xy) - E(x) E(y)   (3-95)

i.e., it is the difference between the expected product of the variables
and the product of the expected values. If this difference vanishes, the
two variables are not correlated; if it does not, it is a good measure of the
correlation between them. If the outputs of two neutron detectors are
sampled for a large number of times for an interval A to give the ensemble


of counts {c₁(A),c₂(A)}, the covariance can be calculated by

e₁₂(A) = ⟨(c₁ - ⟨c₁⟩)(c₂ - ⟨c₂⟩)⟩ = ⟨c₁c₂⟩ - ⟨c₁⟩⟨c₂⟩   (3-96)
Cohn38 indicated that, if the prompt-neutron approximation
α_2A << 1   (3-97)
is valid, we can modify the Feynman variance-to-mean expression (Eq.
3-33) to obtain the alternate (but equally valid) expressions
e₁₂(A)/⟨c₁⟩ = (ε₂D_ν/ρ²) [1 - (1 - e^{-αA})/(αA)]   (3-98)

e₁₂(A)/⟨c₂⟩ = (ε₁D_ν/ρ²) [1 - (1 - e^{-αA})/(αA)]   (3-99)

If the prompt-neutron approximation of Eq. 3-97 is not valid,
Eqs. 3-98 and 3-99 become

e₁₂(A)/⟨c₁⟩ = ε₂D_ν Σ_{i=1}^{7} (2Λ/A_i) Ho(α_i) [1 - (1 - e^{-α_iA})/(α_iA)]   (3-100)

e₁₂(A)/⟨c₂⟩ = ε₁D_ν Σ_{i=1}^{7} (2Λ/A_i) Ho(α_i) [1 - (1 - e^{-α_iA})/(α_iA)]   (3-101)
where A_i, Ho(α_i), and α_i are the same as in Eq. 3-36 and as given in Table
3-2. Note that the unity term in Eq. 3-33 which represented the random
background (actually, the Poissonian relative variance) has been elim-
inated by the cross-correlation involved in this process.

Table 3-2
Constants for Zero-Power Transfer Function of a 235U-Fueled Reactor Near
Delayed Criticality*
β = 0.0064, Λ < 5 × 10⁻⁴ sec, |ρ| < 0.10

i             1                    2      3      4      5      6       7
α_i, sec⁻¹    (β - ρ)/Λ            2.89   1.02   0.195  0.068  0.0143  -ρ/11.6
A_i, sec⁻¹    (1 - β)/Λ            29     20     11.2   6.1    1.2     11.6
Ho(α_i)       (1 - β)²/2(β - ρ)    164    186    237    284    343     415

*From E. F. Bennett, The Rice Formulation of Reactor Noise, Nucl. Sci. Eng.,
8(1): 53 (1960).


The covariance technique is superior to the conventional Feynman
technique because it partially eliminates the bias effects of measurements
for finite times which are present in the latter technique (i.e., the Poisson
relative variance is assumed to be unity, whereas it may actually differ
somewhat from unity for a finite time measurement).
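The two-detector covariance estimate of Eqs. 3-96 and 3-98 is a few lines of arithmetic (the input arrays are whatever the two scalers deliver; the example values are illustrative):

```python
def covariance_to_mean(c1, c2):
    """e12(A)/<c1> per Eqs. 3-96 and 3-98: c1[k], c2[k] are the counts of
    detectors 1 and 2 in the kth sampling interval A."""
    n = len(c1)
    m1, m2 = sum(c1) / n, sum(c2) / n
    cov = sum(a * b for a, b in zip(c1, c2)) / n - m1 * m2
    return cov / m1
```

Note that no unity (Poisson) term is subtracted: the independent detection noise of the two channels averages out of the cross product, which is the advantage over the single-detector Feynman ratio described above.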

In the implementation of the Rossi-alpha procedure for measurements
using a multichannel analyzer as a multiscaler, the first pulse starts the
analyzer and subsequent pulses are recorded in the appropriate channel.
No regard is given to whether the neutron density is increasing, decreas-
ing, or remaining constant. The endogenous pulsed technique uses a
triggering pulse that occurs when the fluctuating neutron density reaches
a preselected level above the mean level. The spontaneous bursts to
levels significantly higher than the mean level may be considered to be
due to variations in the fission rate; the decay to a lower level is character-
ized by the fundamental decay constant a. The improvement of this
technique over the conventional Rossi-alpha measurements using a multi-
scaler is due to the preselection of measuring periods when the neutron
density is decaying. This provides an increased signal-to-background
ratio because only decay chains of significant amplitude are analyzed.
Such a technique has some of the features of a pulsed-neutron measure-
ment while retaining the simplicity, economy, and convenience of the
conventional Rossi-alpha measurements. The reduction in time required
over a conventional Rossi-alpha measurement is such that it is practical
to carry out endogenous-pulsed-source measurements on thermal reactors.
Similar advantages can be expected for fast-reactor systems.
3-11.1 Theoretical Considerations. The neutron density can be
described by the counts detected in the interval A:

c(t) = c₀e^{-αt} + c̄   (3-102)

where c̄ is the mean value of the background given by Eq. 3-32 to be

c̄ = FεA   (3-103)

for a critical reactor. For a subcritical system

c̄ = SεA / ν̄(1 - k)   (3-104)

The amplitude of the spontaneous burst c₀ above the mean value c̄ is

c₀ = (S/B)c̄   (3-105)
This technique has sometimes been called the inherent-pulsed-source technique.


where S/B is the signal-to-background ratio. Pacilio89 has pointed out
that this technique is equivalent to a pulsed-neutron technique with the
intensity (above the steady-state level) given by Eq. 3-105 and a repe-
tition rate given by

R = p_i/A,  with i = c̄[(S/B) + 1]   (3-106)

where p_i is the probability of counting i pulses in a time interval A when
c̄ is the mean number of counts per interval A. Pacilio39 has tabulated
values of co and R for various experimental conditions and calculated the
time necessary to collect a given number of burst decays in such measure-
ments on thermal-reactor systems. The result has been significant
improvement in statistical accuracy and decreased measuring time com-
pared with conventional Rossi-alpha procedures. Although this study
presumes an efficiency associated with an in-core detector for both types
of measurements, recent work by Pacilio22 indicates that such measure-
ments can be taken with the detectors located in the reflector.
3-11.2 Experimental Measurements. The experimental setup is
substantially the same as that used for the one-detector Rossi-alpha
experiment except that a special preselection and triggering device is
used. Several types of such devices have been used:
1. Pacilio39 used a fast-responding rate meter to observe the neutron
population. When a predetermined threshold level is reached, the instru-
mentation system is triggered. This threshold level must be adjusted
with power level and efficiency of the detector.
2. Pacilio39 has also digitally counted the number of pulses collected
in a predetermined time interval A. When this number of pulses reaches
a preselected level, the analyzer is triggered.
3. Chwaszchewski et al.40 used two count-rate meters, one with a slow
time constant τ_s and the other with a fast time constant τ_f. When a
burst occurred, the fast rate meter responded while the slow one did not,
thereby triggering the instrumentation system. This procedure has the
rather severe limitation
τ_f << τ_s   (3-107)

4. Borgwaldt41 and Pacilio39 used a simple triple-coincidence trigger
that functions in the following manner. A pulse from the detector opens
a coincidence gate for a time interval inversely related to the counting
rate selected. If two more pulses arrive from the detector in the time
interval, the instrumentation is triggered. Obviously, other combina-
tions of gates are possible.


Experimental data are fitted to Eq. 3-102 to obtain a value of a,
usually as a means of measuring reactivity. Chwaszchewski et al.40
found agreement within 2% with conventional pulsed-neutron experi-
ments in the reactivity range -$0.05 to -$0.35 in a water-graphite-
moderated enriched system. Pacilio39 carried out endogenous-pulsed-
source measurements in the reactivity range from criticality to -$13 in
an organic-moderated enriched system. The results were in good agree-
ment with pulsed-neutron experiments.
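The fit to Eq. 3-102 can be as simple as subtracting the monitored background and fitting a straight line to the logarithm of the decaying burst; a sketch with synthetic, noiseless data (all numbers illustrative):

```python
import math

def fit_alpha(times, counts, cbar):
    """Least-squares slope of log(c(t) - cbar) versus t, per Eq. 3-102;
    cbar is the known background level and -slope is alpha."""
    ys = [math.log(c - cbar) for c in counts]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(x * x for x in times)
    sxy = sum(x * y for x, y in zip(times, ys))
    return -(n * sxy - sx * sy) / (n * sxx - sx * sx)
```

With noisy data, points where c(t) approaches c̄ should be dropped or down-weighted before taking the logarithm.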


1. J. D. ORNDOFF, Prompt Neutron Periods of Metal Critical Assemblies,
Nucl. Sci. Eng., 2: 450 (July 1957).
2. R. P. FEYNMAN, F. DE HOFFMAN, and R. SERBER, Dispersion of the Neutron
Emission in U-235 Fission, J. Nucl. Energy, 3: 64 (1956).
3. A. I. MOGILNER and V. G. ZOLOTUKHIN, The Statistical r-Method of Meas-
uring the Kinetic Parameters of a Reactor, At. Energ. (USSR), 10: 377 (1961).
4. J. A. THIE, Reactor Noise, Rowman and Littlefield, Inc., New York, 1963.
5. D. BABALA, Neutron Counting Statistics in Nuclear Reactors, Norwegian
Report KR-114, November 1966.
6. A. N. KOLMOGOROV and N. A. DMITRIEV, Theory of Branching Processes,
Dokl. Akad. Nauk. SSSR, 56: 7 (1947).
7. E. D. COURANT and P. R. WALLACE, Fluctuations of the Number of Neu-
trons in a Pile, Phys. Rev., 72: 1038 (1947).
8. L. I. PAL, Statistical Fluctuations of Neutron Multiplication, in Proceedings
of the Second United Nations International Conference on the Peaceful Uses
of Atomic Energy, Geneva, 1958, Vol. 16, p. 687, United Nations, New York,
9. R. L. MURRAY, Nuclear Reactor Theory, Prentice-Hall, Inc., Englewood
Cliffs, N.J., 1957.
10. S. GLASSTONE and M. C. EDLUND, The Elements of Nuclear Reactor Theory,
D. Van Nostrand Co., Inc., New York, 1952.
11. B. C. DIVEN, H. C. MARTIN, R. F. TASCHEK, and J. TERRELL, Multiplicities
of Fission Neutrons, Phys. Rev., 101: 1012 (1956).
12. W. MATTHES, Statistical Fluctuations and Their Correlation in Reactor
Neutron Distribution, Nukleonik, 4: 213 (1962).
13. H. BORGWALDT and D. STEGEMANN, A Common Theory for Neutronic Noise
Analysis Experiments in Nuclear Reactors, Nukleonik, 7: 313 (1965).
14. T. IIJIMA, Remark on Rossi-Alpha Experiment, Nukleonik, 10: 93 (1967).
15. H. DIAZ and R. E. UHRIG, A Digital Computer Controlled Data Acquisition
and Processing System for Nuclear Experiments, Trans. Amer. Nucl. Soc.,
8: 588 (November 1965).
16. G. S. BRUNSON, R. N. CURRAN, J. M. GASIDLO, and R. J. HUBER, A Survey
of Prompt-Neutron Lifetimes in Fast Critical Systems, USAEC Report
ANL-6681, Argonne National Laboratory, August 1963.


17. J. T. MIHALCZO, Prompt-Neutron Lifetime in Critical Enriched-Uranium
Metal Cylinders and Annuli, Nucl. Sci. Eng., 20: 60 (1964).
18. R. A. KARAM, Measurements of Rossi-Alpha in Reflected Reactors, Trans.
Amer. Nucl. Soc., 7: 283 (June 1964).
19. W. SUWALSKI, NORA First H2O Core Noise Measurements: Part I, Rossi-
Alpha Method, Norwegian Report NORA-Memo-112, 1965.
20. C. E. COHN, Reflected Reactor Kinetics, Nucl. Sci. Eng., 13(1): 12 (1962).
21. E. F. BENNETT, The Rice Formulation of Reactor Noise, Nucl. Sci. Eng.,
8(1): 53 (1960).
22. N. PACILIO, Comitato Nazionale per l'Energia Nucleare, personal communi-
cation, 1968.
23. D. STEGEMANN, Die Analyse des Neutronenrauschens in Reaktoren, German
Report INR-4/66-1, 1966.
24. R. W. ALBRECHT, The Measurement of Dynamic Nuclear Reactor Parameters
Using the Variance of the Number of Neutrons Detected, Nucl. Sci. Eng.,
14(2): 153 (1962).
25. R. L. JOHNSON, A Statistical Determination of the Reduced Prompt Genera-
tion Time in the SPERT IV Reactor, USAEC Report IDO-16903, Phillips
Petroleum Company, August 1963.
26. E. TURKCAN and J. B. DRAGT, Experimental Study of Different Techniques
for Analyzing Reactor Noise Measured by a Neutron Counter, Dutch Report
RCN-INT-75, 1967.
27. N. PACILIO, Review of Statistical Methods for Reactor Parameter Measure-
ments Developed at C.S.N. Casaccia, Italian Report RT/FI 66-37, 1966.
28. N. PACILIO, Short Time Variance Method for Prompt Neutron Lifetime
Measurements, Nucl. Sci. Eng., 22(2): 266 (1965).
29. K. KURUSYNA, Analysis of Nuclear Reactor Noise, Genshiryoku Kogyo, 8: 49
30. D. B. McCULLOCH, An Absolute Measurement of the Effective Delayed
Neutron Fraction in the Fast Reactor ZEPHYR, British Report AERE-
R/M-176, July 1958.
31. A. J. LINDEMAN and L. RUBY, Subcritical Reactivity from Neutron Statistics,
Nucl. Sci. Eng., 28(2): 308 (1967).
32. N. PACILIO, Reactor-Noise Analysis in the Time Domain, USAEC Critical
Review Series, USAEC Report TID-24512, April 1969.
33. L. I. PAL, Statistical Theory of Neutron Chain Reactors, in Proceedings of
the Third United Nations International Conference on the Peaceful Uses of
Atomic Energy, Geneva, 1964, Vol. 2, pp. 218-224, United Nations, New
York, 1965.
34. N. PACILIO, The Polya Model and the Distribution of Neutrons in a Steady
State Reactor, Nucl. Sci. Eng., 26(4): 565 (1966).
35. G. POLYA and F. EGGENBERGER, Über die Statistik verketteter Vorgänge,
Z. Angew. Math. Mech., 3: 279 (1923).
36. D. T. AUSTIN et al., Comparison of the Waiting-Time Alpha with the Rossi-
Alpha, Trans. Amer. Nucl. Soc., 10(2): 591 (1967).


37. M. SRINIVASAN and D. C. SAHNI, A Modified Statistical Technique for the
Measurement of α in Fast and Intermediate Reactor Assemblies, Nukleonik,
9(3): 155-157 (1967).
38. C. E. COHN, Argonne National Laboratory, personal communication, 1968.
39. N. PACILIO, Neutron Statistics Techniques Applied to the ROSPO Reactor,
in Proceedings of the Karlsruhe EAES Symposium III, European Atomic
Energy Society, p. 9, 1966.
40. S. CHWASZCHEWSKI et al., Improved Methods for Prompt Neutron Period
Measurements, Nucl. Sci. Eng., 25(2): 201 (1966).
41. H. BORGWALDT, Karlsruhe Nuclear Research Center, personal communica-
tion, 1966.


Basic Relations of Random Noise Theory

The probability that a particular 235U atom in a nuclear system will
absorb a neutron and produce fission is dependent on its location, the
surrounding materials and their absorption cross sections, the neutron
energy, and the direction of motion of both the neutron and the 235U
atom. These factors give rise to a statistical variation in the lengths
of time between fissions in a nuclear system. Since the probability of
fission occurring is influenced by the characteristics of the nuclear system,
some of these characteristics of the system can be determined by an
analysis of these statistical variations. As shown in Chap. 3, the pres-
ence of the correlated events associated with fission chains increases the
magnitude of the fluctuations over that which would otherwise occur.

An autocorrelation function is an extension of the concept of a mean-
square value to cover an interval of time. Whereas the mean-square
value is the average of the square of the value of a function at a particular
time, the autocorrelation function φxx(τ) is the average of the product of
two values of the variable separated by a time interval τ. At time tk
the autocorrelation function of the process x(t), shown in Fig. 1-1, is

    φxx(tk, τ) = lim(N→∞) (1/N) Σ(i=1 to N) xi(tk) xi(tk + τ)        (4-1)

If the process is time stationary, the definition becomes independent of tk:

    φxx(τ) = lim(N→∞) (1/N) Σ(i=1 to N) xi(t) xi(t + τ)              (4-2)




The fundamental process involved in correlation is displacing one variable
with respect to another, multiplying the displaced variable by the original
variable, and averaging over an infinite period of time or number of
ensembles. For an ergodic process we can substitute a time average
for the ensemble average, and the autocorrelation function becomes
    φxx(τ) = lim(T→∞) (1/2T) ∫[−T,T] xi(t) xi(t + τ) dt
           = E[xi(t) xi(t + τ)]                                      (4-3)

where xi(t) is any of the sample records. It is necessary to let the limit T
approach infinity unless xi(t) is periodic. For the special case when the
time lag is zero, the autocorrelation function becomes

    φxx(0) = lim(T→∞) (1/2T) ∫[−T,T] [xi(t)]² dt
           = E[x²(t)] = ψx²                                          (4-4)

which, by definition, is the mean-square value of x(t).
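For a sampled record the time average of Eq. 4-3 becomes a finite sum over the record length. The sketch below is illustrative: the helper `autocorr` and the synthetic Gaussian record are assumptions, not part of the text, but the zero-lag check reproduces Eq. 4-4 exactly.

```python
import random

def autocorr(x, lag):
    """Time-average estimate of phi_xx(lag) = <x(t) x(t+lag)> over a
    finite record (the limit T -> infinity in Eq. 4-3 is approximated
    by the record length)."""
    n = len(x) - lag
    return sum(x[i] * x[i + lag] for i in range(n)) / n

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# At zero lag the autocorrelation equals the mean-square value (Eq. 4-4)
mean_square = sum(v * v for v in x) / len(x)
print(abs(autocorr(x, 0) - mean_square) < 1e-12)  # True
```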
If x(t) is time stationary, it can be considered to be the sum of a
fluctuating component x'(t) and a steady component that is the mean
value μx; i.e.,

    x(t) = μx + x'(t)                                    (4-5)

Substituting Eq. 4-5 into Eq. 4-4 gives

    φxx(0) = E[μx²] + E[2μx x'(t)] + E[x'(t)²]           (4-6)

where, according to the definitions given in Chap. 2, the last term is the
variance σx² (the square of the standard deviation) and the first term is
the square of the mean, μx². The second term can be shown to be equal to

    E[2μx x'(t)] = 2μx E[x'(t)] = 2μx μx' = 0            (4-7)

since μx' is zero by the definition of x'. Hence Eq. 4-6 becomes

    φxx(0) = ψx² = σx² + μx²                             (4-8)
The autocorrelation function φxx(τ) also contains some information
concerning the frequency distribution of the random signal x(t). If
φxx(τ) changes rapidly with τ, high frequencies predominate; but, if
φxx(τ) changes very slowly with τ, low frequencies predominate.
It can be easily shown that the autocorrelation function φxx(τ) is an
even function and hence symmetrical about the vertical axis. This
symmetry can be expressed by

    φxx(τ) = φxx(−τ)                                     (4-9)

Furthermore, φxx(τ) never exceeds φxx(0); i.e., |φxx(τ)| ≤ φxx(0) for all τ.
This follows from the inequality

    [x(t) ± x(t + τ)]² ≥ 0                               (4-10)

Expanding this expression, transposing terms, integrating from −T to T,
dividing by 2T, and taking the limit as T approaches infinity gives

    |φxx(τ)| ≤ ψx² = φxx(0)                              (4-11)

when the definitions of Eqs. 4-3 and 4-4 for a time-stationary variable
are used. This expression can be rearranged to give the ratio

    |φxx(τ)/φxx(0)| ≤ 1                                  (4-12)

which is often called the normalized autocorrelation function and always
has a value less than unity except at τ = 0.
If x(t) contains a periodic component, φxx(τ) will also contain a periodic
component with the same period; but φxx(τ) gives no information about
the phase of the periodic component. However, φxx(τ) approaches zero
as τ approaches infinity if x(t) contains only random components and μx
equals zero. This means x(t + τ) becomes uncorrelated with x(t) as τ
approaches infinity.
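These properties can be checked numerically on a synthetic white-noise record; the estimator, the seed, and the decorrelation threshold below are all illustrative assumptions.

```python
import random

def autocorr(x, lag):
    """Finite-record estimate of phi_xx(lag); phi_xx is even (Eq. 4-9),
    so a negative lag is mapped to its absolute value."""
    lag = abs(lag)
    n = len(x) - lag
    return sum(x[i] * x[i + lag] for i in range(n)) / n

random.seed(42)
x = [random.gauss(0.0, 1.0) for _ in range(50_000)]

phi0 = autocorr(x, 0)
for lag in range(1, 20):
    rho = autocorr(x, lag) / phi0   # normalized autocorrelation, Eq. 4-12
    assert abs(rho) < 0.05          # purely random data decorrelate at once
    assert abs(rho) < 1.0           # never exceeds unity (Eq. 4-11)
print("Eqs. 4-9, 4-11, and 4-12 verified on a white-noise record")
```

A signal with a periodic component would instead show the same period repeating in φxx(τ) at large lags, with the phase information lost, as the text notes.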
If two uncorrelated random variables such as x1 and x2 have zero means
and autocorrelation functions φ11(τ) and φ22(τ), then the autocorrelation
function of x1 + x2 is [φ11(τ) + φ22(τ)]. This can be shown by
substituting x = x1 + x2 into Eq. 4-3.
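The additivity of autocorrelation functions for uncorrelated zero-mean signals can be demonstrated numerically; the two signals below (white noise and a two-point moving average) are illustrative choices, and equality holds only within finite-record sampling error.

```python
import random

def autocorr(x, lag):
    """Time-average estimate of phi_xx(lag) over a finite record."""
    n = len(x) - lag
    return sum(x[i] * x[i + lag] for i in range(n)) / n

random.seed(3)
n = 100_000
# Two independent zero-mean signals: white noise and a two-point
# moving average of a separate white-noise stream
x1 = [random.gauss(0.0, 1.0) for _ in range(n)]
g = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
x2 = [g[i] + g[i + 1] for i in range(n)]
s = [a + b for a, b in zip(x1, x2)]

# phi_ss(tau) ~= phi_11(tau) + phi_22(tau), up to finite-record error
for lag in (0, 1, 2):
    lhs = autocorr(s, lag)
    rhs = autocorr(x1, lag) + autocorr(x2, lag)
    assert abs(lhs - rhs) < 0.1
print("autocorrelation of the sum equals the sum of autocorrelations")
```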

In many practical applications the mean value μx is zero, and the pre-
ceding relations are simplified. In fact, it is often necessary (and usually
standard procedure) to remove the mean value from experimental data
before further processing. Thus it is convenient to define the auto-
correlation function of a variable for which the mean value is zero as the
autocovariance function and to designate it by the symbol Cxx(τ). The
relation between the autocorrelation function and the autocovariance
function can be shown by substituting Eq. 4-5 into Eq. 4-3 and proceed-
ing in the manner used to derive Eq. 4-8. Using the definitions of mean
value, mean-square value, variance, and standard deviation gives

    φxx(τ) = φx'x'(τ) + μx μx' + μx' μx + μx²
           = φx'x'(τ) + μx² = Cxx(τ) + μx²               (4-13)

since μx', by definition, is equal to zero. The autocovariance function
Cxx(τ) is identical to the autocorrelation function if the mean value is
equal to zero or if the mean value has been removed. The effect of the
presence of the mean value μx in the variable x(t) is to displace the auto-
correlation function by an amount μx². This will be discussed later when
the effect of the presence of a mean value is considered.
In many practical situations the mean value of the variable μx is equal
to zero, and Eq. 4-13 becomes

    φxx(τ) = φx'x'(τ) = Cxx(τ)                           (4-14)

In most analyses of experimental results, the mean value μx is removed
from the sample record before the data are processed. Hence there is no
difference between the autocovariance and autocorrelation functions of
the adjusted variable provided that the mean value of the sample record
is equal to the mean value of x(t). In this text we will use the auto-
correlation function φxx(τ) when there is no requirement that the mean
value be equal to zero. When there is such a requirement, we will
specify it or use the autocovariance function Cxx(τ). By doing this, we
hope to follow the nomenclature of the random-noise field while still main-
taining the distinction between correlation and covariance functions.
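The displacement of the autocorrelation function by the squared mean (Eq. 4-13) can be seen directly on a sampled record. The sketch below is illustrative: the mean value, record length, and lag are assumed parameters, and the relation holds only up to finite-record error.

```python
import random

def autocorr(x, lag):
    """Finite-record estimate of the autocorrelation phi_xx(lag)."""
    n = len(x) - lag
    return sum(x[i] * x[i + lag] for i in range(n)) / n

random.seed(7)
mu = 3.0
# A random record with a nonzero mean value mu_x (illustrative parameters)
x = [mu + random.gauss(0.0, 1.0) for _ in range(100_000)]

mean = sum(x) / len(x)
xp = [v - mean for v in x]     # fluctuating component x'(t) of Eq. 4-5

lag = 5
cov = autocorr(xp, lag)        # autocovariance C_xx(lag)
corr = autocorr(x, lag)        # autocorrelation phi_xx(lag)

# Eq. 4-13: phi_xx(tau) = C_xx(tau) + mu_x**2, up to finite-record error
assert abs(corr - (cov + mean**2)) < 0.01
print("autocorrelation is the autocovariance displaced by the squared mean")
```

This is exactly the standard procedure mentioned above: removing the sample mean before processing makes the two functions coincide.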


In working with the autocorrelation function, one is dealing with the
behavior of the function of time and hence is working in the time domain.
An alternate approach is to work in the frequency domain and separate
the signal into its frequency components. For a nonperiodic function
it is usually necessary to take the Fourier transform of the function to
transfer it to the frequency domain since there is a continuum of
frequencies represented. However, in the case of a stationary random or
stochastic process, x(t) cannot become arbitrarily small for large t
because the statistical properties must remain constant with time.
Therefore ∫ |x(t)| dt does not converge, and the Fourier transform
does not exist.
This difficulty can be overcome by defining a new function called the
power spectral density, designated by the symbol Φ(ω), as the Fourier
