RANDOM NOISE TECHNIQUES
IN
NUCLEAR REACTOR SYSTEMS
ROBERT E. UHRIG, Ph.D., Iowa State University, is Dean of the College of Engineering and Director of the Engineering and Industrial Experiment Station at the University of Florida, where he has served as Chairman of the Department of Nuclear Engineering. Dr. Uhrig previously held academic positions at Iowa State University and the United States Military Academy. As Deputy Assistant Director for Research for the Department of Defense (1967), he was concerned with the management of the fundamental research program in the physical sciences and engineering.
RANDOM NOISE TECHNIQUES
IN
NUCLEAR REACTOR SYSTEMS
ROBERT E. UHRIG
UNIVERSITY OF FLORIDA
Prepared under the auspices of the
United States Atomic Energy Commission
THE RONALD PRESS COMPANY NEW YORK
Copyright © 1970
THE RONALD PRESS COMPANY
All Rights Reserved
No part of this book may be reproduced in
any form without permission in writing from
the publisher.
This copyright has been assigned to and is
held by the General Manager of the United
States Atomic Energy Commission. All
royalties from the sale of this book accrue
to the United States Government.
Library of Congress Catalog Card Number: 71-110558
PRINTED IN THE UNITED STATES OF AMERICA
To Paula
Preface
This book is an outgrowth of several years' experience in teaching a
course dealing with the application of random noise theory to nuclear
reactor systems, as well as research and industrial consulting work in the
field. It is designed to serve the dual purpose of supplementing a course in nuclear reactor noise at the first-year graduate level and serving as a reference book for practicing engineers and scientists interested in applying random
noise techniques to nuclear reactor systems. The first five chapters
provide background for those not familiar with random noise theory. It
is presumed that the reader has a working knowledge of nuclear reactor
theory and transfer mathematics.
There has been a deliberate attempt to make the book as self-contained as possible. Much of the material has been drawn from scattered sources dealing with both the science and the technology of the field. An effort has also been made to integrate the wide spectrum of subjects that are important to persons working in the area of nuclear reactor noise, but the problem of retaining the conventional nomenclature that exists in the various fields and of avoiding confusing duplication could not be completely resolved.
The random fluctuations of the measured variables of the system
(fission rate, temperature, pressure, flow rate, displacement, etc.) are
related to the dynamic characteristics of the nuclear systems. However,
only Chapters 3, 5, and 11 deal specifically with nuclear processes. Most
of the material in the other nine chapters is concerned with basic relations
of random noise theory and the techniques and instrumentation for
acquisition, transmission, recording, and processing of data from random
noise experiments. Hence, it has application to a broad range of physical
and engineering systems, and it is my hope that the material in this book
will also be useful to persons working in such fields as random vibration,
oceanography, medicine, communications, and information sciences.
In preparing this manuscript I have drawn from many sources, including the Proceedings of the 1963 and 1966 University of Florida Symposia dealing with nuclear reactor noise, Noise Analysis in Nuclear Systems and Neutron Noise, Waves, and Pulse Propagation (USAEC Reports TID-7679 and CONF-660206, respectively), and have made a conscientious effort
to give proper credit for all such material. It has been my privilege to
know most of the scientists and engineers working in nuclear noise, and
I sincerely regret that it has not been possible to include all of the excellent
work that has been carried out in this interesting and stimulating field.
I am indebted to a large number of people who contributed generously
of their time in reviewing and discussing the manuscript. Appreciation
is expressed to those who served as reviewers of the manuscript in draft
form, specifically to Robert Albrecht, Alan Jacobs, G. Robert Keepin,
Edward Kenney, M. N. Moore, Philip Pluta, Andrew Sage, M. A. Schultz,
and Joseph Thie. Special recognition should be given to Nicola Pacilio,
who provided original material for Chapter 3 and reviewed it in the final
form; to Bruno Bars, who devoted much time to an extensive review and
criticism of the manuscript; and to Robert Albrecht and James Sheff for
the original developments presented in Chapter 5. I am also indebted
to Julius S. Bendat not only for his review of the manuscript but also for
many helpful discussions about the original techniques developed by him
and his associates at the Measurement Analysis Corporation.
The Atomic Energy Commission supported the preparation of the
manuscript, and I am indeed grateful to the AEC, particularly personnel
of the Division of Technical Information: James D. Cape initiated preparation of the book, and John Inglima and Robert F. Pigeon administered its preparation. Editorial work was done by Charles Carroll, Jean Smith, and Margaret Givens of the Division of Technical Information Extension, Oak Ridge; their meticulous care has contributed very significantly to the internal consistency and readability of the manuscript. Credit is also due members of the Graphics Art Branch who are responsible for the excellence of the art work.
The herculean task of typing this manuscript, parts of it as many as
four times, was ably carried out by Joan Boley. Finally, I am particularly
appreciative of the understanding of my wife, Paula, during the writing
and preparation of the manuscript, without which this book would not
have been possible.
ROBERT E. UHRIG
Gainesville, Florida
April, 1970
Contents
1 Introduction 3
1-1 Random Processes in Nuclear Reactor Systems, 3
1-2 Motivation for Random Noise Techniques in Measurements on Nuclear Reactors, 5
1-3 Random Processes and Variables, 7
1-4 Stationary and Ergodic Processes, 9
2 Statistics for Random Noise Analysis 13
2-1 Introduction, 13
2-2 Elementary Probability Theory, 13
2-3 Mean Value, Variance, and Standard Deviation, 17
2-4 Probability, Probability Density, and Probability Distribution Functions, 19
2-5 Average Values and Probability Moments, 24
2-6 Probability Distributions in Radioactive Decay, 26
2-7 Special Probability Densities and Distributions, 32
2-8 Parameter Estimation, 41
2-9 Correlation Functions, 45
3 Neutron-Counting Techniques in Nuclear Reactor Systems 50
3-1 Introduction, 50
3-2 Probability Distribution of Fission Neutrons, 51
3-3 Rossi-Alpha Technique, 54
3-4 Variance-to-Mean (Feynman) Method, 60
3-5 Bennett Variance Method, 65
3-6 Count Probability Methods, 66
3-7 Interval Distribution (Babala) Method, 69
3-8 Dead-Time (Srinivasan) Method, 73
3-9 Correlation Analysis Techniques, 74
3-10 Covariance Measurements, 76
3-11 Endogenous-Pulsed-Source Technique, 78
4 Basic Relations of Random Noise Theory 83
4-1 Introduction, 83
4-2 Autocorrelation Function, 83
4-3 Autocovariance Function, 85
4-4 Power Spectral Density, 86
4-5 Special Autocorrelation Functions and Power Spectral Densities, 89
4-6 Cross-Correlation Function, 96
4-7 Cross-Covariance Function, 98
4-8 Cross Spectral Density, 99
4-9 Input-Output Relations, 100
4-10 Practical Considerations, 104
4-11 One-Sided Spectral Densities, 105
4-12 Influence of Mean Value on Correlation Functions and Spectral Densities, 110
4-13 Coherence Functions, 113
4-14 Two-Detector Correlation and Spectral-Density Measurements, 114
4-15 Multiple-Input Linear Systems, 119
5 Reactor Noise Theory 130
5-1 Introduction, 130
5-2 Noise-Equivalent Source, 130
5-3 Langevin Procedure: Lumped-Parameter Model, 134
5-4 Space-Dependent Reactor Noise, 137
5-5 Space-Dependent Noise in an Infinite Medium, 144
5-6 Effect of Boundaries on Correlation, 151
5-7 Space-Dependent Noise in an Unreflected Parallelepiped, 153
5-8 Conclusions, 158
6 Noise Measurement Techniques 161
6-1 Introduction, 161
6-2 Correlation Measurements, 163
6-3 Spectral-Density Measurements, 165
6-4 Measurement of Transfer Functions, 168
6-5 Direct Harmonic Analysis, 170
6-6 Finite Length of Record, 173
6-7 Lag and Spectral Windows, 175
6-8 Spectral-Density Analyses, 178
6-9 Statistical Degrees of Freedom, 180
6-10 Influence of Uncorrelated Noise on Transfer-Function Measurements, 184
6-11 Precision of Transfer-Function Measurements, 188
7 Noise Instrumentation and Measurement Techniques 197
7-1 Instrumentation for Reactor-Noise Measurements, 197
7-2 Analog-Computer Techniques for Continuous Data Analysis, 198
7-3 Probability Density Measurement, 204
7-4 Measurement of Correlation Functions, 206
7-5 Spectral-Density Measurements, 208
7-6 Filtering Techniques in Spectral-Density Measurements, 213
7-7 Spacing of Spectral-Density Estimates, 223
7-8 Periodic Data Analysis, 225
7-9 Transient Spectrum Analysis, 227
8 Acquisition, Transmission, and Recording of Data 231
8-1 Introduction, 231
8-2 Acquisition of Data, 232
8-3 Measurement Transducers, 234
8-4 Data Transmission, 244
8-5 Analog Data Recording, 257
8-6 Analog-to-Digital Conversion, 265
8-7 Multiplexing: Time Sharing of Equipment, 272
8-8 Digital Data-Acquisition Systems, 273
9 Pseudorandom Noise Techniques 277
9-1 Introduction, 277
9-2 Input Variables for Cross Correlation, 279
9-3 Maximum-Length Linear-Shift-Register Sequence (m Sequence), 286
9-4 Residue of the Square Pseudorandom Variable, 300
9-5 Multifrequency Binary Input Signals, 302
9-6 Inverse-Repeat Pseudorandom Binary Variable, 305
9-7 Use of Pseudorandom Variables as a Substitute for Random Noise, 307
9-8 Cross Correlation with Pseudorandom Binary Signals, 311
9-9 Use of Pseudorandom Ternary Variables for Nonlinear Systems, 318
10 Digital Processing of Data 324
10-1 Introduction, 324
10-2 Trend Removal, 324
10-3 Digital Processing of Periodic Data, 327
10-4 Digital Filtering, 330
10-5 Statistical Analysis, 345
10-6 Fourier-Series Representation (Classical Procedure), 349
10-7 Correlation Functions and Spectral Densities, 350
10-8 Transfer Functions and Coherence Functions, 355
10-9 Parameter Selection for Reactor Tests, 356
10-10 Fast Fourier Transforms, 359
11 Experimental Noise Measurements in Nuclear Reactor Systems 365
11-1 Introduction, 365
11-2 Neutron-Pulse Counting Experiments, 365
11-3 Noise Measurements in Critical Reactors, 383
11-4 Reactivity Measurements, 394
11-5 Noise Measurements in Power Reactors, 418
12 Special Noise Techniques and Applications in Nuclear Systems 441
12-1 Introduction, 441
12-2 Optical Demonstration of Correlation, 441
12-3 Reactor Noise Analysis Using Polarity Correlation, 443
12-4 In-Core Flow-Velocity and Vibration Measurements Made by Use of Electrodes and Cross Correlation, 447
12-5 Use of Exponential Cosine Autocorrelation Functions in Processing Nuclear-System Test Data, 449
12-6 Noise Analysis of Nuclear Reactors by Use of Gamma Radiation, 451
12-7 Acoustical Noise Measurements in Nuclear Reactors, 456
12-8 Pseudorandom Noise Measurement of Neutron Cross Sections, 457
Appendix: Deterministic Variables 465
Index 475
1
Introduction
1-1 RANDOM PROCESSES IN NUCLEAR REACTOR SYSTEMS
The first experiment in which energy was released from nuclear fission
in a controlled self-sustaining chain reaction was achieved by Enrico Fermi
and his associates in December 1942 in a squash court beneath Stagg
Stadium at the University of Chicago with a "nuclear pile" of graphite
and uranium. Fermi had predicted that this particular configuration of
materials would achieve criticality from calculations based on probabilities (called neutron cross sections) of interaction between neutrons and constituent materials. The probabilities of the various types of interaction, i.e., scattering, radiative capture, and fission, had been
measured in a series of experiments carried out in the preceding months.
Hence, even in the very earliest days of the nuclear era, the essentially
probabilistic nature of the fundamental processes involved was recognized
and used effectively in the calculations and experiments that led to the
world's first self-sustaining nuclear chain reaction.
In the approach to criticality of a nuclear-reactor system, the fluctuations that take place in the power level can be observed from recordings of the output of the neutron-detector instrumentation system. In a
typical subcritical reactor, an artificial neutron source randomly supplies
the neutrons necessary to initiate the chain reaction. Often this source
consists of plutonium and beryllium or polonium and beryllium in which
the decay of the alpha-emitting polonium or plutonium occurs randomly;
i.e., each disintegration is an event that is not dependent on the preceding
or following disintegrations. Hence the neutrons produced by the
alpha-neutron reactions are generated randomly. Although we often
speak in terms of the average number of neutrons emitted per unit time
from such a source, the number emitted in successive time intervals is a
randomly varying quantity. These neutrons travel throughout the
nuclear system where various types of interactions take place. Typically,
a fission neutron undergoes a number of scattering collisions with
moderating or coolant materials before eventually being absorbed or
escaping from the reactor. Each step in the life of a neutron (which is
strongly influenced by the amount, nuclear cross section, and geometrical
arrangement of the materials present) can be dealt with in a probabilistic
manner. In cases where nuclear fission occurs, the number of neutrons
released is again a probabilistic quantity, varying between zero and six,
with a mean value of about 2.5 for 235U fission.
Although microscopic details of the interactions govern the behavior of
a nuclear reactor system, practical observations are usually made on a
macroscopic basis. When we view the processes taking place in a
subcritical nuclear reactor from the macroscopic viewpoint, we find that
the system is being disturbed by a random phenomenon, the emission of
individual neutrons from the extraneous neutron source. Such neutrons
may start a very long fission chain, but ultimately the chain must die out
if the reactor is subcritical. However, each of the chains initiated by
external neutrons contributes to the neutron population in the reactor,
which is directly related to the power level. With a large number of
individual chain reactions, each initiated by the independently emitted
neutrons from the extraneous source, taking place simultaneously in the
reactor, it is clear that the neutron population is going to increase and
decrease in a stochastic, or random, manner. When the reactor is
highly subcritical, the neutron chains are quite short and the fluctuations
are relatively small. However, as the reactor approaches criticality, the
chains increase in average length. For instance, when the reactor
reaches an effective multiplication factor of 0.98, each neutron injected
into the system from an outside extraneous source generates, on the
average, 50 additional neutrons before the chain dies out. In a typical
uranium system, this means that about 20 fissions are required to produce
these 50 neutrons. Since some chains are relatively short, it is necessary
that others be very long to sustain this average.
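The arithmetic behind these figures follows from the geometric series for a subcritical chain, and can be sketched as below (an illustrative back-of-the-envelope calculation; the function names are mine, not from the text):

```python
def additional_neutrons(k_eff):
    # A source neutron's descendants form a geometric series:
    # k + k^2 + k^3 + ... = k / (1 - k) additional neutrons, on the average.
    return k_eff / (1.0 - k_eff)

def fissions_required(k_eff, nu=2.5):
    # Each fission releases about nu = 2.5 neutrons (235U), so the number
    # of fissions needed is the progeny count divided by nu.
    return additional_neutrons(k_eff) / nu

print(round(additional_neutrons(0.98)))  # close to the 50 quoted above
print(round(fissions_required(0.98)))    # about 20 fissions
```

For k = 0.98 the series sums to 0.98/0.02 = 49, which is the "roughly 50 additional neutrons" of the text, and 49/2.5 gives the roughly 20 fissions quoted.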
In many situations the geometrical arrangement, amount, or effective
cross section of the materials will be changed as a function of time, and
this change will result in a variation of the neutron population. The
classical pile oscillator experiment is an example in which the neutron
absorber in the rotor is moved from one position to another in a prescribed
manner, thereby changing the geometrical arrangement of the neutron
absorber material, as well as its effective cross section due to self-shielding
effects. In such a case the input (movement of material and change of
effective cross section) may not necessarily be random. In fact, it often
is made to be deterministic; typically, reactivity is changed in a sinusoidal
manner, at least to a first approximation, by the movement of one
absorber with respect to another. Hence the input perturbation is
periodic and deterministic rather than random. However, the output
may be influenced so strongly by the statistical processes in the reactor
that the deterministic (sinusoidal in this case) component may be
completely obscured by the random component.
The dynamic characteristics of a system can be studied by an analysis
of its output variables as a function of its input variables and time. In
some situations the phenomena involved may be random in the sense
that the observed fluctuations arise as a result of internal or external
random stimulation that cannot be controlled. In others the phenomenon itself involves probabilities that cause the observed variables to
fluctuate in a randomlike manner, even though the input is deterministic.
Often an experimenter may externally stimulate a system with either a
random or a deterministic input to produce a variation of the output
that can be analyzed alone or in conjunction with the input. In many
situations more than one of these conditions may exist simultaneously;
e.g., a subcritical nuclear system may be stimulated by both an internal
indigenous neutron source and a neutron generator whose output is
controlled in a programmed manner.
1-2 MOTIVATION FOR RANDOM NOISE TECHNIQUES IN MEASUREMENTS ON NUCLEAR REACTORS
The reasons for utilizing random noise techniques in measurements on
nuclear reactor systems may be one or more of the following:
1. To measure the dynamic behavior or monitor the status of a nuclear
system with a minimum of perturbation or interference with normal
operation.
2. To take advantage of the naturally occurring fluctuations of neutron
population to evaluate system parameters.
3. To utilize special techniques or special equipment that facilitate the
experiment and/or its data acquisition and processing.
4. To better describe and explain the nature of the phenomena producing
the fluctuations.
5. To use the theory of fluctuation to evaluate the errors in measurements.
The author does not view random noise techniques as a panacea for all reactor-dynamics investigations. Rather, noise techniques supplement the classical dynamic procedures such as reactor oscillator experiments, burst-type excursions, pulsed neutron experiments, and other more-or-less conventional procedures used in measuring parameters of nuclear systems.
1-2.1 Microscopic Noise Techniques. Noise studies in nuclear reactor systems can be carried out either on the microscopic level or on the macroscopic level. On the microscopic level the occurrence of counts in a detector, triggered by the individual chains that occur in a nuclear reactor, is studied by statistical techniques. The early theoretical work in this field was carried out by Feynman,1,2 Fermi,2 and de Hoffman3,4 at Los Alamos about 1947 and led to the Rossi-alpha experiments on fast critical assemblies described later by Orndoff.5
Various other microscopic techniques have been developed by Feynman,1,2 Mogilner and Zolotukhin,6 Bennett,7 Pal,8,9 Pacilio,10 and others.11-14
Several of these techniques involve describing a statistical distribution
and its deviation from a Gaussian distribution, and others deal directly
with the probabilities of detecting an event. In all cases the nature of
the mathematical treatment is influenced by the type of equipment used
for the measurement and by the fact that a detected event involves the
removal of a neutron from the system.
1-2.2 Macroscopic Noise Measurements. The macroscopic approach to reactor noise measurements was introduced by Moore15,16 and verified experimentally by Cohn17,18 about 10 years after microscopic noise work was initiated. The Langevin formulation of reactor noise by Moore15 is based on early work in Brownian motion in which the noise
in a system is considered to be the response of the system to a random, or
stochastic, driving function; i.e., the noise is the response of the system to
an input representing the statistical nature of the underlying process.
If the dynamic characteristics of the system are known, it is possible to
relate the correlation or spectral-density measurements (both defined
later) to the parameters of the system.
The driving functions may be random fluctuations either in one of the
variables or in one of the parameters of the system. For example, the
driving function that produces fluctuations in the neutron density in a
subcritical nuclear reactor may be the fluctuations in the rate at which
neutrons are emitted from an extraneous neutron source, one of the
variables of the system. On the other hand, the driving force that
produces fluctuations in a zero-power reactor may be the variations in the delayed-neutron fractions, the number of neutrons released per
fission, and the effective neutron lifetime, all of which are parameters of
the system. Many driving functions of both types may be present in a
particular system, and all must be taken into account. However, in
many practical situations one or two particular driving functions may
predominate and all others can be neglected. For instance, a variable,
such as reactivity, can be deliberately perturbed in a random manner with
a root-mean-square amplitude 10 to 100 times as large as that of the next
most significant driving function.
The usefulness of both these techniques is dependent on a good understanding of the dynamic behavior of the system under investigation; i.e., the processes involved can be adequately represented by mathematical models. The degree of sophistication of the model will vary
from case to case, depending on the goals of the investigator. Often a
transfer function based on a lumped-parameter, one-speed-neutron representation in which delayed neutrons are ignored is adequate. In other cases the use of a model based on the three-dimensional time-dependent Boltzmann neutron-transport equation, approximated in 100 regions and 30 energy groups, is not adequate. Feedback effects and other nonlinearities may be introduced into the mathematical model and linearized since the root-mean-square amplitude of the random noise driving functions is usually small enough that linear approximations
are justifiable.
1-3 RANDOM PROCESSES AND VARIABLES
Throughout this text we will be dealing with phenomena that occur in
nuclear systems. However, we can observe the behavior of the system
only by measuring certain of its "observables" (pressures, temperatures,
power level, etc.). These properties are measured with sensors or
transducers that convert the quantity being measured into a physical
quantity (electrical current, mechanical displacement, etc.). These can
be readily interpreted by the experimenter or recorded by his data-acquisition system; they are time variables that represent the phenomenon being studied. Therefore it is appropriate to refer to the inputs and
outputs of a system as being variables and to designate them as random
or deterministic in accordance with their nature. In general, phenomena
are classified as random if their behavior can be described only in terms
of statistical quantities.
Let us consider the time-history records of a system (such as the power of a base-load nuclear power generating plant in a metropolitan area with a complex industrial and domestic load) as shown in Fig. 1-1. These
individual records (from which the steady components have been
removed) might represent the load pattern for several (not necessarily
consecutive) 24-hr periods. Such an array of records is called an
ensemble, and each time history is called a sample record. The collection
of all possible sample records produced by the random phenomenon under
consideration is called a stochastic process. The term "process," in a
strict sense, means a collection of sample records sufficiently large to
unambiguously establish the statistical properties of the quantity being
measured.
Such an ensemble of records as that shown in Fig. 1-1 can be obtained
by taking many individual measurements or by dividing a single record
into an arbitrary number of pieces. When the latter procedure is used,
there is, for most practical purposes, little difference in meaning between
the terms "process" and "variable." In this text, "process" will be
[Figure: an ensemble of sample records x1(t), x2(t), . . . , xN(t) plotted against time, with sampling times t1, t2, . . . , tN marked.]
Fig. 1-1. An ensemble of time records.
used when an ensemble of sample records is involved. Since a large part
of random noise theory is based on the assumption of ergodicity or, at
least, stationarity (both defined later), which can be shown only if an
ensemble of sample records is available, the term "process" is more properly used. However, in practical situations it is usually necessary to proceed with the analysis of data with assurance only of self-stationarity,
which involves only a single sample record. Hence the term "variable"
can also be properly applied to this situation. An effort is made to
retain the distinction between these two terms throughout this text,
although there are situations where the choice of the term to use is
completely arbitrary.
The classification of variables and processes as being either deterministic or random is generally straightforward. If the variable is reproducible or its future behavior predictable (i.e., if it can be represented with reasonable accuracy by explicit mathematical relations),
it is classified as deterministic. For example, the reactivity of a nuclear
reactor with a sinusoidal pile oscillator in operation is a variable that can
be described mathematically as a function of time. On the other hand,
the position of an individual neutron as it moves throughout its lifetime
inside a reactor is not predictable and therefore must be classified as a
random variable. At best, we can evaluate the average distance
traveled by all the neutrons in the reactor. In general, the future
behavior of random variables is described only in terms of probabilities
and statistical quantities rather than by explicit mathematical relations.
If one were to take an extreme position, he might argue that there is no
such thing as a deterministic variable; i.e., on a "microscopicenough"
scale, every phenomenon yields observables that must be classified as
random variables. It can also be argued that many random variables
could be described by a mathematical relation and their future behavior
predicted if the phenomena involved were sufficiently well understood.
While conceding the possibilities of these extreme interpretations, we
can readily differentiate between deterministic and random variables in
most practical situations. For situations in which this differentiation is
not possible, methods of mathematical determination are described later.
1-4 STATIONARY AND ERGODIC PROCESSES
1-4.1 Stationarity. A function, x(t), is said to be a random variable
if its value at any instant of time can be described only in terms of its
statistical properties. The principal classification of random variables
is that of stationarity or of nonstationarity. A random variable is said
to be stationary if its statistical characteristics do not change with time.
An assumption of stationarity is usually justified for systems in which
the basic mechanisms giving rise to the fluctuations are invariant over a
reasonable period of time.
The matter of the particular statistical properties that must remain
constant as a function of time to demonstrate stationarity in a process
or variable is an integral part of the definition of the stationary process or
variable. Some authors have (erroneously) indicated that it is sufficient
to determine that the ensemble mean and ensemble mean square value
remain constant as a function of time to establish stationarity. Others
(e.g., Bendat and Piersol19) contend that it is necessary to show that the
ensemble mean value and the autocorrelation function (of which, as we
will see later, the mean square value is a special case) must be constant as
a function of time to demonstrate weak stationarity, or stationarity in a
general sense. They further indicate that an infinite collection of higher
10 RANDOM NOISE TECHNIQUES
order moments and joint moments for the random process is necessary
to establish the complete family of probability distribution functions
describing the process and that, for the special case where all possible
moments and joint moments are time invariant, the random process can
be said to be strongly stationary, or stationary in a strict sense. They
also indicate, however, that for many practical applications verification
of weak stationarity justifies an assumption of strong stationarity.
Clearly it is not practical to demonstrate strong stationarity even under
the most ideal situations. Even demonstrating weak stationarity is
difficult; therefore a range of values must be established for the mean
value and mean square value, or autocorrelation function, which will be
acceptable for a finite-length record.
1-4.2 Ergodic Processes. All stationary processes can be further defined as being ergodic or nonergodic. This property can be demonstrated by referring to the ensemble of records in Fig. 1-1. Let us
consider an ensemble average of the array of records at any given time,
t1. The ensemble average indicated by ⟨x(t1)⟩ is calculated by

⟨x(t1)⟩ = lim(N→∞) (1/N) Σ(i=1 to N) xi(t1)      (1-1)
Ensemble averages at other times (t2, t3, t4, etc.) can be calculated in a
similar manner. If the process is stationary, each of these ensemble
averages should be the same; i.e., the ensemble averages remain constant
regardless of time.
Now let us consider the time average of a single sample record, xi(t).
x̄i(t) = lim(T→∞) (1/2T) ∫(−T to T) xi(t) dt      (1-2)
If the sample records of Fig. 1-1 are of the same stationary process, the
time averages should be the same. If the process is ergodic, the common
value for the ensemble average at any time ⟨x(t)⟩ must be equal to the common value for the time average of any record x̄i(t). Again, obtaining
identical numerical values in a given experimental situation is impossible,
and therefore acceptable tolerances for these values must be established.
It is also necessary that the autocorrelation function and other properties
based on time averages be equal to the corresponding characteristics
based on ensemble averages.
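The two averages can be compared numerically. The sketch below is a hypothetical illustration using synthetic Gaussian noise in place of reactor data (none of the names are from the text): it builds an ensemble of records, applies Eq. 1-1 across the ensemble at one fixed time, applies Eq. 1-2 along a single record, and checks that the two estimates agree within sampling error, as ergodicity requires.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of N sample records, each T samples long: synthetic
# zero-mean Gaussian noise standing in for a stationary random variable.
N, T = 200, 5000
ensemble = rng.normal(0.0, 1.0, size=(N, T))

# Eq. 1-1: average over the ensemble at one fixed sampling time t1
ensemble_avg = ensemble[:, 123].mean()

# Eq. 1-2: average over time along a single sample record x_i(t)
time_avg = ensemble[7, :].mean()

# For an ergodic process the two estimates coincide up to sampling error.
print(abs(ensemble_avg - time_avg) < 0.3)
```

For a finite ensemble and record length the agreement is only within a tolerance, which is exactly the practical point made above about establishing acceptable ranges rather than expecting identical values.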
Ergodic random processes are clearly an important class since all their
properties can be determined by performing the time average over a
single record. Fortunately, in actual practice, random variables
representing stationary physical phenomena are often ergodic. For this
reason the properties of the stationary random phenomena can be
measured quite satisfactorily from a single observed timesample record.
1-4.3 Self-Stationarity. Individual time records of a random variable are sometimes said to be stationary. This means that the properties
computed over short intervals of time within a single time record do not
vary significantly from one interval to the next. However, these variations are usually greater than would normally be expected owing to the normal statistical sampling variation. This type of stationarity is sometimes called self-stationarity to avoid confusion with the more
classical definition.
The sample record obtained from an ergodic random process is self-stationary. Furthermore, sample records for most physically interesting nonstationary random processes are self-nonstationary. Bendat and Piersol19 have indicated that if an ergodic assumption is justified, as it is for most stationary physical phenomena, verification of self-stationarity for a single sample record effectively justifies an assumption of ergodicity for a random process from which the sample record is obtained.
Therefore we will proceed with the development of a theory which,
strictly speaking, is valid only for ergodic processes but which can
be applied to variables and processes that have been shown to be
selfstationary.
REFERENCES
1. R. FEYNMAN, F. DE HOFFMAN, and R. SERBER, J. Nucl. Energy, 3: 64 (1956).
2. E. FERMI, R. P. FEYNMAN, and F. DE HOFFMAN, Theory of the Criticality of
the Water Boiler and the Determination of the Number of Delayed Neutrons,
USAEC Report MDDC-383 (LADC-269), Los Alamos Scientific Laboratory,
December 1944.
3. F. DE HOFFMAN, Intensity Fluctuations of a Neutron Chain Reactor, USAEC
Report MDDC-382 (LADC-256), Los Alamos Scientific Laboratory, October
1946.
4. F. DE HOFFMAN, Statistical Aspects of Pile Theory, in The Science and
Engineering of Nuclear Power, C. D. GOODMAN (Ed.), Vol. II, p. 116,
Addison-Wesley Publishing Company, Inc., Reading, Mass., 1949.
5. J. D. ORNDOFF, Prompt Neutron Periods of Metal Critical Assemblies, Nucl.
Sci. Eng., 2: 450-460 (1957).
6. A. I. MOGILNER and V. G. ZOLOTUKHIN, Measuring the Characteristics of
Kinetics of a Reactor by the Statistical p-Method, At. Energ. (USSR),
10(4): 377-379 (1961).
7. E. F. BENNETT, The Rice Formulation of Pile Noise, Nucl. Sci. Eng., 8: 53-61
(1960).
8. L. PAL, Determination of the Prompt Neutron Period from the Fluctuations
of the Number of Neutrons in a Reactor, Central Research Institute of
Physics, Hungarian Academy of Sciences, Budapest, 1962.
12 RANDOM NOISE TECHNIQUES
9. L. PAL, Statistical Fluctuations of Neutron Multiplication, in Proceedings of
the Second United Nations International Conference on the Peaceful Uses of
Atomic Energy, Geneva, 1958, Vol. 16, p. 687, United Nations, New York,
1959.
10. N. PACILIO, Short Time Variance Method for Prompt Neutron Lifetime
Measurements, Nucl. Sci. Eng., 2: 266 (1965).
11. W. MATTHES, Statistical Fluctuations and Their Correlation in Reactor
Neutron Distribution, Nukleonik, 4: 213 (1962).
12. D. R. HARRIS, The Sampling Estimate of the Parameter Variance/Mean
in Reactor Fluctuation Measurements, USAEC Report WAPD-TM-157,
Westinghouse Electric Corp., Bettis Plant, August 1958.
13. D. H. BRYCE, Measurement of Reactivity and Power Through Neutron
Detection Probabilities, in Noise Analysis in Nuclear Systems, Gainesville,
Fla., Nov. 4-6, 1963, Robert E. Uhrig (Coordinator), AEC Symposium
Series, No. 4 (TID-7679), 1964.
14. A. FURUHASHI and S. IZUMI, A Proposal on Data Treatment in the Feynman
Alpha Experiment, J. Nucl. Sci. Tech. (Tokyo), 4: 99 (1967).
15. M. N. MOORE, The Determination of Reactor Transfer Functions from
Measurements at Steady Operation, Nucl. Sci. Eng., 3: 387-394 (1958).
16. M. N. MOORE, The Power Noise Transfer Function of a Reactor, Nucl. Sci.
Eng., 6: 448-452 (1959).
17. C. E. COHN, Determination of Reactor Kinetic Parameters by Pile Noise
Analysis, Nucl. Sci. Eng., 5: 331-335 (1959).
18. C. E. COHN, A Simplified Theory of Pile Noise, Nucl. Sci. Eng., 7: 472 (1960).
19. J. BENDAT and A. PIERSOL, Measurement and Analysis of Random Data,
John Wiley & Sons, Inc., New York, 1966.
2
Statistics for Random
Noise Analysis
2-1 INTRODUCTION
Random noise analysis has its basis in statistics; indeed, an understanding
of the fundamental concepts of statistical techniques is essential
to an understanding of how random noise theory is used to analyze the
behavior of dynamic systems. Since there is an extensive literature
available on statistics, this chapter includes only the concepts directly
related to random noise analysis and those needed to establish nomenclature
for future work. No attempt is made to be rigorous in the derivations.
2-2 ELEMENTARY PROBABILITY THEORY
2-2.1 Simple Probability. Consider first the simplest case of probability,
in which all events are equally likely to occur. If an event can
happen in n ways, of which m are favorable to the occurrence of a particular
event, then the probability, p, of its occurrence in a single trial is

$$p = \frac{m}{n} \qquad (2\text{-}1)$$
Elementary probability is often associated with the throwing of dice.
For instance, the probability that a four will appear when a die is thrown
is 1/6 since the total number of ways a die can fall is six and of these ways only one
is favorable to the occurrence of a four. It is, of course, presumed that
the die is not "loaded" and will fall in any one of the six possible ways
with equal probability. If this is not the case, the appropriate probability
must be determined experimentally. It is obvious that, if an event is
certain to happen, then the probability of its occurrence is unity; if the
event is certain not to happen, the probability is zero.
Events are said to be mutually exclusive if the occurrence of one of them
precludes the occurrence of the others. In throwing a die, the occurrence
of a four certainly precludes the occurrence of any other number. In
the case of n mutually exclusive events, the probability that any one of
m events will occur is m/n, i.e., the sum of the probabilities of the
individual events. The probability of throwing either a 2 or a 5 with a
die is 1/6 + 1/6, or 1/3, since the probability of either event is 1/6.
Events are said to be independent if the occurrence of one of them does
not influence the occurrence of the other. When two dice are thrown, the
occurrence of a particular number with one die does not influence the
number that comes up with the other die.
When events are independent, the probability of several of them
occurring as a group, i.e., the joint probability, is the product of the
probabilities of each event occurring independently. When two dice are
thrown, the probability of getting two fives is 1/6 × 1/6 = 1/36 since the two
events are independent. This procedure can be extended to evaluate the
probability of obtaining any given sum (between 2 and 12) when two
dice are thrown. For example, a sum of 6 can be obtained from the
following combinations: 5-1, 4-2, 3-3, 2-4, and 1-5. Since the occurrence
of any one of these pairs would preclude the occurrence of the other
four combinations listed, these five possibilities are mutually exclusive,
and the probability of one of these combinations occurring in a single
throw is 5/36, i.e., the sum of the individual probabilities. The probability
of each of the 11 possible sums occurring in a single throw of two
dice is tabulated in Table 2-1. Note that the sum of all the probabilities is unity
since one of the combinations must occur.
Table 2-1
Individual Probabilities

Sum of      Number of Ways
Two Dice    Sum Can Occur     Probability
   1              0               0
   2              1              1/36
   3              2              2/36
   4              3              3/36
   5              4              4/36
   6              5              5/36
   7              6              6/36
   8              5              5/36
   9              4              4/36
  10              3              3/36
  11              2              2/36
  12              1              1/36
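The entries of Table 2-1 can be reproduced by direct enumeration of the 36 equally likely outcomes. The short program below is an illustrative modern sketch, not part of the original text:

```python
from itertools import product
from fractions import Fraction

# Tally each possible sum over the 36 equally likely outcomes of two dice.
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

probs = {s: Fraction(n, 36) for s, n in counts.items()}

print(probs[6])             # 5/36, as in Table 2-1
print(sum(probs.values()))  # 1: some sum must occur
```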
STATISTICS FOR RANDOM NOISE ANALYSIS 15
These data are presented in the form of a bar graph in Fig. 2-1, giving
a display of the probabilities. The height of each bar represents the
probability of that particular number resulting from the throwing of two
dice. The total length of all the bars is equal to unity since it is certain
that one of these sums will be obtained when two dice are thrown.
Curves such as that in Fig. 2-1 are called discrete probability curves
because the probability function p(x_i) (i.e., the probability that any
particular quantity x will be x_i) is plotted against the discrete variable x.
Fig. 2-1. Probability curve for sum of two dice.

If one considers the probability that the sum of the dice will be equal to or
less than a certain number, the result is the sum of the probabilities of all
of the possibilities up to and including that number. For example, the
probability that the sum of two dice will be equal to or less than 5 is
1/36 + 2/36 + 3/36 + 4/36 = 10/36 (see Table 2-1), which is called the cumulative
probability. The cumulative probability for each possible result when two
dice are thrown is given in Table 2-2. The probability of obtaining one
as a sum is zero since it is impossible. The cumulative probability is
plotted vs. the sum of two dice in Fig. 2-2; this plot is called the probability
distribution curve. It is apparent that Fig. 2-2 is the integral of the curve
in Fig. 2-1. This relationship will be discussed later.
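The cumulative probabilities follow from summing the individual probabilities of Table 2-1; a brief illustrative sketch (modern notation, not from the text):

```python
from fractions import Fraction

# Number of ways each sum can occur (second column of Table 2-1).
ways = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}

def cumulative(X):
    """P(sum <= X): add the individual probabilities up to and including X."""
    return sum(Fraction(n, 36) for s, n in ways.items() if s <= X)

print(cumulative(5))   # 5/18 (i.e., 10/36), the value computed in the text
print(cumulative(12))  # 1
```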
2-2.2 Conditional Probabilities. If events are not independent or
mutually exclusive, the joint probability of the various events is still the
product of the probabilities of the individual events, provided that the correct
probabilities are used. For instance, the probability of drawing two
aces in successive draws from a deck of cards depends on whether the
first card is replaced before the second card is drawn. Thus it is necessary
to introduce the concept of conditional probability, i.e., the probability of
event B happening if event A has occurred.
Table 2-2
Cumulative Probabilities

Sum of      Cumulative
Two Dice    Probability
   1             0
   2            1/36
   3            3/36
   4            6/36
   5           10/36
   6           15/36
   7           21/36
   8           26/36
   9           30/36
  10           33/36
  11           35/36
  12             1
Let us consider an experiment in which n mutually exclusive events can
occur, of which m_A are favorable to event A, m_B are favorable to event B,
and m_AB are favorable to the occurrence of both events A and B. The
corresponding probabilities are p(A), p(B), and p(A,B), respectively.
Fig. 2-2. Probability distribution curve for sum of two dice.

Now let us define the conditional probability p(A|B), the probability
that event A will occur if event B has occurred previously, as

$$p(A|B) = \frac{m_{AB}}{m_B} = \frac{m_{AB}/n}{m_B/n} = \frac{p(A,B)}{p(B)} \qquad (2\text{-}2)$$

Similarly, the conditional probability p(B|A), the probability that
event B will occur if event A has occurred previously, is

$$p(B|A) = \frac{m_{AB}}{m_A} = \frac{p(A,B)}{p(A)} \qquad (2\text{-}3)$$

Therefore we can rearrange these equations to give

$$p(A,B) = p(A)\,p(B|A) = p(B)\,p(A|B) \qquad (2\text{-}4)$$
i.e., the joint probability of events A and B occurring is the product of the
unconditional probability of the occurrence of one event and the conditional
probability that the other event will occur if the first event has
occurred previously.
Let us consider the original problem of drawing two aces from a deck
of cards in two successive drawings. We will define event A as the
occurrence of an ace in the first drawing and event B as the occurrence
of an ace in the second drawing.

Case 1: The two drawings are both from shuffled decks with all cards
in place, i.e., the two events are independent. Hence

$$p(A) = p(B) = p(A|B) = p(B|A) = \frac{4}{52} = \frac{1}{13}$$

Therefore

$$p(A,B) = p(A)\,p(B) = \frac{1}{13} \times \frac{1}{13} = \frac{1}{169}$$

Case 2: The second drawing is made from the same deck without
returning the first card. The probability of the first card being an ace is
the same as in Case 1, i.e., p(A) = 1/13. If the first card is an ace, then
the probability of the second card being an ace is

$$p(B|A) = \frac{3}{51} = \frac{1}{17}$$

since there are now 51 cards left, of which only 3 are aces. Therefore

$$p(A,B) = p(A)\,p(B|A) = \frac{1}{13} \times \frac{1}{17} = \frac{1}{221}$$
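Both cases can be checked with exact rational arithmetic; a minimal sketch, not part of the original text:

```python
from fractions import Fraction

# Check of the two card-drawing cases using Eq. 2-4 (exact rational arithmetic).
p_A = Fraction(4, 52)            # ace on the first draw

# Case 1: independent draws (card replaced, deck reshuffled).
p_case1 = p_A * Fraction(4, 52)  # p(A,B) = p(A)*p(B)

# Case 2: first card not replaced, so p(B|A) = 3/51.
p_case2 = p_A * Fraction(3, 51)  # p(A,B) = p(A)*p(B|A)

print(p_case1)  # 1/169
print(p_case2)  # 1/221
```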
2-3 MEAN VALUE, VARIANCE, AND STANDARD DEVIATION
Data from a test or a series of tests are often generalized in the sense
that they are considered typical of situations of a similar kind. However,
repeated tests do not give exactly the same results, and statistical methods
are needed to interpret the results.
There are two kinds of information in most sets of data: evidence of
uniformity and of variability. Uniformity is represented by the average
value or the root-mean-square value, and variability is usually represented
by an index of precision such as the standard deviation or the variance.
Let us consider the stationary process x(t) represented by the infinite
ensemble in Fig. 1-1. The average or mean value of the infinite ensemble
of records at time t₁ is the average of the values x₁(t₁), x₂(t₁), x₃(t₁), . . . ,
x_i(t₁), . . . , x_N(t₁); i.e.,

$$\langle x(t_1)\rangle = \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} x_i(t_1) \qquad (2\text{-}5)$$
The mean-square value of the ensemble of records at time t₁ is

$$\langle x^2(t_1)\rangle = \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} x_i^2(t_1) \qquad (2\text{-}6)$$
The variance of the ensemble of records at time t₁ is given by

$$\sigma_{x(t_1)}^2 = \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \left[x_i(t_1) - \langle x(t_1)\rangle\right]^2$$
$$= \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} x_i^2(t_1) - 2\langle x(t_1)\rangle \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} x_i(t_1) + \langle x(t_1)\rangle^2 \qquad (2\text{-}7)$$

The first term is seen to be the mean-square value of the ensemble at
time t₁ as defined by Eq. 2-6, and the other two terms involve the mean
value as defined by Eq. 2-5. Hence the variance is

$$\sigma_{x(t_1)}^2 = \langle x^2(t_1)\rangle - 2\langle x(t_1)\rangle^2 + \langle x(t_1)\rangle^2 = \langle x^2(t_1)\rangle - \langle x(t_1)\rangle^2 \qquad (2\text{-}8)$$

and the standard deviation is

$$\sigma_{x(t_1)} = \left[\langle x^2(t_1)\rangle - \langle x(t_1)\rangle^2\right]^{1/2} \qquad (2\text{-}9)$$
When the average value of the ensemble is equal to zero, the variance
and standard deviation are, respectively, the mean-square and root-mean-square
values; i.e.,

$$\sigma_{x(t_1)}^2 = \langle x^2(t_1)\rangle \qquad (2\text{-}10)$$

and

$$\sigma_{x(t_1)} = \langle x^2(t_1)\rangle^{1/2} \qquad (2\text{-}11)$$
Similar expressions can be written for all the statistical properties of the
ensemble of records at other times t₂, t₃, t₄, . . . . If the process is
stationary, the ensemble mean and mean-square values, and indeed all the
statistical characteristics of the ensemble, will remain constant for all
values of time.
Now let us consider a single infinitely long sample record x_i(t). The
temporal mean and mean-square values are given by

$$\overline{x_i(t)} = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} x_i(t)\,dt = \mu_{x_i} \qquad (2\text{-}12)$$

and

$$\overline{x_i^2(t)} = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} x_i^2(t)\,dt = \psi_{x_i}^2 \qquad (2\text{-}13)$$

respectively. The bar over the symbols indicates temporal averaging
over infinitely long records. The symbols μ_{x_i} and ψ²_{x_i} are commonly used
to represent true temporal mean and mean-square values. By the same
procedure used for the ensemble properties, we can derive the temporal
values of the variance and standard deviation of the infinite record to
be, respectively,

$$\sigma_{x_i}^2 = \overline{x_i^2(t)} - \left[\overline{x_i(t)}\right]^2 = \psi_{x_i}^2 - \mu_{x_i}^2 \qquad (2\text{-}14)$$

and

$$\sigma_{x_i} = \left\{\overline{x_i^2(t)} - \left[\overline{x_i(t)}\right]^2\right\}^{1/2} = \left[\psi_{x_i}^2 - \mu_{x_i}^2\right]^{1/2} \qquad (2\text{-}15)$$

If the mean value is equal to zero, the variance and standard deviation
of the infinite record again become equal to the mean-square and root-mean-square
values; i.e., respectively,

$$\sigma_{x_i}^2 = \overline{x_i^2(t)} = \psi_{x_i}^2 \qquad (2\text{-}16)$$

and

$$\sigma_{x_i} = \left[\overline{x_i^2(t)}\right]^{1/2} = \left[\psi_{x_i}^2\right]^{1/2} = \psi_{x_i} \qquad (2\text{-}17)$$
Since x_i represents any of the infinitely long records of the stationary
process, all statistical characteristics of each sample record are the same.
If the process is ergodic, the temporal statistical properties will be equal
to the corresponding ensemble properties, and we can use either type of
statistical property.
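The ergodic property can be illustrated numerically with a random-phase sine wave, a standard example of an ergodic process; the sketch below and its parameters are illustrative assumptions, not taken from the text:

```python
import math
import random

random.seed(1)

# A random-phase sinusoid is a classic ergodic process: each sample record is
# x_i(t) = sin(t + phi_i), with phi_i drawn uniformly on [0, 2*pi).
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(2000)]

# Ensemble mean-square value at a fixed time t1 (cf. Eq. 2-6).
t1 = 0.3
ens_ms = sum(math.sin(t1 + phi) ** 2 for phi in phases) / len(phases)

# Temporal mean-square value of a single record (cf. Eq. 2-13, finite T).
phi0, T, n = phases[0], 1000.0, 200_000
time_ms = sum(math.sin(-T + 2.0 * T * k / n + phi0) ** 2 for k in range(n)) / n

# For an ergodic process both averages approach the same value (here 1/2).
print(abs(ens_ms - 0.5) < 0.05, abs(time_ms - 0.5) < 0.05)
```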
2-4 PROBABILITY, PROBABILITY DENSITY, AND PROBABILITY
DISTRIBUTION FUNCTIONS
2-4.1 Discrete Random Variables. When a random variable can
assume only a finite number of values in any finite interval, it is a discrete
random variable. In the case of the sum of two dice, only 11 discrete
(and integral) values are possible. Figure 2-1 is a graph of the probability
function p(x_i) of this discrete random variable; i.e., the probability of x
assuming each value x_i is plotted against x_i.
The probability distribution function P(x ≤ X) can be defined as the
probability that x will assume some value equal to or less than X, a
specifically designated value. We can express this relation for a discrete
variable by

$$P(x \le X) = \sum_{x_i \le X} p(x_i) \qquad (2\text{-}18)$$

Obviously, when X = +∞, P(x ≤ X) = 1. The probability distribution
function for the discrete random variable of Fig. 2-1 is given in Fig. 2-2.
The probability distribution function can also be given by

$$P(x \le X) = \int_{-\infty}^{X} p(x_i)\,dx \qquad (2\text{-}19)$$

In either case, it is apparent that P(x ≤ X) will have discontinuities at
the discrete values of x_i.
The two-dimensional joint probability function and joint probability
distribution function can be readily demonstrated by flipping two coins.
We can reduce the problem to numerical terms by assigning the following
numerical values: heads = 1; tails = 2. Since the probability of either
state on a coin is 1/2, the probability of each of the four possible combinations
of two coins (1-1, 1-2, 2-1, and 2-2) is 1/4 since the two events are
independent. This relation is shown in the joint probability function
graph of Fig. 2-3(a).

The joint probability distribution function P(x ≤ X, y ≤ Y) of this random
function is shown in Fig. 2-3(b). It is found from the expression

$$P(x \le X,\, y \le Y) = \sum_{x_i \le X} \sum_{y_k \le Y} p(x_i, y_k) \qquad (2\text{-}20)$$

Here again this equation can be expressed in the form of an integral:

$$P(x \le X,\, y \le Y) = \int_{-\infty}^{Y} \int_{-\infty}^{X} p(x_i, y_k)\,dx\,dy \qquad (2\text{-}21)$$
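The joint distribution function of Eq. 2-20 can be evaluated directly for the two-coin example; an illustrative sketch, not part of the original text:

```python
from itertools import product
from fractions import Fraction

# Heads = 1, tails = 2; the four outcomes of two fair coins are equally likely.
p_joint = {xy: Fraction(1, 4) for xy in product((1, 2), repeat=2)}

def P(X, Y):
    """Joint probability distribution function P(x <= X, y <= Y), Eq. 2-20."""
    return sum(p for (x, y), p in p_joint.items() if x <= X and y <= Y)

print(P(1, 1))  # 1/4: both coins show heads
print(P(2, 2))  # 1: certain event
```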
2-4.2 Continuous Random Variables. The concepts developed in
the preceding section can also be applied to continuous random variables.
Let us consider one of the continuous random sample records, x_i(t), from
the ensemble of Fig. 1-1, where the amplitude can (theoretically) vary
continuously between −∞ and +∞. The probability distribution
function P(x ≤ X), as for the discrete random variable, is defined as the
probability that x will assume a value equal to or less than X. We can
further define the probability density function p(x) of a continuous random
variable to be the rate of change of P(x ≤ X); i.e.,

$$p(x) = \frac{d[P(x \le X)]}{dx} \qquad (2\text{-}22)$$
Fig. 2-3. A two-dimensional joint discrete probability function and joint
discrete probability distribution function for the tossing of two coins (H = 1,
T = 2). (a) Joint probability function. (b) Joint probability distribution.
The inverse relation is also very useful in dealing with continuous random
processes:

$$P(x \le X) = \int_{-\infty}^{X} p(x)\,dx \qquad (2\text{-}23)$$
We can obtain the relation for the probability that x is greater than a
and less than or equal to b, where a and b are arbitrary values, to be

$$P(a < x \le b) = \int_{a}^{b} p(x)\,dx \qquad (2\text{-}24)$$

Since all of x lies between −∞ and +∞,

$$P(-\infty < x < \infty) = \int_{-\infty}^{\infty} p(x)\,dx = 1 \qquad (2\text{-}25)$$
Furthermore, it is apparent from the definition of the probability distribution
function that it is a nondecreasing function, and hence we can
see from Eq. 2-22 that the probability density function p(x) is always
nonnegative. Figures 2-4 and 2-5 show probability density and probability
distribution curves, respectively, for a continuous variable.
Fig. 2-4. Probability density curve for a continuous variable.

Fig. 2-5. Probability distribution curve for a continuous variable.

To visualize the physical meaning of the probability density function, let
us use the definition of a derivative as a limit to give

$$p(X) = \lim_{\Delta X \to 0} \frac{P[x \le (X + \Delta X)] - P(x \le X)}{\Delta X} = \lim_{\Delta X \to 0} \frac{P[X < x \le (X + \Delta X)]}{\Delta X} \qquad (2\text{-}26)$$

In differential form Eq. 2-26 becomes

$$p(X)\,dX = P[X < x \le (X + dX)] \qquad (2\text{-}27)$$

where p(X) dX represents the probability that the random variable falls
in the interval X < x ≤ (X + dX). This is shown graphically in Fig.
2-4.
We can extend the concept of probability density to the multidimensional
case by defining the joint probability density function p(x,y) as

$$p(x,y) = \frac{\partial^2}{\partial x\,\partial y}\,[P(x \le X,\, y \le Y)] \qquad (2\text{-}28)$$
The corresponding reciprocal relation is

$$P(x \le X,\, y \le Y) = \int_{-\infty}^{Y} \int_{-\infty}^{X} p(x,y)\,dx\,dy \qquad (2\text{-}29)$$

Again we can use the limiting process for the partial derivative of Eq.
2-28 to give the differential form

$$p(X,Y)\,dX\,dY = P[X < x \le (X + dX),\, Y < y \le (Y + dY)] \qquad (2\text{-}30)$$

where p(X,Y) dX dY represents the probability that a sample point falls
in the incremental area dX dY about the point (X,Y).
By analogy with Eq. 2-25, we can write

$$P(-\infty < x < \infty,\, -\infty < y < \infty) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p(x,y)\,dx\,dy = 1 \qquad (2\text{-}31)$$

If individually we allow only one of the upper limits to go to infinity, the
results are

$$\int_{-\infty}^{\infty} \int_{-\infty}^{X} p(x,y)\,dx\,dy = P(x \le X,\, y \le \infty) = P(x \le X) \qquad (2\text{-}32)$$

$$\int_{-\infty}^{Y} \int_{-\infty}^{\infty} p(x,y)\,dx\,dy = P(x \le \infty,\, y \le Y) = P(y \le Y) \qquad (2\text{-}33)$$
The probability that the random variable y ≤ Y, subject to the
hypothesis that a second random variable x = X, can be called the
conditional probability distribution function P(y ≤ Y|X). Now we can
define the conditional probability density function as

$$p(Y|X) = \frac{d[P(y \le Y|X)]}{dY} \qquad (2\text{-}34)$$

The corresponding reciprocal relation is

$$P(y \le Y|X) = \int_{-\infty}^{Y} p(y|X)\,dy \qquad (2\text{-}35)$$

If we differentiate Eq. 2-35 with respect to Y, we get

$$p(Y|X) = \frac{p(X,Y)}{p(X)} \qquad (2\text{-}36)$$

or

$$p(X,Y) = p(Y|X)\,p(X) \qquad (2\text{-}37)$$

This indicates that the joint probability of a random variable f(x,y) being
equal to f(X,Y) is the product of the conditional probability p(Y|X)
and an elementary probability p(X).
2-5 AVERAGE VALUES AND PROBABILITY MOMENTS
The term "average value" is usually used to represent the mean value, or
the first moment of the probability density function. However, it can
be used to represent other average values, such as the mean-square value
(second moment of the probability density function) or some other weighted
function of the probability density function, e.g., the characteristic
function, which involves an exponential weighting of the probability density
function.
For a discrete random variable, the first and second moments (mean
and mean-square values), respectively, of the probability density function
are

$$\mu_x = \frac{\sum_{i=0}^{N} x_i\,p(x_i)}{\sum_{i=0}^{N} p(x_i)} = \sum_{i=0}^{N} x_i\,p(x_i) = E(x) \qquad (2\text{-}38)$$

and

$$\psi_x^2 = \frac{\sum_{i=0}^{N} x_i^2\,p(x_i)}{\sum_{i=0}^{N} p(x_i)} = \sum_{i=0}^{N} x_i^2\,p(x_i) = E(x^2) \qquad (2\text{-}39)$$

where E(x) and E(x²) are the expectation values of x and x², respectively.
The denominator is equal to unity when N is the total number of events
in the discrete random process. Similar expressions can be written for
the various higher moments x³, x⁴, etc. These relations for μ_x and ψ²_x
(as well as for the higher moments) are valid only for large values of
N; i.e., the statistical average or expectation value is reached only as
N → ∞.
Similar expressions for the mean and mean-square values of a continuous
random variable are, respectively,

$$\mu_x = \frac{\int_{-\infty}^{\infty} x\,p(x)\,dx}{\int_{-\infty}^{\infty} p(x)\,dx} = \int_{-\infty}^{\infty} x\,p(x)\,dx = E(x) \qquad (2\text{-}40)$$

and

$$\psi_x^2 = \frac{\int_{-\infty}^{\infty} x^2\,p(x)\,dx}{\int_{-\infty}^{\infty} p(x)\,dx} = \int_{-\infty}^{\infty} x^2\,p(x)\,dx = E(x^2) \qquad (2\text{-}41)$$
Values for the root mean square (rms), variance (σ²_x), and standard
deviation (σ_x) can now be obtained by using the relations of Sec. 2-3; i.e.,

$$\text{rms} = (\psi_x^2)^{1/2} \qquad (2\text{-}42)$$

$$\sigma_x^2 = \psi_x^2 - \mu_x^2 \qquad (2\text{-}43)$$

$$\sigma_x = (\psi_x^2 - \mu_x^2)^{1/2} \qquad (2\text{-}44)$$
Another statistical average that is useful in random noise theory is the
characteristic function M_x(jv), which is a complex exponential weighting
of the probability density function of a continuous random variable:

$$M_x(jv) = \frac{\int_{-\infty}^{\infty} e^{jvx}\,p(x)\,dx}{\int_{-\infty}^{\infty} p(x)\,dx} = \int_{-\infty}^{\infty} e^{jvx}\,p(x)\,dx \qquad (2\text{-}45)$$

where v is real. Since Eq. 2-45 has the general form of a Fourier integral,*
we can, under proper circumstances, use the inverse Fourier relation
to obtain the probability density

$$p(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} M_x(jv)\,e^{-jvx}\,dv \qquad (2\text{-}46)$$

When x is a discrete random variable, Eq. 2-45 becomes

$$M_x(jv) = \sum_m p(x_m)\,e^{jvx_m} \qquad (2\text{-}47)$$
*In this case we can obtain the usual forms of the Fourier integral transform pair
by letting t equal jv and ω equal jx.
If we take the derivative of the characteristic function with respect to v,

$$\frac{d[M_x(jv)]}{dv} = j \int_{-\infty}^{\infty} x\,e^{jvx}\,p(x)\,dx \qquad (2\text{-}48)$$

and evaluate both sides at v = 0, the integral becomes the mean value

$$\mu_x = -j \left.\frac{d[M_x(jv)]}{dv}\right|_{v=0} \qquad (2\text{-}49)$$

We see that the first moment of the random variable x can be obtained by
differentiating the characteristic function with respect to v and evaluating
the result at v = 0. The higher moments of a random variable can be
found by taking successive derivatives of the characteristic function with
respect to v and evaluating the result at v = 0:

$$E(x^n) = (-j)^n \left.\frac{d^n[M_x(jv)]}{dv^n}\right|_{v=0} \qquad (2\text{-}50)$$
Such a process is generally called moment generation.
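Moment generation can be demonstrated numerically. The sketch below is illustrative only; the unit-exponential density and the finite-difference step are assumptions made for the example, not taken from the text:

```python
# Moment generation from a characteristic function (cf. Eq. 2-50), sketched
# for an illustrative unit-exponential density p(x) = e^{-x}, x >= 0, whose
# characteristic function is M(jv) = 1/(1 - jv).
def M(v):
    return 1.0 / (1.0 - 1j * v)

def moment(n, h=1e-3):
    """n-th moment via (-j)^n d^n M/dv^n at v = 0, by central differences."""
    if n == 1:
        deriv = (M(h) - M(-h)) / (2 * h)
    elif n == 2:
        deriv = (M(h) - 2 * M(0) + M(-h)) / h**2
    else:
        raise ValueError("only n = 1, 2 sketched here")
    return ((-1j) ** n * deriv).real

print(moment(1))  # close to 1, the mean of the unit-exponential density
print(moment(2))  # close to 2, its second moment
```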
When two random variables are involved, we can define a joint characteristic
function of the joint probability distribution of the continuous
random variables x and y:

$$M(jv_1, jv_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{j(v_1 x + v_2 y)}\,p(x,y)\,dx\,dy \qquad (2\text{-}51)$$

In a manner analogous to the one-dimensional case, we can use the two-dimensional
Fourier transform to obtain the joint probability density
function of a pair of random variables when we know their joint characteristic
function M(jv₁, jv₂); i.e.,

$$p(x,y) = \left(\frac{1}{2\pi}\right)^2 \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} M(jv_1, jv_2)\,e^{-j(v_1 x + v_2 y)}\,dv_1\,dv_2 \qquad (2\text{-}52)$$
2-6 PROBABILITY DISTRIBUTIONS IN RADIOACTIVE DECAY
2-6.1 Binomial Distribution. The phenomenon of radioactive decay
is amenable to analysis by elementary probability theory. Radioactive
decay also offers an opportunity to demonstrate the binomial, Poisson,
and Gaussian (normal) probability distributions.¹

If there is a large number N₀ of radioactive atoms, each with a probability
of decay p, the probability of m atoms disintegrating in time t can be
evaluated. For the moment, consider only m of the N₀ atoms. The
probability that the first of these m atoms will decay is p; that the first
and second will decay is p²; that the first, second, and third will decay is
p³; etc. The probability that all m of the atoms will decay is pᵐ. If
exactly m of these atoms are to decay, the remaining (N₀ − m) atoms
must not decay. This probability is (1 − p)^(N₀−m) since the probability of
not decaying is 1 − p. Hence, for a particular group of m atoms, the
probability of exactly m disintegrations in time t is pᵐ(1 − p)^(N₀−m).
However, this particular group of m atoms is not the only group of atoms
that can decay. The first of the m atoms might be any one of the N₀ atoms;
the second might be any one of N₀ − 1 atoms; etc.; the mth atom might
be any one of N₀ − m + 1 atoms. The product of these terms,

$$N_0(N_0 - 1)(N_0 - 2) \cdots (N_0 - m + 1) = \prod_{i=0}^{m-1} (N_0 - i) = \frac{N_0!}{(N_0 - m)!} \qquad (2\text{-}53)$$

is the total number of arrangements in which m atoms of N₀ can disintegrate
in time t. Since this product also includes the order of selection
of the m atoms, it is necessary to divide by the number of permutations
of m atoms, which is m!. Therefore the probability p(m) that m atoms
out of N₀ atoms will disintegrate in time t is

$$p(m) = \left[\frac{N_0!}{(N_0 - m)!\,m!}\right] p^m (1 - p)^{N_0 - m} \qquad (2\text{-}54)$$

This expression for p(m) is usually called the binomial probability distribution
(even though the proper name for p(m) is the binomial probability
density function) because the coefficient in brackets is the coefficient
of the xᵐ term in the binomial expansion of (1 + x)^(N₀).
The probability 1 − p that an atom will not decay in time t is given
by the ratio of the number of atoms N that survive the time interval t to
the initial number of atoms N₀:

$$\frac{N}{N_0} = 1 - p = q \qquad (2\text{-}55)$$

where q is defined as the probability that an atom will not decay in time t.
The rate at which nuclei disintegrate at time t is proportional to the
number of nuclei N remaining:

$$\frac{dN}{dt} = -\lambda N \qquad (2\text{-}56)$$

where λ, the constant of proportionality, is the characteristic decay constant
for the radioactive material. The solution of Eq. 2-56 is

$$\frac{N}{N_0} = e^{-\lambda t} \qquad (2\text{-}57)$$
We can combine Eqs. 2-54, 2-55, and 2-57 to obtain

$$p = 1 - \frac{N}{N_0} = 1 - e^{-\lambda t} = 1 - q \qquad (2\text{-}58)$$

$$p(m) = \left[\frac{N_0!}{(N_0 - m)!\,m!}\right] (1 - e^{-\lambda t})^m (e^{-\lambda t})^{N_0 - m} = \left[\frac{N_0!}{(N_0 - m)!\,m!}\right] p^m q^{N_0 - m} \qquad (2\text{-}59)$$
(a) Average Disintegration Rate. The expected average disintegration
rate of a radioactive material can be obtained by applying the
binomial distribution law. Substituting Eq. 2-59 into Eq. 2-38 gives
the mean value of m, the average number of disintegrations in time t:

$$\mu_m = \sum_{m=0}^{N_0} m\,p(m) = \sum_{m=0}^{N_0} m \left[\frac{N_0!}{(N_0 - m)!\,m!}\right] p^m q^{N_0 - m} \qquad (2\text{-}60)$$

This expression can be evaluated from the binomial expansion of
(px + q)^(N₀):

$$(px + q)^{N_0} = \sum_{m=0}^{N_0} \left[\frac{N_0!}{(N_0 - m)!\,m!}\right] p^m q^{N_0 - m} x^m = \sum_{m=0}^{N_0} x^m\,p(m) \qquad (2\text{-}61)$$

Differentiating with respect to x gives

$$N_0 p (px + q)^{N_0 - 1} = \sum_{m=0}^{N_0} m x^{m-1}\,p(m) \qquad (2\text{-}62)$$

For x = 1, which makes Eq. 2-61 an expansion of unity,

$$N_0 p (p + q)^{N_0 - 1} = N_0 p = \sum_{m=0}^{N_0} m\,p(m) = \mu_m \qquad (2\text{-}63)$$

Substituting Eq. 2-58 gives the average number of disintegrations in a time
t to be

$$\mu_m = N_0 p = N_0 (1 - e^{-\lambda t}) \qquad (2\text{-}64)$$

For observation times that are short compared with the half-life of the
radioactive material, the approximation

$$e^{-\lambda t} \approx 1 - \lambda t \qquad (2\text{-}65)$$

can be used to give

$$\mu_m = N_0 \lambda t \qquad (2\text{-}66)$$

For observation times greater than approximately one-hundredth of the
half-life, the expression in Eq. 2-64 should be used.
(b) Standard Deviation of Counting Measurements. The standard
deviation and variance of the number of disintegrations in a time t can
be obtained from the binomial expansion of Eq. 2-61 by taking the second
derivative with respect to x:

$$N_0(N_0 - 1)p^2(px + q)^{N_0 - 2} = \sum_{m=0}^{N_0} m(m - 1)\,x^{m-2}\,p(m) \qquad (2\text{-}67)$$

which for x = 1 reduces to

$$N_0(N_0 - 1)p^2 = \sum_{m=0}^{N_0} m(m - 1)\,p(m) = \sum_{m=0}^{N_0} m^2\,p(m) - \sum_{m=0}^{N_0} m\,p(m) \qquad (2\text{-}68)$$

With the use of Eqs. 2-38 and 2-39, the preceding expression is further
reduced to

$$N_0(N_0 - 1)p^2 = \psi_m^2 - \mu_m \qquad (2\text{-}69)$$

The variance is given by Eq. 2-43 to be

$$\sigma_m^2 = \psi_m^2 - \mu_m^2 \qquad (2\text{-}70)$$

which can be combined with Eqs. 2-64 and 2-69 to obtain

$$\sigma_m^2 = N_0(N_0 - 1)p^2 + \mu_m - \mu_m^2 = N_0 p(1 - p) = N_0 pq = \mu_m(1 - p) = \mu_m q \qquad (2\text{-}71)$$

For radioactive decay, where p is given by Eq. 2-58, Eq. 2-71 reduces to

$$\sigma_m^2 = \mu_m e^{-\lambda t} \qquad (2\text{-}72)$$

If the time of observation is short compared with the half-life, i.e., λt is
small, Eq. 2-72 can be reduced to

$$\sigma_m^2 = \mu_m \qquad (2\text{-}73)$$

or

$$\sigma_m = \sqrt{\mu_m} \qquad (2\text{-}74)$$

i.e., the standard deviation of the number of disintegrations in a time t is
the square root of the average number of disintegrations that occur in that
interval of time.
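The results of Eqs. 2-64 and 2-71 can be confirmed by computing the moments of the binomial distribution directly; an illustrative sketch with assumed values of N₀ and p, not taken from the text:

```python
from math import comb

# Exact binomial check of Eqs. 2-64 and 2-71 (illustrative values of N0 and p).
N0, p = 500, 0.004          # lambda*t small, so few decays are expected
probs = [comb(N0, m) * p**m * (1 - p)**(N0 - m) for m in range(N0 + 1)]

mu = sum(m * pm for m, pm in enumerate(probs))             # mean number of decays
var = sum((m - mu)**2 * pm for m, pm in enumerate(probs))  # variance

print(round(mu, 6))   # 2.0   = N0*p,   Eq. 2-64
print(round(var, 6))  # 1.992 = N0*p*q, Eq. 2-71; close to mu when p is small
```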
2-6.2 Poisson Distribution. The binomial distribution of Eq. 2-59
can be simplified if the limitations

$$m \ll N_0 \qquad (2\text{-}75)$$

$$N_0 \gg 1 \qquad (2\text{-}76)$$

$$\lambda t \ll 1 \qquad (2\text{-}77)$$

and the approximations

$$e^{\lambda t} \approx 1 + \lambda t \qquad (2\text{-}78)$$

$$x! \approx (2\pi x)^{1/2} e^{-x} x^x \quad \text{(Stirling's approximation)} \qquad (2\text{-}79)$$

$$\lim_{N_0 \to \infty} \left(1 - \frac{m}{N_0}\right)^{N_0} = e^{-m} \qquad (2\text{-}80)$$

$$\mu_m = N_0(1 - e^{-\lambda t}) \approx N_0 \lambda t \qquad (2\text{-}81)$$

are imposed. The result is

$$p(m) = \frac{\mu_m^m\,e^{-\mu_m}}{m!} \qquad (2\text{-}82)$$

which is known as the Poisson distribution and is valid for N₀ as low
as 200 and λt as large as 0.01. It is nearly symmetrical about μ_m if values
of m far from μ_m are excluded and tends to become more symmetrical as
μ_m increases. The principal advantage of the Poisson distribution is that
it can be completely defined by a single parameter, μ_m.
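The quality of the Poisson approximation can be judged by tabulating both distributions side by side; an illustrative sketch with parameter values assumed for the example:

```python
from math import comb, exp, factorial

# Comparison of the exact binomial p(m) (Eq. 2-59) with its Poisson limit
# (Eq. 2-82) for N0 large and lambda*t small; the values are illustrative.
N0, lam_t = 1000, 0.005
p = 1.0 - exp(-lam_t)       # decay probability, Eq. 2-58
mu = N0 * lam_t             # mu_m ~ N0*lambda*t, Eq. 2-81

for m in range(8):
    binom = comb(N0, m) * p**m * (1.0 - p)**(N0 - m)
    poisson = mu**m * exp(-mu) / factorial(m)
    print(m, round(binom, 4), round(poisson, 4))
```

The two columns agree closely, as expected when N₀ ≫ 1 and λt ≪ 1.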
2-6.3 Gaussian, or Normal, Distribution. If the additional
limitations

$$\mu_m > 200 \qquad (2\text{-}83)$$

$$|m - \mu_m| \ll \mu_m \qquad (2\text{-}84)$$

and the approximation

$$\ln \frac{m}{\mu_m} \approx \frac{m - \mu_m}{\mu_m} - \frac{(m - \mu_m)^2}{2\mu_m^2} \qquad (2\text{-}85)$$

are imposed, Eq. 2-82 for the Poisson distribution reduces to

$$p(m) = \frac{1}{(2\pi\mu_m)^{1/2}} \exp\left[-\frac{(m - \mu_m)^2}{2\mu_m}\right] \qquad (2\text{-}86)$$

This expression is called the Gaussian or normal distribution and is symmetrical
about the mean value μ_m.
(a) Central Limit Theorem. The importance of the normal distribution
in many physical problems is directly related to the use of the central
limit theorem, which states that the sum of independent random variables
under fairly general conditions is approximately normally distributed,
regardless of the underlying distributions. Since many physically
observed phenomena represent the result of numerous contributing
variables, the normal distribution constitutes a good approximation to
many commonly occurring distribution functions. This theorem is
extremely useful in many practical applications; e.g., in a nuclear reactor,
the resultant neutron density at a particular point may be made up
of neutrons whose origins are in chains that are virtually uncorrelated.
(b) Standard Deviation. For large values of μ_m, the standard deviation
is the same as that in Eq. 2-74 for the binomial distribution:

$$\sigma_m = \sqrt{\mu_m} \qquad (2\text{-}87)$$

Substituting Eq. 2-87 into Eq. 2-86 gives the more common form for the
normal distribution:

$$p(m) = \frac{1}{\sqrt{2\pi}\,\sigma_m} \exp\left[-\frac{(m - \mu_m)^2}{2\sigma_m^2}\right] \qquad (2\text{-}88)$$
A normal distribution curve is completely defined by the average value \mu_m and the standard deviation \sigma_m of the random variable m. Normal distribution curves for large and small values of variance are shown in Fig. 2-6. It should be borne in mind that the area under the probability density function curve is unity, regardless of the value of variance. If the average value \mu_m is zero, the normal distribution curves of Fig. 2-6
Fig. 2-6. Gaussian (normal) probability distribution, for small and large values of \sigma.
32 RANDOM NOISE TECHNIQUES

are symmetrical about m = 0. The integral of the probability density function from \mu_m - a to \mu_m + a gives the probability that m will be within |a| of \mu_m and is represented by the crosshatched area under the curves in Fig. 2-6. The value of a that makes the integral

\int_{\mu_m - a}^{\mu_m + a} p(m)\, dm   (2-89)

equal to one-half is the probable error of m; i.e., half of the experimental data are expected to fall within the interval of plus and minus one probable error of the mean value. It can be shown that for a normal distribution the probable error and the standard deviation are related by

\text{Probable error} = 0.6745\,\sigma_m   (2-90)

and that 68.27% of the data lie within \pm\sigma_m of the average value \mu_m.
The integral of p(m), which gives the probability distribution function P(m) for a normal distribution, is not readily integrable in closed form. However, substituting

\mu_m - m = \sqrt{2}\,\sigma u   (2-91)

transforms the integral into the error function, defined as

\operatorname{erf} u = \frac{2}{\sqrt{\pi}} \int_0^u e^{-u^2}\, du   (2-92)

which can be evaluated from tables of mathematical functions.2
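Where tables were once required, the error function is now directly available in most numerical libraries. The sketch below (added here for illustration) uses it to confirm the probable-error constant of Eq. 2-90 and the 68.27% figure quoted above:

```python
import math

# For a normal distribution, P(|m - mu| <= a*sigma) = erf(a / sqrt(2)).
def prob_within(a_sigmas):
    return math.erf(a_sigmas / math.sqrt(2))

p_probable = prob_within(0.6745)   # probable error (Eq. 2-90): ~0.5
p_one_sigma = prob_within(1.0)     # within one sigma: ~0.6827
```

Half the data fall within 0.6745 standard deviations of the mean, and 68.27% within one standard deviation, as stated.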
2-7 SPECIAL PROBABILITY DENSITIES AND DISTRIBUTIONS

There are a number of special probability densities and corresponding probability distributions which occur in noise analysis of nuclear systems. They are briefly discussed here, and the expressions and plots of p(x) and P(x < X) are presented in Table 2-3.
2-7.1 Discrete Distribution. The discrete distribution occurs when a variable can assume only a finite number of discrete values. In a typical situation where such a distribution exists, the variable can assume only two or three values, thereby producing discrete binary or ternary distributions, respectively. A useful example in the nuclear field of a discrete binary variable is the output of a "flip-flop" whose change of state is triggered by the interaction of a nuclear particle with a detector.
2-7.2 Uniform (Rectangular) Distribution. Another probability distribution of interest in nuclear work is the uniform, or rectangular, distribution that occurs when the random variable is limited to a given
range but has a uniform probability of assuming any value within this
range, including the end points.
2-7.3 Sine-Wave Distribution. A sine wave described by the expression

x(t) = A \sin(\omega_0 t + \theta)   (2-93)

where \omega_0 is a fixed frequency and A is a fixed amplitude, is normally considered to be a deterministic variable. However, if the initial phase angle for each test or sample function is a random variable, the sine wave can be described in probabilistic terms. If the phase angle \theta has a uniform
Table 2-3
Probability Density and Distribution Functions

Discrete Binary Variable (states a and b, with b > a):

p(a) = p(b) = \tfrac{1}{2}

p(x) = \tfrac{1}{2}\,\delta(x - a) + \tfrac{1}{2}\,\delta(x - b)

P(x < X) = 0 \ (x < a); \quad \tfrac{1}{2} \ (a \le x < b); \quad 1 \ (x \ge b)

Uniform (Rectangular) Distribution:

p(x) = \frac{1}{b - a} \ (a < x < b); \quad 0 \ (\text{otherwise})

P(x < X) = 0 \ (x < a); \quad \frac{x - a}{b - a} \ (a \le x < b); \quad 1 \ (x \ge b)

Sine-Wave Distribution with Random Phase Angle:

x(t) = A \sin(\omega_0 t + \theta)

p(\theta) = \frac{1}{2\pi} \ (0 < \theta < 2\pi); \quad 0 \ (\text{otherwise})

p(x) = \frac{1}{\pi (A^2 - x^2)^{1/2}} \ (|x| < A); \quad 0 \ (|x| > A)

P(x < X) = \frac{1}{2}\left[1 + \frac{2}{\pi}\sin^{-1}\frac{x}{A}\right] \ (|x| \le A); \quad 0 \ (x < -A); \quad 1 \ (x > A)

Sine-Wave Distribution plus Gaussian Noise:

x(t) = A \sin(\omega_0 t + \theta) + n(t), \qquad n(t) = \text{Gaussian noise}

p(\theta) = \frac{1}{2\pi} \ (0 < \theta < 2\pi); \quad 0 \ (\text{otherwise})

(The plots of p(x) and P(x < X) show the twin peaks near \pm A rounded and made finite by the noise.)

Rayleigh Distribution:

p(x) = \frac{x}{c^2}\, e^{-x^2/2c^2} \ (x \ge 0); \quad 0 \ (x < 0)

P(x < X) = 1 - e^{-x^2/2c^2} \ (x \ge 0); \quad 0 \ (\text{otherwise})

\mu_x = \sqrt{\pi/2}\, c \approx 1.25c; \qquad \overline{x^2} = 2c^2; \qquad \sigma_x^2 = \left(2 - \frac{\pi}{2}\right) c^2 \approx 0.43c^2

Chi-Square Distribution with n Degrees of Freedom:

p(\chi_n^2) = \frac{(\chi^2)^{(n/2)-1}\, e^{-\chi^2/2}}{2^{n/2}\,\Gamma(n/2)} \ (\chi^2 > 0)

(Plots of p(\chi_n^2) are shown for n = 1, 4, and 10.)

Student's t Distribution with n Degrees of Freedom:

p(t_n) = \frac{\Gamma[(n+1)/2]}{\sqrt{\pi n}\,\Gamma(n/2)} \left[1 + \frac{t_n^2}{n}\right]^{-(n+1)/2}

(Plots of p(t_n) are shown for n = 1, 4, and 10.)

"F" Distribution:

(y_1(k) and y_2(k) are independent random variables with chi-square distributions having n_1 and n_2 degrees of freedom, respectively)

F_{n_1,n_2} = \frac{y_1(k)/n_1}{y_2(k)/n_2} = \frac{n_2\, y_1(k)}{n_1\, y_2(k)}

p(F) = \frac{\Gamma[(n_1+n_2)/2]\,(n_1/n_2)^{n_1/2}\, F^{(n_1/2)-1}}{\Gamma(n_1/2)\,\Gamma(n_2/2)\,[1 + (n_1/n_2)F]^{(n_1+n_2)/2}} \ (F \ge 0)

(Plot of p(F) shown for n_1 = 20 with n_2 = 25 and n_2 = 10.)
probability density p(\theta) over the range from 0 to 2\pi, the probability density is

p(\theta) = \frac{1}{2\pi} \ (0 < \theta < 2\pi); \quad 0 \ (\text{otherwise})   (2-94)

The relation between p(\theta) and p(x) has been worked out for the general case in which it was assumed that the inverse function \theta(x) is an n-valued function of x, where n is an integer. For the case in which dx/d\theta is not equal to zero, the result is

p(x) = \frac{n\, p(\theta)}{|dx/d\theta|}   (2-95)
Application of this expression to the sine wave of Eq. 2-93, in which the direct function x(\theta) is single valued but the inverse function \theta(x) is double valued, gives p(x). It is apparent from Table 2-3 that the probability density function for x = \pm A approaches infinity; however, the area under the curve between -A and +A is still unity.
The unique shape of the sine-wave probability density graph shows up readily even when the sine wave is accompanied by other fluctuations. For example, let us consider the sum of a sinusoid and a random fluctuation. The probability density function of the composite wave retains the characteristic dual peaks at \pm A, but they are finite in magnitude. However, the area under the p(x) curve is still unity.
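The concentration of probability near \pm A can be checked by direct sampling. The sketch below (an added illustration; the amplitude A = 1 and the 0.9A threshold are arbitrary choices) compares the sampled tail probability with the value implied by the density in Table 2-3:

```python
import math
import random

random.seed(2)
A = 1.0

# Sample the sine wave of Eq. 2-93 at phase angles uniform on (0, 2*pi)
xs = [A * math.sin(2 * math.pi * random.random()) for _ in range(50000)]

# Empirical probability that |x| > 0.9A, versus the theoretical value from
# p(x) = 1 / (pi * sqrt(A^2 - x^2)):  P(|x| > 0.9A) = 1 - (2/pi) asin(0.9)
frac_tail = sum(abs(x) > 0.9 * A for x in xs) / len(xs)
p_theory = 1 - (2 / math.pi) * math.asin(0.9)
```

About 29% of the samples fall in the outer 10% of the amplitude range, reflecting the peaks of the density at \pm A.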
2-7.4 Rayleigh Distribution. The Rayleigh distribution, a density function that is restricted to nonnegative values, is commonly used to describe the probability density function of the envelope of a fluctuating signal which has a large sinusoidal component of a single frequency. Such a variable is commonly obtained when random noise is passed through a very narrow band-pass filter, or it might also be obtained in the output of a system that exhibits a resonance at a particular frequency. For instance, a boiling-water reactor exhibits a resonance peak at a frequency characteristic of the bubble formation and collapse time in the coolant for the particular combination of pressure and temperature.
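A Rayleigh variable arises as the envelope of two independent Gaussian quadrature components. The sketch below (an added illustration with an arbitrary scale parameter c = 2) verifies the moments listed in Table 2-3:

```python
import math
import random
import statistics

random.seed(3)
c = 2.0

# The envelope r = sqrt(u^2 + v^2) of two independent zero-mean Gaussian
# components of standard deviation c is Rayleigh distributed.
rs = [math.hypot(random.gauss(0, c), random.gauss(0, c)) for _ in range(40000)]

mean_r = statistics.fmean(rs)                    # theory: sqrt(pi/2)*c = 1.25c
mean_sq = statistics.fmean(r * r for r in rs)    # theory: 2c^2
```

The sampled mean and mean square reproduce the tabulated values \mu_x \approx 1.25c and \overline{x^2} = 2c^2.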
2-7.5 Distribution of Amplitude-Limited Variable. An interesting case arises when a variable with any given distribution is amplitude limited; i.e., the variable is passed through a "clipping" device that restricts the lower and upper amplitudes to values a and b, respectively. The resultant probability density function over the range a < x < b is identical to the original probability density function of the variable. However, at x = a and x = b, the probability density function consists of Dirac delta functions whose amplitudes equal the probabilities that the original variable falls at or below a and at or above b, respectively.
2-7.6 Chi-Square Distribution. The chi-square distribution arises when several independent random variables, z_i, each of which has a normal distribution, zero mean, and unity variance, are added together. The resultant random variable chi-square for n independent random variables is

\chi_n^2 = z_1^2 + z_2^2 + \cdots + z_n^2   (2-96)

The new random variable chi-square has n degrees of freedom, which represent the number of independent, or "free," squares entering into the expression. The probability density function for chi-square is given by

p(\chi_n^2) = \frac{(\chi^2)^{(n/2)-1}\, e^{-\chi^2/2}}{2^{n/2}\,\Gamma(n/2)} \quad (\chi^2 > 0)   (2-97)

where \Gamma(n/2) is the Gamma function of n/2. This distribution is called the chi-square distribution with n degrees of freedom, and it approaches a normal distribution as the number of degrees of freedom increases. Furthermore, the square root of the chi-square distribution with two degrees of freedom gives the Rayleigh distribution function, and the square root of the chi-square distribution with three degrees of freedom gives the Maxwellian distribution function. The mean value and variance for the chi-square distribution are, respectively,

\mu_{\chi^2} = n   (2-98)

\sigma_{\chi^2}^2 = 2n   (2-99)
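Equations 2-96, 2-98, and 2-99 can be verified together by constructing chi-square samples from sums of squared normals. This sketch (an added illustration; n = 6 is an arbitrary choice) does exactly that:

```python
import random
import statistics

random.seed(4)
n = 6

# Chi-square with n degrees of freedom as a sum of squares of n
# independent standard normal variables (Eq. 2-96)
chi2 = [sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(30000)]

mean = statistics.fmean(chi2)       # theory: n   (Eq. 2-98)
var = statistics.pvariance(chi2)    # theory: 2n  (Eq. 2-99)
```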
2-7.7 Student t Distribution. The Student t distribution occurs in many situations where experimental data are being analyzed. Let y(k) and z(k) be independent random variables such that y(k) has a chi-square distribution with n degrees of freedom and z(k) has a normal distribution function with zero mean value and unity variance. We can now define a new random variable as

t_n = \frac{z(k)}{\sqrt{y(k)/n}}   (2-100)

where t_n is the Student t variable with n degrees of freedom. The probability density function for t_n is given by

p(t_n) = \frac{\Gamma[(n+1)/2]}{\sqrt{\pi n}\,\Gamma(n/2)} \left[1 + \frac{t_n^2}{n}\right]^{-(n+1)/2}   (2-101)

The mean value and variance of the t_n variable are, respectively,

\mu_t = 0 \quad (n > 1)   (2-102)

\sigma_t^2 = \frac{n}{n - 2} \quad (n > 2)   (2-103)

It should be noted that the Student t distribution approaches a standardized normal distribution as the number of degrees of freedom becomes large.
2-7.8 F Distribution. Another probability distribution that arises in evaluating errors in measurements is the F distribution. For these measurements, let y_1(k) and y_2(k) be independent random variables such that y_1(k) has a chi-square distribution with n_1 degrees of freedom and y_2(k) has a chi-square distribution with n_2 degrees of freedom. Now let us define a new random variable, F_{n_1,n_2}, such that

F_{n_1,n_2} = \frac{y_1(k)/n_1}{y_2(k)/n_2} = \frac{n_2\, y_1(k)}{n_1\, y_2(k)}   (2-104)

The probability density function for F_{n_1,n_2} is given by

p(F) = \frac{\Gamma[(n_1+n_2)/2]\,(n_1/n_2)^{n_1/2}\, F^{(n_1/2)-1}}{\Gamma(n_1/2)\,\Gamma(n_2/2)\,[1 + (n_1/n_2)F]^{(n_1+n_2)/2}}   (2-105)

The mean value and the variance for F_{n_1,n_2} are

\mu_F = \frac{n_2}{n_2 - 2} \quad (n_2 > 2)   (2-106)

\sigma_F^2 = \frac{2n_2^2\,(n_1 + n_2 - 2)}{n_1(n_2 - 2)^2(n_2 - 4)} \quad (n_2 > 4)   (2-107)

It should be noted that t_n^2, the square of the Student t variable, has an F distribution with n_1 = 1 and n_2 = n degrees of freedom.
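Equation 2-104 gives a direct recipe for generating F-distributed samples; the sketch below (an added illustration with the arbitrary choice n_1 = 5, n_2 = 10) checks the mean value of Eq. 2-106:

```python
import random
import statistics

random.seed(6)
n1, n2 = 5, 10

def chi2(n):
    # chi-square variable with n degrees of freedom
    return sum(random.gauss(0, 1) ** 2 for _ in range(n))

# F variable built from two independent chi-square variables (Eq. 2-104)
fs = [(chi2(n1) / n1) / (chi2(n2) / n2) for _ in range(30000)]

mean_f = statistics.fmean(fs)   # theory: n2/(n2 - 2) = 1.25  (Eq. 2-106)
```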
2-8 PARAMETER ESTIMATION

The objective of most experiments is to observe the phenomena taking place and to quantitatively evaluate certain parameters associated with the phenomena. The conditions associated with an experiment, in general, determine the quality of the experimental data. In any given test there are a large number of sources of error, which may degrade the quality of the measurement. Many types of errors are associated with the care taken by the investigator in setting up and carrying out the experiment, e.g., errors associated with calibration of the instruments, proper ranging of the instrumentation, and proper protection of instrumentation from the influence of extraneous noise; in fact, care must be taken by the investigator to assure that he is measuring the phenomena he thinks he is measuring. Many of these sources of error are dependent on the investigator and might be classified as gross operational errors. As such, they are generally not subject to quantitative analysis, and the success of the experiment generally depends on the elimination of all operational errors. Failure to do so generally renders the whole experiment invalid. In certain instances, operational errors may be introduced which can be corrected, e.g., consistent mislocation of a decimal point or use of the wrong scale factors, but these are the exceptions.
2-8.1 Finite Ensemble of Records. It is clear that actual measurements must be limited to a finite period of time and number of records. Hence the statistical properties measured must be estimates of the true values. Furthermore, these estimates of the ensemble properties are not necessarily equal to the estimates of the corresponding temporal properties.

When the conditions for self-stationarity as discussed in Chap. 1 can be met, the procedure is to use a single sample record to determine estimates of the properties of a process.
When we speak of a mean value, mean-square value, standard deviation, and variance of a process, we use the symbols \mu_x, \psi_x^2, \sigma_x, and \sigma_x^2, respectively, to represent the true parameters of the process and \langle x \rangle, \langle x^2 \rangle, s_x, and s_x^2 for the measured parameters of a sample record. The choice of symbols depends on whether we are considering the parameters of a process or the statistics of a sample record.
2-8.2 Estimators. Almost all physical phenomena show fluctuations of some magnitude if sufficient resolution is attained in the measurement. Hence the result of a single measurement, or of several measurements, is not necessarily the true value of the variable, if indeed such a value exists. Thus the result of a measurement is actually an estimate of the true value, and the errors associated with this process are known as estimation errors, sometimes called statistical errors. There is a field of statistics that deals with the evaluation of errors associated with experimental measurements, but it is beyond the scope of this text to review the whole field. Therefore we will confine this discussion to some of the concepts that are useful in determining the precision of measurements carried out on nuclear reactor systems using both analog and digital techniques.
The expected value of any real single-valued continuous function f(x) of the variable x(t) is given by

E[f(x)] = \int_{-\infty}^{\infty} f(x)\, p(x)\, dx   (2-108)

where p(x) is the probability density function of x(t). The symbols E[ ] and E( ) are used to denote the expectation operator, which is a linear operator (and which therefore may be treated as a linear process). It has the property that the expected value of a constant is the constant.
Estimators are usually mathematical expressions for a particular parameter that indicate how it is obtained from the measured quantities. For instance, the mean-square value of a set of quantities x_1, x_2, x_3, . . . , x_N is often given to be

\langle x^2 \rangle = \hat{\psi}_x^2 = \frac{1}{N} \sum_{i=1}^{N} x_i^2, \qquad \lim_{N \to \infty} \hat{\psi}_x^2 = E(x^2)   (2-109)

The hat ( ^ ) over \psi_x^2 indicates that it is being used as an estimator for the mean-square value of the process represented by x. Equation 2-109 is not the only estimator that can be used to evaluate the mean-square value but only one of several possibilities. Estimators are never right or wrong but, rather, are classified as "good" or "better than others."
The quality of an estimator is generally determined by the following considerations:

1. The estimator should be unbiased; i.e., the expected value of the estimator should be equal to the parameter being measured.
2. The estimator should be consistent; i.e., it should approach the parameter being estimated with a probability approaching unity as the sample size becomes large.
3. The estimator should be more efficient than any other possible estimator; i.e., the mean-square error of the estimator should be less than that of any other estimator.
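The first criterion is easy to exhibit numerically with a classic example (added here as an illustration, not from the text): the sample variance computed with a divisor of N rather than N - 1 is a biased estimator, with expected value \sigma^2 (N-1)/N:

```python
import random
import statistics

random.seed(7)
N, sigma2 = 5, 1.0   # small samples from a unit-variance normal process

biased = []
for _ in range(20000):
    xs = [random.gauss(0, 1) for _ in range(N)]
    m = sum(xs) / N
    # divides by N, not N - 1, so the estimator is biased low
    biased.append(sum((x - m) ** 2 for x in xs) / N)

# Expected value of the biased estimator: sigma2 * (N - 1) / N = 0.8
avg = statistics.fmean(biased)
```

Averaged over many records the estimator settles near 0.8, not the true variance of 1.0, which is precisely a violation of criterion 1.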
2-8.3 Bias of an Estimator. To demonstrate the bias of an estimator, let us consider the variance. The variance of a sample record is

s_x^2 = \langle x^2 \rangle - \langle x \rangle^2   (2-110)

and the variance of a process is

\sigma_x^2 = \psi_x^2 - \mu_x^2   (2-111)

For an individual sample record, \langle x_i \rangle may be different from \mu_x, the mean value for the process. If we use s_x^2 as an estimator for \sigma_x^2, i.e.,

\hat{s}_x^2 = \hat{\sigma}_x^2 = \langle x_i^2 \rangle - \langle x_i \rangle^2 = [\langle x_i^2 \rangle - \mu_x^2] + [\mu_x^2 - \langle x_i \rangle^2]   (2-112)

where the first term in square brackets is the variance of the measurement and the last term is the bias \mu_x^2 - \langle x_i \rangle^2, which is not zero unless

\mu_x = \langle x_i \rangle   (2-113)

However, as the number of records used to compute \langle x_i \rangle becomes greater or the record length becomes longer, the bias will be less since Eq. 2-113 is more nearly true.
In a given measurement the record of x(t) during the interval over which the measurement is being taken represents a unique set of circumstances which is not likely to be duplicated at any other time. Hence the measured values of X, where X represents any parameter, computed for different sample records vary randomly, and the measured quantity is the estimator \hat{X}, which is a random variable.

Let us apply the criteria described. If the estimator is unbiased, then the expected value of the estimator is the true value, i.e.,

E[\hat{X}] = X   (2-114)

If this is not true, then a bias error exists so that

b[\hat{X}] = E[\hat{X}] - X = E[\hat{X}] - E[X] = E[\hat{X} - X]   (2-115)

i.e., the bias error is the expected value of the deviation about the true value. Obviously, for unbiased estimates

b[\hat{X}] = 0   (2-116)

For a measurement over a finite period of time, T, the fact that \hat{X} may be unbiased does not mean that a particular estimate \hat{X} is equal, or even close, to the true value, X. Indeed, there may be significant deviations from the true value for any single measurement, even though the estimator is unbiased.
2-8.4 Consistent Estimators. The following example cited by Bendat and Piersol3 is very illustrative. Let us consider the mean-square error (MSE) to be defined as the expected value of the square of the deviation of the estimator from the true value, i.e.,

\text{MSE} = E[(\hat{X} - X)^2]   (2-117)

As indicated previously, if the estimator is to be consistent, this mean-square error should approach zero as T becomes large. Hence, for a large value of T, a consistent estimate would necessarily tend to closely approximate the true value X. The estimator is consistent if

\lim_{T \to \infty} E[(\hat{X} - X)^2] = 0   (2-118)

i.e., if the mean-square error approaches zero with time. The mean-square error can be expanded to

E[(\hat{X} - X)^2] = E\{[\hat{X} - E(\hat{X}) + E(\hat{X}) - X]^2\} = E\{[\hat{X} - E(\hat{X})]^2\} + 2E\{[\hat{X} - E(\hat{X})][E(\hat{X}) - X]\} + E\{[E(\hat{X}) - X]^2\}   (2-119)

Since

E[\hat{X} - E(\hat{X})] = E[\hat{X}] - E[\hat{X}] = 0   (2-120)

the middle term of Eq. 2-119 is equal to zero, and the result is

E[(\hat{X} - X)^2] = E\{[\hat{X} - E(\hat{X})]^2\} + E\{[E(\hat{X}) - X]^2\}   (2-121)

In words, Eq. 2-121 states that the expected mean-square deviation about the true value equals the expected mean-square deviation about the expected value plus the square of the bias. Thus the mean-square error is the sum of two parts. The first part is the variance of the estimate, given by

\text{Var}[\hat{X}] = \sigma^2[\hat{X}] = E\{[\hat{X} - E(\hat{X})]^2\} = E[\hat{X}^2] - \{E[\hat{X}]\}^2   (2-122)

The second part is the square of the bias of the estimate, as given by

b^2[\hat{X}] = E\{[E(\hat{X}) - X]^2\}   (2-123)

In general, compromises may be required to ensure that both variance and bias will approach zero as T becomes large. In terms of the variance and the square of the bias, the mean-square error is

\text{MSE} = E[(\hat{X} - X)^2] = \sigma^2[\hat{X}] + b^2[\hat{X}]   (2-124)
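The decomposition of Eq. 2-124 can be demonstrated numerically; for sample moments it is an exact algebraic identity. The sketch below (an added illustration, reusing the biased variance estimator as \hat{X}) computes all three terms:

```python
import random
import statistics

random.seed(8)
X_true = 1.0     # true variance of the process
N = 5

# Repeated estimates of the variance using the biased 1/N estimator
ests = []
for _ in range(10000):
    xs = [random.gauss(0, 1) for _ in range(N)]
    m = sum(xs) / N
    ests.append(sum((x - m) ** 2 for x in xs) / N)

mse = statistics.fmean((e - X_true) ** 2 for e in ests)   # Eq. 2-117
var = statistics.pvariance(ests)                          # sigma^2[X]  (Eq. 2-122)
bias_sq = (statistics.fmean(ests) - X_true) ** 2          # b^2[X]      (Eq. 2-123)
# Eq. 2-124: mse = var + bias_sq, exactly, for these sample moments
```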
2-8.5 Most Efficient Estimator. The most efficient estimator minimizes the mean-square error as expressed in Eq. 2-124. Since \sigma^2[\hat{X}] and b^2[\hat{X}] are both positive, as seen from Eqs. 2-122 and 2-123, the most efficient estimator is found by reducing both variance and bias to a minimum. Since variance is a property of the data, and not of the computational or measurement procedures, reducing bias to the absolute minimum assures that the most efficient estimator has been found.
2-9 CORRELATION FUNCTIONS

Correlation is one of the most important concepts in random noise analysis. Correlation is a quantitative and/or qualitative evaluation of the relation of a variable to itself, to another variable, or to several other variables as a function of time or time displacement. It is introduced at this point with some of the statistical relations developed earlier in this chapter to illustrate the underlying statistical basis.
Let us consider the degree of dependence between two real random variables x and y. If we plot a scatter diagram for sample values x_i and y_i of the random variables, such as that in Fig. 2-7, we can use a least-squares technique to fit these data points to a straight line. If all the data points fall on this straight line, we can say that the random variables x and y are linearly dependent or completely correlated. If the data points are so widely scattered that they do not support any particular straight line, the variables x and y probably are independent or uncorrelated. In the case shown in Fig. 2-7, where the data appear to support the straight line in spite of a great deal of scatter, x and y are partially dependent or partially correlated.
Let us consider a least-squares fitting of the data points to the straight line

y_p = a + bx   (2-125)

where y_p is the predicted value of y and the constants a and b are the y intercept and slope, respectively. We can define the mean-square error E_y as

E_y = E[(y - y_p)^2] = E\{[y - (a + bx)]^2\}   (2-126)

Differentiating with respect to a and b and equating the results to zero gives

\frac{\partial E_y}{\partial a} = -2E(y) + 2a + 2b\,E(x) = 0   (2-127)
Fig. 2-7. Scatter diagram of sample values x_i and y_i, showing the regression lines of y on x and of x on y.

\frac{\partial E_y}{\partial b} = -2E(xy) + 2a\,E(x) + 2b\,E(x^2) = 0   (2-128)

from which we can obtain

b = \frac{E(xy) - E(x)\,E(y)}{E(x^2) - [E(x)]^2}   (2-129)

a = \frac{E(y)\,E(x^2) - E(x)\,E(xy)}{E(x^2) - [E(x)]^2}   (2-130)
Equation 2-125 is used to provide the regression line of y on x. It is equally valid to consider the regression line of x on y by fitting the data points to the straight line

x_p = a' + b'y   (2-131)

where x_p is the predicted value of x and where a' and b' are, respectively, the x intercept and the slope (with respect to the y axis). We can obtain the constants a' and b' with the equations

b' = \frac{E(xy) - E(x)\,E(y)}{\sigma_y^2}   (2-132)

and

a' = \frac{E(x)\,E(y^2) - E(y)\,E(xy)}{\sigma_y^2}   (2-133)

If x and y are perfectly correlated, the regressions obtained by fitting x on y and y on x would be identical, i.e., the two lines of Fig. 2-7 would coincide. Hence we have the relations

a = -\frac{a'}{b'} \quad \text{or} \quad ab' = -a'   (2-134)

and

b = \frac{1}{b'} \quad \text{or} \quad bb' = 1   (2-135)
2-9.1 Normalized Correlation Coefficient. If x and y are not perfectly correlated, we can determine the extent of correlation by the deviation from Eq. 2-135. Let us define the square root of the product of the two slopes, b and b', to be the normalized correlation coefficient

\rho = [bb']^{1/2} = \frac{E(xy) - E(x)\,E(y)}{\sigma_x\,\sigma_y}   (2-136)

Using Schwartz's inequality, we can show that

|E(xy)| \le [E(x^2)]^{1/2}\,[E(y^2)]^{1/2}   (2-137)

For the case where x and y are uncorrelated (linearly independent) random variables,

E(xy) = E(x)\,E(y)   (2-138)

and hence \rho = 0. We see from these relations that the absolute value of the normalized correlation coefficient varies between zero for uncorrelated variables and unity for perfectly correlated variables, i.e.,

0 \le |\rho| \le 1   (2-139)
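The relation \rho = \sqrt{bb'} can be checked on synthetic data. In the sketch below (an added illustration; the slope of 2 and unit noise level are arbitrary choices, giving a theoretical \rho = 2/\sqrt{5} \approx 0.894), both regression slopes are computed from sample moments as in Eqs. 2-129 and 2-132:

```python
import random
import statistics

random.seed(9)

# Partially correlated data: y = 2x + noise (illustrative values)
xs = [random.gauss(0, 1) for _ in range(20000)]
ys = [2 * x + random.gauss(0, 1) for x in xs]

mx, my = statistics.fmean(xs), statistics.fmean(ys)
cov = statistics.fmean(x * y for x, y in zip(xs, ys)) - mx * my

b = cov / statistics.pvariance(xs)         # slope of y on x  (Eq. 2-129)
b_prime = cov / statistics.pvariance(ys)   # slope of x on y  (Eq. 2-132)
rho = (b * b_prime) ** 0.5                 # Eq. 2-136
```

The value of rho falls strictly between 0 and 1, as Eq. 2-139 requires for partially correlated variables.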
2-9.2 Covariance Function. Now let us define the covariance C_{xy} between x and y as the numerator of Eq. 2-136; i.e.,

C_{xy} = E(xy) - E(x)\,E(y)   (2-140)

Algebraically manipulating Eq. 2-140 gives

C_{xy} = E[(x - \mu_x)(y - \mu_y)] = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} (x - \mu_x)(y - \mu_y)\, p(x,y)\, dx\, dy   (2-141)

For the special case of a single variable where x = y,

C_{xx} = E[(x - \mu_x)^2] = \sigma_x^2   (2-142)
The concepts of linearly independent variables and uncorrelated variables are not identical. Independent random variables are uncorrelated, i.e., C_{xy} and \rho_{xy} equal zero. The converse statement, i.e., that uncorrelated variables are independent, is true only for the special (but quite common) physical situations where the variables are all normally (Gaussian) distributed random variables.

In general, the mean values of the sample random variables x and y are not constant with time and must be evaluated at various times. At times t_1 and t_2, where t_1 = t and t_2 = t + \tau, the covariance of x(t_1) and y(t_2) is

C_{xy}(t_1,t_2) = C_{xy}(t, t+\tau) = C_{xy}(\tau) = E\{[x(t) - \mu_x(t)][y(t+\tau) - \mu_y(t+\tau)]\}   (2-143)

Similar expressions can be written for C_{xx}(t, t+\tau) and C_{yy}(t, t+\tau). For the case where \tau = 0, Eq. 2-143 becomes the same as Eq. 2-141.
2-9.3 Correlation Functions. We can now define the cross-correlation function \phi_{xy}(\tau) as

\phi_{xy}(\tau) = E[x(t)\, y(t+\tau)]   (2-144)

A comparison of Eq. 2-144 with Eq. 2-143 shows that the covariance is a special case of the cross-correlation function in which the mean values have been removed. For stationary processes, Eq. 2-143 becomes

C_{xy}(\tau) = E[x(t)\, y(t+\tau)] - \mu_x\mu_y = \phi_{xy}(\tau) - \mu_x\mu_y   (2-145)

For a single variable where x = y, we obtain the autocorrelation function:

\phi_{xx}(\tau) = E[x(t)\, x(t+\tau)]   (2-146)

We can also express correlation functions in terms of the joint probability density functions:

\phi_{xy}(\tau) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} x(t_1)\, y(t_2)\, p[x(t_1), y(t_2)]\, dx\, dy \qquad (t_2 = t_1 + \tau)   (2-147)

For the special case where \tau = 0,

\phi_{xy}(0) = E[x(t)\, y(t)]   (2-148)

\phi_{xx}(0) = E\{[x(t)]^2\} = \psi_x^2   (2-149)

By again using Schwartz's inequality, we can show that

|\phi_{xy}(\tau)|^2 \le \phi_{xx}(0)\,\phi_{yy}(0)   (2-150)

|C_{xy}(\tau)|^2 \le C_{xx}(0)\,C_{yy}(0)   (2-151)

and

|\phi_{xx}(\tau)| \le \phi_{xx}(0) = \psi_x^2   (2-152)

|C_{xx}(\tau)| \le C_{xx}(0) = \sigma_x^2   (2-153)

We can now use Eq. 2-136 to define the normalized cross-correlation function (normalized cross-covariance function) as

\rho_{xy}(\tau) = \frac{C_{xy}(\tau)}{[C_{xx}(0)\,C_{yy}(0)]^{1/2}}   (2-154)

which satisfies the condition

|\rho_{xy}(\tau)| \le 1   (2-155)

The function \rho_{xy}(\tau) indicates the degree of linear dependence between x(t) and y(t) for a time displacement of \tau.
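A sample autocovariance estimate makes these definitions concrete. The sketch below (an added illustration; the first-order filtered-noise process and its coefficient a = 0.8 are arbitrary choices, not from the text) estimates C_{xx}(\tau) and the normalized autocorrelation at lag 1:

```python
import random
import statistics

random.seed(10)

# A simple correlated sequence: first-order low-pass filtered white noise,
# x[k+1] = a*x[k] + white noise, for which rho(tau) = a**tau
N, a = 20000, 0.8
x = [0.0]
for _ in range(N - 1):
    x.append(a * x[-1] + random.gauss(0, 1))

mu = statistics.fmean(x)

def autocovariance(lag):
    # Sample estimate of C_xx(tau) = phi_xx(tau) - mu_x**2  (Eq. 2-145)
    n = len(x) - lag
    return sum((x[i] - mu) * (x[i + lag] - mu) for i in range(n)) / n

c0 = autocovariance(0)               # estimate of sigma_x**2
rho1 = autocovariance(1) / c0        # normalized value, as in Eq. 2-154
```

The lag-1 value recovers the filter coefficient, and every lagged value is bounded in magnitude by C_{xx}(0), consistent with Eq. 2-153.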
REFERENCES

1. R. D. EVANS, The Atomic Nucleus, McGraw-Hill Book Company, Inc., New York, 1955.
2. E. JAHNKE and F. EMDE, Tables of Functions, 4th ed., Dover Publications, New York, 1945.
3. J. S. BENDAT and A. G. PIERSOL, Measurement and Analysis of Random Data, John Wiley & Sons, Inc., New York, 1966.
3

Neutron-Counting Techniques in Nuclear Reactor Systems

3-1 INTRODUCTION
Noise techniques can generally be divided into microscopic techniques (those based on the statistics of the neutron-population variation) and macroscopic techniques (those based on the composite behavior of the system). In this chapter we shall deal primarily with microscopic techniques, which may involve the probability of detecting a neutron, the deviation of a probability density from Poisson or Gaussian distributions, the variance-to-mean ratio, the distribution of the time intervals between counts, and other similar phenomena.
Most of the statistical techniques have been developed for zero-power critical reactors. Recent work has allowed some techniques to be extended to power reactors and subcritical nuclear systems. Certain techniques are more useful for thermal reactors, whereas others are more useful for fast reactors. Sometimes the instrumentation required (or available) is a determining factor in the choice of techniques. These factors are discussed with the description of each technique.
The chain-reaction nature of nuclear processes in a reactor gives rise to a nonnormal distribution of the detected counts because the individual counts are dependent on the other neutrons in the chain. Hence the statistical properties of the count sequence are dependent on the dynamic characteristics of the nuclear system.

NEUTRON-COUNTING TECHNIQUES 51

There are several experimental methods based on neutron counting by which the prompt-neutron decay constant (the Rossi-alpha, defined later), the detector efficiency, and the reactor power can be determined. One of the first experimental techniques to be employed for this purpose was the Rossi-alpha method,1 consisting in measurements of the conditional probability of a count in a time interval \Delta at a time t following a count at t = 0. The relative variance of neutron counts registered in a certain time interval was studied by Feynman et al.2 Another method, the zero-probability method suggested by Mogilner and Zolotukhin,3 consists in measurements of the probability of no count in a certain time interval. All these methods have been reviewed by Thie4 and are presented in abbreviated form later in this chapter.

A recent study by Babala5 indicates that most of these techniques can be derived from Kolmogorov's theory of branching processes.6 Courant and Wallace7 studied the fluctuations of the number of neutrons in a reactor on the basis of the Fokker-Planck equation, obtained from probability-balance considerations, and derived the formula for the variance of neutron counts. Pal8 used the first-collision technique to derive expressions for the zero probability, the variance, and the correlation function, the last of which is closely related to the conditional probability of the classical Rossi-alpha experiment.
In this chapter the lumped-parameter model of the nuclear-reactor system is assumed unless otherwise specified. Such an assumption is usually valid for reactor dynamics if the physical dimensions of the core do not exceed a few migration lengths for the particular reactor configuration.
In nuclear systems that are critical at zero power or are slightly subcritical, one of the most important parameters is the prompt-neutron decay constant, known as the Rossi-alpha and defined* as

\alpha = \frac{1 - k(1 - \beta)}{l} = \frac{\beta - \rho}{\Lambda}   (3-1)

where all symbols have the definitions commonly accepted in reactor theory.9,10 For a delayed-critical system, this equation becomes

\alpha_c = \frac{\beta}{l} = \frac{\beta}{\Lambda}   (3-2)

since l and \Lambda are then equal. Hence we can express \alpha in terms of \alpha_c:

\alpha = \frac{\beta - \rho}{\Lambda} = \alpha_c\left(1 - \frac{\rho}{\beta}\right) = \alpha_c[1 - \rho(\$)]   (3-3)

where \rho(\$) is the reactivity expressed in dollars.
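As a numerical illustration of Eqs. 3-2 and 3-3 (the kinetic parameters below are assumed values typical of a thermal system, not values from the text):

```python
# Assumed, illustrative parameters: beta = 0.0065, generation time 65 us,
# with the reactor 50 cents subcritical.
beta = 0.0065
Lambda = 6.5e-5            # neutron generation time, s
rho_dollars = -0.5         # reactivity in dollars, rho($) = rho/beta

alpha_c = beta / Lambda                 # delayed-critical value (Eq. 3-2)
alpha = alpha_c * (1 - rho_dollars)     # Eq. 3-3
# For these values alpha_c = 100 per second and alpha = 150 per second;
# a subcritical system decays faster than the delayed-critical one.
```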
* Rossi's original definition was actually the negative of this expression, but the given definition is more commonly used today.

3-2 PROBABILITY DISTRIBUTION OF FISSION NEUTRONS

The basic cause of the statistical fluctuation of the neutron population in most zero-power nuclear systems is the variation in the number of neutrons produced in each fission. The yield of neutrons per fission is based on probabilities that in turn are related to the competing processes involved in fission. For example, let us consider the neutron yield from the fission of 235U. The probability of yielding \nu_p neutrons, where \nu_p is an integer between zero and six, and the associated probability distribution function are given in Table 3-1.

Table 3-1
Probability of Yielding \nu_p Neutrons in 235U Fission

\nu_p    p(\nu_p)    P(\nu_p)    \nu_p p(\nu_p)    \nu_p^2 p(\nu_p)
0        0.03        0.03        0                 0
1        0.16        0.19        0.16              0.16
2        0.33        0.52        0.66              1.32
3        0.30        0.82        0.90              2.70
4        0.15        0.96        0.60              2.40
5        0.03        1.00        0.15              0.75
6        0           1.00        0                 0
         1.00                    \bar{\nu}_p = 2.47    \overline{\nu_p^2} = 7.33

The plots of the probability distribution and probability distribution function are shown in Fig. 3-1. It is apparent from Fig. 3-1(a) that the probability distribution for \nu_p is not a Poisson distribution, even though the envelope of the discrete values has a bell shape. Indeed, the deviation of the \nu_p distribution from a Poisson (or binomial) distribution is one of its distinguishing and useful characteristics.
The relative width D of a probability distribution is defined as

D_x = \frac{\overline{x^2} - \bar{x}}{\bar{x}^2} = \frac{\sigma_x^2 + \bar{x}^2 - \bar{x}}{\bar{x}^2}   (3-4)

Diven et al.11 have indicated that the relative width D_\nu, sometimes called Diven's parameter, is an appropriate normalized average for the number of prompt neutrons per fission:

D_\nu = \frac{\overline{\nu_p^2} - \bar{\nu}_p}{\bar{\nu}_p^2} = \frac{\overline{\nu_p(\nu_p - 1)}}{\bar{\nu}_p^2}   (3-5)

The notation of the last term is commonly used in the literature. For 235U, we can use the values of Table 3-1 to obtain

D_\nu = \frac{\overline{\nu_p^2} - \bar{\nu}_p}{\bar{\nu}_p^2} = \frac{7.33 - 2.47}{(2.47)^2} = 0.796

This compares favorably with the value of 0.795 \pm 0.007 given by Diven et al.11 Values given for other fissionable isotopes are: 233U, 0.786 \pm 0.013;
Fig. 3-1. Probability distribution (a) and probability distribution function (b) of the number of fast neutrons per fission in 235U. (The abscissa of each plot is the number of prompt neutrons per fission, \nu_p.)
239Pu, 0.815 \pm 0.017; and 240Pu, 0.807 \pm 0.008. These values deviate significantly from unity, the value of D for binomial, Poisson, and Gaussian distributions; this is readily shown by substituting Eq. 2-74 into Eq. 3-4 to get D = 1.
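The arithmetic behind the 235U value of Diven's parameter can be reproduced directly from the probabilities of Table 3-1 (an added illustration):

```python
# Discrete yield probabilities p(nu_p) for 235U, from Table 3-1
p_nu = {0: 0.03, 1: 0.16, 2: 0.33, 3: 0.30, 4: 0.15, 5: 0.03, 6: 0.0}

mean_nu = sum(nu * p for nu, p in p_nu.items())         # <nu_p>   = 2.47
mean_nu2 = sum(nu * nu * p for nu, p in p_nu.items())   # <nu_p^2> = 7.33

d_nu = (mean_nu2 - mean_nu) / mean_nu ** 2              # Eq. 3-5
# d_nu is about 0.796, well below the value of 1 that a Poisson
# distribution of the same mean would give
```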
An alternate form of the relative width can be seen by evaluating the numerator using the discrete probabilities:

\overline{\nu_p^2} - \bar{\nu}_p = \sum_i p_{\nu_i} \nu_{p_i}^2 - \sum_i p_{\nu_i} \nu_{p_i} = \sum_i p_{\nu_i}\, \nu_{p_i}(\nu_{p_i} - 1) = \overline{\nu_p(\nu_p - 1)}   (3-6)

where p_{\nu_i} is the probability that precisely \nu_{p_i} neutrons are liberated in fission and \nu_{p_i} assumes integral values between 0 and 6, representing the number of prompt neutrons emitted in a particular fission. Hence Eq. 3-5 becomes

D_\nu = \frac{\sum_i p_{\nu_i}\, \nu_{p_i}(\nu_{p_i} - 1)}{\bar{\nu}_p^2}   (3-7)
3-3 ROSSI-ALPHA TECHNIQUE

The Rossi-alpha technique was first suggested by Rossi, and the statistical theory of neutron chains was heuristically developed by Feynman, de Hoffmann, and Serber.2 Their derivation will be followed in this section. More rigorous mathematical derivations have been carried out by Matthes,12 Borgwaldt and Stegemann,13 Babala,5 and Iijima.14

This technique was originally developed for fast-reactor systems where the number of neutron chains existing in the nuclear system at any instant is not large and the decay of the neutron chain is very fast because the neutron lifetime is very short. Recent modifications of the technique, with other instrumentation, have permitted it to be used for thermal-reactor systems where the chains overlap considerably and their decay is slower because of the longer neutron lifetime.
In the original Rossi-alpha experiments, a coincidence counting system such as that shown in Fig. 3-2 was used by Orndoff.1 The system can be operated with a single detector providing both inputs 1 and 2. Alternately, separate detectors can be used, since the theoretical development depends only on the chain-related detection occurring. Using two detectors makes the timing problems of the instrumentation less critical. The principle is to relate the probability that a neutron will be detected in the time interval Δ at t following a neutron detection at t = 0, when the original fission occurred at t₀. When we consider the subcritical multiplication relation given by Murray9 for prompt neutrons only, the prompt neutron population is

    n = \frac{Sl}{1 - k_p} = \frac{S}{\alpha}    (3-8)
[Figure 3-2: block diagram showing detector inputs 1 and 2 feeding a gate generator and a short-pulse generator, with delay lines, coincidence channels, and scalers.]

Fig. 3-2. Block diagram of Orndoff's analyzer. [From J. D. Orndoff, Prompt Neutron Periods of Metal Critical Assemblies, Nuclear Science and Engineering, 2: 450 (1957).]
where S is the strength of the neutron source in the reactor. The prompt neutron population, and hence the number of neutron chains in the system, is inversely related to α for a given source strength S. With a very weak neutron source S in a fast assembly, it is quite possible for all the neutrons in a near-critical system to be members of a single neutron chain. Hence a sensitive detector can frequently detect two or more neutrons from the same chain.
3-3.1 Theoretical Considerations. When the first neutron count from a given chain occurs at a time designated as t = 0, there is a certain probability that the detector will, at a time t later, detect either a random neutron (i.e., one from some other chain) or a chain-related neutron (i.e., one from the same chain that produced the count at t = 0).
The probability of detecting a random neutron is AΔ, where A is the average counting rate of the detector and Δ is the time interval of measurement, i.e., the time width of a single channel of the analyzer. Since the prompt-neutron population on the average must decay exponentially, the probability of detecting a chain-related event decreases according to e^{-αt}. Hence the total probability of detecting a neutron (either random or chain-related) in the time interval Δ is

    p(t)\Delta = A\Delta + Be^{-\alpha t}\Delta    (3-9)
where the coefficient B has been derived by Feynman, de Hoffman, and Serber,2 and by Orndoff1 in the following manner. The probability that a fission will occur at t₀ in Δ₀ or dt₀ is

    p(t_0)\, dt_0 = F\, dt_0    (3-10)

where F is the average fission rate of the system. Next, the probability of a detection count in Δ₁ at t₁, where t₁ > t₀, due to the fission at t₀ is

    p(t_1)\Delta_1 = \epsilon \nu_p v\Sigma_f e^{-\alpha(t_1 - t_0)}\Delta_1    (3-11)

where ε = detector efficiency in counts per fission
      ν_p = actual number of prompt neutrons emitted by the fission at t₀
      v = velocity of thermal neutrons
      Σ_f = macroscopic fission cross section
      vΣ_f = average fission rate per unit neutron density

In a similar manner the probability of a chain-related count in Δ₂ at t₂, where t₂ > t₁, following a count at t₁ is

    p(t_2)\Delta_2 = \epsilon(\nu_p - 1) v\Sigma_f e^{-\alpha(t_2 - t_0)}\Delta_2    (3-12)
where ν_p − 1 takes into account the fact that the neutron detected at time t₁ was lost to the fission chain. The three probabilities F dt₀, p(t₁)Δ₁, and p(t₂)Δ₂ are independent and can be multiplied to give the joint probability of the occurrence of a fission at t₀ followed by a count within Δ₁ at t₁ and another count within Δ₂ at t₂, where the neutrons detected are part of the chain initiated by the fission at t₀. Hence the total probability of the preceding sequence of events occurring and producing chain-related counts is the integral of the product of the three probabilities over all time t₀ (from −∞ to t₁) available for occurrence of the first fission; i.e.,

    p_c(t_1,t_2)\Delta_1\Delta_2 = \int_{-\infty}^{t_1} p(t_1)\Delta_1\, p(t_2)\Delta_2\, F\, dt_0
    = \int_{-\infty}^{t_1} F\epsilon^2 \nu_p(\nu_p - 1)(v\Sigma_f)^2 e^{-\alpha(t_1 + t_2 - 2t_0)}\, \Delta_1\Delta_2\, dt_0
    = F\epsilon^2\, \overline{\nu_p(\nu_p - 1)}\, \frac{(v\Sigma_f)^2}{2\alpha}\, e^{-\alpha(t_2 - t_1)}\, \Delta_1\Delta_2    (3-13)
Note that \overline{ν_p(ν_p − 1)} indicates a suitable averaging over the distribution of prompt neutrons emitted per fission, as given in Eq. 3-6. We can write Eq. 3-13 in a more familiar form by substituting the identity

    v\Sigma_f = \frac{k_p v\Sigma_a}{\bar{\nu}_p} = \frac{k_p}{\bar{\nu}_p l}    (3-14)

and the definition of α from Eq. 3-1 to give

    p_c(t_1,t_2)\Delta_1\Delta_2 = F\epsilon^2\, \frac{\overline{\nu_p(\nu_p - 1)}\, k_p^2}{2\bar{\nu}_p^2(1 - k_p)l}\, e^{-\alpha(t_2 - t_1)}\Delta_1\Delta_2    (3-15)
The probability of a random pair of counts in Δ₁ and Δ₂ is given as

    p_R(t_1,t_2)\Delta_1\Delta_2 = F^2\epsilon^2 \Delta_1\Delta_2    (3-16)

Thus the total probability of a pair of counts in Δ₁ and Δ₂ is the sum of the random and chain-related probabilities:

    p(t_1,t_2)\Delta_1\Delta_2 = F^2\epsilon^2\Delta_1\Delta_2 + F\epsilon^2\, \frac{\overline{\nu_p(\nu_p - 1)}\, k_p^2}{2\bar{\nu}_p^2(1 - k_p)l}\, e^{-\alpha(t_2 - t_1)}\Delta_1\Delta_2
    = F\epsilon\Delta_1 \left[ F\epsilon\Delta_2 + \frac{\epsilon D_\nu k_p^2}{2(1 - k_p)l}\, e^{-\alpha(t_2 - t_1)}\Delta_2 \right]    (3-17)

where FεΔ is the probability that a count occurs in the interval Δ and D_ν is Diven's parameter given by Eq. 3-7. If we set FεΔ₁ equal to 1, thereby requiring that a count occur at t₁, then FεΔ₂ is the probability of a random count in the interval Δ₂, and the second term in the brackets in Eq. 3-17 is the probability of a chain-related count at t₂ following a count at t₁. This can be generalized so that the probability of a chain-related count at time t following a count at t = 0 is

    p_c(t)\Delta = \frac{\epsilon D_\nu k_p^2}{2(1 - k_p)l}\, e^{-\alpha t}\Delta    (3-18)
Orndoff1 has shown that this expression must be corrected for the probability of a count being introduced at t as a consequence of the fission and detection process at t = 0 by replacing \overline{ν_p(ν_p − 1)} with

    \overline{\nu_p(\nu_p - 1)} + 2\bar{\nu}_p \delta\, \frac{1 - k_p}{k_p}

where δ is the effective number of neutrons resulting from the fission and detection process at t = 0. Since δ is dependent on the detector characteristics and location, it must be evaluated for each experimental setup. Generally, this correction is small, about 1%, and is often neglected. The total probability of a count at time t in interval Δ following a count at t = 0 is

    p(t)\Delta = p_R(t)\Delta + p_c(t)\Delta = F\epsilon\Delta + \frac{\epsilon[\overline{\nu_p(\nu_p - 1)} + 2\bar{\nu}_p(1 - k_p)\delta/k_p]\, k_p^2}{2\bar{\nu}_p^2(1 - k_p)l}\, e^{-\alpha t}\Delta    (3-19)
which has the form of Eq. 3-9:

    p(t)\Delta = A\Delta + Be^{-\alpha t}\Delta    (3-20)

where

    A = F\epsilon    (3-21)

is the average counting rate and

    B = \frac{\epsilon[\overline{\nu_p(\nu_p - 1)} + 2\bar{\nu}_p(1 - k_p)\delta/k_p]\, k_p^2}{2\bar{\nu}_p^2(1 - k_p)l} \approx \frac{\epsilon D_\nu k_p^2}{2(1 - k_p)l}    (3-22)
Equation 3-20 is the result obtained from Rossi-alpha experiments, where AΔ represents the background due to uncorrelated counts and can be removed to leave a single exponential term from which the decay constant α can be evaluated. Note that the uncorrelated term depends on the fission rate (i.e., power level in a critical reactor or source level in a subcritical system), whereas the chain-related or correlated term is independent of power level. Thus lowering the fission rate will increase the signal-to-noise ratio of the measurement.
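The power dependence can be made concrete with a small sketch of Eqs. 3-21 and 3-22 (the δ correction neglected). All parameter values are hypothetical; the point is only that B does not contain F, so the correlated-to-random ratio B/A grows as the fission rate is lowered.

```python
# Hypothetical system parameters: efficiency (counts/fission), Diven parameter,
# prompt multiplication factor, and prompt-neutron lifetime (s).
eps, D_nu, k_p, l = 1.0e-4, 0.80, 0.98, 1.0e-5

# Eq. 3-22 with the delta term neglected: independent of the fission rate F.
B = eps * D_nu * k_p**2 / (2.0 * (1.0 - k_p) * l)

ratios = {}
for F in (1.0e6, 1.0e4):          # two hypothetical fission rates (fissions/s)
    A = F * eps                   # Eq. 3-21: random background level
    ratios[F] = B / A             # correlated-to-random ("signal-to-noise")
```

Dropping the fission rate by a factor of 100 raises B/A by the same factor, which is why Rossi-alpha runs are made at the lowest practical power.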
3-3.2 Experimental Measurements. Regardless of whether one or two detectors are used, the instrumentation of Fig. 3-2 serves primarily as a clock that measures the time interval between the trigger and subsequent pulses. If sufficient delays and coincidence channels are provided, several neutrons may be detected after each trigger pulse. Since this type of instrumentation is expensive, it is desirable to utilize commercially available multichannel analyzers. Several different modes of operation have been used, depending on the time resolution required for the experiments, i.e., whether the neutron lifetime is a fraction of a microsecond, a few microseconds, or several hundred microseconds.
A procedure that is similar to the one used by Orndoff involves using a multichannel analyzer as a multiscaler. The first pulse starts the internal clock, and detector pulses are registered in the appropriate time channel. Commercially available multiscalers provide channel widths of less than 10 μsec with a few microseconds' dead time after each recorded pulse. A special system designed by Diaz and Uhrig15 uses a digital computer as a special-purpose multiscaler to provide 3-μsec channels and an alternating input system to eliminate the dead time (i.e., one input channel collects the counts while the other stores the counts collected in the previous time increment).
If the counting rate is low (i.e., <1000 counts/sec), a multichannel analyzer can be used in the time-of-flight mode to provide channel widths down to 0.1 μsec, but each pulse is followed by the dead time of the analyzer (typically 10 μsec). Special equipment using several channels of buffer memory to temporarily store pulses until the end of the cycle provides very narrow channel widths (i.e., down to 0.01 μsec) without dead time.
A slightly modified technique used by Brunson et al.16 employs a multiscaler system in which the pulse from the first detector starts the clock and the pulse from the second system stops it, stores the pulse in the appropriate memory location, and resets the analyzer. Such a procedure actually measures the time between detected events but preferentially measures the shorter time intervals. Brunson et al. indicate that the correct probability of detecting a neutron in the nth channel is

    p_n = \frac{c_n}{c_0 + \sum_{i=n}^{N} c_i}    (3-23)

where c_i and c_n are the numbers of counts in the ith and nth channels, respectively, c₀ is the number of cycles during which no event is recorded, and N is the total number of channels being used in the analyzer. This procedure is discussed further in Sec. 3-7.
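A sketch of Eq. 3-23 with a hypothetical set of channel contents; the tail sum is accumulated from the top channel down so that each p_n uses the full Σ_{i=n}^{N} c_i.

```python
# Hypothetical start-stop record: c[n] = counts stored in channel n (n = 1..N),
# c0 = number of cycles during which no stop event was recorded.
c  = [0, 40, 31, 24, 18, 14, 10, 8, 6, 4]   # c[0] unused; channels 1..9
c0 = 845
N  = len(c) - 1

p = [0.0] * (N + 1)
tail = 0
for n in range(N, 0, -1):       # accumulate sum_{i=n}^{N} c_i going downward
    tail += c[n]
    p[n] = c[n] / (c0 + tail)   # Eq. 3-23
```

Without the c₀ term and the tail sum, the raw fractions c_n/Σc_i would overweight the short intervals; this correction is exactly the bias the equation removes.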
Mihalczo17 modified the technique used by Brunson et al. by inserting a variable time delay between detector 2 and the analyzer. Hence pulses in detector 2 that preceded the trigger pulse in detector 1 are collected. The probability p(t) of Eq. 3-20 now becomes

    p(t')\Delta = A\Delta + Be^{\alpha t'}\Delta    (3-24)

where t' = t − t_d < 0. The term t_d is the delay time, and the other terms are the same as in Eq. 3-9. This procedure, which is analogous to the correlation of the pulse sequences, yields two measurements of α from a single run.
The results of experimental Rossi-alpha measurements are fitted to Eq. 3-9 by using a least-squares technique, and the parameters A, B, and α are evaluated. Then the equations

    A = F\epsilon    (3-25)

    B = \frac{\epsilon D_\nu k_p^2}{2(1 - k_p)l} = \frac{\epsilon D_\nu k_p^2}{2\alpha l^2}    (3-26)

can be used to obtain any two of the five quantities F, ε, D_ν, k_p, or l if the other three are known.
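The fit is linear in A and B once α is fixed, so one simple approach is to scan trial values of α and solve the 2 × 2 normal equations at each one. The sketch below recovers hypothetical parameters from noise-free synthetic channel data; real data would carry counting statistics.

```python
import math

A0, B0, alpha0 = 50.0, 400.0, 2.0e4          # "true" values (hypothetical, per s)
ts = [i * 5e-6 for i in range(1, 101)]       # channel center times (s)
ys = [A0 + B0 * math.exp(-alpha0 * t) for t in ts]

def fit_linear(alpha):
    """Best A, B for this trial alpha, plus the summed squared residual."""
    e = [math.exp(-alpha * t) for t in ts]
    n, Se, See = len(ts), sum(e), sum(x * x for x in e)
    Sy, Sey = sum(ys), sum(x * y for x, y in zip(e, ys))
    det = n * See - Se * Se
    A = (See * Sy - Se * Sey) / det
    B = (n * Sey - Se * Sy) / det
    r = sum((A + B * x - y) ** 2 for x, y in zip(e, ys))
    return A, B, r

# Scan a grid of alphas and keep the one with the smallest residual.
best = min((fit_linear(a) + (a,) for a in range(10000, 30001, 100)),
           key=lambda t: t[2])
A_fit, B_fit, _, alpha_fit = best
```

In practice the grid scan is refined (or replaced by a Gauss-Newton step) around the minimum, and the channels near t = 0 are weighted for dead-time effects.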
Karam18 has pointed out that in many cases, particularly when a reflector is present, the experimental data support an equation of the form

    p(t)\Delta = A\Delta + Be^{-\alpha t}\Delta + B'e^{-\alpha' t}\Delta    (3-27)

where α' > α, except for the case of fast reactors with moderating reflectors. This matter is discussed extensively by Suwalski,19 but no completely satisfactory explanation has been put forth. Some spatial effects have also been observed, although they have not been studied extensively.
The relation between α and ρ expressed in Eq. 3-1, the definition of α, does not hold for fast reflected assemblies. Cohn20 has suggested that these difficulties are related to the physical meaning of α and α'. Clearly, the lumped-parameter model is not adequate for reflected systems, particularly when the reflector differs significantly in composition from the core.
3-4 VARIANCE-TO-MEAN (FEYNMAN) METHOD
3-4.1 Theoretical Considerations. Another statistical method closely related to the Rossi-alpha method is the Feynman technique2 of relating the ratio of the variance to the mean of the number of counts collected in a fixed time interval. If we repeatedly measure the number of counts occurring in a given time interval in a nuclear system, we can relate the parameters of the nuclear system to the variance-to-mean ratio (s²/c̄) of the number of counts; i.e.,

    \frac{s^2}{\bar{c}} = \frac{\overline{c^2} - \bar{c}^2}{\bar{c}}    (3-28)

where c̄ represents the average number of counts in the interval T.
number of pairs of counts expected in this interval is given by
c! c(c 1)
(329)
(c 2)! 2! 2
since the number of combinations of a set of c events taken two at a time
is {c!/[2!(c 2)!]}. Hence the average or expected number of pairs of
counts in the interval T is
C(c 1) (c(c 1))
=2 2 1,= f0 p(tl,t2) dtl dt2 (330)
2 2 hOJd=0
where p(tl,t2) is the total probability of a pair of counts in dti and dt2.
Using the differential form of Eq. 317 for p(ti,t2) gives
C(C 1) fT t2 dti FE dt2 + a dt2
2 J JFo d d2 2(1 k,)l1
F2E2T2 FE2Dk T ( 1 e
S + ( (331)
2 2(1 k,)2 aT
Since

    \bar{c} = F\epsilon T    (3-32)

we can rearrange Eq. 3-31 to obtain

    \frac{s^2}{\bar{c}} = 1 + \frac{\epsilon D_\nu k_p^2}{(1 - k_p)^2} \left( 1 - \frac{1 - e^{-\alpha T}}{\alpha T} \right) = 1 + Y    (3-33)

where

    Y = \frac{\epsilon D_\nu}{\rho_p^2} \left( 1 - \frac{1 - e^{-\alpha T}}{\alpha T} \right)    (3-34)

and ρ_p is the "prompt reactivity"* defined by

    \rho_p = \frac{k_p - 1}{k_p}    (3-35)

Equation 3-33 can be put in the form

    \frac{s^2}{\bar{c}} - \frac{s_P^2}{\bar{c}} = Y    (3-36)

where s_P² is the variance of the Poisson distribution. Hence Y can be interpreted as the difference between the relative (or reduced) variances s²/c̄ of the chain-related variable and a Poisson random variable. Since the quantity Y is equal to zero for random Poisson fluctuations, it is a measure of the additional fluctuations (in excess of random) that exist when chain-related events occur. This technique was originally used by Feynman et al.2 to obtain the dispersion in the number of neutrons per 235U thermal fission by counting Y for T >> 1/α so that the term in parentheses in Eq. 3-33 approaches unity. If the counter efficiency and the prompt multiplication factor are known, D_ν can be determined.

*The prompt reactivity ρ_p is a grouping of terms in a form analogous to the definition of reactivity ρ. The two "reactivities" are related by ρ_p = (ρ − β)/(1 − β).
In many reactor applications it is a good approximation to ignore the delayed neutrons because they are virtually constant over the time intervals used in the experiments. However, for thermal and some intermediate systems, it is necessary to include the influence of the delayed neutrons. Bennett21 has derived an expression including the effect of delayed neutrons. When delayed neutrons are included, Eq. 3-33 becomes

    \frac{s^2}{\bar{c}} = 1 + \epsilon D_\nu \sum_{i=1}^{7} A_i \left( 1 - \frac{1 - e^{-\alpha_i T}}{\alpha_i T} \right)    (3-37)

where the A_i and α_i are defined in terms of the zero-power transfer function H₀(ω),

    H_0(\omega) = \frac{1}{j\omega\left(l + \sum_{k=1}^{6} \frac{\beta_k}{\lambda_k + j\omega}\right)} = \frac{1}{l}\, \frac{\prod_{k=1}^{6}(\lambda_k + j\omega)}{\prod_{i=1}^{7}(\alpha_i + j\omega)}    (3-38)

Bennett21 gave the values of A_i and α_i for |ρ| < β/10 and l < 5 × 10⁻⁴ sec; these are the values given in Table 3-2 for critical or slightly subcritical systems. The delayed neutrons also have another undesirable effect. As pointed out by Pal,8 the successive measured time intervals are correlated, and Eq. 3-33 has to be corrected also for this correlation. Pal suggested a waiting time θ between the successive measured time intervals to reduce this correlation but did not give any formula for the correction term. Babala6 indicated that the effect of this correlation becomes small as the number of observations increases. Pacilio, however, has indicated that he could not find experimental evidence of this correlation.
3-4.2 Experimental Procedures. The experimental procedure for the variance-to-mean technique is fairly simple: one measures the number of counts in a large number of time intervals of length T and calculates the variance. The procedure is repeated for other time intervals T of different lengths. From the plot of the reduced variance vs. T, one can determine α from a least-squares fit of the data to Eq. 3-33. A gated scaler, which is controlled by a precision timer, is usually used to count the events detected in the interval T, and the output is printed or punched on tape or cards. The output operation actually represents an interruption of the experiment and introduces a dead time between consecutive observations. The error due to dead time is minimized by the use of a modern multichannel analyzer as a multiscaler, where the dead time can be as short as 10 to 20 μsec and as many as 1000 to 4000 channels may be available. Although special equipment could be built to allow the collection and storage of data simultaneously and thereby eliminate the dead time, the alternate procedure described subsequently is more commonly used today.
In addition to the dead-time problem, the preceding procedure requires the collection of a large amount of data. In an alternate procedure first suggested by Stegemann,23 a multichannel analyzer is used in which the detector counts advance the channel address. At the end of an interval T, a single count is added to the analyzer memory at the final address, and the channel address is reset simultaneously. (For instance, if there are 341 events detected in a time interval T, a single count is inserted into memory position 342. The final address is always one greater than the number of counts since the memory address is reset to an address of one.) This procedure then gives the discrete probability function, and we can modify Eqs. 2-38 and 2-39 to calculate the mean and mean-square values and therefore the variance and variance-to-mean ratio:

    \bar{c} = \frac{\sum_{i=1}^{M} (i - 1)N_i}{\sum_{i=1}^{M} N_i}    (3-39)

    \overline{c^2} = \frac{\sum_{i=1}^{M} (i - 1)^2 N_i}{\sum_{i=1}^{M} N_i}    (3-40)

where M is the number of channels (memory positions) in the analyzer and N_i is the number of counts stored in the ith channel. Care must be taken to see that the number of counts in a time interval does not exceed the number of channels available in the analyzer or that some special arrangement, such as an auxiliary printout system, is used when this occurs.
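The reduction of such an analyzer record to c̄ and the variance-to-mean ratio (Eqs. 3-39 and 3-40) is only a few lines. The channel contents below are hypothetical; channel i holds the number of intervals in which i − 1 counts were detected.

```python
# Hypothetical probability-analyzer record: N[i-1] = intervals stored in
# channel i, i.e., intervals containing (i - 1) detected counts.
N = [260, 350, 320, 240, 170, 110, 70, 45, 35]

total = sum(N)
cbar  = sum(i * n for i, n in enumerate(N)) / total        # Eq. 3-39 (i = channel - 1)
c2bar = sum(i * i * n for i, n in enumerate(N)) / total    # Eq. 3-40
v2m   = (c2bar - cbar**2) / cbar                           # variance-to-mean ratio
```

A ratio above one signals the excess, chain-related fluctuations; Y of Eq. 3-36 is simply v2m − 1.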
Both procedures are subject to the limitations associated with the stationarity of the system being studied. Hence it is common procedure to record a sufficiently long record of the output of a detector on magnetic tape and to process this time record repeatedly until the necessary information is obtained (see Albrecht24 and Johnson25). This recording also allows the results of the different methods to be compared with each other. An alternate procedure, used by Turkcan and Dragt,26 is to use a very short basic time interval so that the successive samples can be added to form longer time intervals that are multiples of the basic interval.
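The re-summing idea is easy to sketch: counts taken with a short basic interval T₀ are added in groups of m to synthesize gates of width T = mT₀, and the variance-to-mean ratio is computed at each width. The record below is a short hypothetical one; a real measurement uses on the order of 10⁵ intervals.

```python
# Hypothetical counts per basic interval T0.
base = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3]

def var_to_mean(counts):
    n = len(counts)
    m = sum(counts) / n
    v = sum((c - m) ** 2 for c in counts) / (n - 1)   # sample variance
    return v / m

for m in (1, 2, 4, 8):
    gates = [sum(base[i:i + m]) for i in range(0, len(base) - m + 1, m)]
    print(m, var_to_mean(gates))      # Y(T) = this ratio minus one, Eq. 3-36
```

Fitting the resulting curve of ratio vs. T to Eq. 3-33 yields α without re-running the experiment at each gate width.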
3-4.3 Parameter Measurements. It is apparent from Eqs. 3-33 and 3-37 that there are several parameters that can be evaluated by variance-to-mean measurements (e.g., the prompt decay constant α, the dispersion of the number of neutrons emitted per fission, the reactivity of a subcritical system, and the power level of a critical system). Obviously, not all of these can be evaluated independently. Furthermore, the type of system (fast, intermediate, or thermal) being studied also determines which parameters can be evaluated. Pacilio27 has expressed the limitations on the use of Eq. 3-33 in terms of α₂T; i.e., it can be used until the inequality

    \alpha_2 T \ll 1    (3-41)

is no longer valid. Physically, this means that the interval T is sufficiently short that delayed-neutron effects are not significant, i.e., T < 50 msec for critical or near-critical systems. However, the effects of delayed neutrons become less important as the reactor becomes more subcritical.
Pacilio28 points out that the number of intervals counted, N, influences the precision of the measurements, even though it does not appear in Eq. 3-33 or 3-37. He also derived the relation for the relative standard deviation, where successive samples are regarded as uncorrelated, to be

    \frac{\sigma_Y}{Y} = \left[ \frac{4}{N}\left(1 + \frac{1}{Y}\right) + \frac{2}{N}\left(1 + \frac{1}{Y}\right)^2 \right]^{1/2}    (3-42)

where Y is defined by Eq. 3-34. He carried out a parametric study of Eqs. 3-37 and 3-42 and concluded that:

1. A large number of short test intervals is preferable to a small number of long intervals.
2. The dependence of Y on α occurs for αT < 1 but then vanishes as T increases.
3. The requirements for a Feynman variance-to-mean measurement are (a) very low power, (b) high detector efficiency (10⁻³ to 10⁻⁴ counts per fission for uranium systems), and (c) a large number of short measurements.
If we restrict T to the range

    \frac{1}{\alpha} \ll T \ll \frac{1}{\alpha_2}    (3-43)

Eq. 3-33 becomes

    \frac{s^2}{\bar{c}} = 1 + \frac{\epsilon D_\nu}{\rho_p^2} = 1 + Y    (for subcritical systems)    (3-44)

    \frac{s^2}{\bar{c}} = 1 + \frac{\epsilon D_\nu (1 - \beta)^2}{\beta^2} = 1 + Y_{crit}    (for critical systems)    (3-45)

The conditions of Eq. 3-43 cannot be met in graphite or heavy-water systems. Even so, these expressions have been used by Feynman et al.2 and Kurusyna29 to measure D_ν, by McCulloch30 to measure β for a plutonium system, and by Lindeman and Ruby31 to measure subcriticality. The subcriticality measurements are based on the relation

    \frac{Y_{crit}}{Y} = \frac{\epsilon D_\nu (1 - \beta)^2/\beta^2}{\epsilon D_\nu/\rho_p^2} = \left[ \frac{\rho_p(1 - \beta)}{\beta} \right]^2 = [1 - \rho(\$)]^2    (3-46)
This method does not require that the generation time remain constant for changes of reactivity, but it does require that the detector efficiency remain constant. The reported results have been in good agreement with pulsed-neutron experiments down to $3.5 subcritical.32 The efficiency ε can be calculated from Eq. 3-28 if the reactivity has been determined from the measurements of α and α_c and if β is determined by a calculation. The absolute fission rate F in the system is given by

    F = \frac{A}{\epsilon}    (3-47)

where A is the average counting rate in the experiment.
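Equation 3-46 inverts directly for the reactivity in dollars, and Eq. 3-47 then gives the absolute fission rate. The measured values below are hypothetical, chosen to land near the $3.5 figure quoted above.

```python
import math

# Hypothetical saturated Y values (T much longer than 1/alpha):
Y_crit = 12.0      # measured at delayed critical
Y_sub  = 0.593     # measured at the unknown subcritical state

# Eq. 3-46: Y_crit / Y = [1 - rho($)]^2  ->  rho($) = 1 - sqrt(Y_crit / Y)
rho_dollars = 1.0 - math.sqrt(Y_crit / Y_sub)   # negative when subcritical

# Eq. 3-47: absolute fission rate from the (hypothetical) counting rate
# and efficiency.
A_rate, eps = 250.0, 1.0e-4
F = A_rate / eps
```

The positive square root is taken because ρ_p(1 − β)/β is negative for a subcritical system, so 1 − ρ($) > 1 there.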
3-5 BENNETT VARIANCE METHOD

As delayed criticality is approached, the reduced variance calculated from Eq. 3-36 diverges (α₇ approaches zero, since α₇ = ρ/11.6 near delayed critical). To circumvent this difficulty, Bennett21 suggested an alternate method, which does not diverge at delayed critical, namely, measurement of the second moment of differences of counts in subsequent time intervals (differential method). From the point of view of neutron statistics, the reactor then behaves as a subcritical system. Bennett has derived the relation

    \frac{\langle (c_{k+1} - c_k)^2 \rangle}{2\langle c_k \rangle} = 1 + \epsilon D_\nu \sum_{i=1}^{7} A_i \left[ 1 - \frac{3 - 4e^{-\alpha_i T} + e^{-2\alpha_i T}}{2\alpha_i T} \right]    (3-48)
where c_k is the number of counts in the kth time increment of length T and the other symbols have their previous meanings. The ensemble averaging is carried out over N time increments. If the condition α₂T << 1 is satisfied, Eq. 3-48 becomes

    \frac{\langle (c_{k+1} - c_k)^2 \rangle}{2\langle c_k \rangle} = 1 + \frac{\epsilon D_\nu}{\rho_p^2} \left[ 1 - \frac{3 - 4e^{-\alpha T} + e^{-2\alpha T}}{2\alpha T} \right] = 1 + W    (3-49)

where

    W = \frac{\epsilon D_\nu}{\rho_p^2} \left[ 1 - \frac{3 - 4e^{-\alpha T} + e^{-2\alpha T}}{2\alpha T} \right]    (3-50)
In a way analogous to the Y of the Feynman variance-to-mean experiment, W represents the increase in fluctuations, due to the correlated events of the neutron chains, over the fluctuations that would have occurred had the events been purely random. However, W is smaller than Y, indicating that the correlation between the differences in the numbers of counts in successive intervals is less than the correlation between the numbers of counts in successive intervals. As T becomes short, both W and Y approach zero. Similarly, as T becomes long, W and Y approach the asymptotic value of εD_ν/ρ_p² in a similar but not identical manner.
In experiments the gated circuits used for the Feynman experiments can also be used, but the procedure for analyzing the data is different. Dead time between runs, particularly for the very short time intervals, is as important as for the Feynman method. The Stegemann probability analyzer used for the Feynman experiments cannot be used for this technique. Therefore a large amount of data is necessary, and the experimental error is larger than in variance measurements.
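The relative behavior of W and Y comes entirely from their bracketed gate-width factors, which can be compared directly. The algebraic identity Y − W ∝ (1 − e^{−αT})²/(2αT) ≥ 0 shows W ≤ Y, and the sketch confirms it at a few values of αT.

```python
import math

def y_factor(x):      # bracketed factor of Eq. 3-34, with x = alpha * T
    return 1.0 - (1.0 - math.exp(-x)) / x

def w_factor(x):      # bracketed factor of Eq. 3-50
    return 1.0 - (3.0 - 4.0 * math.exp(-x) + math.exp(-2.0 * x)) / (2.0 * x)

samples = [(x, y_factor(x), w_factor(x)) for x in (0.1, 1.0, 10.0, 100.0)]
```

Both factors rise from zero toward one, so W and Y share the asymptote εD_ν/ρ_p² while W stays below Y at every finite gate width.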
3-6 COUNT PROBABILITY METHODS

There are several methods of measuring parameters of nuclear reactor systems which are based on the relation of p_i(Δ), the probability of counting i pulses in a time interval Δ. When chain-related counts are present, p_i(Δ) is a function of c̄, the average number of counts in the interval Δ, and the correlation term Y, the measure of additional fluctuations in excess of random which occur when chain-related events occur. For uncorrelated random events, p_i(Δ) is a function of c̄ only.

Experimental measurements involve measuring the frequencies f_i(Δ), or frequency distribution, and comparing them with p_i(Δ), the probability distribution. The probabilities thus obtained are then used to evaluate the variance-to-mean ratio, from which the parameters can be evaluated by using the Feynman method (i.e., by using Eq. 3-33). Alternately, the probability p_i(Δ) can be expressed in terms of the parameters of the nuclear system.
3-6.1 Zero Probability (Mogilner) Method. The use of the zero probability method was first suggested by Mogilner and Zolotukhin3 in 1961. The average fraction of empty channels (i.e., zero counts during interval Δ) in an analyzer containing M channels is measured for a series of tests in which Δ is varied over a wide range.

Mogilner and Zolotukhin use probability generating functions to calculate the probability distribution for a discrete random variable as defined by Eq. 2-47 because of the ease of computing probabilities and moments. However, their original derivation was based on the assumed negative binomial distribution F(Δ,z) of neutron counts, where

    F(\Delta,z) = \sum_{i=0}^{\infty} e^{-iz} p_i(\Delta) = [1 + (1 - e^{-z})Y]^{-\bar{c}/Y}    (3-51)

z is an auxiliary variable, c̄ is the average number of counts in time interval Δ, and Y is the correlation parameter defined by Eq. 3-34. If the number of counts i = 0, the auxiliary variable z approaches ∞, and the zero probability is given as

    \ln p_0(\Delta) = \ln F(\Delta,\infty) = -\frac{\bar{c}}{Y} \ln(1 + Y)    (3-52)

or

    p_0(\Delta) = (1 + Y)^{-\bar{c}/Y}    (3-53)

From experimental values of p₀(Δ), we can obtain Y and hence α.
Pal8 has given a theoretical basis for the zero probability using a more exact theory and gives the expression

    \ln p_0(\Delta) = -\frac{2A\Delta}{\gamma + 1} - \frac{2F\rho_p^2}{\alpha D_\nu} \ln \left[ \frac{(\gamma + 1)^2 - (\gamma - 1)^2 e^{-\gamma\alpha\Delta}}{4\gamma} \right]    (3-54)

where

    \gamma = \left( 1 + \frac{2\epsilon D_\nu}{\rho_p^2} \right)^{1/2}    (3-55)

and all other terms have been defined previously in this chapter.

Pal33 indicates that the first two terms of Eq. 3-54 expanded in a power series in εD_ν/ρ_p² are the same as the corresponding terms for ln p₀(Δ) given in Eq. 3-52. Such a power expansion is possible only for εD_ν/ρ_p² < 1, which means that the variance of the counts is hardly different from that of a Poisson distribution. However, α is much more easily determined for εD_ν/ρ_p² >> 1; i.e., the variance of the counts is quite different from that for a Poisson distribution. Pal has recommended that the more exact expression in Eq. 3-54 be used, since his work indicates the approximations used by Mogilner are valid only for Δ < 3 msec. Babala6 has derived Eq. 3-54 using a three-interval probability generating function and concurs with the recommendation of Pal. However, Pacilio34 indicates that the experimental agreement between the results using Eqs. 3-53 and 3-54 is consistent over a range that is wider than expected.
The experimental equipment used for this type of experiment is the probability analyzer described in Sec. 3-4, which gives the discrete probabilities p_i(Δ) as an output. Only p₀(Δ) and c̄ are needed for this experiment, where p₀(Δ) is given by

    p_0(\Delta) = \frac{N_0}{N}    (3-56)

where N₀ is the number of counts in the first channel (zero counts during Δ) and N is the total number of counts collected in all channels. The average number of counts c̄ can be obtained from a monitoring scaler. A least-squares fitting of p₀(Δ) vs. Δ will give α and εD_ν/ρ_p². Pacilio34 has used this technique to measure absolute power level, and Lindeman and Ruby31 have used it to measure subcritical reactivity.

The zero probability method is usually applied to thermal reactors at very low power, since there must be a substantial number of intervals with no counts if the method is to be useful.
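Given a measured empty-channel fraction and the mean from a monitor scaler, Eq. 3-53 can be inverted for Y by bisection, since p₀ rises monotonically from e^{−c̄} (the Poisson value) toward 1 as Y grows. The measured values below are hypothetical.

```python
# Hypothetical measurement for one gate width: mean counts and N0/N.
cbar, p0_meas = 1.5, 0.28            # note p0 > exp(-cbar) = 0.223 (Poisson)

def p0(Y):                           # Eq. 3-53
    return (1.0 + Y) ** (-cbar / Y)

lo, hi = 1e-9, 1.0e3                 # p0(Y) is increasing on this range
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if p0(mid) < p0_meas:
        lo = mid
    else:
        hi = mid
Y_fit = 0.5 * (lo + hi)
```

Repeating this for a series of Δ values gives Y(Δ), from which α and εD_ν/ρ_p² follow by the least-squares fit described above.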
3-6.2 Polya-Model Method. The Polya-model method is an extension of the Mogilner method in which all values of p_i(Δ), as approximated by the probability profile of Polya,35 are compared with the frequency distribution of counts, i.e., the ensemble of fractions f_i of the channels with i counts. The distribution of the Polya model is actually the negative binomial distribution. The expression for p_i(Δ) has been derived by successive differentiation of a probability distribution generating function. The result is a recursive relation:

    p_i(\Delta) = \frac{\bar{c} + (i - 1)Y}{i(1 + Y)}\, p_{i-1}(\Delta)    (3-57)

where the first term of the series,

    p_0(\Delta) = (1 + Y)^{-\bar{c}/Y}    (3-58)

is the zero probability of the Mogilner method. The recursive relation is an approximation of a more rigorous but complicated analytical expression derived by Pal8 and Mogilner and Zolotukhin.3
The experimental procedure is to use a probability analyzer such as that described in Sec. 3-4 to determine the frequency distribution of counts for various values of time interval Δ. The problem of dead time for short time intervals is substantially the same as for the Feynman method. A least-squares fitting procedure is then used to obtain optimum values of c̄ and Y. The approach recommended by Mogilner and Zolotukhin3 involves the minimization of the quantity χ², where

    \chi^2 = \sum_i \frac{(c_i - c_{p_i})^2}{c_{p_i}}    (3-59)

where c_i is the actual number of counts collected in the ith channel and c_{p_i} is the number expected on the basis of the theoretical probability distribution relations of Eqs. 3-57 and 3-58. Pacilio34 suggests an alternate method in which the quantity to be minimized is

    \chi_P^2 = \sum_i w_i (\hat{P}_i - b_i)^2    (3-60)

where w_i is the weighting function, usually taken to be unity, and b_i and P̂_i are defined by

    b_i = \frac{p_i}{p_{i-1}} = \frac{\bar{c} + (i - 1)Y}{i(1 + Y)}    (3-61)

    \hat{P}_i = \frac{c_i}{c_{i-1}}    (3-62)
If the value of c̄ is obtained from a monitor scaler, the variance-to-mean ratio can be shown to be

    \frac{s^2}{\bar{c}} = 1 + Y = \frac{\displaystyle\sum_{i=1}^{M} \left( \frac{i - 1 - \bar{c}}{i} \right)^2}{\displaystyle\sum_{i=1}^{M} \frac{i - 1 - \bar{c}}{i} \left( \frac{i - 1}{i} - \hat{P}_i \right)}    (3-63)

where M is the number of values of i used in the summations. This technique of processing data has shown good agreement with the Feynman variance method.
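The Polya recursion is numerically stable and self-checking: starting from the zero term of Eq. 3-58, the probabilities generated by Eq. 3-57 must sum to one and reproduce the mean c̄. The values of c̄ and Y below are hypothetical.

```python
# Hypothetical gate parameters.
cbar, Y = 2.0, 0.5

p = [(1.0 + Y) ** (-cbar / Y)]      # p0, Eq. 3-58
for i in range(1, 200):             # Eq. 3-57, truncated where p_i is negligible
    p.append(p[-1] * (cbar + (i - 1) * Y) / (i * (1.0 + Y)))

norm = sum(p)
mean = sum(i * pi for i, pi in enumerate(p))

# Ratios b_i of Eq. 3-61, used in the Pacilio minimization of Eq. 3-60.
b = [(cbar + (i - 1) * Y) / (i * (1.0 + Y)) for i in range(1, len(p))]
```

In a fit, these theoretical ratios b_i are matched against the measured channel ratios P̂_i = c_i/c_{i−1}.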
3-7 INTERVAL DISTRIBUTION (BABALA) METHOD

Recent work by Babala5 using the distribution of the lengths of intervals between counts seems to offer a number of advantages over some of the other counting techniques. In the case of a sequence of counts with Poisson (uncorrelated) statistics, the interval distribution is given by the probability of no count in a time interval t, multiplied by the probability of a count in an infinitely small time interval dt immediately following:

    p(t)\, dt = p_0(t)\, p_c(dt) = e^{-F\epsilon t} F\epsilon\, dt    (3-64)

Since in this case the counts are independent of each other, Eq. 3-64 represents the probability distribution of time intervals between counts. More rigorously, one should write

    p(t)\, dt = \frac{p_c(dt')\, p_0(t)\, p_c(dt)}{p_c(dt')}    (3-65)

where dt' is an infinitely small time interval immediately preceding the interval t. The denominator in Eq. 3-65 is required to satisfy the normalization condition

    \int_0^\infty p(t)\, dt = 1
Equation 3-64 gives the probability that, after a time origin t = 0 chosen at random, the first count arrives in the time interval dt at t, and Eq. 3-65 gives the probability that, after a count at t = 0, the next count comes in dt at t. For correlated sequences of counts, these probabilities are different from each other. Therefore we shall adopt the nomenclature of Babala and refer to the expressions of Eqs. 3-64 and 3-65 as the random-origin (RO) interval distribution and the count-to-count (CC) interval distribution, respectively, and designate the corresponding probabilities as p_RO(t) and p_CC(t).
3-7.1 Count-to-Count Interval Distribution. Babala5 has derived an expression for the count-to-count interval distribution using a three-interval probability generating function and letting the first and third intervals go to zero. The result is

    p_{CC}(t)\, dt = C_1(t)\, dt + C_2(t)\, e^{-\gamma\alpha t}\, dt    (3-66)

where

    C_1(t) = 4F\epsilon\, p_0(t)\, \frac{[(\gamma + 1) + (\gamma - 1)e^{-\gamma\alpha t}]^2}{[(\gamma + 1)^2 - (\gamma - 1)^2 e^{-\gamma\alpha t}]^2}    (3-67)

    C_2(t) = \frac{8F\epsilon\, \gamma^2\, p_0(t)}{\tilde{S}\,[(\gamma + 1)^2 - (\gamma - 1)^2 e^{-\gamma\alpha t}]^2}    (3-68)

where S̃, the equivalent neutron source strength, is given by

    \tilde{S} = \frac{S\Lambda}{\bar{\nu} D_\nu} \quad or \quad \tilde{S} = \frac{F\rho_p^2}{\alpha D_\nu}    (3-69)

The parameter γ is given by Eq. 3-55, and p₀(t), the probability of no counts in the interval from 0 to t, is given by Eq. 3-54 with Δ = t. S is the neutron source strength when the system is subcritical, and Λ is the neutron generation time. The two forms of Eq. 3-69 are for subcritical and critical nuclear systems, respectively.
Equation 366 has certain features that are of interest. If we let
e < p2, i.e., y 1, and increase the source S so that C2(t)  0, the result
is
pcc(t) dt = FeeFt dt (370)
which is identical to Eq. 364 for a Poissonian distribution of counts;
i.e., the process is uncorrelated.
The probability p_cc(t) is dependent on both the power (or source)
level and the detector efficiency but does offer advantages over other
statistical techniques. At high power levels, where the Rossi-alpha
technique is useless for parameter measurements, C₂ → 0, and thus we have

    p_cc(t) dt = C₁(t) dt    (3-71)

which can be used for parameter measurements.
If the efficiency is very low (i.e., εD_ν << p² and γ → 1), all such
efficiency-limited techniques are useless. However, under the condition
2εD_ν/p² << 1, Eq. 3-66 becomes

    p_cc(t) = e^{−Fεt} [ Fε + (εD_ν/(2Λp)) e^{−αt} ]
            = e^{−Fεt} [ A + B e^{−αt} ]    (3-72)

which, except for the factor e^{−Fεt}, is comparable to the Rossi-alpha
expression. For fast reactors, where efficiencies are low, the counting rate is
low, and the time intervals are short, the exponential factor e^{−Fεt} is
approximately unity, and Eq. 3-72 becomes

    p_cc(t) dt = (A + B e^{−αt}) dt    (3-73)

which is identical to Eq. 3-20 for the Rossi-alpha experiment. This is
the explanation for the success of the Rossi-alpha technique used by
Brunson et al.¹⁶ (see Sec. 3-3) when they were actually measuring the
count-to-count times.
The experimental procedure has been described in Sec. 3-3. The first
pulse triggers the analyzer in which the channels are advanced by a
precision timer. The second count stops the analyzer, a count is inserted
into the memory position corresponding to the channel where the analyzer
was stopped, and the system is reset to wait for the next count. Such an
arrangement records only half the data, i.e., the time interval between
every other pulse. If the analyzer is automatically triggered by the
stop-and-reset action, all the data can be recorded. However, this
procedure does shorten the measured time interval by an amount equal
to the dead time (time required to stop the analyzer, store the count, and
72 RANDOM NOISE TECHNIQUES
reset and start the analyzer). With a modern analyzer this dead time
can be made quite short; however, an appropriate correction should be
made routinely.
3-7.2 Random-Origin Interval Distribution Method. Closely related
to the count-to-count interval distribution method is the random-origin
interval distribution method. The primary difference is that the origin
of the interval is randomly chosen by a process that is uncorrelated with
the nuclear phenomenon being studied. Babala⁵ has derived the expression
for the probability distribution for random-origin intervals to be

    p_RO(t) dt = 2Fε p₀(t) [ ((γ + 1) + (γ − 1) e^{−γαt}) / ((γ + 1)² − (γ − 1)² e^{−γαt}) ] dt    (3-74)

where γ and p₀(t) are defined by Eqs. 3-55 and 3-54, respectively. As in
the case of the count-to-count interval distribution, the process becomes
Poissonian and

    p_RO(t) dt = Fε e^{−Fεt} dt    (3-75)

when γ → 1 because of decreased efficiency ε or a very subcritical system.
Pacilio³² has pointed out that ∫₀ᵗ p_RO(t′) dt′ represents the probability
that, after a time t = 0 chosen at random, the first pulse will arrive
between 0 and t. Since p₀(t) is the probability that the same event occurs
between t and infinity, we obtain

    p₀(t) + ∫₀ᵗ p_RO(t′) dt′ = 1    (3-76)

from which

    p_RO(t) = −∂p₀(t)/∂t    (3-77)

We can also consider p_RO(t) dt to be the probability that a pulse arrives
between the random origin and dt and that an empty interval follows. This
probability can be expressed as the product of the probability of one
count between 0 and dt and the probability of the next count occurring
at any time greater than t. Hence

    p_RO(t) dt = (Fε dt) [ 1 − ∫₀ᵗ p_cc(t′) dt′ ]    (3-78)

where the integral is the probability that, after a count at time t = 0,
the next count arrives between 0 and t. Hence

    p_cc(t) = −(1/Fε) ∂p_RO(t)/∂t = (1/Fε) ∂²p₀(t)/∂t²    (3-79)
The experimental procedure is substantially the same as that used
for the count-to-count interval distribution procedure except that the
analyzer is triggered by a randomly occurring pulse after the analyzer
is reset.
This procedure is effective for thermal-reactor systems but is efficiency
limited and cannot effectively be used for fast-reactor systems. Austin
et al.³⁶ have used this procedure, which they called the "waiting-time
alpha method," and obtained good agreement with Rossi-alpha and
pulsed-neutron measurements.
3-8 DEAD-TIME (SRINIVASAN) METHOD
An alternate method of measuring α has been recently suggested by
Srinivasan.37 It is based on the fact that by introducing an artificial
variable dead time into the measuring instrument, one influences the
correlation between counts. We shall discuss this influence for the case
of a paralyzable instrument, defined in the following way by Srinivasan:
Suppose a sequence of input pulses (true counts) from a neutron detector
is fed into an instrument that yields a sequence of output pulses (output
counts). If the instrument transmits a true count to the output, it is
unable to provide a second output count unless there is a time interval
of at least d (dead time) between two successive true counts. Thus this
instrument registers a number of intervals longer than d between true
counts.
For uncorrelated counts the relation between the count rate C_d at the
output of a paralyzable instrument and the true count rate C is given by

    C_d = C e^{−Cd}    (3-80)

where the exponential function is simply the probability that an interval
between two true counts is longer than d. The variance of the output
counts in a time Δ of such a system, for a process having a Poisson
distribution of input pulses, is given by Srinivasan to be

    σ_c² = c̄ − 2c̄ C_d d    [Δ − d > 0]    (3-81)

from which

    σ_c²/c̄ = 1 − 2C_d d    (3-82)

where

    c̄ = C_d Δ    (3-83)
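The defining relation Eq. 3-80 is easy to verify by simulation. The Python sketch below (the rate, dead time, run length, and seed are arbitrary illustrative choices, not values from the text) applies a paralyzable dead time d to a Poissonian pulse train and compares the output rate with C e^{−Cd}.

```python
import math
import random

def apply_paralyzable_dead_time(times, d):
    """Keep only pulses preceded by a gap of at least d; every true pulse,
    recorded or not, restarts the dead time (paralyzable behavior)."""
    out, prev = [], None
    for t in times:
        if prev is None or t - prev >= d:
            out.append(t)
        prev = t
    return out

rng = random.Random(2)
C, d, t_max = 100.0, 2.0e-3, 5000.0   # true rate (1/sec), dead time (sec), run length (sec)
t, times = 0.0, []
while True:
    t += rng.expovariate(C)
    if t > t_max:
        break
    times.append(t)

measured_rate = len(apply_paralyzable_dead_time(times, d)) / t_max
predicted_rate = C * math.exp(-C * d)   # Eq. 3-80
```

With these illustrative numbers the predicted output rate is about 81.9 counts/sec against a true rate of 100 counts/sec.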
For correlated counts, such as those which occur in a zero-power nuclear
reactor, the variance in counts for a paralyzable instrument is

    σ_c² = c̄ − 2c̄ C_d d + (2C_d B/α)(Δ − d) [ 1 − (1 − e^{−α(Δ−d)})/(α(Δ − d)) ] e^{−αd}    (3-84)

where for short dead times

    B ≈ (αεD_ν/2p²)(1 − C_d d)    (3-85)

Rearranging Eq. 3-84 gives

    σ_c²/c̄ = 1 − 2C_d d + (εD_ν/p²) p_c0(d) e^{−αd} [(Δ − d)/Δ] [ 1 − (1 − e^{−α(Δ−d)})/(α(Δ − d)) ]    (3-86)

where p_c0(d) is the probability that an interval between two counts is
greater than d and is given by

    p_c0(d) = 2p₀(d) [ ((γ + 1) + (γ − 1) e^{−γαd}) / ((γ + 1)² − (γ − 1)² e^{−γαd}) ]    (3-87)

where p₀(d) is given by Eq. 3-54 with Δ = d and all other terms are as
defined previously.
Equation 3-86 is perhaps too complicated for the purpose of practical
determination of α by varying the dead time d. It can be used, however,
for estimating the effect that the dead time of an instrument has on the
variance. It is readily seen, for example, that the dead-time effect can
be neglected if C_d d << 1.
A similar analysis of a nonparalyzable instrument appears to be
much more difficult and will not be attempted here.
3-9 CORRELATION ANALYSIS TECHNIQUES
The cross-correlation function φ_xy(τ), of which the autocorrelation
function φ_xx(τ) is a special case where x = y, of a stationary process has
been defined by Eqs. 2-144 and 2-147 to be

    φ_xy(τ) = E[x(t) y(t + τ)]
            = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x(t₁) y(t₂) p[x(t₁), y(t₂)] dx dy    (3-88)

where p[x(t₁), y(t₂)] is the joint probability function that event x occurs
at time t₁ and event y occurs at time t₂, and τ is defined by

    τ = t₂ − t₁    (3-89)
If we let x be the detection of a neutron by detector 1 and y be the detection
of a neutron by detector 2 (or by detector 1 for the special case where x = y),
then the correlation function is readily seen to be the probability of a pair
of counts occurring in Δ₁ at t₁ and in Δ₂ at t₂; i.e., they occur at an interval
τ apart. This is the same quantity studied in Sec. 3-3 in the discussion
of the Rossi-alpha. Hence

    φ_xy(τ) = p(t₁,t₂) = p_C(t₁,t₂) + p_R(t₁,t₂)    (3-90)

where p(t₁,t₂) is the probability of a count at t₁ followed by a count at t₂,
and the subscripts C and R refer to correlated and random events. If
one detector is used, then Eq. 3-90 becomes equal to Eq. 3-17, except
for the presence of a Dirac delta term at τ = 0.
    φ_xx(τ) = F²ε² + Fε² [D_ν k_p²/(2(1 − k_p)l)] e^{−ατ} + Fε δ(τ)
            = A² + ABe^{−ατ} + A δ(τ) = A(A + Be^{−ατ}) + A δ(τ)    (3-91)

where A and B have been defined by Eqs. 3-25 and 3-26. This is known
as the autocorrelation analysis technique. From a theoretical point of
view, it is substantially the same as the Rossi-alpha technique, but the
measurement technique is entirely different. Note that the random or
background term is dependent on the square of the fission rate F (or
power), whereas the amplitude of the exponential term is dependent only
on F. Hence such a technique is limited to very low fission rates. The
Dirac delta term does not occur in the Rossi-alpha measurements due to
the delay located in front of the first coincidence channel (see Fig. 3-2).
If two detectors with the same efficiency, ε, are used, the random counts
collected are independent and hence uncorrelated, since the neutrons are
detected by absorption. Equation 3-91 now becomes

    φ_xy(τ) = F²ε² + Fε² [D_ν k_p²/(2(1 − k_p)l)] e^{−ατ}
            = A² + ABe^{−ατ} = A(A + Be^{−ατ})    (3-92)
and is known as the cross-correlation analysis technique. The elimination
of the Dirac delta term in Eq. 3-92 is the principal difference when the
two-detector cross-correlation technique is used in the time domain. As
we will see later, this corresponds to the elimination of the constant
background term in the frequency domain and allows measurements to be
taken with relatively low-efficiency detectors. Since the preceding
derivation is based on a lumped-parameter model, the detectors are
usually located reasonably close to each other; the reactor must be small
enough that spatial effects are not significant.
The typical correlation experiment with pulses from a detector is
carried out by recording the pulses from one or two detectors and replaying
the record for each value of τ. Typically, the number of counts x and
y in small time increments Δ is taken as the counting rates over the time
interval Δ, and the data are processed according to the relation

    φ_xy(kΔ) = [1/(N − k)] Σ_{i=1}^{N−k} x[t + iΔ] y[t + (i + k)Δ]    (3-93)

where the time lag is an integral number of time increments Δ:

    τ = kΔ    (3-94)
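The estimator of Eq. 3-93 is straightforward to implement. The sketch below is a minimal Python version; the count sequences are invented for illustration and do not come from any measurement in the text.

```python
def corr_estimate(x, y, k):
    """Estimate phi_xy(k*Delta) as the average of x[i]*y[i+k] over the
    N - k available pairs (the discrete estimator of Eq. 3-93)."""
    n = len(x) - k
    return sum(x[i] * y[i + k] for i in range(n)) / n

# Hypothetical counts per time increment Delta from two detectors:
x = [3, 5, 4, 6, 2, 5, 4, 3]
y = [4, 4, 5, 5, 3, 4, 6, 2]
phi_0 = corr_estimate(x, y, 0)   # zero-lag value
phi_2 = corr_estimate(x, y, 2)   # value at lag tau = 2*Delta
```

Setting y = x gives the autocorrelation estimate used in the autocorrelation analysis technique.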
The calculations associated with Eq. 3-93 are time consuming and
usually require a digital computer. Often it is more convenient to use
the Rossi-alpha procedure than to carry out an autocorrelation measurement.
Sometimes the pulses are converted to an analog variable, or
ionization-chamber-type detectors are used to provide an analog variable
that can be correlated with analog-type correlators. The relation of
Eq. 3-92 is valid only for reactors small enough to be represented by a
lumped-parameter model. Spatial effects can distort the results if
they are not properly taken into account or are not recognized.
3-10 COVARIANCE MEASUREMENTS
The covariance is defined by Eq. 2-140 to be

    C_xy = E(xy) − E(x) E(y)    (3-95)

i.e., it is the difference between the expected product of the variables
and the product of the expected values. If this difference vanishes, the
two variables are not correlated; if it does not, it is a good measure of the
correlation between them. If the outputs of two neutron detectors are
sampled a large number of times for an interval Δ to give the ensemble
of counts {c₁(Δ), c₂(Δ)}, the covariance can be calculated by

    c₁₂(Δ) = ⟨(c₁ − ⟨c₁⟩)(c₂ − ⟨c₂⟩)⟩
           = ⟨c₁c₂⟩ − ⟨c₁⟩⟨c₂⟩    (3-96)
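As a numerical illustration of Eq. 3-96, the short Python sketch below computes the covariance of an ensemble of paired gate counts; the numbers are invented for illustration only.

```python
def covariance(c1, c2):
    """c12(Delta) = <c1 c2> - <c1><c2> over an ensemble of paired counts (Eq. 3-96)."""
    n = len(c1)
    m1 = sum(c1) / n
    m2 = sum(c2) / n
    m12 = sum(a * b for a, b in zip(c1, c2)) / n
    return m12 - m1 * m2

# Hypothetical ensemble {c1(Delta), c2(Delta)} from two detectors:
c1 = [10, 12, 9, 14, 11, 13, 8, 12]
c2 = [21, 25, 19, 27, 22, 26, 17, 24]
c12 = covariance(c1, c2)   # positive here, since the two records rise and fall together
```

A constant record in either channel gives a covariance of zero, reflecting the absence of correlation.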
Cohn³⁸ indicated that, if the prompt-neutron approximation

    α₂Δ << 1    (3-97)

is valid, we can modify the Feynman variance-to-mean expression (Eq.
3-33) to obtain the alternate (but equally valid) expressions

    c₁₂(Δ)/⟨c₁⟩ = (ε₂D_ν/p²) [ 1 − (1 − e^{−αΔ})/(αΔ) ]    (3-98)

and

    c₁₂(Δ)/⟨c₂⟩ = (ε₁D_ν/p²) [ 1 − (1 − e^{−αΔ})/(αΔ) ]    (3-99)

If the prompt-neutron approximation of Eq. 3-97 is not valid,
Eqs. 3-98 and 3-99 become

    c₁₂(Δ)/⟨c₁⟩ = 2ε₂D_ν Σ_{i=1}^{7} (A_i/α_i) H₀(α_i) [ 1 − (1 − e^{−α_iΔ})/(α_iΔ) ]    (3-100)

    c₁₂(Δ)/⟨c₂⟩ = 2ε₁D_ν Σ_{i=1}^{7} (A_i/α_i) H₀(α_i) [ 1 − (1 − e^{−α_iΔ})/(α_iΔ) ]    (3-101)

where A_i, H₀(α_i), and α_i are the same as in Eq. 3-36 and as given in Table
3-2. Note that the unity term in Eq. 3-33, which represented the random
background (actually, the Poissonian relative variance), has been eliminated
by the cross correlation involved in this process.
Table 3-2
Constants for Zero-Power Transfer Function, ²³⁵U-Fueled Reactor Near
Delayed Criticality*
β = 0.0064, Λ < 5 × 10⁻⁴ sec, |ρ| < 0.10

    i              1                   2      3      4       5       6        7
    α_i, sec⁻¹     (β − ρ)/Λ           2.89   1.02   0.195   0.068   0.0143   ρ/11.6
    A_i, sec⁻¹     (1 − β)/Λ           29     20     11.2    6.1     1.2      11.6
    H₀(α_i)        (1 − β)/[2(β − ρ)]  164    186    237     284     343      415

*From E. F. Bennett, The Rice Formulation of Reactor Noise, Nucl. Sci. Eng.,
8(1): 53 (1960).
The covariance technique is superior to the conventional Feynman
technique because it partially eliminates the bias effects of finite-time
measurements that are present in the latter technique (i.e., the Poisson
relative variance is assumed to be unity, whereas it may actually differ
somewhat from unity for a finite-time measurement).
3-11 ENDOGENOUS-PULSED-SOURCE TECHNIQUE*
In the implementation of the Rossi-alpha procedure for measurements
using a multichannel analyzer as a multiscaler, the first pulse starts the
analyzer and subsequent pulses are recorded in the appropriate channel.
No regard is given to whether the neutron density is increasing, decreasing,
or remaining constant. The endogenous-pulsed-source technique uses a
triggering pulse that occurs when the fluctuating neutron density reaches
a preselected level above the mean level. The spontaneous bursts to
levels significantly higher than the mean level may be considered to be
due to variations in the fission rate; the decay to a lower level is characterized
by the fundamental decay constant α. The improvement of this
technique over the conventional Rossi-alpha measurements using a
multiscaler is due to the preselection of measuring periods when the neutron
density is decaying. This provides an increased signal-to-background
ratio because only decay chains of significant amplitude are analyzed.
Such a technique has some of the features of a pulsed-neutron measurement
while retaining the simplicity, economy, and convenience of the
conventional Rossi-alpha measurements. The reduction in time required
over a conventional Rossi-alpha measurement is such that it is practical
to carry out endogenous-pulsed-source measurements on thermal reactors.
Similar advantages can be expected for fast-reactor systems.
3-11.1 Theoretical Considerations. The neutron density can be
described by the counts detected in the interval Δ:

    c(t) = c₀ e^{−αt} + c̄    (3-102)

where c̄ is the mean value of the background, given by Eq. 3-32 to be

    c̄ = FεΔ    (3-103)

for a critical reactor. For a subcritical system

    c̄ = SεΔ / [ν̄(1 − k)]    (3-104)

The amplitude of the spontaneous burst c₀ above the mean value c̄ is

    c₀ = (S/B) c̄    (3-105)
*This technique has sometimes been called the inherent-pulsed-source technique.
where S/B is the signal-to-background ratio. Pacilio³⁹ has pointed out
that this technique is equivalent to a pulsed-neutron technique with the
intensity (above the steady-state level) given by Eq. 3-105 and a repetition
rate given by

    R = (1/Δ) Σ_{i = j̄(S/B + 1)}^{∞} p_i    (3-106)

where p_i is the probability of counting i pulses in a time interval Δ when
j̄ is the mean number of counts per interval Δ. Pacilio³⁹ has tabulated
values of c₀ and R for various experimental conditions and calculated the
time necessary to collect a given number of burst decays in such measurements
on thermal-reactor systems. The result has been significant
improvement in statistical accuracy and decreased measuring time compared
with conventional Rossi-alpha procedures. Although this study
presumes an efficiency associated with an in-core detector for both types
of measurements, recent work by Pacilio²² indicates that such measurements
can be taken with the detectors located in the reflector.
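If the counts in an interval Δ are taken, as a rough approximation, to be Poisson distributed, the repetition rate of Eq. 3-106 can be evaluated as a Poisson tail sum. The Python sketch below does this; the Poisson assumption and all parameter values are illustrative choices, not figures from Pacilio's tabulation.

```python
import math

def poisson_pmf(i, mean):
    """Probability of exactly i counts for a Poisson distribution."""
    return math.exp(-mean) * mean**i / math.factorial(i)

def repetition_rate(delta, j_mean, s_over_b):
    """R = (1/Delta) * sum of p_i for i at or above the trigger level
    j*(S/B + 1), per Eq. 3-106, with p_i approximated as Poisson."""
    threshold = math.ceil(j_mean * (s_over_b + 1.0))
    tail = 1.0 - sum(poisson_pmf(i, j_mean) for i in range(threshold))
    return tail / delta

r_low = repetition_rate(delta=0.01, j_mean=20.0, s_over_b=0.5)   # trigger at 30 counts
r_high = repetition_rate(delta=0.01, j_mean=20.0, s_over_b=1.0)  # trigger at 40 counts
# A larger S/B threshold triggers only on rarer bursts, so r_high < r_low.
```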
3-11.2 Experimental Measurements. The experimental setup is
substantially the same as that used for the one-detector Rossi-alpha
experiment except that a special preselection and triggering device is
used. Several types of such devices have been used:
1. Pacilio³⁹ used a fast-responding rate meter to observe the neutron
population. When a predetermined threshold level is reached, the
instrumentation system is triggered. This threshold level must be adjusted
with power level and efficiency of the detector.
2. Pacilio³⁹ has also digitally counted the number of pulses collected
in a predetermined time interval Δ. When this number of pulses reaches
a preselected level, the analyzer is triggered.
3. Chwaszchewski et al.⁴⁰ used two count-rate meters, one with a slow
time constant τ_s and the other with a fast time constant τ_f. When a
burst occurred, the fast rate meter responded while the slow one did not,
thereby triggering the instrumentation system. This procedure has the
rather severe limitation

    τ_f < 1/α < τ_s    (3-107)
4. Borgwaldt⁴¹ and Pacilio³⁹ used a simple triple-coincidence trigger
that functions in the following manner. A pulse from the detector opens
a coincidence gate for a time interval inversely related to the counting
rate selected. If two more pulses arrive from the detector in the time
interval, the instrumentation is triggered. Obviously, other combinations
of gates are possible.
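The triple-coincidence logic of device 4 can be sketched in a few lines of Python. The pulse times and gate width below are invented for illustration; a real device would implement this logic in hardware.

```python
def trigger_times(pulse_times, gate):
    """Fire when a pulse opens a gate of width `gate` and two further pulses
    arrive inside it (a sketch of the triple-coincidence trigger)."""
    fires, i, n = [], 0, len(pulse_times)
    while i < n - 2:
        if pulse_times[i + 2] - pulse_times[i] <= gate:
            fires.append(pulse_times[i + 2])   # third pulse triggers the system
            i += 3                             # instrumentation busy; resume after
        else:
            i += 1
    return fires

pulses = [0.10, 0.11, 0.115, 0.40, 0.90, 0.905, 0.91, 1.50]   # sec, illustrative
fires = trigger_times(pulses, gate=0.05)
```

Here the trigger fires twice, at the two three-pulse bursts, and ignores the isolated pulses between them.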
Experimental data are fitted to Eq. 3-102 to obtain a value of α,
usually as a means of measuring reactivity. Chwaszchewski et al.⁴⁰
found agreement within 2% with conventional pulsed-neutron experiments
in the reactivity range −$0.05 to −$0.35 in a water-graphite-moderated
enriched system. Pacilio³⁹ carried out endogenous-pulsed-source
measurements in the reactivity range from criticality to −$13 in
an organic-moderated enriched system. The results were in good agreement
with pulsed-neutron experiments.
REFERENCES
1. J. D. ORNDOFF, Prompt Neutron Periods of Metal Critical Assemblies,
Nucl. Sci. Eng., 2: 450 (July 1957).
2. R. P. FEYNMAN, F. DE HOFFMANN, and R. SERBER, Dispersion of the Neutron
Emission in U-235 Fission, J. Nucl. Energy, 3: 64 (1956).
3. A. I. MOGILNER and V. G. ZOLOTUKHIN, The Statistical p-Method of Measuring
the Kinetic Parameters of a Reactor, At. Energ. (USSR), 10: 377 (1961).
4. J. A. THIE, Reactor Noise, Rowman and Littlefield, Inc., New York, 1963.
5. D. BABALA, Neutron Counting Statistics in Nuclear Reactors, Norwegian
Report KR-114, November 1966.
6. A. N. KOLMOGOROV and N. A. DMITRIEV, Theory of Branching Processes,
Dokl. Akad. Nauk. SSSR, 56: 7 (1947).
7. E. D. COURANT and P. R. WALLACE, Fluctuations of the Number of Neutrons
in a Pile, Phys. Rev., 72: 1038 (1947).
8. L. I. PAL, Statistical Fluctuations of Neutron Multiplication, in Proceedings
of the Second United Nations International Conference on the Peaceful Uses
of Atomic Energy, Geneva, 1958, Vol. 16, p. 687, United Nations, New York,
1959.
9. R. L. MURRAY, Nuclear Reactor Theory, Prentice-Hall, Inc., Englewood
Cliffs, N.J., 1957.
10. S. GLASSTONE and M. C. EDLUND, The Elements of Nuclear Reactor Theory,
D. Van Nostrand Co., Inc., New York, 1952.
11. B. C. DIVEN, H. C. MARTIN, R. F. TASCHEK, and J. TERRELL, Multiplicities
of Fission Neutrons, Phys. Rev., 101: 1012 (1956).
12. W. MATTHES, Statistical Fluctuations and Their Correlation in Reactor
Neutron Distribution, Nukleonik, 4: 213 (1962).
13. H. BORGWALDT and D. STEGEMANN, A Common Theory for Neutronic Noise
Analysis Experiments in Nuclear Reactors, Nukleonik, 7: 313 (1965).
14. T. IIJIMA, Remark on Rossi-Alpha Experiment, Nukleonik, 10: 93 (1967).
15. H. DIAZ and R. E. UHRIG, A Digital Computer Controlled Data Acquisition
and Processing System for Nuclear Experiments, Trans. Amer. Nucl. Soc.,
8: 588 (November 1965).
16. G. S. BRUNSON, R. N. CURRAN, J. M. GASIDLO, and R. J. HUBER, A Survey
of Prompt-Neutron Lifetimes in Fast Critical Systems, USAEC Report
ANL-6681, Argonne National Laboratory, August 1963.
17. J. T. MIHALCZO, Prompt-Neutron Lifetime in Critical Enriched-Uranium
Metal Cylinders and Annuli, Nucl. Sci. Eng., 20: 60 (1964).
18. R. A. KARAM, Measurements of RossiAlpha in Reflected Reactors, Trans.
Amer. Nucl. Soc., 7: 283 (June 1964).
19. W. SUWALSKI, NORA First H₂O Core Noise Measurements: Part I, Rossi-Alpha
Method, Norwegian Report NORA-Memo-112, 1965.
20. C. E. COHN, Reflected Reactor Kinetics, Nucl. Sci. Eng., 13(1): 12 (1962).
21. E. F. BENNETT, The Rice Formulation of Reactor Noise, Nucl. Sci. Eng.,
8(1): 53 (1960).
22. N. PACILIO, Comitato Nazionale per l'Energia Nucleare, personal
communication, 1968.
23. D. STEGEMANN, Die Analyse des Neutronenrauschens in Reaktoren, German
Report INR-4/66-1, 1966.
24. R. W. ALBRECHT, The Measurement of Dynamic Nuclear Reactor Parameters
Using the Variance of the Number of Neutrons Detected, Nucl. Sci. Eng.,
14(2): 153 (1962).
25. R. L. JOHNSON, A Statistical Determination of the Reduced Prompt Generation
Time in the SPERT IV Reactor, USAEC Report IDO-16903, Phillips
Petroleum Company, August 1963.
26. E. TURKCAN and J. B. DRAGT, Experimental Study of Different Techniques
for Analyzing Reactor Noise Measured by a Neutron Counter, Dutch Report
RCN-INT-75, 1967.
27. N. PACILIO, Review of Statistical Methods for Reactor Parameter
Measurements Developed at C.S.N. Casaccia, Italian Report RT/FI 6637, 1966.
28. N. PACILIO, Short Time Variance Method for Prompt Neutron Lifetime
Measurements, Nucl. Sci. Eng., 22(2): 266 (1965).
29. K. KURUSYNA, Analysis of Nuclear Reactor Noise, Genshiryoku Kogyo, 8: 49
(1962).
30. D. B. McCULLOCH, An Absolute Measurement of the Effective Delayed
Neutron Fraction in the Fast Reactor ZEPHYR, British Report AERE-R/M-176,
July 1958.
31. A. J. LINDEMAN and L. RUBY, Subcritical Reactivity from Neutron Statistics,
Nucl. Sci. Eng., 28(2): 308 (1967).
32. N. PACILIO, Reactor-Noise Analysis in the Time Domain, USAEC Critical
Review Series, USAEC Report TID-24512, April 1969.
33. L. I. PAL, Statistical Theory of Neutron Chain Reactors, in Proceedings of
the Third United Nations International Conference on the Peaceful Uses of
Atomic Energy, Geneva, 1964, Vol. 2, pp. 218-224, United Nations, New
York, 1965.
34. N. PACILIO, The Polya Model and the Distribution of Neutrons in a
Steady-State Reactor, Nucl. Sci. Eng., 26(4): 565 (1966).
35. G. PÓLYA and F. EGGENBERGER, Über die Statistik verketteter Vorgänge,
Z. Angew. Math. Mech., 3: 279 (1923).
36. D. T. AUSTIN et al., Comparison of the Waiting-Time Alpha with the
Rossi-Alpha, Trans. Amer. Nucl. Soc., 10(2): 591 (1967).
37. M. SRINIVASAN and D. C. SAHNI, A Modified Statistical Technique for the
Measurement of α in Fast and Intermediate Reactor Assemblies, Nukleonik,
9(3): 155-157 (1967).
38. C. E. COHN, Argonne National Laboratory, personal communication, 1968.
39. N. PACILIO, Neutron Statistics Techniques Applied to the ROSPO Reactor,
in Proceedings of the Karlsruhe EAES Symposium III, European Atomic
Energy Society, p. 9, 1966.
40. S. CHWASZCHEWSKI et al., Improved Methods for Prompt Neutron Period
Measurements, Nucl. Sci. Eng., 25(2): 201 (1966).
41. H. BORGWALDT, Karlsruhe Nuclear Research Center, personal
communication, 1966.
4
Basic Relations of Random
Noise Theory
4-1 INTRODUCTION
The probability that a particular 235U atom in a nuclear system will
absorb a neutron and produce fission is dependent on its location, the
surrounding materials and their absorption cross sections, the neutron
energy, and the direction of motion of both the neutron and the 235U
atom. These factors give rise to a statistical variation in the lengths
of time between fissions in a nuclear system. Since the probability of
fission occurring is influenced by the characteristics of the nuclear system,
some of these characteristics of the system can be determined by an
analysis of these statistical variations. As shown in Chap. 3, the presence
of the correlated events associated with fission chains increases the
magnitude of the fluctuations over that which would otherwise occur.
4-2 AUTOCORRELATION FUNCTION
An autocorrelation function is an extension of the concept of a
mean-square value to cover an interval of time. Whereas the mean-square
value is the average of the square of the value of a function at a particular
time, the autocorrelation function φ_xx(τ) is the average of the product of
two values of the variable separated by a time interval τ. At time t_k
the autocorrelation function of the process x(t), shown in Fig. 1-1, is

    φ_xx(t_k, τ) = lim_{N→∞} (1/N) Σ_{i=1}^{N} x_i(t_k) x_i(t_k + τ)    (4-1)

If the process is time stationary, the definition becomes independent of
time:

    φ_xx(τ) = lim_{N→∞} (1/N) Σ_{i=1}^{N} x_i(t) x_i(t + τ)    (4-2)
The fundamental process involved in correlation is displacing one variable
with respect to another, multiplying the displaced variable by the original
variable, and averaging over an infinite period of time or number of
ensembles. For an ergodic process we can substitute a time average
for the ensemble average, and the autocorrelation function becomes

    φ_xx(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x_i(t) x_i(t + τ) dt
            = E[x_i(t) x_i(t + τ)]    (4-3)

where x_i(t) is any of the sample records. It is necessary to let the limits T
approach infinity unless x_i(t) is periodic. For the special case when the
time lag is zero, the autocorrelation function becomes

    φ_xx(0) = lim_{T→∞} (1/2T) ∫_{−T}^{T} [x_i(t)]² dt
            = E[x²(t)] = ψ_x²    (4-4)

which, by definition, is the mean-square value of x(t).
If x(t) is time stationary, it can be considered to be the sum of a
fluctuating component x′(t) and a steady component that is the mean
value μ_x; i.e.,

    x(t) = μ_x + x′(t)    (4-5)

Substituting Eq. 4-5 into Eq. 4-4 gives

    φ_xx(0) = E[μ_x²] + E[2μ_x x′(t)] + E[x′(t)²]    (4-6)

where, according to the definitions given in Chap. 2, the last term is the
variance σ_x² (square of the standard deviation) and the first term is the
square of the mean, μ_x². The second term can be shown to be equal to
zero,

    E[2μ_x x′(t)] = 2μ_x E[x′(t)] = 2μ_x · 0 = 0    (4-7)

since μ_{x′} is zero by the definition of x′. Hence Eq. 4-6 becomes

    φ_xx(0) = ψ_x² = σ_x² + μ_x²    (4-8)
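Equation 4-8 can be checked numerically. The Python sketch below builds a random record with mean 2 and unit variance (arbitrary illustrative values) and confirms that the zero-lag autocorrelation, i.e., the mean-square value, equals the variance plus the squared mean.

```python
import random

rng = random.Random(3)
mu = 2.0
x = [mu + rng.gauss(0.0, 1.0) for _ in range(200_000)]   # stationary record

phi_0 = sum(v * v for v in x) / len(x)            # phi_xx(0): mean-square value
mean = sum(x) / len(x)
var = sum((v - mean) ** 2 for v in x) / len(x)    # sigma_x^2 of the record
# Eq. 4-8: phi_xx(0) = sigma_x^2 + mu_x^2, here approximately 1 + 4 = 5.
```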
The autocorrelation function φ_xx(τ) also contains some information
concerning the frequency distribution of the random signal x(t). If
φ_xx(τ) changes rapidly with τ, high frequencies predominate; but, if
φ_xx(τ) changes very slowly with τ, low frequencies predominate.
It can be easily shown that the autocorrelation function is an
BASIC RELATIONS OF RANDOM NOISE THEORY 85
even function and hence symmetrical about the vertical axis. This
symmetry can be expressed by

    φ_xx(−τ) = φ_xx(τ)    (4-9)
Furthermore, φ_xx(τ) never exceeds φ_xx(0); i.e., |φ_xx(τ)| ≤ φ_xx(0) for all τ.
This follows from the inequality

    [x(t) ± x(t + τ)]² ≥ 0    (4-10)

Expanding this expression, transposing terms, integrating from −T to T,
dividing by 2T, and taking the limit as T approaches infinity gives

    |φ_xx(τ)| ≤ ψ_x² = φ_xx(0)    (4-11)

when the definitions of Eqs. 4-3 and 4-4 for a time-stationary variable
are used. This expression can be rearranged to give the ratio

    |φ_xx(τ)/φ_xx(0)| ≤ 1    (4-12)

which is often called the normalized autocorrelation function and always
has a value less than unity except at τ = 0.
If x(t) contains a periodic component, φ_xx(τ) will also contain a periodic
component with the same period; but φ_xx(τ) gives no information about
the phase of the periodic component. However, φ_xx(τ) approaches zero
as τ approaches infinity if x(t) contains only random components and μ_x
equals zero. This means x(t + τ) becomes uncorrelated with x(t) as τ
approaches infinity.
If two uncorrelated random variables such as x₁ and x₂ have zero means
and autocorrelation functions φ₁₁(τ) and φ₂₂(τ), then the autocorrelation
function of x₁ + x₂ is [φ₁₁(τ) + φ₂₂(τ)]. This can be shown by substituting
x = x₁ + x₂ into Eq. 4-3.
4-3 AUTOCOVARIANCE FUNCTION
In many practical applications the mean value μ_x is zero, and the
preceding relations are simplified. In fact, it is often necessary (and usually
standard procedure) to remove the mean value from experimental data
before further processing it. Thus it is convenient to define the
autocorrelation function of a variable for which the mean value is zero as the
autocovariance function and designate it by the symbol C_xx(τ). The
relation between the autocorrelation function and the autocovariance
function can be shown by substituting Eq. 4-5 into Eq. 4-3 and proceeding
in the manner used to derive Eq. 4-8. Using the definitions of mean
value, mean-square value, variance, and standard deviation gives

    φ_xx(τ) = φ_{x′x′}(τ) + μ_x μ_{x′} + μ_{x′} μ_x + μ_x²
            = φ_{x′x′}(τ) + μ_x² = C_xx(τ) + μ_x²    (4-13)

since μ_{x′}, by definition, is equal to zero. The autocovariance function
C_xx(τ) is identical to the autocorrelation function if the mean value is
equal to zero or if the mean value has been removed. The effect of the
presence of the mean value μ_x in the variable x(t) is to displace the
autocorrelation function by an amount μ_x². This will be discussed later when
the effect of the presence of a mean value is considered.
In many practical situations the mean value of the variable μ_x is equal
to zero, and Eq. 4-13 becomes

    φ_xx(τ) = φ_{x′x′}(τ) = C_xx(τ)    (4-14)

In most analyses of experimental results, the mean value μ_x is removed
from the sample record before the data are processed. Hence there is no
difference between the autocovariance and autocorrelation functions of
the adjusted variable, provided that the mean value of the sample record
is equal to the mean value of x(t). In this text we will use the
autocorrelation function φ_xx(τ) when there is no requirement that the mean
value be equal to zero. When there is such a requirement, we will
specify it or use the autocovariance function C_xx(τ). By doing this, we
hope to follow the nomenclature of the random-noise field while still
maintaining the distinction between correlation and covariance functions.
4-4 POWER SPECTRAL DENSITY
In working with the autocorrelation function, one is dealing with the
behavior of the function of time and hence is working in the time domain.
An alternate approach is to work in the frequency domain and separate
the signal into its frequency components. For a nonperiodic function
it is usually necessary to take the Fourier transform of the function to
transfer it to the frequency domain, since there is a continuum of
frequencies represented. However, in the case of a stationary random or
stochastic process, x(t) cannot become arbitrarily small for large t,
because the statistical properties must remain constant with time.
Therefore ∫_{−∞}^{∞} |x(t)| dt does not converge, and the Fourier transform
does not exist.
This difficulty can be overcome by defining a new function, called the
power spectral density and designated by the symbol Φ_xx(ω), as the Fourier