Special considerations in estimating detection limits


Material Information

Title: Special considerations in estimating detection limits
Creator: Stevenson, Christopher Leonidas, 1964-
Publication Date: 1991
Physical Description: vii, 161 leaves : ill. ; 29 cm.
Genre: bibliography (marcgt); theses (marcgt); non-fiction (marcgt)
Notes: Thesis (Ph. D.)--University of Florida, 1991. Includes bibliographical references (leaves 154-160).
Statement of Responsibility: by Christopher Leonidas Stevenson.

Record Information

Source Institution: University of Florida
Rights Management: All applicable rights reserved by the source institution and holding location.
Resource Identifier: aleph - 001715260; notis - AJC7627; oclc - 25605227
Full Text








There have been many group members who have been supportive throughout

the past years, and who have made this time much more interesting. Most

particularly I would like to thank Joe Simeonsson and Giuseppe Petrucci for their

invaluable friendship and help in the lab; Giuseppe especially was helpful in keeping

me alert through his constant prowling about for unwatched lab equipment. Joe also

kept me on my toes -- he dogged my trail from North Carolina to Florida, and kept

showing up at every residence I ever had in Gainesville.

By far the three people most influential on me during my stay here have been

three outstanding scientists and teachers: Benny Smith, Nico Omenetto, and Jim

Winefordner. It would be fortunate indeed to have come into contact with any one

of these three during graduate school; having worked with all of them has been an

unforgettable experience. I especially would like to thank Jim for his support and

encouragement, and the opportunity to be a part of his group.

Finally, my deepest love and gratitude go to my family, who have never

faltered in their support. Nothing would have been possible without the tremendous

love of my parents and sister. Maybe someday I will even move back to California.


ACKNOWLEDGEMENTS

ABSTRACT

INTRODUCTION

THEORY OF LIMITS OF DETECTION
    Analyte Signal Detection
    Minimum Detectable Concentration
    Limit of Guaranteed Detection
    Limit of Quantitation
    Summary

    Laser Spectroscopic Methods of Analysis
    Single Atom/Molecule Detection
    Past SAD Experiments: A Sampling of Applications and Techniques

    Estimation Theory
    The Limit of Detection as a Population Parameter
    Variability of LOD
    Confidence Limits and Comparing Values of LOD
    Summary

    Introduction
    Definition of an SAD Method
    General Model of SAD Methods
    Signal Detection Limit for the SAD Model
    Detection Efficiency of a near-SAD Method
    Requirements for SAD
    Precision of Counting Atoms
    Scope of an SAD Method
    Continuous Monitoring of Atoms
    Overall Efficiency of Detection

EXPERIMENTAL
    General
    Simulations to Investigate the Variance of the LOD
    Simulations of SAD by LIF

OF THE LIMIT OF DETECTION
    Introduction
    Effect of Increasing Values of Slope Error
    Effect of Increasing Number of Blank Measurements
    Application to ETA-LIF
    Conclusions

    Introduction
    Detection Efficiency at the Intrinsic Noise Limit
    SAD in the Presence of Noise
    Counting Precision
    "Extra" Variance in the Cylindrical Probe Model
    Continuous Monitoring of Atoms with CW-LIF
    Conclusions

RANDOM NUMBER GENERATORS

REFERENCES

BIOGRAPHICAL SKETCH

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy



SPECIAL CONSIDERATIONS IN ESTIMATING DETECTION LIMITS

By

Christopher Leonidas Stevenson

December 1991

Chairperson: James D. Winefordner
Major Department: Chemistry

A valuable figure of merit in evaluating and comparing analytical techniques

is the limit of detection (LOD), which represents the minimum detectable

concentration or amount of analyte in a sample. The factors which influence the

value of the LOD are theoretically evaluated through the application of estimation

theory and propagation of errors to the LOD concept. Equations are derived which

can be used to estimate the magnitude of the random fluctuations which result from

the use of sample statistics in the estimation of the LOD. The resulting confidence

intervals can be used to evaluate and compare the true LOD values of analytical

systems, and to determine the most efficient procedure for quick estimation of LOD.

Monte Carlo simulations of typical analytical situations are used to evaluate the

effectiveness of the derived equations.

Application of conventional detection limit theory is not straightforward when

considering laser spectroscopic methods which are capable of detecting single atoms

or molecules in the laser beam. Theoretical considerations in the detection and

evaluation of these methods are addressed based on a model of a typical laser

spectroscopic experiment with destructive and nondestructive detection methods.

Computer simulations are used to verify the application of the theory of single atom

detection (SAD) to typical experimental situations, as well as to discuss the scope of

SAD methods and the various possible signal processing methods which can be used

to continuously monitor and count the atoms or molecules which flow through the

laser beam.


INTRODUCTION

Any analytical technique designed to determine the concentration or amount

of analyte in a given sample from the magnitude of the resulting signal can be

characterized by a limit of detection (LOD). The LOD is designed to give an

indication of the lower limit of analyte concentration or amount that the given

analytical technique can distinguish from the background noise, which is present even

when the analyte is absent. The LOD is very often a significant characteristic of a

given analytical procedure, since calculated LODs can be used for comparison and/or

evaluation of the procedure relative to other analytical procedures. For example, the

LOD may be used to indicate the improvement of the detecting power of a given

analytical protocol; the improvement is measured by comparing the LOD to the

values reported for the procedure in the past. Alternatively, a particular application

may have certain sensitivity requirements; in this case, the reported LOD value may be used as a first basis for evaluating the suitability of the method for the intended application.


The importance of the LOD in characterizing a given method's detection abilities means that great care must be taken when estimating and reporting LODs.

The actual value for the LOD calculated for a given analytical procedure may vary

widely due to either of two factors: (1) the method used to measure and calculate the



LOD; and (2) the use of different definitions to characterize a minimal detectable

concentration/amount of analyte. Despite the effort made in the past two decades

to define the LOD unambiguously, and to recommend guidelines for its measurement

[1, 2], this diversity in definition and measurement protocol persists.

The random variation in calculated LOD is often intuitively sensed by

practicing analytical chemists, who realize that LODs which are close (within a factor

of 2 or 3) may not be statistically different. For example, if the same chemist

determines the LOD of the same analytical technique two consecutive times, it is

usually recognized that the same value for the LOD will not result. Additionally, if

the LOD is measured on the next day by a different chemist, then it may be even

more likely that another value will be obtained. This variability in calculated LOD

is usually taken into account only in a general sense when comparing LODs from

different methods; i.e., two LODs which are close are often considered equivalent.

However, it would be desirable to know just how much of the variability in calculated LOD values might be due to random fluctuations in measurement, and

how much due to actual differences in analytical conditions (sensitivity, background

noise, etc).

The various different definitions used for the LOD were recognized and

reconciled in the pioneering works of Kaiser [3] and Currie [4], and the situation has

greatly improved since that time. However, there are now emerging laser

spectroscopic methods which claim the ability to detect very small numbers of analyte

atoms or molecules within the laser beam, all the way down to the level of single

atoms/molecules. The application of the LOD concept to these newer methods is

not always obvious, but the comparison and evaluation of these methods require

consistent and appropriate definitions specifically designed to address the difficulties

which result from the ability to detect small numbers of atoms.

The intent of this dissertation is to address both of the subjects discussed

above. A review of some of the relevant concepts and past literature is presented

in chapters 2 and 3; these chapters form the basis for the new work presented in the

remainder of the dissertation. The variability of the calculated LODs will be

addressed from a theoretical standpoint in chapter 4; in chapter 5, logical and precise

definitions will be presented which are designed to clarify misunderstandings which

may arise when attempting to define detection limits and other figures of merit for

laser-based analytical methods capable of detecting single atoms/molecules. Later

chapters serve to confirm, evaluate, and illustrate the concepts introduced in chapters

4 and 5 through the use of simple Monte Carlo computer simulations.


THEORY OF LIMITS OF DETECTION

The limit of detection (LOD) is an analytical figure of merit (FOM) which

gives the minimal concentration¹ of analyte in a sample which can be distinguished

from the blank. The LOD of a procedure is only one of several possible FOMs.

The purpose of FOMs is to objectively summarize important characteristics of the

analytical procedure. Other than the LOD, there are a number of important FOMs,

such as the linear dynamic range, the sensitivity of the technique, the resolution, the

accuracy and precision of determination, the informing power of the method, price

and time of analysis, interference, and others which are less commonly used. The

use and importance of many of these has been covered in reviews [5-7] and books

[8-11] and will not be covered here other than to note that the LOD is only one of

a group of FOMs which describe the total analytical procedure, although the LOD

is one of the most important FOMs.

The present concept of the LOD has been promoted in several landmark

papers [3, 4, 12]. These works mark the beginning of applying a statistical approach

to the problem of determining the minimum detectable concentration of analyte for

a given analytical procedure. This chapter will summarize all the relevant principles

¹For convenience, the term "concentration" will signify either analyte concentration or amount, whichever is appropriate.

of detection limits in both the signal domain and in the concentration domain. There

will be no attempt at a comprehensive review of past literature; these have been

presented in various review articles and textbooks [8, 10, 11, 13-16].

Analyte Signal Detection

Signal Detection Limit

Any instrumental analytical method is indirect in the sense that the "result" of

a single analysis is a mean instrument response in the signal domain (e.g., current,

charge, potential difference) rather than a direct reading of the analyte concentration

in the sample. In a steady-state measurement, the response of the instrument for a

given sample is measured for a certain time Tm, and the average response during this

time is taken as a single measurement of the sample. The relation of a given

measurement to the concentration of analyte is found through a calibration of the

response function of the instrument; for the present time, however, we will only be

concerned with the instrument response in the signal domain.

In the absence of analyte in the sample, there may be a nonzero response

during Tm due to the blank only. This response consists of a nonrandom component μb and a random component characterized by the standard deviation σb. Although it is certainly possible to compensate for the nonrandom contribution of the blank, the random fluctuation σb will still contribute to uncertainty in the measured signal. For signals which are of the same magnitude as σb, there is a need for some criterion to

decide whether a given signal is due to the presence of analyte in the sample or merely to spurious fluctuations of the background signal.

The minimal detectable signal in the steady state case is best illustrated

through a simple example. A typical situation is shown in figure 1. The response as

a function of time is shown for one measurement (where Tm = 1000 s in this case).

Each individual value shown in fig. 1(a) fluctuates about the (unknown) mean μ with variance σx²; the distribution of the values (population distribution) is shown in fig. 1(b). In the particular case shown in this figure, the population of individual readings in the blank is normally distributed with a mean of 50 mV and a standard deviation of 10 mV. The average value in fig. 1(a) during Tm is 50.259 mV; this is

one measurement of the blank.

According to the central limit theorem [17], the blank measurements will be

approximately normally distributed, no matter the form of the original distribution

of individual values, with a standard deviation given by

σb = σx/√N    [2.1]

where

N = the number of individual measurements during Tm,
σx = the standard deviation of the individual signal values, and
σb = the standard deviation of the blank measurement.
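Equation 2.1 is easy to check numerically. The following is a minimal Python sketch (not part of the dissertation; NumPy is assumed, and the 50 mV mean and 10 mV standard deviation match the example of figure 1) confirming that averaging N white-noise readings shrinks the standard deviation by about √N:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma_x = 50.0, 10.0   # mean and SD of individual readings (mV), as in fig. 1
N = 1000                   # individual readings averaged per blank measurement

# 5000 simulated blank measurements, each the mean of N white-noise readings
blanks = rng.normal(mu, sigma_x, size=(5000, N)).mean(axis=1)

sb_empirical = blanks.std(ddof=1)
sb_theory = sigma_x / np.sqrt(N)   # eqn 2.1: sigma_b = sigma_x / sqrt(N)
print(sb_empirical, sb_theory)     # both near 0.32 mV
```

The agreement holds only for white noise; drift (flicker noise) breaks the √N reduction, as noted below.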

The above equation holds if the fluctuation of measurement values due to the blank is characterized by "white" noise, rather than long-term drift (i.e., flicker noise).

[Figure 1. (a) Blank signal (mV) as a function of time during one measurement; (b) distribution of the individual blank readings.]

The question now becomes: at what point is a sample measurement said to be "detected"

above the blank noise. In other words, we seek a measurement value large enough

so that the chance of the value belonging to the distribution of possible blank values

is negligible.

The lowest value at which the measurement is considered to be due to a

process other than blank noise is the signal detection limit, Xd. Since the standard

deviation of blank measurements directly limits the ability to detect small signals, the

detection limit is directly related to the value σb:

Xd = μb + kσb    [2.2]

where k is known as the confidence factor. The probability that a blank

measurement can give rise to a measurement value greater than or equal to the

detection limit is known as the type I error, or the probability of false positive error:

P(Xb ≥ Xd) = α    [2.3]

where

α = the probability of type I error, and
Xb = one blank measurement (the mean during Tm).

The value of α is controlled by the confidence factor chosen in eqn. 2.2.

Choosing the Confidence Factor

According to the central limit theorem, if Tm is long enough it is usually

reasonable to assume that the blank measurements follow a normal distribution N(μb, σb²). In terms of the z-statistic,

z = (Xb − μb)/σb ~ N(0, 1)    [2.4]

where the symbol ~ means "distributed as." With this in mind the confidence factor, k, can be chosen according to

k = z(α/2)    [2.5]

where z(α/2) is chosen from tables of the z-distribution. The factor k depends on the desired one-sided confidence level, given by the probability (1 − α).

The detection limit is defined in eqn. 2.2 in terms of the population parameters μb and σb. However, these parameters are usually unknown and must be estimated by repeated measurements on the blank. The effect of substituting an estimate sb² for σb² in the z-statistic is to broaden the distribution. In this case, the t-statistic must be used, with n − 1 degrees of freedom, since

(Xb − μb)/sb ~ t(n−1)    [2.6]

where n is the number of blank measurements, each for time Tm, used to calculate sb². In addition, the effect of using the estimated mean X̄b for μb in the above statistic can be seen:

(Xb − X̄b)/(sb² + sb²/n)^(1/2) ~ t(n−2)    [2.7]

where the denominator reflects the increased variance in the numerator. The above

expression suggests that the confidence factor should be chosen according to

k = t(n−2, α/2)·(1 + 1/n)^(1/2)    [2.8]

Thus, the signal detection limit can be found by first choosing the desired confidence level α, making repeated measurements on the blank signal (at least 16-20 measurements are recommended), calculating the sample mean and standard deviation, and substituting in the above equation to find the value of k to use in eqn. 2.2. In practice, the (1/n) term in eqn. 2.8 is usually ignored and the t-factor is calculated with n − 1 degrees of freedom. In essence, this is the same as ignoring the effect of using an estimate for the parameter μb; however, for typical values of n, the effect of ignoring the 1/n term in eqn. 2.8 is small.
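As an illustration (a Python sketch using SciPy's t quantiles; not from the dissertation), the confidence factor of eqn. 2.8 can be evaluated for a fixed α and an increasing number of blank measurements n, showing how it approaches the known-σ z value:

```python
from scipy.stats import norm, t

alpha = 0.0014                     # risk level corresponding roughly to k = 3
z = norm.ppf(1 - alpha / 2)        # known-sigma (large-n) limit, about 3.2

ks = []
for n in (5, 10, 20, 100):
    # eqn 2.8: k = t(n-2, alpha/2) * (1 + 1/n)^(1/2)
    k = t.ppf(1 - alpha / 2, df=n - 2) * (1 + 1 / n) ** 0.5
    ks.append(k)
    print(n, round(k, 2))
```

For very small n the confidence factor is several times larger than 3, which is one reason the 16-20 blank measurements mentioned above are recommended.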

The value recommended by IUPAC for the confidence factor is 3 [18]; with μb and σb known this would result in α = 0.0014. Of course, even assuming a normal distribution of blank measurements, the true value of α would increase due to the imprecise nature of the estimates used for the population parameters. A non-normal distribution might inflate the type I error probability even further, although by Chebyshev's theorem [19] this probability cannot exceed 0.11 (for k = 3) when the blank noise is due to random error alone. A recent publication points out that the true value of α cannot be accurately known due to various factors such as systematic error, long-term drift, and non-normal distributions of blank measurements [2]. Since strict interpretation of Xd and LOD in terms of α is usually difficult, a value of k = 3 was recommended for consistency.

The detection limit for the example given earlier (fig. 1) is shown in fig. 2.

Figure 2(a) shows Xd in terms of the probability distribution of the blank

measurements, Xb, with k = 3. In order to get a better idea of the magnitude of the

detection limit in comparison with the background fluctuation, figure 2(b) shows a

signal at a level above the detection limit (55.102 mV), compared with a single blank

measurement (50.259 mV). The signal shown is barely distinguishable from the

blank measurement by eye.

There are two important points which should be made concerning the above

discussion of Xd:

(1) If a single "measurement" actually consists of two measurements -- one on the sample, and one for blank subtraction -- then the confidence factor should be multiplied by a factor of √2. Simultaneous blank subtraction is frequently used to account for long-term drift in μb.

(2) The discussion above was for steady state signals. The value of the detection

limit will depend on the value of Tm chosen and so this value should always be given

as part of the experimental procedure. Application of the above theory to the case

of transient signals is straightforward in the case of peak detection, where σx is used in the above equations instead of σb. The correct detection limit value in the case of peak integration is somewhat less obvious, and will not be discussed here.

[Figure 2. (a) Distribution of the blank measurements Xb, showing the detection limit Xd for k = 3; (b) a signal just above Xd (55.102 mV) compared with a single blank measurement (50.259 mV); signal axis in mV.]

Minimum Detectable Concentration

The usefulness of the particular value of Xd for an analytical procedure is

limited. Although knowledge of Xd is necessary for any analyst who wishes to detect

the presence of a small signal as an indication of whether an analyte is present, it is

very difficult to use Xd as a comparison between different methods, or between

methods reported in the literature. Therefore, the value Xd must be transformed into a useful measure of a method's ability to detect the presence of small amounts

of analyte. Before this can be done, the analytical procedure must be calibrated; i.e.,

the functional relationship between the measured signal and the analyte

concentration must be known.


The most commonly used method of calibration is by linear least-squares

regression. Textbooks on regression give detailed theory on various types of

regression analysis [20, 21]; this section will only cover the most basic type, simple

first-order linear least-squares regression.

In many cases in analytical chemistry, a linear relation between the signal and

the analyte concentration can be assumed over the range of interest, and the

following model applies:

μyi = a0Xi + μb    [2.9]
Yi = a0Xi + μb + εi

where

μyi is the true mean response at X = Xi;
a0 and μb are the true slope and intercept of the calibration line;
Yi is an observed (variable) response at X = Xi; and
εi is the true error of an observed response Yi (the residual).

The first equation describes the true mean value of the signal at a given value of

analyte concentration; the second equation describes the effect of the random

variation in the observed measurement at X = Xi. The parameter a0 is also known as the sensitivity of the analytical method.

Estimates of the parameters a0 and μb are usually found by using least-squares estimators; equations for these estimators and conditions for their validity are readily available [20, 21]. Using these estimates, the model becomes

Yi = a0Xi + Xb + ei    [2.10]

where

a0 = the estimated slope;
Xb = the estimated blank response (i.e., the intercept); and
ei = the observed residual at X = Xi.

Of course, the estimators used are subject to variation σa and σb for the slope and intercept, respectively. The equations for the least-squares estimates and their estimated standard errors, sa and sb, are readily available in textbooks, along with

conditions for their validity [20, 21]; these values are usually automatically computed

in regression software packages. The variability in the estimates for the parameters

of the linear model in eqn. 2.9 means that there will be uncertainty in a given

predicted response value Ŷi. It is possible to construct a confidence interval within which the true value of μyi will lie with 100(1−α)% reliability. This confidence interval is given by

Ŷi ± t(α/2, N−2)·s·[1/N + (Xi − X̄)²/Sxx]^(1/2)    [2.11]

where

Sxx = Σ(Xi − X̄)²    [2.12]

s = standard deviation of response (assumed constant); and
N = number of points in the calibration curve.

The interval above is likely to contain the true mean response at a given analyte concentration. An interval which describes where a future response at X = Xi is likely to fall with (1−α) probability is often called the prediction interval, and is given by:

Ŷi ± t(α/2, N−2)·s·[1 + 1/N + (Xi − X̄)²/Sxx]^(1/2)    [2.13]

The relation between the two intervals for a typical calibration curve is shown in

figure 3. The wider intervals are the prediction intervals.
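Both intervals can be computed directly under the stated assumptions (first-order model, constant variance). The following Python sketch (NumPy/SciPy assumed; the calibration data are invented for illustration) evaluates eqns. 2.11-2.13 at one concentration:

```python
import numpy as np
from scipy.stats import t

# hypothetical calibration data: concentration vs. response (mV)
X = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
Y = np.array([50.2, 60.1, 70.5, 79.8, 90.3, 99.9])
N = len(X)

a0, Xb = np.polyfit(X, Y, 1)                  # estimated slope and intercept (eqn 2.10)
resid = Y - (a0 * X + Xb)
s = np.sqrt(np.sum(resid**2) / (N - 2))       # residual SD (constant variance assumed)
Sxx = np.sum((X - X.mean())**2)               # eqn 2.12

alpha, Xi = 0.05, 5.0
Yhat = a0 * Xi + Xb
tval = t.ppf(1 - alpha / 2, df=N - 2)
ci = tval * s * np.sqrt(1 / N + (Xi - X.mean())**2 / Sxx)      # eqn 2.11
pi = tval * s * np.sqrt(1 + 1 / N + (Xi - X.mean())**2 / Sxx)  # eqn 2.13
print(Yhat, ci, pi)   # the prediction half-width is always the wider of the two
```

The extra "1" under the square root in eqn. 2.13 is the variance of the future observation itself, which is why the prediction bands in figure 3 are wider.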

Weighted least-squares regression. It should be noted that the above intervals

were derived with the assumption of constant variance in the measurement response

along the calibration curve. In analytical chemistry, however, there are a number of


techniques with large linear dynamic ranges, where it is likely that the noise on the

signal increases with the concentration. An example was given recently for atomic

emission in the inductively coupled plasma [22]. In addition, certain transformations

of variables have the affect of skewing the error magnitude even if the assumption

of constant variance were valid to begin with [23]. In these cases, weighted

least-squares estimates must be used, particularly when it is important to obtain

information on the sizes of the intervals given in the above equations.

One-point calibration curves. If only an estimate of the sensitivity a0 near Xd is required, then a single standard can be used, so long as it is known that the linear model applies up to the standard concentration, and that the standard is reliable. In this case, the standard deviation of the sensitivity estimate is given by:

σa = σY(C)/C    [2.14]

where

C = the standard concentration, and
σY(C) = the standard deviation of the response at C.

Of course, multiple measurements of the standard are needed to estimate σY(C). Note

that this fluctuation in response includes the uncertainty in both the blank and the

signal measurements.

Limit of Detection

Once the analytical system has been calibrated, it is possible to define the

limit of detection, LOD, as the analyte concentration which corresponds to the signal

detection limit, Xd:

LOD = (Xd − μb)/a0 = kσb/a0    [2.15]

where all the terms have been previously defined. The value of LOD thus defined

can be used for comparisons between analytical procedures. Thus, in addition to the

estimates for the blank mean and standard deviation necessary to estimate Xd, an

estimate for the sensitivity must also be used to calculate the LOD.
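To make eqns. 2.14 and 2.15 concrete, here is a minimal Python sketch (all numbers invented; k = 3 per the IUPAC recommendation) that estimates the LOD from repeated blank measurements and a one-point sensitivity estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# 16 simulated blank measurements (mV); hypothetical mean 50, SD 0.3
blanks = rng.normal(50.0, 0.3, size=16)
sb = blanks.std(ddof=1)                 # estimate of sigma_b

# one-point calibration: a standard at C = 10 (concentration units)
C = 10.0
standard = rng.normal(100.0, 0.3, size=16)    # repeated responses at C
a0 = (standard.mean() - blanks.mean()) / C    # estimated sensitivity (mV per unit)

k = 3.0
LOD = k * sb / a0                             # eqn 2.15
print(LOD)   # roughly 3 * 0.3 / 5 = 0.18 concentration units
```

Note that both sb and a0 are themselves random estimates, which is precisely the source of LOD variability examined later in this dissertation.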

The confidence factor, k, in the definition of LOD is chosen as described in

the section on signal detection. However, the use of the calibration estimate of

sensitivity introduces another source of variability which may serve to increase a, the

probability of type I error. In the past, another approach to estimating LOD based

on the calibration equation has been advocated in order to compensate for this extra

uncertainty [24-27]. In this approach, the (1 − α/2) prediction interval for the

intercept (i.e., the response of the blank) from the calibration curve is used. The

upper limit of this interval corresponds to Xd and the corresponding analyte

concentration is the LOD. The procedure is illustrated in figure 4.

The prediction interval for the intercept can be found by using eqn. 2.13 with

Xi = 0. The procedure is equivalent to using a value for the confidence factor, k, in

eqn. 2.15 calculated as follows:





k = t(α/2, N−2)·[1 + 1/N + X̄²/Sxx]^(1/2)    [2.16]

The similarity between this equation and eqn. 2.8 can easily be seen; the second and third

terms in the parenthesis account for the influence of the calibration conditions on the

value of α. With this procedure for calculating the LOD, the k term (and hence the

LOD) will depend on the calibration conditions such as the number and range of the

concentrations of standards used in calibration, and the use of weighted or

unweighted regression to estimate the prediction interval. Using the confidence

factor in eqn. 2.16 compensates for possible changes in α due to these calibration conditions.
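The calibration-based confidence factor of eqn. 2.16 can be computed from the same quantities as the prediction interval. A Python sketch (SciPy assumed; the six-standard design is invented) shows how a small calibration set inflates k well above 3:

```python
import numpy as np
from scipy.stats import t

# hypothetical calibration design: six standards from 0 to 10
X = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
N = len(X)
Sxx = np.sum((X - X.mean())**2)

alpha = 0.0027            # two-sided risk roughly matching k = 3
# eqn 2.16: k = t(alpha/2, N-2) * (1 + 1/N + Xbar^2/Sxx)^(1/2)
k = t.ppf(1 - alpha / 2, df=N - 2) * np.sqrt(1 + 1 / N + X.mean()**2 / Sxx)
print(k)   # noticeably larger than 3 for this small design
```

Both the number of standards (through the t quantile and 1/N) and their spacing (through X̄²/Sxx) enter the result, which is why the calibration conditions must be reported along with the LOD.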


Limit of Guaranteed Detection

IUPAC defines the LOD as "the minimum concentration or quantity

detectable" and that it is "derived from the smallest measure that can be detected

with reasonable certainty [i.e., the value of Xd]" [1]. This definition of LOD is

deceptive since it can lead to the false assumption that if the analyte is present at or

above the LOD value that it will always be detected -- i.e., result in a measurement

above the signal detection limit. Conversely, if a given (unknown) sample does not

give a detectable signal, then it might be falsely assumed that the analyte must be

present at a concentration less than the LOD.

By the definition of LOD in eqn. 2.15, it is apparent that the mean response μyi for analyte present at a concentration equal to the LOD is the signal detection limit, Xd. If the distribution of possible signal values is symmetrical about the mean, this means that in 50% of the measurements where an analyte is present in a sample at a concentration equal to the LOD, the resulting signal will not be detected. The probability that analyte present at a given concentration does not give rise to a detectable signal is known as the probability of type II error, β, or the probability of a false negative. Thus, when the analyte is present at a concentration equal to the LOD, β = 0.5.

The lack of a detectable signal does not mean that the analyte concentration

level must be below the LOD value. It would be useful to know at what

concentration level the analyte must be present in order to be detected with near

certainty (i.e., with very low B). The inadequate nature of the LOD figure of merit

in this regard has been noted by several authors [3, 4, 14, 16]. It is possible to define a guaranteed signal detection limit, Xg, such that, for the distribution of signal measurement values Xs with a mean equal to Xg,

P(Xs ≤ Xd) = β    [2.17]

where the value of β is chosen according to a predefined risk of type II error. The corresponding limit in the concentration domain is the limit of guaranteed detection, LOG, which is defined as follows:

LOG = (Xg − μb)/a0 = 2kσb/a0    [2.18]


where the second part of the equation can be used to calculate LOG if the standard deviation of the signal is the same as that of the blank; in such a situation, α = β. Figure 5 shows the distributions of measurements for the blank and for analyte concentrations equal to the LOD and LOG values with k = 3. As can be seen, the LOG is a useful FOM since, if a given sample is not detected above Xd, the analyst can confidently state to the customer (with only β probability of error) that the analyte is not present at a concentration at or above the LOG.
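The practical difference between the LOD and the LOG is easy to see in a short Monte Carlo sketch (Python; a simplification assuming normal noise with the same σ for blank and analyte, and k = 3; the blank parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
mu_b, sigma_b, k = 50.0, 0.3, 3.0     # hypothetical blank parameters (mV)
Xd = mu_b + k * sigma_b               # signal detection limit (eqn 2.2)

M = 200_000
# measurements for analyte whose mean response sits at the LOD and at the LOG
at_lod = rng.normal(Xd, sigma_b, M)                # mean = Xd
at_log = rng.normal(Xd + k * sigma_b, sigma_b, M)  # mean = Xd + k*sigma_b

beta_lod = np.mean(at_lod < Xd)   # fraction not detected: about 0.5
beta_log = np.mean(at_log < Xd)   # about 0.0013, i.e. beta = alpha for k = 3
print(beta_lod, beta_log)
```

Analyte at the LOD is missed about half the time, while at the LOG the miss rate falls to the chosen β.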

Limit of Quantitation

One final FOM should be mentioned which is related to the limits of

detection and guaranteed detection: the limit of quantitation (LOQ), sometimes

called the limit of precision or the limit of determination [3, 4, 16]. Although the

two limits, LOD and LOG, are important for the process of analyte detection, the

analyst is frequently most interested in quantitation. It is obvious that the precision

in quantitation is frequently degraded near the detection limit since the signal and

the noise approach the same magnitude. The quantitation limit, Xq, in the signal domain, is defined as

Xq = μb + kq·σq    [2.19]

where

σq = the true standard deviation of the analyte signal, and
1/kq = the desired relative standard deviation (RSD).




[Figure 5. Distributions of measurements for the blank and for analyte present at concentrations equal to the LOD, LOG, and LOQ (k = 3).]

Finding the corresponding value LOQ in the concentration domain is straightforward.

The meaning of this limit is that the LOQ is the lowest concentration of analyte which can be determined at a predefined level of precision (RSD). If it is assumed that the noise is constant (σq ≈ σb) and 10% RSD is required, then

LOQ = 10σb/a0    [2.20]

The signal probability distribution (with 10% RSD) of analyte present at the LOQ

is also shown in fig. 5.
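Collecting the three limits, a small helper (a hypothetical Python function, not from the dissertation) computes the concentration-domain figures of merit from the blank standard deviation and the sensitivity, using eqns. 2.15, 2.18, and 2.20 with k = 3 and 10% RSD:

```python
def figures_of_merit(sigma_b, a0, k=3.0, kq=10.0):
    """LOD, LOG, and LOQ in concentration units, assuming constant noise."""
    lod = k * sigma_b / a0        # eqn 2.15
    log_ = 2 * k * sigma_b / a0   # eqn 2.18 (signal SD equal to blank SD)
    loq = kq * sigma_b / a0       # eqn 2.20 (kq = 10 for 10% RSD)
    return lod, log_, loq

print(figures_of_merit(sigma_b=0.3, a0=5.0))   # approximately (0.18, 0.36, 0.6)
```

Under these assumptions the three limits stand in the fixed ratio k : 2k : kq, so reporting any one of them together with k, kq, σb, and a0 determines the others.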


Summary

The theory behind three related and useful figures of merit, the limit of detection, the limit of guaranteed detection, and the limit of quantitation, has been

reviewed in this section. The purpose of the first two FOMs is to give some

indication of the analytical procedure's ability to detect small amounts of analyte.

Although the LOG is a more useful value in this regard, the LOD has been far more

widely reported for various analytical procedures. From the standpoint of

comparison of techniques' detection power, either FOM can be used as long as

consistent definitions are used, all the relevant experimental detail is given (the

measurement time and the electronic bandwidth are often ignored) and an

appropriate experimental protocol is used to estimate values for σ_b and a_0. The

procedure used to obtain these last two values should be given in any report of LOD

or LOG as well.


Laser Spectroscopic Methods of Analysis

Lasers have become a powerful and versatile tool in the arsenal of the

physical and analytical spectroscopist. Lasers possess a number of unique and

valuable properties such as high degrees of directionality, intensity, coherence,

monochromaticity, and polarization; these qualities have opened up a realm of

experiments previously impossible using only conventional light sources. In

particular, in the field of ultra-trace analysis, the intensity and monochromaticity of

the laser allow for the unique blend of very high sensitivity and selectivity, especially

in the field of atomic spectroscopy.

The impact of lasers in spectroscopy can perhaps be appreciated by reviewing

briefly some of the possible results of the interaction of an electromagnetic field with

an atom or a molecule in the ground state, and some of the related methods which

use these processes as a basis for analysis. These processes are shown in fig. 6.

Interaction of the ground state of the analyte with radiation at a specific frequency

results in the production of the excited-state species A' at a rate that is proportional

to the spectral energy density of the radiation. Once in the excited state, some of the

processes which can be detected for analysis are the production of radiation, charged

species, or heat dissipation into the surrounding medium. Since the number density

of A' (n_A') is directly proportional to the ground-state population present before

irradiation, monitoring the events shown in the figure can give information related

to the concentration of analyte initially present in the ground-state level. The

processes and some related analytical techniques shown in the figure can be

described as follows:

1. Stimulated absorption of incident radiation: atomic or molecular absorption

spectroscopy, in which the attenuation of light irradiating the sample is monitored.


2. Stimulated emission from A': atomic and molecular stimulated emission

spectroscopy. Methods based on this process have not been widely exploited

for analysis, although stimulated emission has been used to monitor flame

species [28, 29].

3. Spontaneous emission from A': atomic and molecular fluorescence. Of

course, atomic emission spectroscopy is also based on this process, although

the excitation is not provided by a light source.

4. Collisional deactivation of A': photothermal spectroscopy, where the

collisional heating of the surrounding medium is monitored.

5. Radiation-induced ionization: photoionization spectroscopy, in which the

incident radiation produces the ion-electron pairs.

Figure 6. Excitation of analyte by interaction with electromagnetic radiation, and some of the processes that can occur as a result.

6. Collisionally induced ionization: optogalvanic spectroscopy, in which the

enhanced rate of production of analyte ion/electron pairs under irradiation is

monitored in an external voltage field.

For a given volume of irradiated analyte, lasers can result in spectral energy

densities that are roughly 4-10 orders of magnitude greater than most conventional

sources. In the processes listed above, using the laser as the light source has resulted

in a great enhancement in sensitivity and selectivity. Indeed, analytical techniques

based on processes 2 and 4-6 are not practical without using lasers. Note that

scattering processes and related techniques such as Raman spectroscopy, which have

also become important analytical methods since the use of laser sources, are not

listed in the figure.

Single Atom/Molecule Detection

Some of the analytical techniques outlined above are so sensitive and selective

that it is possible to detect analyte-specific events when only a few atoms or

molecules interact with the laser. Indeed, laser-based methods capable of detecting

single atoms or molecules were first reported over a decade ago [30-32]. Interpreting

these methods in terms of the concepts of detection limits, as reviewed in chapter 2,

has not proven to be straightforward. Laser-based techniques in which the sensitivity

is high enough (and the noise is low enough) that individual species can be detected

shall be called single-atom detection (SAD) methods.¹ Strict requirements for

techniques to be termed SAD will be outlined in chapter 5; all others which possess

detection limits of only a few atoms can be called near-SAD methods.

Past SAD Experiments: A Sampling of Applications and Techniques

Why be concerned with the ability to detect single atoms? Certainly it would

seem that the vast majority of analytical methods, even those based on laser-induced

processes for ultra-trace analysis, would fall far short of requiring the capability of

SAD. However, there are several good reasons to be aware of the particular

concepts which apply to (near-)SAD techniques; these are now briefly outlined.

First of all, there are applications in physics and chemistry, including analytical

chemistry, which do in fact require a capability of SAD. Even if the goal is not

quantitation of analyte, the capability of a laser-based method to detect single atoms

often must be evaluated, and many of the concepts of SAD theory apply.

Current ultra-trace laser spectroscopic methods are getting ever closer to the

SAD regime of analysis. The LODs reported in conventional bulk analysis often

translate into very few atoms in the laser during the measurement time. In addition,

for some of the methods, the selectivity is so great that it is possible to reduce the

noise to almost nonexistent levels; many of the concepts from SAD theory are

relevant in these cases.

¹The term "SAD" will be used even though the detected species may be atoms,
molecules, ions or radicals. The term "atom" in reference to SAD techniques will
apply to all such species as well.

Finally, the ultimate goal of an analytical method is the detection of single

atoms. The ability to quantitate the amount of analyte in a sample at the atomic

level certainly represents this ultimate goal in chemical analysis, and many unique

and interesting experiments in analysis and other fields would no doubt result from

such an ability.

Applications of SAD

This is by no means meant to be an exhaustive list of possible applications of

SAD. There have been reports in the literature of possible applications of SAD

techniques [33-36]; this list covers some of these as well as various others. The

applications listed here specifically call for the ability to detect the signal due to

single atoms.

Physical applications

1. It is possible to observe and measure the transport, diffusion or otherwise, of

individual atoms through the volume defined by the laser beam, as well as

other statistical mechanics applications in which the fluctuations in various

atomic/molecular processes must be monitored. The observation of the

gas-phase reactions of individual atoms or molecules [37] might be included

in this type of application.

2. The (mean) lifetimes of various excited states of individual atoms or

molecules can be measured and studied in various environments. This


information is usually only available as an ensemble average over many atoms.


3. The spectroscopic features of free atoms or ions [37, 38] can be studied; the

spectroscopy of molecules in various sites in a solid matrix can also be

investigated [39-42].

4. Sorting of individual species (atoms, ions, molecules) requires the ability to

"tag" and detect them with the laser. This is an example of Maxwell's sorting

demon [35, 43].

5. The detection of rare events, such as solar neutrinos, by their effect on

individual atoms [33, 34, 44].

6. The observation of the orientation of individual species in clusters. SAD has

been applied to clusters of ions in a laser-cooled ion trap [45].

Analytical applications

1. Detection of very low bulk concentration of analyte, where the limiting "noise"

on the signal is due to the statistical appearance of individual atoms in the probe volume.


2. Detection of very rare isotopes and application in various related fields such

as geochemistry and environmental analysis [43, 46] and cosmochemistry [35].

3. Application of SAD methods in surface analysis is necessary for even

relatively "high" bulk and surface concentrations since very few analyte atoms

will be analyzed at any time by combinations of sputtering methods with

laser-based detection [47].

4. Detection of analyte in difficult matrices. Simple dilution is often an effective

method of correction for matrix effects, but is limited by the sensitivity of the

analytical technique. With the development of methods capable of SAD in

large samples, the sample can be diluted to an arbitrarily high degree.

Techniques Capable of SAD

Techniques which have achieved (near-)SAD in the past can be broadly

categorized as using either destructive or nondestructive methods of detection. In

the former class, the atom is consumed during the detection process, producing at

most one single "count"; in the second category, each atom can produce multiple

detectable events during its interaction with the laser. In this work, most attention

will be focused on two laser-based methods which have had the most success at

detecting single atoms: resonance ionization spectroscopy (RIS) and related methods,

and laser-induced fluorescence (LIF). These two methods can represent the two

classes of SAD methods: RIS is almost always destructive, while with LIF it is

frequently possible for each atom to emit many photons during its interaction with

the laser.

Resonance ionization spectroscopy

Resonance ionization spectroscopy involves the laser-assisted production and

subsequent detection of analyte ions. Production of ion-electron pairs using lasers

can approach 100% efficiencies. Since the manipulation of charged particles is

relatively easy and detection of these particles can also be highly efficient, it is not

surprising that there have been a number of (near-)SAD reports using RIS [31, 32,

43, 44, 48-54] as well as a number of review articles which cover various theoretical

and practical aspects of the technique [35, 36, 55, 56].

There are two basic ways in which the ion has been produced through

laser-induced processes; these are shown in figure 7. The method in fig. 7(a), direct

photoionization by the laser, has been developed at Oak Ridge National Laboratories

as outlined by Hurst and Payne [35]; the detection of the ion in a buffer gas has

usually been achieved by a proportional counter. The process shown in fig. 7(b)

involves excitation to a long-lived Rydberg level in a vacuum with subsequent field

ionization, and detection by a secondary electron multiplier (with or without a mass

spectrometer). The advantage of excitation to a Rydberg level is that less powerful

lasers are necessary for saturation, which may result in a more general technique and

less laser-induced background. The development of this method by Letokhov and

Bekov has been outlined in a recent book [36].

Laser-induced fluorescence

Techniques involving the detection of spontaneous emission of laser-excited

analyte atoms have been characterized by extremely high selectivity and sensitivity.

This high sensitivity has been accompanied by a number of reports claiming SAD

[30, 37, 38, 45, 57-69]. There are a large number of possible combinations of

excitation/detection schemes using LIF; figure 8 shows three of them which represent

the types of schemes which can be used.

Figure 7. Two possible ionization methods for RIS: (a) photoionization from an intermediate level; (b) field ionization from a Rydberg level.




Figure 8. Three representative excitation/detection schemes for LIF.

The technique of LIF can be either destructive or nondestructive, depending

on the particular analyte, the environment and the time-scale of the interaction with

the laser. The presence of a metastable level can act as an effective "trap" for atoms

during the measurement; for example, if a scheme similar to the one shown in fig.

8(b) is used when the middle level has a very long lifetime and the analysis takes

place in a vacuum, then LIF will be a destructive technique. Similarly,

photodestruction of molecules by the laser may occur and effectively limit the

emitted photons to a small number. On the other hand, when there is good

collisional and optical coupling between all the levels involved in the

excitation/detection scheme, then "cycling" of the atom back to the ground-state is

possible, where the atom can then further interact with the laser.


The value of the LOD calculated for a given analytical procedure can vary

due to a number of factors. In this chapter, these factors will be investigated, and

the variation in calculated LOD due to random fluctuations will be theoretically

evaluated. Before proceeding further, however, a brief introduction to estimation

theory is necessary to fully appreciate the concepts involved in calculating an LOD

for a given analytical procedure.

Estimation Theory

Variables are defined in terms of the population parameters of their

probability distribution, such as the mean, μ_x, and variance, σ_x², of the variable x.

However, the true values of these parameters are very rarely known exactly and must

be estimated from a sample of the population. Functions of the sample which

provide such estimates are known as sample statistics; common sample statistics

include the sample mean x̄ and variance s². The value of a single sample statistic

is also known as a point estimate of the corresponding population parameter.

The value of a point estimate depends upon the particular sample chosen;

thus, point estimators are themselves variables, with properties dependent on the size

of the sample chosen and the estimation function used. The theory of the behavior

of sample statistics as variable estimators of the population parameters is known as

estimation theory. Besides the two already given, some other common estimators

include least-squares estimators (seen in chapter 2) for the calibration parameters of

slope, a_0, and intercept, b. Desirable characteristics of sample statistics include the

property of being unbiased estimators, in which the mean of the point estimates is

equal to the population parameter, and the property of efficiency, which is indicated

by a low variance of the estimate. The standard deviation of the statistic is also

known as the standard error.
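The variable nature of point estimates is easy to demonstrate by simulation. The sketch below uses a hypothetical normal population (μ = 100, σ = 5) and draws repeated samples of size 10, showing that the sample mean is itself a variable whose standard error is σ/√n.

```python
import random
import statistics

random.seed(1)
mu, sigma, n, trials = 100.0, 5.0, 10, 5000   # hypothetical population and sampling plan

means, sds = [], []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.mean(sample))   # point estimate of mu
    sds.append(statistics.stdev(sample))    # point estimate of sigma

# The point estimates are themselves variables with a standard error:
se_mean = statistics.stdev(means)   # empirical standard error of the sample mean
theory = sigma / n ** 0.5           # theoretical standard error, sigma / sqrt(n)
```

With 5000 simulated samples, the empirical standard error of the mean agrees with σ/√n to within a few percent.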

If information is known about the standard deviation of the estimate, then it

is possible to construct an interval about the point estimate within which the true

value of the population parameter is likely to lie. Such an interval is known as the

confidence interval. The probability that the parameter lies outside the interval, α,

decreases as the size of the interval increases. Recall that confidence intervals

were briefly mentioned earlier in relation to estimating the true analytical response

from a calibration curve.

The Limit of Detection as a Population Parameter

The LOD was defined in equation 2.15 according to

LOD = k σ_b / a_0   [4.1]

where

k = the confidence factor,

σ_b = the standard deviation of the blank measurements, and

a_0 = the analytical sensitivity.

In light of the previous discussion on estimation, it can be easily understood that

since the LOD is defined in terms of population parameters, the LOD is itself a

parameter of the analytical system, which can be estimated according to

LOD = k s_b / a_0   [4.2]

where s_b and a_0 are the sample estimates of the blank standard deviation and the slope, and LOD here denotes the calculated estimate of the true LOD value.

Thus, regardless of how the confidence factor k is chosen in the above
equations, the value of LOD has a given variance associated with it. Since the true

value of the LOD is frequently of great importance as a performance characteristic

of a given analytical method, it is of interest to estimate this variance, σ²_LOD.
Estimating the variability of a given LOD would serve several purposes: (1)

confidence intervals within which the true LOD value would lie could be calculated;
(2) hypothesis tests with different LOD values can be made in order to determine if

there is a significant difference; and (3) the influence of various experimental

procedures on σ_LOD can be assessed. In many research laboratories, for example,

calibration curves may only be used to provide an estimate of the sensitivity, a_0, to

use in calculating the LOD. Time-saving methods of providing this sensitivity estimate

which do not adversely affect the quality of the LOD estimate (i.e., increase σ_LOD)

would be welcome.

Variability of LOD

To derive an equation for σ_LOD, a propagation of errors approach can be used

with eqn. 4.2:

σ_LOD = [ (k/a_0)² σ²(s_b) + (k σ_b/a_0²)² σ²(a_0) ]^(1/2)   [4.3]

To solve this equation, it is necessary to estimate the variance of s_b. The variance

of this estimate can be related to the variance of the estimator for σ_b², again by

propagation of errors:

σ²(s_b²) = (2 σ_b)² σ²(s_b)   [4.4]

The estimated variance, s_b², for n measurements is distributed as follows:

s_b² ~ χ²_(n-1) σ_b² / (n-1)   [4.5]

where the χ²_(n-1) distribution has a mean of (n-1) and a variance of 2(n-1) [70]. From

this, we can deduce

σ²(s_b²) = 2 σ_b⁴ / (n-1)   [4.6]


Now we can estimate the variance of the background standard deviation as

s²(s_b) = s_b² / (2(n-1))   [4.7]

Substituting this into eqn. 4.3 gives the following estimate for σ_LOD:

s_LOD = (k s_b / a_0) [ 1/(2(n-1)) + s_a²/a_0² ]^(1/2)   [4.8]


where

s_a = the estimate of the standard error of the slope (sensitivity), and

s_LOD = the estimate of the standard error of the LOD.

Rearrangement allows for two useful forms of the above equation:

(CV_LOD)² = 1/(2(n-1)) + (CV_a)²   [4.9]

s_LOD = LOD [ 1/(2(n-1)) + (CV_a)² ]^(1/2)   [4.10]


where

CV_x = the coefficient of variation of the variable x (CV_x = s_x / x̄), and

n = the number of blank measurements used to calculate s_b.

Both of the above equations are valid no matter how the confidence coefficient is

chosen in calculating the LOD by eqn. 4.2.
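Applying eqns. 4.2, 4.9 and 4.10 is straightforward once s_b, n, the slope estimate, and its standard error are in hand. The sketch below uses hypothetical values (20 blank measurements and a 5% relative standard error on the slope).

```python
def lod_standard_error(k, sb, n, a0, sa):
    """Estimate of the LOD and its standard error (eqns. 4.2, 4.9, 4.10).
    k  : confidence factor
    sb : sample standard deviation of n blank measurements
    a0 : least-squares estimate of the slope (sensitivity)
    sa : standard error of the slope estimate"""
    lod = k * sb / a0                            # eqn 4.2
    cv_a = sa / a0                               # coefficient of variation of the slope
    cv_lod_sq = 1.0 / (2 * (n - 1)) + cv_a ** 2  # eqn 4.9
    s_lod = lod * cv_lod_sq ** 0.5               # eqn 4.10
    return lod, s_lod

# Hypothetical values: 20 blanks, 5% relative uncertainty on the slope
lod, s_lod = lod_standard_error(k=3, sb=2.0, n=20, a0=50.0, sa=2.5)
```

With these numbers the blank term 1/(2(n-1)) = 1/38 dominates the slope term (CV_a)² = 0.0025, illustrating the common case discussed in the next section.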

Confidence Limits and Comparing Values of LOD

The validity of the propagation of errors approach used to derive these equations

depends on the following two conditions: (1) the errors in the slope and background

variances must be independent; and (2) the term CV_a must be small (below about

0.10). Effects of violation of the first condition (i.e., including blank values in the

calibration curve) are probably only slight when a reasonable number of points are

included in the calibration set. The second restriction arises due to the non-linear

relationship between the LOD and the slope; for high coefficients of variation of the

latter, the propagation of errors approach breaks down [71]; the effects of high CV_a

values will be investigated in chapter 7.

Keeping the above two restrictions in mind, the utility of eqns. 4.9 and 4.10

is as follows. Equation 4.10 gives an indication of the expected fluctuation in the LOD

estimate due only to the variable nature of point estimates; this value can be used

to construct confidence intervals within which the true LOD is likely to lie.

Examination of eqn. 4.9 shows how the variability of LOD can be divided between

the uncertainties in the estimates s_b and s_a. For a given set of conditions used to

calculate LOD, eqn. 4.9 can be used to determine the relative contribution of the two
error sources. The method used to determine the confidence interval about LOD

depends on the relative magnitude of the two terms. If the first term dominates,

then most of the variation in the estimate is due to the uncertainty in estimating σ_b.

This situation is probably the most common in analytical chemistry. In these cases,
the χ² distribution can be used to construct confidence intervals on the LOD value

in the same manner as for the sample standard deviation [72]; Kaiser has

demonstrated similar calculations [3]. For example, for a two-sided 95% interval and

20 measurements of the blank,

0.76 (LOD) ≤ true LOD ≤ 1.46 (LOD)   [4.11]

Confidence intervals for different numbers of blank measurements can easily be
constructed by using the appropriate degrees of freedom. For comparison of LOD

values obtained under similar conditions, F-tests can be used [72].
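The factors in eqn. 4.11 can be checked by simulation without χ² tables. The sketch below uses hypothetical values of σ_b, a_0 and k, and assumes the slope is known exactly so that the first term of eqn. 4.9 dominates; it repeatedly estimates the LOD from 20 blank measurements and counts how often the interval brackets the true LOD.

```python
import random
import statistics

random.seed(2)
sigma_b, a0, k, n, trials = 2.0, 50.0, 3, 20, 4000   # hypothetical parameters
true_lod = k * sigma_b / a0   # the population parameter being estimated

covered = 0
for _ in range(trials):
    blanks = [random.gauss(0.0, sigma_b) for _ in range(n)]
    lod_hat = k * statistics.stdev(blanks) / a0   # eqn. 4.2 with the slope known
    # Two-sided 95% interval of eqn. 4.11 (20 blank measurements)
    if 0.76 * lod_hat <= true_lod <= 1.46 * lod_hat:
        covered += 1

coverage = covered / trials   # should come out close to 0.95
```

The empirical coverage lands near 95%, confirming that the interval factors follow from the χ² distribution of s_b² with 19 degrees of freedom.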

As the second term in eqn. 4.9 becomes more important, the distribution of
LOD will come to approximate a normal distribution; in these cases, standard
t-tables can be used to construct confidence intervals of the type LOD ± k s_LOD, and

to compare LOD estimates with the t-test. In either case, eqn. 4.10 is valid and gives
a good idea of the variability of a given value of LOD due to random fluctuation of

the sample statistics.


Inspection of eqns. 4.1 and 4.2 leads to the conclusion that there are two

sources of variation in calculated values of LOD: the first source is due to actual

changes in the population parameters σ_b and a_0 of the analytical technique -- a shift in

alignment, slightly increased background noise, the presence of interference,

improved analytical methodology -- and the second source is due to the variability

of the point estimates used in eqn. 4.2. The source and magnitude of the latter

fluctuation in LOD can be seen by application of eqns. 4.9 and 4.10. One benefit of


viewing the calculated value of the LOD as an estimate of the true value of the LOD

is that these two sources of fluctuation of LOD can be separated; if the LOD

fluctuates by an amount much greater than the calculated value of s_LOD indicates,

then the probable source of the change in the observed LOD is an actual change in

the analytical parameters of the system. The performance and use of eqns. 4.9 and

4.10 will be further investigated in chapter 7.



Some of the practical aspects of detecting individual atoms or molecules with

lasers were outlined in chapter 3. As stated in that chapter, certain laser-based

methods have achieved high enough sensitivity so that single atom detection may be

possible. In the literature there are reports of methods which were able to detect

single atoms with very high S/N ratio [30, 37, 44]. However, most of the reports of

SAD have had a comparatively low S/N ratio for each atom; in these cases, there is

a question of whether or not single atoms can actually be detected above the

background (if any background is present). Evaluation of these methods often

involves the following questions:

1. Can individual atoms be detected?

2. Under what conditions is single atom detection possible?

3. If it is possible to detect single atoms, is it also possible to count the numbers

of atoms passing through the laser beam?

4. What are the characteristics of these (possible SAD) methods when utilized

in a more conventional sense, i.e., to measure the bulk concentration of analyte?

There are problems in applying the theory of detection limits summarized in

chapter 2 in attempting to answer these questions. In comparison with practical

aspects, theoretical considerations for (possible) SAD methods have received very

little attention in the literature. There have been a handful of papers which have

evaluated laser spectroscopic techniques with respect to the potential to detect atoms

[73-78]; these include some theoretical discussion of SAD methods. Recently, several

papers have attempted to deal with the problem of verifying that single molecules

were being detected by LIF as they flow through the laser [65-68]. However, the

work of Alkemade remains the only in-depth, systematic general theoretical

treatment of SAD to date; this work is presented in two classic papers [79, 80] and

has recently been reviewed and extended [81].

The purpose of this chapter is to answer the questions posed earlier, which

may arise for methods that produce a signal from individual atoms that might be

detectable above the background noise. The concepts from chapter 2 are applied in

a logical manner to SAD methods, and strict definitions and important figures of

merit for (near-)SAD methods are presented. Several other important factors in

evaluating possible SAD methods will also be discussed. The treatment of SAD in

this chapter owes much to Alkemade's original treatment of the subject; many of the

terms used are identical, although the meanings may have been modified. The intent

of this treatment is to generalize Alkemade's pioneering work and to present a

general theory of SAD in a form useful in the development of methods which may

achieve the goal of detecting and counting atoms. Most of the concepts presented


are verified and further illustrated in chapter 8 through the use of computer models

of SAD experiments.

Definition of an SAD Method

The definition of an SAD method is a generalization of Alkemade's four

criteria for true SAD [80] and is the following:

A method is an SAD method if each and every atom which interacts

with the laser can be detected above the background noise.

The above definition will be re-stated in a more rigorous form later in this

chapter; nevertheless, the general concept of an SAD method can be appreciated.

Implicit in the above statement is that we are concerned only with the atoms which

actually interact with the laser. Equally important is to notice the difference between

a method in which some (but not all!) individual atoms can be detected above the

background and an SAD method: it is not enough to simply have a certain likelihood

of detecting single atoms, but every atom which is probed by the laser must be detectable.


General Model of SAD Methods

The Poisson Process

Many of the processes involved in SAD methods will be assumed to be related

to the Poisson distribution. These include the number of atoms probed by the laser,

the detection probability of an atom, and the number of detected events per atom.

To fully understand the nature and limitations of the SAD model presented in this

chapter, a firm grasp of the properties of Poisson variables is necessary [82, 83].

Experiments which measure the (variable) number of discrete occurrences of

an event in a certain length of time, or in a given area of space, frequently deal with

variables possessing a Poisson distribution. For a Poisson variable, the probability

of X events occurring during a given fixed interval of time or space, t, is

given by:

P(X) = (φt)^X e^(-φt) / X!   [5.1]


where

φ = the flux of events per unit time or space, and

μ_X = σ_X² = φt.

A variable which is truly a Poisson variable (characteristic of a Poisson

process) possesses the following qualities:

1. The probability of an event occurring within the interval of time or space is

small compared to the probability that it can occur elsewhere.


2. The probability of an individual event occurring within the length of time or

space is independent of all other events which have occurred, either during

that length of time/space, or outside it.

The Poisson distribution is closely related to several other distributions,

including the uniform random distribution, and the Gamma and exponential

distributions. This relationship can be understood by considering a Poisson variable

in time. When a given event can occur randomly in time (i.e., the probability

distribution of the time of occurrence is a uniform distribution), then the number of

events during a given time interval follows a Poisson distribution. This is the basis

of a typical Poisson process. For such a process, the probability distribution of the

time interval, t, between events is given by an exponential distribution

P(t) = φ e^(-φt)   [5.2]

where the meaning of φ is the same as in eqn. 5.1. The mean time interval between

events is given by φ⁻¹. The exponential distribution is a simplified form of the

Gamma distribution. The variable in the Gamma distribution is the amount of time,

t_v, for a specified number, v, of events to occur:

P(t_v) = φ (φt_v)^(v-1) e^(-φt_v) / Γ(v)   [5.3]

where Γ(v) = ∫ x^(v-1) e^(-x) dx.
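The connection between the exponential and Poisson distributions can be verified numerically. The sketch below assumes a hypothetical flux of φ = 4 events per unit time and unit-length counting windows; it generates a Poisson process from exponential inter-arrival times and checks that μ_X ≈ σ_X² ≈ φt while the mean gap is φ⁻¹.

```python
import random

random.seed(3)
phi, t_window, windows = 4.0, 1.0, 20000   # flux, window length, number of windows

counts, gaps = [], []
for _ in range(windows):
    t, n = 0.0, 0
    while True:
        gap = random.expovariate(phi)   # inter-arrival time, eqn. 5.2
        gaps.append(gap)                # every draw is an independent Exp(phi) variate
        if t + gap > t_window:
            break                       # next event falls outside the window
        t += gap
        n += 1
    counts.append(n)                    # events in the window: Poisson, eqn. 5.1

mean_count = sum(counts) / windows
var_count = sum((c - mean_count) ** 2 for c in counts) / windows
mean_gap = sum(gaps) / len(gaps)
# For a Poisson process: mean_count and var_count are both near phi * t_window = 4,
# and mean_gap is near 1 / phi = 0.25
```

The equality of the mean and variance of the counts is the defining signature of the Poisson distribution exploited later in the noise model.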

Typical SAD Experiment

The general form of a laser-based SAD method can be illustrated with the aid

of figure 9. In the figure, analyte atoms flow past a region of interaction with a laser

beam; atoms which interact with the laser produce a number of detectable events.

The method of detection can be either destructive (e.g., ion detection in an atomic

beam) or nondestructive (e.g., fluorescence detection of molecules in a flowing

stream). The number of events detected during a measurement time Tm are counted.

During this time, a certain volume of sample containing analyte, Va, flows past the

laser beam; for the analyte atoms within this volume there is a probability of entering

a region in which it is possible to interact with the laser beam and emit detectable

events. This volume is the probe volume, Vp, and is defined by the region of

intersection of the flowing stream with the laser beam which can be viewed by the

detector. A certain number Np of atoms which enter V, interact with the laser beam

during Tm; a given atom interacts with the laser beam for time ti. The atom's

interaction time is a function of various factors such as the magnitude of Tm, the type

of laser used (probed, continuous or modulated), and the atom's residence time, t,

within VP. The value of tr is usually a variable, with mean r, depending on such

factors as velocity, diffusion, and size and shape of V,. The atom's corresponding

interaction time ti may also be a variable, with mean ri, depending on the relative

magnitude of Tm and T, and the conditions of the experiment.

Figure 9. General form of a typical laser-based SAD experiment.

Detection efficiency: general definition

For the general layout of a typical SAD experiment as given above, it is

convenient to define the detection efficiency of atoms which enter the probe volume.

The detection efficiency, ε_d, is defined as the probability that any given atom, during

its interaction with the laser, produces a signal that can be distinguished from the

background which arises during the measurement time. Alkemade gives a similar

definition for the detection efficiency [80], but there are two important differences:

(1) Alkemade was only concerned with the case where there was no background

noise; hence, ε_d was simply the probability that an atom which interacted with the

laser produced at least one detectable event; (2) Alkemade's term only applied

during a single probing time (e.g., one single laser pulse in a pulsed experiment)

rather than during an atom's entire interaction time with the laser. One

characteristic which the two definitions have in common is that they are only

concerned with atoms which actually interact with the laser (i.e., atoms for which

t_i > 0).

Signal production

This section will introduce terms relating to the signal detected during a single

measurement; assumptions which will be made regarding the probability distribution

of these signals in the SAD model will be discussed in the next section. During the

measurement time Tm, there may be a certain (variable) number of background

counts, I_b, with a mean given by

Ī_b = φ_b T_m   [5.4]

where

φ_b = the mean flux of background noise (counts/s), and

Ī_b = the mean number of noise counts during T_m.

The total signal, I_t, recorded during T_m is due to the contributions of noise and

analyte signal. For nondestructive detection, this can be given as

I Ib + i, [5.5]


i = number of detected events due to each individual atom which flows

through Vp. This variable has a mean given by

t, r, [5.6]


0, = mean flux of signal from a given atom (count s'1 atom').

The mean total signal for a destructive technique is given by

',- +e N, [5.7]
I pb + ed N.

where Np is the number of atoms which were probed during Tm and Ed is the

detection efficiency, the probability that a given atom will be detected.

The physical meaning of the term φs depends on the method used. If a nondestructive method such as cyclic LIF is used, the term refers to the actual rate of detected events from single atoms. For destructive methods, however, the meaning is obviously different, since there can be no more than one detected event per atom. In this case, the term is actually the reciprocal of the mean detection time of the atom within the laser. In other words, for a given atom which interacts with the laser within Vp, the mean time until a detected event is produced is φs⁻¹. Theoretical expressions for φs for a number of cases for LIF and RIS can be found in the literature [78, 81, 84, 85].

Figure 9 has only shown one type of possible SAD experiment, specifically for

a case where atoms continuously flow through Vp. Figure 10 depicts several

alternative examples of SAD methods. As can be seen, various possible relative

magnitudes of Tm and tr may occur: e.g., when a continuous-flow atomizer is used with a pulsed dye laser, if the detected events are counted for each pulse, then it is frequently true that tr >> Tm, and the atoms are "frozen" during the measurement. This situation was termed the stationary case by Alkemade [80]. The nonstationary case occurs when the measurement time Tm is of the same magnitude as or larger than the typical value of tr. For example, in a heated cell, the atoms or molecules may be free to diffuse in and out of Vp during Tm; a case such as shown in fig. 9 with a continuous wave (CW) laser would be nonstationary.

Basic Assumptions for the SAD Model

Number of probed atoms. Np

In a solution or sample which contains analyte atoms, it is often reasonable

to assume that the analyte atoms are randomly distributed throughout the sample.

Such being the case, it can be assumed that the appearance of analyte atoms in Vp

Figure 10. Examples of possible SAD methods. Typical examples of nonstationary methods are methods (a) and (b); methods such as (c) and (d) are often under stationary conditions during Tm. (Graphic not reproduced; one panel is labeled "atomic beam".)

is a Poisson process, and the number Np is a Poisson variable governed by eqn. 5.1 with a mean which is dependent on Vp, Tm, and the analyte concentration. This assumption will almost always be valid in the absence of severe clustering effects or very small probe volumes (i.e., when analyte atoms cannot be treated as infinitely small); these cases violate the requirements of a Poisson process.

Number of detected events. It

The distribution of the background counts, Ib, will be assumed to follow a

Poisson distribution. In other words, the limiting noise will be background shot

noise. Even though this assumption is made for convenience (and applies in many

counting situations), it is quite possible for the background noise in an SAD experiment

to be flicker noise [69]; in such cases, the concepts in this and other chapters still

apply, but with some slight modification.

The detection process in SAD methods will also be assumed to be a Poisson

process. For nondestructive methods, such as cyclic LIF, this means that the number of counts per atom, is, detected during Tm is a Poisson variable in time, such as was described in an earlier section; all of the detected photons are randomly distributed during the atom's interaction time with the laser. Since Ib is also assumed to follow a Poisson distribution with a mean of μb = φbTm, then when a single atom passes through Vp during Tm, It has a Poisson probability distribution with a mean of μb + μs.


For a destructive detection method, such as RIS, the detection process is also a Poisson process, although in a more subtle way. A large number of atoms N, all interacting with the laser beam at one time, would result in a mean flux φs of detected ions; the detection times of these ions are assumed to be uniform random variables, and the time intervals between detections follow an exponential distribution such as in eqn. 5.2, with mean time between detections of φs⁻¹. If this assumption is

true, then the probability that at least one single ion will be detected by time ti can be found by integrating the exponential distribution from t = 0 to t = ti:

P(0 ≤ t ≤ ti) = 1 − exp(−φs ti) [5.8]

Since this is assumed to be a Poisson process, the production and detection of ions are independent events; thus, the above equation applies even if there is only one atom irradiated by the laser. However, the meaning of the φs term has changed,

since a "flux" of ions is of course not possible with only one possible ion. As explained earlier, φs is now considered to be the reciprocal of the mean time to detection of the ion produced from the single atom. As with any Poisson process, the distribution of detection times is an exponential distribution. Note that eqn. 5.8

applies also for nondestructive detection, which is also assumed to be a Poisson

process; in this case, the variable "detection time" is the time before one event due

to a single atom is detected.

The mean signal due to Np atoms irradiated during Tm was given in equation 5.7 as εdNp. We now consider the distribution of the signal produced: the number of detected ions, Ni, when a fixed number of atoms, Np, are probed by the laser is given by the binomial distribution:

P(Ni) = [Np! / (Ni!(Np − Ni)!)] (εd)^Ni (1 − εd)^(Np − Ni) [5.9]

However, as explained above, Np is assumed to be a Poisson variable; hence, the probability distribution of Ni will also follow a Poisson distribution, with mean εdN̄p. Obviously, when only one atom crosses through Vp during Tm, the signal due to the analyte is either zero (not detected) or one (detected).
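This "thinning" property, that binomial detection applied to a Poisson number of atoms again yields a Poisson variable with mean εdN̄p, can be checked numerically. The sketch below is not part of the original analysis; the parameter values and function names are mine, chosen only for illustration.

```python
import math
import random

random.seed(2)

def sample_poisson(mean: float) -> int:
    """Sample a Poisson variable (Knuth's product method, fine for small means)."""
    limit, k, prod = math.exp(-mean), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

EPS_D, MEAN_NP, TRIALS = 0.6, 3.0, 200_000

detected = []
for _ in range(TRIALS):
    n_p = sample_poisson(MEAN_NP)                            # atoms probed in one Tm
    n_i = sum(random.random() < EPS_D for _ in range(n_p))   # each detected w.p. eps_d
    detected.append(n_i)

mean = sum(detected) / TRIALS
var = sum((x - mean) ** 2 for x in detected) / TRIALS
print(mean, var)   # both should be near EPS_D * MEAN_NP = 1.8
```

Equality of the sample mean and variance is the expected signature of a Poisson distribution.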

Variability of It

As described in the previous section, the detection process for the signal

produced by atoms in the laser beam is considered to be a Poisson process. In this

section, the variability in the total signal, represented by the value of the total variance, σt², will be investigated for cyclic LIF.

Equation 5.5 gives the total signal due to Np atoms interacting with the laser beam during a measurement time Tm. Since the background is assumed to possess a Poisson probability distribution, and the number of detected photons from each individual atom also follows a Poisson distribution with a mean given by eqn. 5.6, it would seem that It should also be a Poisson variable with mean and variance given by

Īt = σt² = μb + Np φs ti [5.10]

However, eqn. 5.10 is only valid when both φs and ti are constant for every atom which can interact with the laser. Although this can be true, depending on the experimental conditions, it can easily be the case that both φs and ti are variables, with means φ̄s and τi, respectively. The values of φs and ti for a particular atom may depend on the path of that atom through Vp. For example, φs depends upon the optical collection efficiency, and this may not be constant over the entire probe volume; another source of variation in φs with path arises if the transition of the atom is not saturated and the laser intensity is not constant throughout Vp. The interaction time, ti, of an atom will not be constant if diffusion effects play a significant role during Tm, or if the shape of Vp is such that (for example) an atom travelling down the center of Vp will have a larger interaction time than one which skirts the edge.

Thus, although the number of detected photons from a given atom which interacts with the laser will follow a Poisson distribution with a mean given by eqn. 5.6, the overall distribution of photoelectrons due to a single atom, is, will have a mean given by

μs = φ̄s τi [5.11]

and is will not follow a Poisson distribution. The mean of the total signal can thus be written for the general LIF case:

Īt = μb + Np φ̄s τi [5.12]
The effect of having either φs or ti variable is to increase the variance of It. The variance of It (for a fixed value of Np) can be partitioned between the variance due to the Poisson detection process in both the background and signal counts ("shot noise") and the "extra" variance due to any variability in φs and ti:

σt² = Īt + Np σ²(φs ti) [5.13]

where the second term in the equation is due to the "extra" variance. Of course, when φs and ti can both be assumed to be reasonably constant, this term approaches zero and It will be approximately Poisson.

Note that this section only discussed the case of LIF. Of course, similar

considerations are involved in any technique based on destructive detection. The

case of variable interaction time and LIF detection will be addressed in chapter 8.

Signal Detection Limit for the SAD Model

With the SAD model and definition as given above, the application of

detection limit theory is as follows. For a given measurement time Tm, the

distribution of Ib is Poisson with mean and variance μb. From eqn. 2.3, the signal detection limit Xd is set according to a pre-defined tolerance (denoted by α, the probability of type I error) for false positives:

P(Ib ≥ Xd) ≤ α [5.14]

A value for α must be chosen before the experiment; the value of Xd is set so that the observed probability of one or more false counts during Tm (due to background noise) is at or below this level. Note that, since Ib is a discrete variable, the value of α will not be uniformly decreased by increasing the value of Xd. If Ib is a Poisson variable, then estimating the mean μb during Tm and applying eqn. 5.1 will allow Xd to be set through the use of tables of the Poisson probability distribution [86].
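In place of printed Poisson tables, the tail probability in eqn. 5.14 can be summed directly. The following sketch (the function names are mine, not from the text) finds the smallest Xd satisfying the false-positive tolerance for a given μb and α; the values it returns agree with the Table 1 entries later in this chapter.

```python
import math

def poisson_pmf(k: int, mean: float) -> float:
    """P(X = k) for a Poisson variable with the given (nonzero) mean."""
    return math.exp(k * math.log(mean) - mean - math.lgamma(k + 1))

def signal_detection_limit(mu_b: float, alpha: float) -> int:
    """Smallest Xd (counts) with P(Ib >= Xd) <= alpha, per eqn. 5.14."""
    x_d, tail = 0, 1.0            # tail holds P(Ib >= x_d)
    while tail > alpha:
        tail -= poisson_pmf(x_d, mu_b)
        x_d += 1
    return x_d

# Mean blank of 1 count during Tm at alpha = 0.0014 gives Xd = 6 counts.
print(signal_detection_limit(1.0, 0.0014))
```

Because Ib is discrete, tightening α only changes Xd in integer jumps, which is the non-uniform behavior noted above.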

If it is found that, for given values of Tm and a,

P(Ib > 0) < α [5.15]

then there is essentially no background noise during Tm; this is called the intrinsic-noise limit, since the only noise on the analyte signal is due to the variance of the signal itself. At the intrinsic limit, a single count (or more) indicates the presence of analyte; the value of Xd is one count.

Detection Efficiency of a near-SAD Method

The general definition of the detection efficiency, εd, has been given previously as the probability that a given atom will result in a signal detectable above the background noise. Now we can state more clearly that the detection efficiency is defined such that, when a single atom interacts with the laser during Tm,

εd = P(It ≥ Xd) [5.16]

where Xd is chosen according to a pre-defined tolerance for false positives. At the intrinsic limit, the detection efficiency is limited by the noise inherent in the signal itself. Since we have assumed a Poisson detection process for both destructive and nondestructive detection, we can see from eqn. 5.8 that, for μb = 0, the probability that an atom entering Vp will be detected is given by

εd = 1 − exp(−φs ti) [5.17]


This equation is equivalent to the one used by Alkemade [80], who was only

concerned with the intrinsic-limited case. When noise is indeed present, however,

eqn. 5.16 must be used.
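Under the assumptions made here (Poisson signal plus background, constant φs and ti), eqn. 5.16 can be evaluated directly. A minimal sketch follows (the function name is mine); at the intrinsic limit (μb = 0, Xd = 1) it reduces to eqn. 5.17 and reproduces the value εd = 0.632 for μs = 1 count/atom quoted later in this chapter.

```python
import math

def detection_efficiency(mu_s: float, mu_b: float, x_d: int) -> float:
    """eps_d = P(It >= Xd), with It Poisson with mean mu_b + mu_s (eqn. 5.16)."""
    mean = mu_b + mu_s
    cdf = sum(math.exp(k * math.log(mean) - mean - math.lgamma(k + 1))
              for k in range(x_d))
    return 1.0 - cdf

print(detection_efficiency(1.0, 0.0, 1))   # intrinsic limit, 1 count/atom: ~0.632
print(detection_efficiency(6.6, 0.0, 1))   # 6.6 counts/atom: eps_d >= 0.9986
```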

Requirements for SAD

General Requirement

The general definition of an SAD method, given earlier, is that the method detects each and every atom which interacts with the laser with near-certainty. This requirement is now given in a more succinct form: an SAD method is a method in which

εd ≥ 1 − β [5.18]

where the detection efficiency is defined in eqn. 5.16 and β is the probability of type II error (false negative) as described in chapter 2. The highest allowable value of β must be decided prior to the evaluation of the (possible) SAD method.

The application of the above general requirement for SAD is different in the

cases of RIS and cyclic LIF. The difference between the two as SAD methods can

best be understood by studying figure 11 for fixed Np and equivalent detection efficiencies. The intrinsic-limited case is shown in the figure, and εd can be calculated by using eqn. 5.17. It is assumed that φs and ti are constant for all Np atoms in both cases (such a situation is reasonable for an atomic beam experiment with a pulsed dye laser and a small Vp).








Achieving SAD with RIS (Destructive Detection)

With a destructive method such as RIS, a single atom can give rise to at most one count. The detection efficiency in this case is simply the binomial probability of "success" (eqn. 5.9) -- for RIS, the probability that the given atom will be ionized during its interaction time ti. This probability is given in eqn. 5.8 for a Poisson process. However, the only way in which the requirement for SAD as set forth in eqn. 5.18 will be met is if Xd = 1; i.e., the intrinsic-limited case. Thus, for true SAD using RIS (or any other destructive technique) the following two conditions must hold (from eqns. 5.15, 5.17 and 5.18):

P(Ib > 0) < α
1 − exp(−φs ti) ≥ 1 − β

Note that the second condition assumes constant φs and ti. The effect of variable values of these parameters on the overall detection efficiency must be taken into account if necessary.

Achieving SAD with LIF (Nondestructive Detection)

Guaranteed detection limit (Xg). Recall that the concept of a guaranteed signal detection limit, Xg, was introduced in chapter 2. The value of Xg is helpful in clarifying the requirements for a nondestructive technique to be a true SAD method. For a given value of Xd, the value Xg is defined so that the probability of a variable in a distribution with mean Xg being less than Xd is negligible (less than a desired probability β of type II error). If both the background and the signal are described by Poisson probability distributions, it is easy enough to assign values to Xd and Xg for any value of μb by using tables of Poisson values [86]. Table 1 shows these values for a number of cases and different values of α and β. The procedure in determining these signal detection limits is as follows: from the value of μb, Xd is chosen so that P(Ib ≥ Xd) ≈ α. From this value of Xd, a Poisson distribution is found such that P(X ≥ Xd) ≈ 1 − β. The mean of this distribution is Xg.
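The table-lookup procedure just described is easy to automate: increase a trial Poisson mean until the tail above Xd reaches 1 − β. In the sketch below (function names are mine), the search in steps of 0.1 returns Xg = 16 counts for Xd = 6 and β = 0.0014, and Xg = 6.6 counts for the intrinsic limit (Xd = 1), matching Table 1.

```python
import math

def poisson_tail(x_d: int, mean: float) -> float:
    """P(X >= x_d) for a Poisson variable with the given mean."""
    cdf = sum(math.exp(k * math.log(mean) - mean - math.lgamma(k + 1))
              for k in range(x_d))
    return 1.0 - cdf

def guaranteed_limit(x_d: int, beta: float, step: float = 0.1) -> float:
    """Smallest mean Xg (in increments of `step`) with P(X >= Xd) >= 1 - beta."""
    x_g = step
    while poisson_tail(x_d, x_g) < 1.0 - beta:
        x_g += step
    return x_g

print(guaranteed_limit(6, 0.0014))   # Xg for mu_b = 1 count, alpha = beta = 0.0014
print(guaranteed_limit(1, 0.0014))   # intrinsic-limit sensitivity requirement
```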

If it is assumed that both φs and ti are constant, then if Xg is found from Poisson tables as described, the requirement for SAD by LIF is

μs ≥ Xg − φb Tm [5.19]

Figure 12 shows a situation with μb = 1 count in which SAD is possible by LIF detection (α = β = 0.0014). When μb = 1 count during Tm, Xd = 6 counts and Xg = 16 counts; thus, by eqn. 5.19, SAD is possible with μs ≥ 15 counts/atom. Note from table 1 that even in the intrinsic-limited case (μb = 0), a value of μs = 6.6 counts/atom is needed for SAD by LIF (at the 99.86% confidence level).

Summary: Requirements for SAD

The basic requirement for SAD by any laser-based method is given by eqn.

5.18. The practical consequences of this requirement for both destructive and

nondestructive cases have also been discussed in this section. True SAD is possible

with destructive detection only at the intrinsic-limit; however, SAD is possible by

nondestructive means even in the presence of noise if the sensitivity is high enough.

Table 1
The Two Limits for an SAD Experiment

Mean blank level (μb)   Detection limit (Xd)         Guaranteed limit (Xg)
                        α ≤ 0.0014 (α ≤ 0.05)        β = 0.0014 (β = 0.05)

  0.00                  1    (1)                     6.6   (3.3)
  0.05                  2    (1)                     8.9   (3)
  0.25                  4    (2)                     12.6  (4.7)
  1.00                  6    (4)                     16    (7.8)
  5.00                  14   (10)                    28    (16)
 10.00                  22   (16)                    39    (23)
100.00                  132 [130]a (117 [117]a)      [169]a ([136]a)

The limits are given in the signal domain, with all the signals given as counts.
aThe values in the square brackets were found by assuming a Gaussian distribution with the appropriate k values. At these higher signal levels, the Poisson distribution can be approximated by a Gaussian with σ² = μ.












[Figure 12 graphic; vertical axis: probability of occurrence]

It should be emphasized that it is possible for a given technique to detect individual atoms and still not be a true SAD method; for example, with μb = 0 the requirement for SAD (at the 99.86% confidence level) with LIF is 6.6 counts/atom (assuming φs and ti constant); however, if μs = 1 count/atom, then individual atoms would still be detected quite often (εd = 0.632). This is an example of a near-SAD method, where detection of an atom in the laser beam during Tm is possible (and perhaps likely) but not certain.

Detection Efficiency as a FOM. Notice that even though the detection limit theory of chapter 2 was applied to the SAD case in the signal domain, it is somewhat difficult to speak of near-SAD and SAD methods in terms of LOD.¹ Intuitively, it is not possible to have LOD < 1 atom, or a non-integral number of atoms. This would seem to indicate that LOD (and LOG) is not an ideal FOM for the evaluation and comparison of (near-)SAD methods, as it is for more conventional methods. Another possible FOM is the S/N for one (or more) atoms, where S/N is the ratio of the mean signal due to one atom to the background noise. A more informative FOM than S/N for near-SAD methods is the detection efficiency, εd. This parameter contains information about the magnitudes of both the mean signal and the background noise, as well as the noise on the signal due to single atoms. Improvements in near-SAD methods should be evaluated by improvements in the value of εd rather than by an increase in S/N (which may come at a cost to εd if there is an increased variance in the signal due to one atom). Once SAD has been achieved (εd ≈ 1), then improvements in the SAD method would best be described in terms of increases in S/N due to single atoms.

¹We are concerned here with LOD in terms of numbers of atoms in the laser beam, not bulk concentration (this aspect will be discussed later).

Precision of Counting Atoms

Thus far in this chapter we have treated the likelihood of detecting single atoms. However, the question remains as to whether it is possible to precisely count the number of atoms which pass through the laser if Np > 1 atom. Recall that in chapter 2, a FOM called the limit of quantitation (LOQ) was introduced which specified an analyte concentration above which it was possible to determine an unknown sample with a pre-defined degree of precision (usually with RSD = 0.10, following the suggestion of Kaiser [3]). For an SAD method, it is possible to define a similar FOM in terms of atoms which pass through the laser beam during Tm; in this section, a related FOM shall be investigated: the minimum sensitivity, (μs)c, necessary to count atoms with a pre-defined precision at all levels of Np.

Precision of Signal

In the measurement of a fixed Np in the SAD model,² the signal distributions in the LIF and RIS experiments are given by the Poisson and binomial distributions,

²It is assumed that φs and ti are constant throughout the section on precision of counting atoms.

and by substituting the appropriate values for σ from these distributions, the following equations are obtained for the intrinsic-limited case:

RIS: RSDm = √[(1 − εd)/(εd Np)] [5.20(a)]

LIF: RSDm = √[1/(μs Np)] [5.20(b)]

From eqn. 5.20(a), it can be calculated that for a precision of 10% or better with RIS it is necessary that εd ≥ 0.99. Since the RSD improves as Np increases for RIS, an RIS method with this detection efficiency or better is capable of counting atoms for any value of Np.

The situation for LIF is different, however, since many events can be detected from a single atom. Substituting Np = 1 atom in eqn. 5.20(b) results in a requirement of 100 photoelectrons/atom for RSDm = 0.10 with Np = 1. Thus, it would seem that for precise counting of atoms with LIF, a sensitivity of at least 100 photoelectrons/atom during ti is required at the intrinsic limit. This is a far more stringent requirement than the 6.6 photoelectrons/atom which are necessary for SAD (at the 99.86% level).
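Both forms of eqn. 5.20 are simple enough to verify numerically. The sketch below (function names are mine) reproduces the two figures just quoted: εd = 0.99 gives roughly 10% RSD for RIS at Np = 1, and 100 photoelectrons/atom gives RSDm = 0.10 for LIF at Np = 1.

```python
import math

def rsd_ris(eps_d: float, n_p: int) -> float:
    """Eqn. 5.20(a): relative standard deviation, binomial (destructive) signal."""
    return math.sqrt((1.0 - eps_d) / (eps_d * n_p))

def rsd_lif(mu_s: float, n_p: int) -> float:
    """Eqn. 5.20(b): relative standard deviation, Poisson (nondestructive) signal."""
    return math.sqrt(1.0 / (mu_s * n_p))

print(rsd_ris(0.99, 1))    # close to 0.10
print(rsd_lif(100.0, 1))   # 0.10
print(rsd_ris(0.99, 100))  # RSD improves as Np grows for RIS
```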

There is a problem, however, when the RSD is calculated using the above formula, which stems from the difference between the precision of signal measurement, RSDm, and the precision of counting atoms, RSDc, when using a nondestructive SAD method. This problem does not arise with RIS since, when SAD is possible, it is essentially true that every atom gives rise to a single count; thus, the signal exactly follows the number of atoms.

Consider the situation for LIF with 100 photoelectrons/atom, shown in figure 13. This figure illustrates the difference between the signal precision and the counting precision. Although, by eqn. 5.20(b), this situation corresponds to RSDm = 0.1 for Np = 1 atom, it is obvious that there is very little possibility of incorrectly counting atoms when Np = 1, since there is almost no overlap between the distributions. Obviously, the value of RSDm is not a reflection of the counting precision of the LIF method.

Counting Precision

This section is concerned with nondestructive detection only, since the requirements for precise counting by RIS are essentially the same as the requirements for SAD. For a nondestructive technique, the value of Np is estimated according to the following equation:

Nm = RND[(It − μb)/μs] [5.21]

where the RND function rounds the expression in the brackets to the nearest whole number, and the integer Nm is the number of measured atoms (i.e., the estimate for Np). We can define the counting precision, RSDc, as

RSDc = σ(Nm)/Np [5.22]




where σ(Nm) is the standard deviation in the number of measured atoms with fixed Np.

The difference between RSDc and RSDm can be seen in figure 14, which shows the signal probability distribution with μb = 0 counts, μs = 20 counts/atom and Np = 5 atoms. The top axis displays the values of Nm which would be calculated at a given signal value using eqn. 5.21. The dashed lines in the figure show the portions of the probability distribution which would result in values of 4, 5, or 6 atoms for Nm. Notice that Nm is an integer; thus, a signal of 75 photoelectrons, for example, would result in a value of Nm = 4 atoms (and not 3.75 atoms!) as the best estimate for the true value of Np.

Calculation of RSDc

In order to investigate the counting precision of an SAD method, and to calculate a value of (μs)c for a given background, the theoretical value of RSDc for various fixed Np must be determined. The theoretical value of RSDc is not as easily calculated as that of RSDm in eqn. 5.20. To do so, an expression for σ(Nm) in eqn. 5.22 must be found for given conditions of Np, sensitivity, and noise. From the definition of the variance of a discrete variable [87], it is known that

σ²(Nm) = Σ (Nm − N̄m)² P(Nm) [5.23]

where P(Nm) is the probability distribution of Nm. This probability distribution can be written as



P(Nm) = Σ P(It), summed from It = Xl to Xu [5.24]

where the summation limits, Xl and Xu, are found as follows.

For Nm = 0:
Xl = 0
Xu = Xd − 1

For Nm = 1:
Xl = Xd
Xu = INT[(Nm + 0.5)μs + μb]

For Nm > 1:
Xl = INT[(Nm − 0.5)μs + μb] + 1
Xu = INT[(Nm + 0.5)μs + μb]

In all cases the INT function represents the integral part of the expression in the brackets.

The above expressions are tedious to solve manually; a computer program can be written to evaluate these expressions for the SAD model for various values of Np, μb, and φs to determine (μs)c if the distribution of It is known. The results, and a comparison of the theoretical value with the observed value of RSDc from computer simulations, are presented in chapter 8. For the intrinsic limit the above equations give a value of (μs)c = 35 counts/atom for RSDc ≤ 0.1 for all values of Np.
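A sketch of such a program is given below; it is my own minimal implementation, not the one used for chapter 8. It builds P(Nm) from eqn. 5.24 using the summation limits above (taking It as Poisson with mean μb + Npμs) and then applies eqns. 5.22 and 5.23; at the intrinsic limit it confirms that μs = 35 counts/atom holds RSDc near 0.1, while the 6.6 counts/atom needed for SAD gives much poorer counting precision.

```python
import math

def poisson_pmf(k: int, mean: float) -> float:
    return math.exp(k * math.log(mean) - mean - math.lgamma(k + 1))

def counting_rsd(mu_s: float, n_p: int, mu_b: float = 0.0, x_d: int = 1) -> float:
    """RSDc from eqns. 5.22-5.24; intrinsic limit (mu_b = 0, Xd = 1) by default."""
    mean_it = mu_b + n_p * mu_s
    probs = {}
    for n_m in range(3 * n_p + 20):          # P(Nm) is negligible beyond this
        if n_m == 0:
            lo, hi = 0, x_d - 1
        elif n_m == 1:
            lo, hi = x_d, int(1.5 * mu_s + mu_b)
        else:
            lo = int((n_m - 0.5) * mu_s + mu_b) + 1
            hi = int((n_m + 0.5) * mu_s + mu_b)
        probs[n_m] = sum(poisson_pmf(k, mean_it) for k in range(lo, hi + 1))
    mean_nm = sum(n * p for n, p in probs.items())
    var_nm = sum((n - mean_nm) ** 2 * p for n, p in probs.items())
    return math.sqrt(var_nm) / n_p

print(counting_rsd(35.0, 3))   # near 0.1
print(counting_rsd(6.6, 3))    # well above 0.1: SAD sensitivity is not enough
```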

In any practical situation, the value of RSDc would be almost impossible to determine, since Np would not be fixed but would be a variable that would change for


different measurements (according to a Poisson distribution for the model presented

in this chapter). Nevertheless, the purpose behind this section is to illustrate that for

a nondestructive technique such as LIF, the requirement of SAD is not sufficient to

count atoms during Tm to an arbitrary precision (unlike the case with a destructive

SAD method). There is an additional increase in sensitivity required before this is

possible by LIF.

Scope of an SAD Method

Effect of the Measurement Time on an SAD Method

The requirements for an SAD method have been presented; as an illustration

of what an "SAD method" means in a more conventional analytical situation, let us

assume that somehow the flow of atoms can be directed so that every atom in a

given sample interacts with the laser. If the technique is capable of SAD, does this

mean that the method possesses infinite detection capabilities? In other words, can any concentration of analyte in the sample be detected?

The general SAD model presented the requirements for an SAD method for

a given measurement time, Tm. The meaning and limitation of this requirement

should be very clear: a method which is truly capable of SAD is capable of detecting

above the background noise every atom which crosses the laser during Tm. Thus, in

partial answer to the above questions, it is not possible to simply state that a given

technique is an "SAD method" without stating the conditions under which this is

possible -- most particularly the measurement time, Tm. A simple numerical example with SAD by pulsed LIF will illustrate this point and serve to answer the above questions.
Numerical example: LIF with pulsed lasers

Let us imagine the interaction of an atomic beam with a pulsed dye laser, where stationary conditions apply.³ During a single laser pulse it is found that μb = 0.25 counts; i.e., there is one "noise" count every four laser pulses on the average. During a single laser pulse (Tm is the pulse duration), we can say for the SAD model presented in this chapter that Xd = 3 (α = 0.00216) and Xg = 10.8 (β = 0.00143). Thus, from eqn. 5.19, we see that if μs ≥ 10.55 counts/atom, then SAD is possible during a single laser pulse.

Thus, this is a true SAD technique. Now suppose that a given sample is

analyzed and will take 10,000 laser shots to completely flow past the laser. Again,

assume that every atom in the sample interacts with the laser for time ti. Is it possible to reliably detect a single atom in the sample with the technique just described?
The answer is no, because the scope of the above SAD method is only a single laser pulse. When Tm is changed to 10,000 laser shots, the requirements for SAD will change. If Xd = 3 counts is used as a criterion to distinguish the presence of analyte atoms from the background noise, then from the value of α for one laser

³Each atom interacts with the laser for one pulse only, and ti is fixed. It is assumed that φs is constant as well. It will be assumed that the number of detected events (i.e., photoelectrons above a discriminator level) can be unambiguously counted. In reality, this may not be so easy with typical pulsed dye laser experiments.


pulse, we see that during Tm = 10,000 shots there will be about 22 false positives on the average. Obviously it would be impossible to detect a single atom with this value for Xd.

With the new value of Tm, we must choose Xd high enough so that α is at the desired value. When μb = 0.25 counts/pulse and the background follows a Poisson distribution, it can be calculated that

P(Ib ≥ 7) = 9.734 × 10⁻⁹

so that for 10,000 shots with Xd = 7 counts, α = 0.0000973, an acceptable level. This value of Xd gives Xg = 18 (β = 0.001043); the requirement to detect a single atom in the sample is that μs ≥ 17.75 counts/atom, instead of 10.55 counts/atom.
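The numbers in this example can be checked with a few lines of code. The sketch below (assuming a Poisson background; the function name is mine) computes the per-pulse false-positive probability for both choices of Xd and the expected number of false positives over 10,000 shots.

```python
import math

def poisson_tail(x_d: int, mean: float) -> float:
    """P(Ib >= x_d) for a Poisson background with the given mean."""
    cdf = sum(math.exp(k * math.log(mean) - mean - math.lgamma(k + 1))
              for k in range(x_d))
    return 1.0 - cdf

MU_B, SHOTS = 0.25, 10_000
for x_d in (3, 7):
    p = poisson_tail(x_d, MU_B)
    print(f"Xd = {x_d}: per-pulse alpha = {p:.3e}, "
          f"expected false positives in {SHOTS} shots = {SHOTS * p:.2g}")
```

Raising Xd from 3 to 7 drives the expected number of false counts over the whole measurement from roughly twenty down to far below one.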

This simple illustration shows that a given SAD method has a certain

"scope" -- i.e., a certain value of Tm over which the technique can detect a single

atom. Beyond this measurement time, the method can no longer detect single atoms

with the required values of a and B. If the sensitivity of the method is very high,

however, the scope of the SAD method (measurement time over which SAD is

possible) may be so long as to be practically infinite. In other words, Xd can be set

so high that it is extremely unlikely that Ib will ever exceed Xd during a single laser

pulse, no matter how many pulses are counted. If SAD is still possible with such an

Xd value, then the method is truly capable of detecting a single analyte atom in any

(reasonable) size sample.

Counting precision, RSDc. The value of RSDc and the requirement for precise counting of Np in each laser pulse depend only on the value of μb (and on the variability of It; however, we assume φs and ti are constant). Thus, increasing the length of Tm does not affect the minimum sensitivity necessary to precisely count the number of atoms present in each laser pulse.

Continuous Monitoring of Atoms

One of the most promising methods of achieving SAD is by using continuous

lasers with LIF detection (CW-LIF). This method has been used in the past to

detect single molecules [62, 65-68] and atoms [59, 60, 63] as they flow through the

laser. The evaluation of near-SAD methods based on CW-LIF is a task which must

be carefully approached. This section will discuss some possible signal-processing

methods which apply the general SAD theory discussed thus far. In chapter 8, some

of the methods and ideas discussed in this section will be demonstrated.

Signal Processing Methods

Simple integration over the measurement time. Tm

Most analyses based on atomic or molecular fluorescence of bulk analyte in

a sample solution, in the simplest case, will integrate the signal for the measurement

time and the sum (or its normalized analog, the average) will be the measurement

value. This situation was depicted in fig. 1 and fig. 2. Such an approach is of course

possible with a method based on CW-LIF. Application of the SAD theory when

using this "simple integration" signal processing method is straightforward: the

detection efficiency is measured for a given value of Xd (chosen for the pre-defined

tolerance to false positive detection) and eqn. 5.18 is used to determine if SAD is

possible during Tm.

The simple integration method is very inefficient when Tm >> τi, since the mean signal due to a single analyte atom will never exceed φsτi while the mean blank value increases linearly with Tm. In addition, there is no temporal information on the exact time the atom is in the laser beam; it is only known that the atom entered Vp sometime during Tm. Nevertheless, there may be certain situations in which the simplicity of the method has its advantages. In the analysis of discrete samples which flow quickly through the laser beam (e.g., in LIF of analyte atoms atomized in a furnace, or in flow injection analysis), SAD may be possible when Tm is chosen so that the entire sample is analyzed.

Far more efficient methods for the analysis of analyte solution flowing continuously through Vp are based on the use of a time "window" of length tw, applied repeatedly during Tm. Three such methods will be presented here. Their common feature is that only the number of photoelectron counts within the window tw is considered at any one time, and the duration of tw is chosen to be of the magnitude of τi. Thus, the S/N ratio due to single analyte atoms is increased.

Sequential Application of a Time Window (tw ≤ τi)

It may be that the sensitivity of the CW-LIF method is so high that it is still possible to detect the presence of a single atom even if tw ≈ τi/10. The S/N due to a single atom would decrease relative to a situation in which tw = τi; however, if SAD is still possible by eqn. 5.18, then this is the best method to use. Such situations have only rarely been reported in the literature [30, 37].

The application of this method is simply the use of sequential integration for time tw over the entire measurement time Tm. In choosing the value of Xd for tw, it should be remembered that Xd should be high enough so that the total number of false positives from the Tm/tw windows remains at the desired tolerance level. The disadvantage of this method is that higher sensitivity is required to achieve SAD than by either of the next two methods which use a time window. However, if this sensitivity is available, then the method is very simple to use and can count atoms in real time as they traverse the laser beam. For a CW-LIF system which is on the borderline of becoming a true SAD method and does not have high enough sensitivity for this method, one of the following two methods can be used.

Photon burst method

This method has been reported in the literature as a method of recording

spectra of single atoms in an atomic beam [61, 63]. Figure 15 demonstrates how the

"photon burst" method may be applied to CW-LIF. A single photoelectron count

triggers open the gate, which is pre-set to count the photoelectrons for the gate time

tw. The number of photoelectrons (including the trigger pulse) constitutes the "signal" during tw. The gate duration is set so that tw ≈ τr. The photon burst method can be implemented in real time so that bursts above a pre-set value (Xd) signal the presence of an analyte atom in the laser beam. Alternately, the number of counts from all the photon bursts can be stored in a computer and analyzed later; only the

from all the photon bursts can be stored in a computer and analyzed later; only the




bursts with a sum equal to or greater than Xd can be attributed to analyte atoms at the 1-α confidence level against false positives. All the normal requirements given previously for an SAD method apply. The advantage of the photon burst method over simple integration is that only the noise during tw can contribute to false positives, and consequently a lower value of Xd can be chosen, so the sensitivity necessary for SAD is not as high.

False positives during Tm. As explained previously, as Tm increases, it is

frequently necessary to increase Xd to keep the probability of a false positive during

Tm down to an acceptable value. For the "simple integration" and "sequential

window" methods above, it is relatively easy to determine the value of α for given values of Xd and Tm. The theoretically calculated value of α (assuming a Poisson

distribution of Ib) for the "photon burst" method is slightly more complicated. In the

presence of background noise only, the sum, S, of the number of photoelectrons in

a given photon "burst" follows a Poisson distribution based on the mean background

level such that,

P(S) = P(Ib = S - 1) = e^(-φb·tw) (φb·tw)^(S-1) / (S - 1)!        [5.25]

since S ≥ 1 count (the trigger pulse is always included). Thus, for a certain background level, the probability of a given

burst giving a false positive is

P(S ≥ Xd) = Σ_{Ib ≥ Xd - 1} P(Ib)        [5.26]

so that during the measurement time,

α = n̄b · P(S ≥ Xd)        [5.27]

where

n̄b = average number of bursts during Tm.

For a given value of φb (the mean background count rate) and Tm, the average number of bursts can be calculated according to

n̄b = φb·Tm / Σ_S S·P(S)        [5.28]

The dependence of α on Tm can readily be seen from eqns. 5.27 and 5.28. However, the denominator in eqn. 5.28 ensures that α increases very slowly (compared to the simple integration method) for a given Xd as Tm increases.
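Under the Poisson assumption, the burst-size distribution and the resulting false-positive rate (eqns. 5.25–5.27) can be evaluated numerically. A Python sketch, assuming a background count rate φb and a mean burst size of 1 + φb·tw (trigger pulse plus background counts); all parameter values are illustrative:

```python
import math

def burst_size_pmf(mean_bg, s):
    """Eqn 5.25: P(S = s) = P(Ib = s - 1) for s >= 1 (trigger pulse included)."""
    k = s - 1
    return math.exp(-mean_bg) * mean_bg ** k / math.factorial(k)

def burst_false_positive_prob(mean_bg, x_d, s_max=200):
    """Eqn 5.26: probability that a single burst sums to x_d counts or more."""
    return sum(burst_size_pmf(mean_bg, s) for s in range(x_d, s_max + 1))

def expected_false_positives(phi_b, t_w, T_m, x_d):
    """Eqn 5.27: alpha = (mean number of bursts) * P(S >= Xd), with the mean
    number of bursts taken as phi_b*T_m divided by the mean burst size."""
    mean_bg = phi_b * t_w
    n_bursts = phi_b * T_m / (1.0 + mean_bg)
    return n_bursts * burst_false_positive_prob(mean_bg, x_d)

alpha = expected_false_positives(phi_b=100.0, t_w=1e-3, T_m=10.0, x_d=4)
print(alpha)  # expected false positives during Tm for this threshold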

The above calculations of the theoretical value of α can aid in choosing a correct value for Xd in cases where the shot noise of the background is limiting. In most practical situations, however, this would need to be confirmed by performing

many blank measurements and observing the effect of different Xd values on the

number of false positives.

Sliding sum method

The "sliding sum" signal processing method is an intuitively obvious technique.

After the raw data are collected, consisting of the counts as a function of time during Tm, a data transformation is applied in which the sum of the counts over a width tw is assigned to the middle of the time window. The window is then moved one step and the process is repeated. For continuous monitoring of atoms with CW-LIF, the step size should be a fraction of the residence time of the atom within Vp; e.g., Δt ≈ τr/20. Smaller step sizes are of course better, but only up to the point where the signal processing step becomes too long. The optimum value for the duration of the moving sum is tw ≈ τr (and no longer than the largest possible residence time).
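The transformation itself is just a moving-window sum over the raw count array. A minimal Python sketch with a one-bin step (the count data are illustrative):

```python
def sliding_sums(counts, window_bins):
    """Sum `counts` over a window of `window_bins` bins, sliding the
    window one bin (one step) at a time across the raw data array."""
    sums = []
    for start in range(len(counts) - window_bins + 1):
        sums.append(sum(counts[start:start + window_bins]))
    return sums

# Raw counts per time bin; a single-atom "burst" sits in the middle.
raw = [0, 1, 0, 0, 3, 4, 5, 2, 0, 1, 0]
print(sliding_sums(raw, 4))  # → [1, 4, 7, 12, 14, 11, 8, 3]
```

The peak of the transformed array marks the atom's transit; smaller steps simply sample this peak more finely at the cost of more processing.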

Peak detection. The sliding sum peak maximum will occur when tw and the atom's transit through Vp exactly coincide. Thus, it would seem desirable to use the sliding sum peak maximum value as a signal of the presence or absence of the analyte atom. The distribution of peak maxima will follow the distribution of It; thus, application of the SAD theory from the previous section is straightforward. When the maximum exceeds Xd (chosen based on the values of α, φb, and Tm), then the sliding sum peak is presumed to be due to the presence of an atom in Vp. In a similar manner, peak maxima can be used to evaluate the detection efficiency and determine if the method is truly SAD.

A problem with using the peak value as an indicator of It will occur when the

analyte concentration is high enough so that there are problems with peak overlap;

i.e., there is more than one atom in Vp at a given time. Sliding sum peak detection

for counting Np during Tm is only practical when there is a very small probability that

such an overlap will occur.

False positives with sliding sum peak detection. Calculation of the theoretical value of α when using sliding sum peak detection is complicated; a good first approximation can be calculated with the use of the Gamma distribution (if a Poisson distribution of the background can be assumed) and if Xd is large enough that P(Ib > Xd) is negligible. The details will not be given here, but the most practical method of determining α for given values of Xd, φb, Tm, and tw is from the distribution of peak maxima from repeated blank measurements. Theoretically, the values of α from this method and the "photon burst" method are very similar for the same conditions. A

comparison of the number of false positives for these two methods will be shown in

chapter 8.

Peak area detection. The problem with overlapping peaks when using peak

heights of the sliding sums was mentioned above. One method of alleviating this

problem is to simply use the integrated peak area of sliding sum peaks as the "signal";

when the residence times of two atoms overlap, the resulting peak will be longer and

the area will be the sum of the contributions of the signal due to both atoms.

A single photoelectron count in the raw data array results in a contribution of tw/Δt counts to the resulting sliding sum peak area as the window is moved past, where Δt is the step size. Thus, a cluster of It counts (due to analyte atoms and background noise) will result in a peak of area (tw/Δt)·It. In general, the distribution of peak areas, Ia, will have the following characteristics:

Īa = (tw/Δt)·Īt    and    σa = (tw/Δt)·σt        [5.29]

where σa = standard deviation of peak area and σt = standard deviation of It.
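The area relation follows because, with a one-step slide, every raw count is included in tw/Δt successive windows. A quick numerical check of this property (all values are illustrative):

```python
def sliding_sums(counts, window_bins):
    """One-bin-step sliding sums over the raw count array."""
    return [sum(counts[i:i + window_bins]) for i in range(len(counts) - window_bins + 1)]

window_bins = 5                    # plays the role of t_w / delta_t (bins per window)
raw = [0] * 20
raw[8], raw[9], raw[10] = 2, 3, 1  # a cluster of I_t = 6 counts

area = sum(sliding_sums(raw, window_bins))
# Each count appears in window_bins windows, so area = (t_w/dt) * I_t = 5 * 6 = 30.
print(area)  # → 30
```

This is why peak area, unlike peak height, simply accumulates the contributions of overlapping atoms.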

The advantage of using peak area detection instead of peak heights is that,

with no loss of information or detection efficiency, higher concentrations of analyte

can be analyzed. The total peak area from overlapping atoms is a measure of the number of atoms which contributed to the peak. The theory of counting precision can be directly applied in this situation in order to count the number of atoms which contribute to a given sliding sum peak; i.e., it may be possible to precisely count the number of atoms contributing to a given peak (in addition to the total number, Np, which pass through Vp during Tm) if the sensitivity is high enough.

Overall Efficiency of Detection

Much of this chapter has been concerned with detection efficiency; indeed, an

SAD method is defined essentially as a method which has almost unity detection

efficiency. However, it is important to recall that εd applies only to atoms which interact with the laser beam. A high detection efficiency does not necessarily result

in a technique with a corresponding low LOD in terms of bulk concentration of

analyte in the sample, since it is quite possible that most of the analyte atoms in the

sample never interact with the laser beam. A better parameter to indicate the

detection power of a laser-based method is the overall efficiency of detection [80, 81],

εo, which is given by a product of efficiency terms:

εo = εa·εp·εt·εd        [5.30]

where

εa = the efficiency of atomization,
εp = the spatial probing efficiency,
εt = the temporal probing efficiency, and
εd = the detection efficiency of atoms which interact with the laser.

The atomization efficiency describes the probability that an analyte atom in

the sample will be converted into free atoms in an energy level suitable for

interaction with the laser. In the case of molecules, this term is the probability that

the analyte molecules in the sample will be prepared (perhaps by a chemical reaction

to tag the molecules with a fluorophore) in a state suitable to produce an

analyte-specific signal. The spatial probing efficiency, εp, describes the probability that the free analyte atoms will pass through Vp, the portion of the analyte "volume" probed by the laser. The temporal probing efficiency, εt, is the probability that an atom which passes through Vp will interact with the laser. This fraction of analyte

atoms is determined by the duty cycle of the laser and the length of Tm. Finally, the

detection efficiency is of course a familiar term by now; this term in the above

equation takes into account the probability of detecting a signal above the

background noise level.

Figure 16 depicts the various stages of a typical laser-based analytical

measurement. Note that the detection efficiency, εd, as defined earlier, only comes into play at the last stage. We have discussed "SAD" methods in this chapter, but fig. 16 indicates the difference between SAD in Vp and true single atom detection within the sample. For such a feat to be possible, εo must be nearly unity. Figure 16 also shows clearly that increasing εp or εd at the expense of any other term in eqn. 5.30 will not necessarily result in a more sensitive analytical technique, even if SAD is achieved thereby. For example, focusing the laser beam may result in a higher value for εd but may decrease εp and result in a lower value for εo.

Conventional LOD for SAD Methods

The scope of a given SAD method can be used to determine the value for

LOD in terms of bulk analyte concentration if the overall efficiency of detection is

known. Obviously, for the maximum value of Tm over which SAD is possible (i.e., the scope of the method), it is possible to detect every atom which appears within Vp and

interacts with the laser. If we have knowledge of the amount of sample consumed

during Tm and have reasonable estimates of the efficiency terms in eqn. 5.30 then it

seems that we should be able to calculate the value of the LOD in terms of bulk

concentration of analyte in the sample, rather than in terms of atoms which interact

with the laser.

If the product εpεtεa is known for an SAD method and is much less than unity, then the value for the LOD of the technique is

LOD = (εa·εp·εt)^(-1)




in terms of atoms of analyte per volume of sample consumed during Tm. Note that

when analyzing large samples which continuously flow through Vp (as opposed to discrete sample amounts), the value of the LOD in terms of bulk concentration of analyte

per sample analyzed will be improved through the use of larger values of Tm, even

though the method may no longer be considered SAD. The reason for this

improvement, of course, is that the amount of sample analyzed increases linearly with

Tm while (in the shot noise limit) the noise of the background has a square root

dependence on Tm.
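As an arithmetic illustration of eqn. 5.30 and the resulting bulk LOD (every efficiency value below is an assumption chosen for the example, not a measured quantity):

```python
# Overall efficiency of detection, eqn. 5.30: eps_o = eps_a * eps_p * eps_t * eps_d
eps_a = 0.10   # atomization efficiency (assumed)
eps_p = 0.01   # spatial probing efficiency (assumed)
eps_t = 0.50   # temporal probing efficiency (assumed)
eps_d = 0.99   # detection efficiency: nearly unity for an SAD method

eps_o = eps_a * eps_p * eps_t * eps_d

# For an SAD method with eps_a*eps_p*eps_t << 1, the LOD in atoms per
# volume of sample consumed during Tm is (eps_a * eps_p * eps_t)**-1.
lod_atoms = 1.0 / (eps_a * eps_p * eps_t)
print(eps_o, lod_atoms)  # eps_o ≈ 4.95e-4; LOD = 2000 atoms per volume consumed
```

Even with εd near unity, the bulk LOD here is dominated by the small probing efficiencies, which is the point of eqn. 5.30.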


Monte Carlo computer simulations of simple analytical experiments were used

as a means for demonstration and verification of the theoretical work set forth in

chapters four and five. These simulations, particularly those based on the SAD

model, were also helpful in the formulation of the theory which has been presented.

The intention of experiments based on these computer simulations is not to perform

an exhaustive study of all aspects of the models discussed, but merely to prove their

validity and provide some insight into their usefulness. This chapter will outline the

general form of some of the programs which were used in these simulations; more

details will be given when appropriate.


All Monte Carlo simulations were carried out on IBM-compatible microcomputers equipped with either a 25 MHz 80386 CPU and an 80387 math co-processor chip, or a 20 MHz 80286 CPU with an 80287 co-processor. All

programs were written and compiled in Microsoft QuickBASIC (version 4.5,

Microsoft Corp., Redmond, WA). Algorithms to generate random numbers

according to normal distributions and Poisson distributions were written according

to guidelines presented by Knuth [88]; these routines are also presented in the

Appendix, and are based on QuickBASIC's pseudo-random number generator.
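The Appendix routines themselves are in QuickBASIC; for reference, the two standard algorithms involved (Knuth's multiplicative method for Poisson deviates and the Box-Muller transform for normal deviates) can be sketched in Python as follows:

```python
import math
import random

def poisson_knuth(lam, rng=random):
    """Knuth's multiplicative method: multiply uniform deviates until the
    product drops below exp(-lam); the number of factors beyond the first
    is the Poisson draw."""
    limit = math.exp(-lam)
    k, product = 0, rng.random()
    while product > limit:
        k += 1
        product *= rng.random()
    return k

def normal_box_muller(mean, sigma, rng=random):
    """Box-Muller transform: two uniform deviates -> one normal deviate."""
    u1, u2 = rng.random(), rng.random()
    return mean + sigma * math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

rng = random.Random(12345)  # fixed seed for a reproducible sequence
draws = [poisson_knuth(3.0, rng) for _ in range(10_000)]
print(sum(draws) / len(draws))  # sample mean, close to 3.0
```

Note that the multiplicative Poisson method becomes slow for large means (its cost grows with λ), which matters when simulating high background count rates.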

Simulations to Investigate the Variance of the LOD

Chapter 4 presented the concept of the calculated LOD of an analytical

procedure as a variable estimator of the true, unknown LOD of the technique. Two

different types of models were used to investigate the properties of the LOD

estimator given by eqn. 4.2, in light of eqns. 4.9 and 4.10: (1) a generic situation with

arbitrary standards and various background noise levels; and (2) a model based upon

conditions found in the trace analysis of metals by electrothermal atomization in a

commercial graphite furnace and laser-induced fluorescence detection of the analyte

atoms (ETA-LIF). The ETA-LIF model conditions were based on the recent

analysis of thallium at the sub-femtogram level [89]. The variation of the LOD

estimate was determined under different experimental conditions; specifics will be

given when appropriate.

Experimental Determination of Shot Noise

In order to simulate conditions of a typical ETA-LIF experiment, it was

necessary to know the shot noise in a photomultiplier for a given

anodic current output. Although in most cases the shot noise can be calculated

theoretically using well-known formulas [90], in the case of boxcar detection the task

is considerably more difficult since the "effective" electronic bandwidth is unknown.