Journal of Undergraduate Research
University of Florida, Gainesville, Fla.



Monte Carlo Calculations for Rapidly Changing Distributions

Arif H. Ahmed and John R. Klauder


ABSTRACT


Many-dimensional integrals are frequently approximated numerically by importance sampling in Monte Carlo calculations. However, certain distributions of interest exhibit rapidly changing behavior, and conventional Monte Carlo methods are unsuitable in such cases. In this work, we show that a modified Monte Carlo procedure produces satisfactory results.



INTRODUCTION


The Monte Carlo method is best described in terms of a relationship between the geometric and algebraic aspects of theoretical science. Over the years, powerful methods have been developed for expressing the solutions of probabilistic problems in analytical terms. With the advancement of computational power, statistical problems can now be modeled directly: any problem whose analytical expression resembles a probability problem can be solved by constructing a realization of the corresponding random variable.



Contemporary problems in physics often require working with systems with a large number of degrees of freedom: the many atoms in a chunk of condensed matter, the many electrons in an atom, or the infinitely many values of a quantum field at all points in a region of space-time. These systems are often described by integrals of very high dimension, whose straightforward evaluation by conventional means is practically impossible. The Monte Carlo method [1-3] provides ways of evaluating such high-dimensional integrals efficiently.



The Monte Carlo Method


Since it is not obvious how one can calculate an integral with random numbers, we start with a simple two-dimensional problem. Consider a square with corners at (±1/2, ±1/2), and suppose we want to calculate the area of the circle inscribed within it. If we select a number of uniformly distributed random points in the square, the probability that a point falls within the circle equals the ratio of the two areas, which is π/4. The fraction of points that fall within the circle therefore estimates π/4, and multiplying by the area of the square gives the area of the circle.
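This hit-or-miss experiment is easy to sketch in code. The following is a minimal illustration (the function name and sample count are our own choices, not from the article):

```python
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by drawing uniform points in the square with corners at
    (+/-1/2, +/-1/2) and counting the fraction inside the inscribed circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x = rng.uniform(-0.5, 0.5)
        y = rng.uniform(-0.5, 0.5)
        # The inscribed circle has radius 1/2, i.e. x^2 + y^2 <= 1/4.
        if x * x + y * y <= 0.25:
            inside += 1
    # fraction inside ~ (circle area)/(square area) = pi/4
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.1416
```

The statistical error shrinks like 1/√N, so 100,000 points give roughly two correct decimal places.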


Consider an integral of the form

∫_a^b g(x) f(x) dx,    g(x) ≥ 0    (1)


By normalization, we can modify the problem such that

∫_a^b g(x) dx = 1    (2)


Now g(x) can be regarded as a probability distribution function, over which we can compute the average of the function f(x), denoted ⟨f⟩:

⟨f⟩ = ∫_a^b g(x) f(x) dx    (3)


If we draw a large number, N, of random values of x from the distribution described by g(x), we can directly estimate the average value of f(x) by

⟨f⟩ ≈ (1/N) Σ_{i=1}^{N} f(x_i)    (4)

This equation forms the basis of Monte Carlo evaluation of integrals, and the set of N values x_i is a realization of the probability distribution function g(x).

Even though it is computationally intensive, the Monte Carlo method's computation time grows only linearly (sometimes quadratically) with additional dimensions, whereas for a classical quadrature approach the growth is exponential.
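Equation (4) can be sketched in a few lines of code. Here the distribution, the test function, and all names are our own illustrative choices: drawing x_i from g(x) = e^(−x) on [0, ∞) and averaging f(x) = x² should reproduce ⟨f⟩ = ∫ x² e^(−x) dx = 2 exactly.

```python
import random

def mc_average(f, sampler, n: int, seed: int = 0) -> float:
    """Estimate <f> = integral of g(x) f(x) dx by drawing n points x_i
    from the normalized distribution g and averaging f(x_i)."""
    rng = random.Random(seed)
    return sum(f(sampler(rng)) for _ in range(n)) / n

# g(x) = e^{-x} on [0, inf) is already normalized; expovariate samples it.
estimate = mc_average(lambda x: x * x, lambda rng: rng.expovariate(1.0), 200_000)
print(estimate)  # close to 2
```

Note that the quality of the estimate depends on how well the sampler realizes g; this is exactly where rapidly changing distributions cause trouble.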

The Metropolis Algorithm

Even though the basic Monte Carlo method makes the multidimensional integral problem tractable, it is not the most efficient way to sample. The Metropolis algorithm represents the distribution in a new way, leading to a more efficient and accurate way of solving physical problems.







The Metropolis algorithm is based on the concept of a random walk. In the two-dimensional classical random walk, a sample point starts at the origin of an x-y coordinate system and, at each iteration, is allowed to move one unit in any coordinate direction with equal probability. In principle, with many iterations, the walk covers the entire two-dimensional space. For an efficient approach, however, we want the walker to spend more time in regions where the function to be sampled is large, rather than covering the entire space uniformly. This provides a better realization of the probability density function, since the probability of finding a sample walker in a given region should be the same as g(x1, x2, ..., xn).



Difficulties in evaluating rapidly changing probability density functions and possible solutions


One of the main principles of a Monte Carlo calculation for integrals is that the distribution function g(x) serves as the weight function. In sampling g(x), one obvious question is how to choose the step size of the walker. If the step size is large, most of the trial steps will be rejected. If the step size is very small, most trial steps will be accepted, but the random walker will never move very far, thus generating a poor sample of the function of interest.
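A minimal sketch of this walk in one dimension makes the trade-off visible. We use a Gaussian g(x) ∝ e^(−x²/2) as a stand-in target (the article's distribution is different), with function names of our own, and report the acceptance rate for several step sizes:

```python
import math
import random

def metropolis(log_g, n_steps: int, step: float, x0: float = 0.0, seed: int = 0):
    """Metropolis random walk: propose x' = x + uniform(-step, step) and
    accept with probability min(1, g(x')/g(x)).
    Returns (samples, acceptance rate)."""
    rng = random.Random(seed)
    x, samples, accepted = x0, [], 0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        if rng.random() < math.exp(min(0.0, log_g(x_new) - log_g(x))):
            x, accepted = x_new, accepted + 1
        samples.append(x)
    return samples, accepted / n_steps

log_gauss = lambda x: -0.5 * x * x  # g(x) proportional to exp(-x^2/2)
for step in (0.05, 1.0, 20.0):
    _, rate = metropolis(log_gauss, 20_000, step)
    print(f"step {step}: acceptance rate {rate:.2f}")
```

Very small steps are almost always accepted but explore the distribution slowly; very large steps are mostly rejected; an intermediate step size balances the two.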



Also, not all distribution functions have a smooth shape that can be easily captured by numerical simulation. Most real-life problems involve discontinuities or sharp changes.



In our specific application, the function of interest [4] is:






[Equation (5), the function of interest, is illegible in the source scan; it involves the parameters m, Γ, and the regularization parameter ε.]    (5)


which has the following probability distribution function:






g(x) = e^(−f(x)) / ∫_{−∞}^{+∞} e^(−f(x′)) dx′    (6)

where f(x) denotes the function of interest in equation (5).


where m = 1 and Γ = 0, and our main goal was to observe the effect of ε in evaluating our function of interest.


To help visualize the rapid change in g, we can consider the plot of the function g(x) with ε = 0 and ε = 0.1.


We notice that for smaller ε values the change is much sharper than for larger ε values, confirming our expectations.


In the modified Metropolis algorithm, random values of the cumulative distribution G(x) were picked, and the corresponding x values were used to construct the walk for the simulation. From the plots in Figure 2 we can observe that much of the rapid change occurring near the vertical axis was captured well, providing an accurate representation of the function of interest.
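Picking random values of the cumulative distribution G(x) and mapping them back to x is inverse-transform sampling, which handles sharp peaks that a fixed-step walker can miss. The sketch below tabulates G numerically and inverts it by binary search; the narrow Gaussian spike used as the target, and all names, are our own stand-ins, since the article's g(x) is not reproduced here:

```python
import math
import random

def inverse_cdf_sampler(g, lo: float, hi: float, n_grid: int = 10_000):
    """Build a sampler for an (unnormalized) density g on [lo, hi]: tabulate
    the CDF G on a grid, then map a uniform u in (0, 1) back to x = G^{-1}(u)
    by binary search (inverse-transform sampling)."""
    dx = (hi - lo) / n_grid
    xs = [lo + i * dx for i in range(n_grid + 1)]
    cdf = [0.0]
    for i in range(n_grid):  # trapezoidal accumulation of G
        cdf.append(cdf[-1] + 0.5 * (g(xs[i]) + g(xs[i + 1])) * dx)
    total = cdf[-1]
    cdf = [c / total for c in cdf]  # normalize so that G(hi) = 1

    def sample(rng: random.Random) -> float:
        u = rng.random()
        lo_i, hi_i = 0, n_grid  # find the grid cell where G crosses u
        while lo_i + 1 < hi_i:
            mid = (lo_i + hi_i) // 2
            if cdf[mid] < u:
                lo_i = mid
            else:
                hi_i = mid
        return xs[lo_i]

    return sample

# Stand-in for a sharply peaked density: a Gaussian spike of width eps = 0.1.
eps = 0.1
spike = lambda x: math.exp(-x * x / (2.0 * eps * eps))
rng = random.Random(0)
draw = inverse_cdf_sampler(spike, -5.0, 5.0)
second_moment = sum(draw(rng) ** 2 for _ in range(50_000)) / 50_000
print(second_moment)  # close to eps^2 = 0.01
```

Because every draw lands exactly where g puts its mass, the sharp region near the origin is sampled faithfully no matter how small the width parameter becomes (up to the grid resolution).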


Calculation of average quantities for conditions of interest


For our particular problem, we considered three average quantities of interest, evaluated for 5 different ε values in 3 different dimensions. For each walk associated with a particular dimension, 10,000 steps were taken to obtain well g-distributed values, and the following tables of average quantities were calculated.



Table 1a
Average quantities for multiple ε values in the 4-dimensional case

ε      Trial    ⟨(x1+x2+...+xn)^2⟩  ⟨(x1+x2+...+xn)^4⟩  ⟨(x1-x2+...-xn)^2⟩
0.01   1        2.372894            27.600760           0.014256
       2        2.254091            26.043820           0.014312
       3        2.257402            27.149090           0.014750
       4        2.290345            27.421420           0.014941
       5        2.275923            27.353530           0.014876
       Average  2.290131            27.113724           0.014627
0.03   1        4.103755            49.661130           0.018628
       2        3.974865            47.617950           0.018215
       3        4.054404            49.273250           0.018368
       4        3.949187            44.467420           0.018186
       5        4.160061            51.102490           0.018057
       Average  4.048454            48.424448           0.018291
0.1    1        4.203332            53.437510           0.019771
       2        4.018090            47.792260           0.020733
       3        4.046690            49.089840           0.020126
       4        4.056378            52.086820           0.019877
       5        4.039746            49.550120           0.019974
       Average  4.072847            50.391310           0.020096
0.3    1        3.196445            37.292830           0.087355
       2        3.402900            41.011890           0.086050
       3        3.238157            37.976700           0.086128
       4        3.250775            37.765000           0.086484
       5        3.307907            38.607500           0.084001
       Average  3.279237            38.530784           0.086004
1      1        3.812564            45.014400           0.822925
       2        3.823280            45.073460           0.803394
       3        3.801923            43.250480           0.789291
       4        3.808497            43.529210           0.809626
       5        3.898975            44.075400           0.807208
       Average  3.829048            44.188590           0.806489


Table 1b
Average quantities for multiple ε values in the 8-dimensional case

ε      Trial    ⟨(x1+x2+...+xn)^2⟩  ⟨(x1+x2+...+xn)^4⟩  ⟨(x1-x2+...-xn)^2⟩
0.01   1        3.187484            49.517320           0.098564
       2        3.298165            50.322020           0.102182
       3        3.171618            50.048190           0.100031
       4        3.267409            52.860170           0.096612
       5        3.115363            49.467500           0.101744
       Average  3.208008            50.443040           0.099827
0.03   1        5.981388            101.426500          0.115825
       2        6.114906            112.577800          0.117399
       3        6.107915            106.124900          0.119357
       4        6.128047            110.454900          0.117391
       5        6.028111            110.364400          0.118371
       Average  6.072073            108.189700          0.117669
0.1    1        6.279786            120.921400          0.118232
       2        5.929829            110.564400          0.122510
       3        6.036225            113.857900          0.122996
       4        5.831763            102.249600          0.122375
       5        6.380465            121.878200          0.123563
       Average  6.091614            113.894300          0.121935
0.3    1        4.431448            71.604790           0.157009
       2        4.530405            78.695210           0.152096
       3        4.359631            71.137640           0.148036
       4        4.511994            82.563610           0.150954
       5        4.530857            81.285420           0.153071
       Average  4.472867            77.057334           0.152233
1      1        5.936424            103.787800          1.205079
       2        5.710717            99.817270           1.233117
       3        5.800452            102.954200          1.154434
       4        5.818944            102.076700          1.195567
       5        5.690309            96.302720           1.173799
       Average  5.791369            100.987738          1.192399






Table 1c
Average quantities for multiple ε values in the 8-dimensional case

ε      Trial    ⟨(x1+x2+...+xn)^2⟩  ⟨(x1+x2+...+xn)^4⟩  ⟨(x1-x2+...-xn)^2⟩
0.01   1        4.947606            98.743290           0.267767
       2        4.935928            90.248240           0.262745
       3        4.831919            89.233780           0.262485
       4        4.925486            97.237760           0.263914
       5        4.914007            104.121500          0.266337
       Average  4.910989            95.916914           0.264650
0.03   1        8.402897            212.046300          0.301671
       2        8.025216            190.487000          0.299496
       3        8.131671            196.988500          0.309308
       4        8.330724            212.736100          0.307924
       5        8.350036            203.040600          0.303678
       Average  8.248109            203.059700          0.304415
0.1    1        7.892954            196.923200          0.299320
       2        8.231554            197.371200          0.312714
       3        8.228297            204.173500          0.312640
       4        7.906247            187.708900          0.317026
       5        8.191574            206.288800          0.314330
       Average  8.090125            198.493120          0.311206
0.3    1        6.055274            128.732500          0.301544
       2        6.315698            142.669800          0.296209
       3        6.021224            123.822600          0.305802
       4        6.077973            129.160200          0.301952
       5        6.121105            128.387800          0.309325
       Average  6.118255            130.554580          0.302966
1      1        7.636328            174.721300          1.602424
       2        7.623882            180.288100          1.589588
       3        7.502346            174.024500          1.541765
       4        7.853995            190.328700          1.557270
       5        7.677142            175.115900          1.538960
       Average  7.658739            178.895700          1.566001



CONCLUSION



From Tables 1a, 1b, and 1c, we can conclude that even though the calculation involved random numbers, the computed average quantities were very reproducible for the same parameters. This shows that we can depend on the calculation method for consistent results. We also note that for particular ε values, the average quantities for different dimensions have a consistent ratio, representing a linear trend.



One of the main purposes of the study was to observe whether the Monte Carlo method can detect the rapidly changing characteristics of the distribution function. From the results obtained above, we notice that different ε values produced different values for the average quantities. We can conclude that the algorithm does indeed take the rapid change of the probability density function into account.



Even though the modified Metropolis algorithm was successful in our case, we found it to be computationally intensive. For larger models, with thousands of dimensions, it might not be a suitable procedure with the resources currently available. Further investigation could be carried out to improve the computation time, for example by taking fewer walks while still obtaining a statistically significant answer.






REFERENCES



1. Metropolis, Nicholas, and Marshall Rosenbluth, et al. "Equation of State Calculations by Fast Computing Machines." The Journal of Chemical Physics 21 (1953): 1087-1092.

2. Gibbs, William R. "Introduction to Monte Carlo." Computation in Modern Physics. World Scientific (1994): 28-52.

3. Koonin, Steven E. "Monte Carlo Methods." Computational Physics. Benjamin/Cummings (1986): 185-215.

4. Klauder, John R. "Ultralocal Models: Functional Integral Formulation." Beyond Conventional Quantization. Cambridge University Press (2000): 260-263.




