## Citation

- Permanent Link: http://ufdc.ufl.edu/AA00039161/00001
## Material Information

- Title: Quasiclassical and semiclassical methods in molecular scattering dynamics
- Creator: Blass, Anatol, 1972-
- Publication Date: 2001
- Language: English
- Physical Description: vi, 132 leaves : ill. ; 29 cm.
## Subjects

- Subjects / Keywords: Angular momentum; Approximation; Coordinate systems; Energy; Harmonic oscillators; Molecules; Quantum field theory; Quantum mechanics; Trajectories; Wave functions ( jstor ); Dissertations, Academic -- Physics -- UF; Physics thesis, Ph.D; Scattering (Physics) ( lcsh )
- Genre: bibliography; theses; non-fiction ( marcgt )
## Notes

- Thesis: Thesis (Ph.D.)--University of Florida, 2001.
- Bibliography: Includes bibliographical references (leaves 128-131).
- General Note: Printout.
- General Note: Vita.
- Statement of Responsibility: by Anatol Blass.
## Record Information

- Source Institution: University of Florida
- Holding Location: University of Florida
- Rights Management:
- The University of Florida George A. Smathers Libraries respect the intellectual property rights of others and do not claim any copyright interest in this item. This item may be protected by copyright but is made available here under a claim of fair use (17 U.S.C. §107) for non-profit research and educational purposes. Users of this work have responsibility for determining copyright status prior to reusing, publishing or reproducing this item for purposes other than what is allowed by fair use or other copyright exemptions. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder. The Smathers Libraries would like to learn more about this item and invite individuals or organizations to contact the RDS coordinator (ufdissertations@uflib.ufl.edu) with any additional information they can provide.
- Resource Identifier: 028193547 ( ALEPH ); 49253983 ( OCLC )
## Full Text
QUASICLASSICAL AND SEMICLASSICAL METHODS IN MOLECULAR SCATTERING DYNAMICS

By

ANATOL BLASS

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA 2001

ACKNOWLEDGMENTS

I would like to deeply thank Prof. N. Yngve Öhrn and Dr. Erik Deumens for their guidance and patience in my studies with them. They gently opened the doors of quantum chemistry to me and showed me the wonder and beauty of their field of study. A boundless amount of gratitude must be expressed to my colleagues in the Electron Nuclear Dynamics group: to Dr. Mauricio Coutinho-Neto for his time, guidance and humor; to Dr. Magnus Hedström for his advice and love of film; to Dr. Remigio Cabrera-Trujillo for his insight and leadership; and to David Masiello and Benjamin Killian for their enthusiasm and friendship. I would like to thank all of the members of the Quantum Theory Project for their collegiality and for creating a vibrant and fertile atmosphere for research. I am deeply indebted to all of the support staff here at QTP, especially Judy Parker and Coralu Clements, for everything, but mostly for their patience. I need to express and acknowledge my profound appreciation of my family: David, Oscar and Piotr Blass; Jack and Margot Khavinson; and my immensely gifted wife Jill. Their love and support have been the fuel of my pursuit of learning and knowledge. Along with them I would like to thank Minky and Mozu for their constant affection and for providing pleasant and fanciful distractions.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
ABSTRACT

CHAPTERS

1 INTRODUCTION
2 OVERVIEW OF SCATTERING THEORY
2.1 Theoretical Considerations
2.2 Classical Theory of Scattering
2.3 Quantum Mechanical Scattering Theory
2.3.1 Time-Independent vs. Time-Dependent Scattering
2.3.2 Fundamentals of the Time-Independent Theory
2.3.3 Scattering Amplitude
2.4 Semiclassical Theory of Scattering
2.4.1 Born Approximation
2.4.2 Partial Wave Expansion
2.4.3 JWKB
2.4.4 Eikonal Approximation
2.5 Time-Dependent Scattering Theory
2.5.1 Wave-Packet Propagation: Exact Methods
2.5.2 Approximate Time-Dependent Methods
3 END THEORY OF TIME DEPENDENT MOLECULAR DYNAMICS
3.1 The END Theory
3.2 The END Wave Function
4 OVERVIEW OF ROVIBRATIONAL REACTION DYNAMICS
4.1 Classical Theory of Vibrations
4.2 Quantum Mechanical Theory of Vibrations
4.2.1 The Quantum Mechanical Harmonic Oscillator
4.2.2 Quantization of the Harmonic Oscillator
4.3 Classical Theory of Rotations
4.3.1 Rotations
4.3.2 Euler Angles
4.3.3 Cayley-Klein Parameters
4.3.4 Rate of Change of a Vector
4.3.5 Rotational Inertia, Angular Momentum and the Euler Equations
4.4 Quantum Mechanical Theory of Rotations
4.4.1 The Rotationally Invariant System
4.4.2 Eigenvalue Problem of $L^2$ and $L_z$
5 COHERENT STATES AND ROVIBRATIONAL DYNAMICS
5.1 Quasiclassical Coherent States
5.2 Vibrational Coherent States
5.3 Proposed Rotational Coherent States
5.3.1 Irreducible Representation
5.3.2 Construction of Coherent States
6 VIBRATIONAL ANALYSIS
6.1 Prony's Method
6.2 Generalized (GPM) Prony's Method
6.3 Implementation
6.3.1 Angular Velocity and Rotation Matrix
6.3.2 Smoothing Coefficients
6.3.3 Gradient
7 VIBRATIONAL CALCULATION RESULTS
7.1 Vibrationally Excited H2O
7.2 Prony's Method, Condition Numbers and Numerical Stability
7.2.1 Method
7.2.2 Results
7.2.3 Discussion
7.2.4 Conclusion
7.3 Application: Vibrational Analysis of END Trajectories
7.3.1 Prony Analysis of H2O
7.4 Anharmonicity
7.5 State Resolved Scattering: H+ + H2O
7.5.1 Experimental DCS
7.5.2 Results and Discussion
8 ROTATIONAL CALCULATION RESULTS
9 CONCLUSION
REFERENCES
BIOGRAPHICAL SKETCH

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

QUASICLASSICAL AND SEMICLASSICAL METHODS IN MOLECULAR SCATTERING DYNAMICS

By Anatol Blass

December 2001

Chairman: N. Yngve Öhrn
Major Department: Physics

The exponential increase in computing power that began in the 1960s has allowed new methods to evolve that make time-dependent dynamical studies possible. The theoretical calculation of cross sections of chemical reactions provides an understanding of the details at the molecular level. While experimental studies yield important data, a significant fraction of the interesting elementary reactions in nature involve short-lived molecular species that are difficult to isolate and study in the laboratory. The prediction of the rates and cross sections of such chemical reactions is a primary goal of computational chemistry and molecular physics. These data, in addition to their scientific interest, are important for the modeling of complex chemical systems such as combustion, plasmas, and the atmosphere. In this dissertation we present a method developed to take advantage of current theoretical techniques by calculating quantum mechanically resolved results using classical nuclei. Exploiting the nearly classical behavior of nuclei in chemical processes, we devised a scheme to map quasi-classical coherent states onto trajectories calculated from the semiclassical approximation of the Electron Nuclear Dynamics theory. This new method allows us to analyze the results of Electron Nuclear Dynamics molecular scattering trajectories, calculated using our computer code ENDyne, and extract state-resolved results. In this way we are able to compare our results to experimental data and reveal their underlying chemical and mechanical processes. We present such an application to the vibrationally resolved differential cross sections of the reaction H+ + H2O, for which we were able to determine the mechanics of the non-adiabatic inelastic scattering channel. Additionally, this result served to verify the ability of the Electron Nuclear Dynamics theory to investigate non-adiabatic molecular scattering.
CHAPTER 1
INTRODUCTION

Physics cannot be reduced to a few trite formulae. The complexity of real-world processes quickly overwhelms the security we find in fundamental equations. Schrödinger's equation is one possible beginning of understanding molecular systems and has not put a single quantum chemist on the unemployment line. In fact, early quantum mechanics struggled with applying this equation to even the simplest chemical processes. Without the benefit of computers, novel solutions and approximations arose to apply a new understanding of nature to chemistry and molecular scattering dynamics. Time-dependent problems were difficult to solve, and even then they were difficult to calculate quantitatively. A whole industry of activity arose around solving time-independent problems and devising schemes to solve dynamical problems to some degree or another. Many of these methods suffered from an incomplete description of the dynamical processes and were chiefly restricted to adiabatic processes. The exponential increase in computing power that began in the 1960s has allowed new methods to evolve that make time-dependent dynamical studies tractable. The ab initio calculation of cross sections of chemical reactions provides an understanding of the details at the molecular level. While experimental studies yield important data, a significant fraction of the interesting elementary reactions in nature involve short-lived systems that are difficult to isolate and study in the laboratory. Prediction of the rates and cross sections of such chemical reactions is one primary goal of computational chemistry and molecular physics. These data, in addition to their scientific interest, are important for modeling complex chemical systems such as combustion, plasmas, and the atmosphere. Though highly desirable, theoretical analysis of even the simplest cases of these dynamical experiments can pose serious roadblocks to the scientist.
The numerical solutions can use massive amounts of computer time and produce overwhelming amounts of data. Theoretical molecular dynamics strives to identify the important parameters and mechanisms involved in the chemical or collision processes for analysis and prediction. With this in mind, a process of identifying and refining theoretical methods leads to the creation of efficient tools for the study of chemical systems. In our group at the University of Florida, led by Prof. N. Yngve Öhrn and Dr. E. Deumens, we have striven to create new and efficient methods for both analysis and prediction to apply to difficult problems in reactive dynamics. The Electron Nuclear Dynamics (END) theory developed here provides approximate yet accurate solutions to time-dependent problems. Using the time-dependent variational principle (TDVP) allows us to simultaneously calculate both the nuclear and the electronic dynamics without the use of predetermined potential energy surfaces (PES). Currently this theory is implemented in the ENDyne code at its minimal level of approximation, described as quasi-classical single-determinantal electron dynamics (QCSD END). At this level, using classical nuclei and electrons described by a single-determinantal wave function, we have had success in studying nonadiabatic dynamics. While we continue to develop the END theory and improve its implementation, we must pause and consider whether a more complex theory or more computer time is the only correct way of pushing forward. This reflection brings us to the overriding principle of this dissertation: using coherent states and a posteriori analysis we can extract state-resolved results from a semiclassical description using classical nuclei. Specifically, we strove to study rovibrational dynamics and the state-resolved differential cross sections in molecular scattering.
Building a technique on three separate pillars (coherent states, analysis of classical dynamics, and molecular trajectories calculated using QCSD END theory), we successfully studied the state-resolved dynamics of several interesting systems. Here we present this method and its results. This dissertation is organized in the following way. Chapter 2 introduces the scattering process we study. Time-independent and time-dependent quantum methods are discussed. Traditional semiclassical methods are explained and their relationship to END is elucidated. Chapter 3 is a review of the END theory and its implementation. In Chapter 4, rovibrational dynamics, both classical and quantum, are reviewed in the light of scattering theory. We turn our attention to coherent states for vibrations and rotations in Chapter 5. The methods developed for vibrational analysis are presented in Chapter 6 and their application to specific systems is the subject of Chapter 7. Chapter 8 does the same for rotational analysis. Finally, we offer some concluding remarks in Chapter 9.

CHAPTER 2
OVERVIEW OF SCATTERING THEORY

Scattering experiments and theory have been a cornerstone of the development of modern physics. From the experiments of Rutherford to modern high-energy particle accelerators, the use of particle collisions has allowed scientists to infer the structure and properties of solids, gases, molecules, atoms and subatomic particles. Well designed and extremely precise experiments have led to a wealth of data. Unfortunately, most scattering experiments involve many collisions and complex dynamics, leading to experimental observables that are stochastic in nature. Furthermore, any experimental data obtained are limited to an overall change in the state of the system before and after the event. For the theorist this poses a two-fold challenge: to explain the physics of the individual collision processes and then to corroborate those findings with the experimental data.
The theoretical quantity for comparison with experiment is the cross section, $\sigma_{ij}(E)$, the notional area through which a representative incident particle must pass at a given energy to achieve a given change $i \to j$ in the system. In molecular systems and processes of chemical interest one must recognize that the breakdown of classical mechanics in these regimes demands the use of quantum mechanics. Quantum mechanical effects may come from interference, from the uncertainty principle, and from the quantized nature of many chemical processes. On the molecular scale, an appropriate theoretical basis is a semiclassical theory, allowing the use of our experience with classical mechanics while including the relevant quantum mechanical effects. Our goal in this chapter is to present the essence of molecular collision theory as an introduction to our own theoretical developments in studying state-resolved reaction dynamics and scattering. Following the discussion of Child[1] and Sakurai[2] we separate the theory into three broad classes:

Elastic. Used in cases of simple momentum transfer for studying mean intermolecular potential functions averaged over internal states.

Inelastic. Used if there is also a change in internal energy, yielding the dependence of the potential functions on the internal coordinates corresponding to that state.

Reactive. Used when scattering is accompanied by a change in chemical composition, which reveals the nature of the potential energy surface in the neighborhood of the reaction coordinate.

2.1 Theoretical Considerations

The simplest molecular collision experiments involve two beams with defined velocities $\vec{v}_1$ and $\vec{v}_2$ and a detector configured to observe a given transition from state $i \to j$ of the system. Considerable simplification of subsequent calculations can be made by transforming from the laboratory frame to the center of mass (CM) coordinate system.
Transforming the positions $\vec{r}_1, \vec{r}_2$ and velocities $\vec{v}_1, \vec{v}_2$ of the colliding particles, with masses $m_1$ and $m_2$ respectively, to the CM coordinate $\vec{R}$ and velocity $\vec{V}$, and the internal coordinate $\vec{r}$ and velocity $\vec{v}$:

$$\vec{R} = \frac{m_1\vec{r}_1 + m_2\vec{r}_2}{M}, \qquad \vec{r} = \vec{r}_1 - \vec{r}_2 \tag{2.1}$$

and

$$\vec{V} = \frac{m_1\vec{v}_1 + m_2\vec{v}_2}{M}, \qquad \vec{v} = \vec{v}_1 - \vec{v}_2. \tag{2.2}$$

Figure 2.1: Newton Diagram

Then the classical kinetic energy $T$ and the angular momentum $\vec{L}$ may be written

$$T = \frac{1}{2}m_1 v_1^2 + \frac{1}{2}m_2 v_2^2 = \frac{1}{2}MV^2 + \frac{1}{2}mv^2,$$
$$\vec{L} = m_1\left(\vec{r}_1\times\vec{v}_1\right) + m_2\left(\vec{r}_2\times\vec{v}_2\right) = M\left(\vec{R}\times\vec{V}\right) + m\left(\vec{r}\times\vec{v}\right), \tag{2.3}$$

where

$$M = m_1 + m_2, \qquad m = \frac{m_1 m_2}{m_1 + m_2}. \tag{2.4}$$

With the potential energy independent of $\vec{R}$ and of the absolute orientation of the system, the center of mass motion may be factored out of all equations. The remaining problem, to which we confine ourselves, is equivalent to a single particle with reduced mass $m$, in a given initial internal state $i$, moving with translational energy $\frac{1}{2}mv^2$ and angular momentum $\vec{L}$ about a scattering center at $r = 0$. The inverse transformation between the laboratory and center of mass frames is easily derived from the preceding equations. The collision of molecules leads to scattering in all directions, and the resulting intensity distribution in the CM frame gives the differential cross section (DCS) defined by the ratio

$$\frac{d\sigma_{ij}}{d\Omega} = I_{ij}(\theta,\phi) = \frac{\text{number of scattered particles per unit time per unit solid angle}}{\text{number of incident particles per unit time per unit area}}. \tag{2.5}$$

The DCS is the fundamental observable quantity in scattering experiments, to which all other quantities may be related. The first derived relevant quantity is the integral cross section

$$\sigma_{ij}(E) = \int_0^{2\pi}\!\!\int_0^{\pi} I_{ij}(\theta,\phi)\sin\theta\, d\theta\, d\phi. \tag{2.6}$$

When the investigation of a bulk phase is desired, the rate constant can be related to the cross section by

$$k(T) = \left\langle k_{ij}(T)\right\rangle = \left\langle \int_0^\infty v\,\sigma_{ij}\!\left(\tfrac{1}{2}mv^2\right) P(v)\, v^2\, dv \right\rangle, \tag{2.7}$$

where $P(v)$ denotes the relative velocity distribution, and the brackets indicate an average over initial internal states.
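The frame-invariance of $T$ and $\vec{L}$ asserted by Eqn. 2.3 is easy to verify numerically. A minimal sketch (the masses and lab-frame vectors below are arbitrary illustrative values, not data from the dissertation):

```python
import numpy as np

# Lab-frame masses, positions, and velocities (arbitrary illustrative values)
m1, m2 = 1.0, 16.0
r1, v1 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])
r2, v2 = np.array([0.0, 1.0, 0.0]), np.array([0.5, 0.0, 0.3])

M = m1 + m2                      # total mass, Eqn. 2.4
m = m1 * m2 / M                  # reduced mass, Eqn. 2.4

# Center-of-mass and internal coordinates, Eqns. 2.1-2.2
R, r = (m1 * r1 + m2 * r2) / M, r1 - r2
V, v = (m1 * v1 + m2 * v2) / M, v1 - v2

# Kinetic energy and angular momentum in both pictures, Eqn. 2.3
T_lab = 0.5 * m1 * v1 @ v1 + 0.5 * m2 * v2 @ v2
T_cm = 0.5 * M * V @ V + 0.5 * m * v @ v
L_lab = m1 * np.cross(r1, v1) + m2 * np.cross(r2, v2)
L_cm = M * np.cross(R, V) + m * np.cross(r, v)

assert np.isclose(T_lab, T_cm)
assert np.allclose(L_lab, L_cm)
```

The same decomposition underlies the statement that the CM motion factors out: only the $(\vec{r}, \vec{v})$ terms feel the potential.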
In the case of gas transport processes, the thermal conductivity depends on the collision integral

$$\Omega(T) = \left(\frac{2kT}{\pi m}\right)^{1/2}\left(\frac{m}{2kT}\right)^{3}\int_0^\infty \sigma_w(v)\, v^5 \exp\!\left(-\frac{mv^2}{2kT}\right) dv, \tag{2.8}$$

where $\sigma_w(v)$ is the weighted cross section

$$\sigma_w(v) = \int_0^{2\pi}\!\!\int_0^{\pi} I(\theta,\phi)\left(1-\cos^2\theta\right)\sin\theta\, d\theta\, d\phi. \tag{2.9}$$

In such bulk phase measurements much of the detail of the individual collision processes is lost in the averaging involved in the observations. The goal of molecular collision theory, according to Child[1], is to identify the dominant characteristics of a given model potential function and to extract from them all possible observable consequences.

2.2 Classical Theory of Scattering

Classical mechanics provides the roots of modern physics. It provides a familiar basis from which to begin a more intricate investigation, and quantum mechanics still depends on classical physics through the construction of Hamiltonians and Lagrangians. The purpose of this section is to provide a foundation for our investigation into quantum mechanical scattering. The discussion leads from the classical trajectory to the collision cross sections, and concludes with some comments about their validity. In the CM system the conservation of energy $E$ and angular momentum $\vec{L}$ given by Eqn. 2.3 fixes the conditions of a given event, defined by the incident relative velocity $\vec{v}$ and the impact parameter $b$ in Fig. 2.2. The trajectory must lie in a plane since $\vec{L}$ is conserved. The equations of motion may be written in polar coordinates:

$$L = mvb = mr^2\dot{\theta} \tag{2.10}$$

$$E = \frac{1}{2}m\dot{r}^2 + \frac{1}{2}mr^2\dot{\theta}^2 + V(r) = \frac{1}{2}m\dot{r}^2 + \frac{L^2}{2mr^2} + V(r). \tag{2.11}$$

Elimination of the time derivatives gives

$$\frac{d\theta}{dr} = \pm\frac{b}{r^2\left[1 - b^2/r^2 - V(r)/E\right]^{1/2}}, \tag{2.12}$$

the sign given by the radial velocity: positive for outward and negative for inward motion.

Figure 2.2: Proton-Water Scattering

The complete classical trajectory may now be determined by integrating Eqn. 2.12 from $r = \infty$ in to the turning point $r = a$ and out again. For our interests, only the overall effects of the collision are observed after the event.
The relevant quantity is termed the deflection function (DF), $\Theta$, related to $E$ and $L$ by

$$\Theta(E,L) = \pi - 2b\int_a^\infty \frac{dr}{r^2\left[1 - b^2/r^2 - V(r)/E\right]^{1/2}}, \tag{2.13}$$

which gives the functional dependence needed to calculate the deflection angle of a single collision trajectory. We now move our focus from a single trajectory, specified by an initial energy and impact parameter, to reflect the inability of experiments on a molecular scale to select a single impact parameter. As we have discussed, the experimental results are reported in terms of the differential cross section, $d\sigma_{ij}/d\Omega$, and the collision cross section, $\sigma_{ij}$, which are notional areas leading to a particular angle and/or state. In experiments it is not possible to distinguish between positive and negative deflection angles, so we introduce the scattering angle, $\theta = |\Theta|$, which is the magnitude of the DF and is limited to $0 \le \theta \le \pi$. We observe that, apart from discontinuities at $\theta = 0$, $\theta$ varies smoothly with $b$; hence trajectories within an impact parameter range from $b$ to $b + db$, defining an area $2\pi b\, db$, are deflected into a corresponding solid angle $d\Omega = 2\pi\sin\theta\, d\theta$. The DCS is therefore

$$\frac{d\sigma_{ij}}{d\Omega} = I(\theta,E) = \frac{b}{\sin\theta\,\left|d\theta/db\right|}. \tag{2.14}$$

If more than one value of $b$ leads to the same angle, we can appeal to the experiment and observe that the classical intensity $I$ is increased; it is then replaced by an appropriate sum

$$I(\theta,E) = \sum_k \frac{b_k}{\sin\theta\,\left|d\theta/db\right|_k}, \tag{2.15}$$

where $k$ indexes all values of $b$ leading to the same deflection angle $\theta$. The total elastic cross section is then defined as the integral of the differential form:

$$\sigma(E) = \int\frac{d\sigma}{d\Omega}\, d\Omega = 2\pi\int_0^\pi I(\theta,E)\sin\theta\, d\theta. \tag{2.16}$$

We observe that the expression for the DCS displays two possible singularities, the so-called glory and rainbow effects in the classical differential cross section. The glory effect, which may occur at $\theta = 0$ (forward) or $\theta = \pi$ (backward), is due to the $\sin\theta$ in the denominator evaluating to zero in Eqn. 2.15.
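Eqn. 2.13 is straightforward to evaluate numerically once the turning point $a$ is located. The sketch below is our own illustration (not code from the dissertation): it substitutes $r = a/\sin t$ to tame the endpoint singularity and checks the result against the classical Rutherford relation $\tan(\Theta/2) = k/(2Eb)$ for a repulsive Coulomb potential $V(r) = k/r$, with arbitrary parameters $k$, $E$, $b$:

```python
import numpy as np

def turning_point(V, E, b, lo=1e-9, hi=1e6, tol=1e-12):
    """Outermost root a of 1 - b^2/r^2 - V(r)/E = 0, found by bisection."""
    f = lambda r: 1.0 - (b / r) ** 2 - V(r) / E
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return hi

def deflection(V, E, b, n=200000):
    """Deflection function Theta(E, b) of Eqn. 2.13 by the midpoint rule,
    after substituting r = a/sin(t) so the integrand is finite at r = a."""
    a = turning_point(V, E, b)
    t = (np.arange(n) + 0.5) * (np.pi / 2) / n
    s = np.sin(t)
    g = 1.0 - (b / a) ** 2 * s ** 2 - V(a / s) / E
    integrand = (b / a) * np.cos(t) / np.sqrt(g)
    return np.pi - 2.0 * integrand.sum() * (np.pi / 2) / n

# Repulsive Coulomb V(r) = k/r: Theta should satisfy tan(Theta/2) = k/(2 E b)
k, E, b = 0.2, 1.0, 1.0
theta = deflection(lambda r: k / r, E, b)
assert abs(theta - 2.0 * np.arctan(k / (2.0 * E * b))) < 1e-3
```

A useful sanity check: for $V = 0$ the turning point is $a = b$, the transformed integrand is identically 1, and the routine returns $\Theta = 0$, as a free particle requires.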
The rainbow singularity, on the other hand, corresponds to an extremum in the deflection function, at which $d\theta/db = 0$. At this point the density of classical trajectories rises to infinity, in analogy to the phenomenon in optics which leads to rainbows. The principal deficiency of the classical theory is of course that no particle trajectory can ever be precisely defined, because of the inherent uncertainties associated with it. The fascinating effect of interference reveals the failure of classical mechanics at the rainbow angle, where many trajectories contribute to eliminate the singularity.

2.3 Quantum Mechanical Scattering Theory

The standard treatment of quantum mechanical scattering theory employs a stationary wave function and its associated interference pattern in place of the limiting deflection of a classical trajectory. The wave function represents the whole system and is independent of time, describing the behavior of the system over a long period. This is in contrast to time-dependent scattering theory, which we will discuss later in this chapter. The starting point for this formulation is to determine the asymptotic form of the time-independent scattered wave function and to relate its angular dependence to the effect of the scattering potential. Formal scattering theory was in development from the beginning of the formulation of quantum mechanics until the 1970s[3, 4]. The main concern of the discipline is to rigorously define a scattering process in mathematical terms, to attempt a formal, exact solution of the time-dependent and the time-independent Schrödinger equations, and to define the formal tools to treat a scattering process. The purely formal solutions are usually obtained from an integral equation reformulation of the original Schrödinger equation in both the time-dependent and the time-independent cases.
These developments are of high theoretical value but of little practical consequence, since no definite algorithm to solve the scattering problem can be devised in this way. Therefore, a complete presentation of the formal theory is beyond the scope of our discussion. Rather, the main concepts of the formal theory are discussed along with an introduction to the practical methods.

2.3.1 Time-Independent vs. Time-Dependent Scattering

A scattering process is a real time-dependent phenomenon which can only be described in full detail by time-dependent scattering theory. This implies the solution of the time-dependent Schrödinger equation of the whole system,

$$H\psi(t) = i\hbar\frac{\partial\psi(t)}{\partial t}, \tag{2.17}$$

when the Schrödinger picture is adopted. At the initial time, both the projectile and the target are described by the wave function $\psi(t=-\infty)$. In the usual case of a state-to-state scattering problem, the initial wave function is a stationary eigenfunction of the whole Hamiltonian with a trivial time dependence through a phase. However, it is usual in some types of simulations to employ not a single state but some linear combination of states forming a wave packet. In either case, at a remote final time, the evolved wave function $\psi(t=\infty)$ can be decomposed into a linear combination of the stationary eigenfunctions of the products. The coefficients of that combination again have a trivial time dependence through their phase. At any time of the evolution $-\infty < t < \infty$, all the properties of the scattering system, including final-time properties such as cross sections, can be calculated from the wave function $\psi(t)$. The probability for the system to yield by reaction some type of products at a final time can be obtained by projecting the final wave function onto the product states. As shown, the scattering problem also demands the prior solution of the time-independent eigenvalue problem of both reactants and products.
For a time-independent Hamiltonian, the formal solution of this problem for an evolution between the times $t_1 < t_2$ is

$$\psi(t_2) = U(t_2 - t_1)\,\psi(t_1) = \exp\!\left[-\frac{iH(t_2-t_1)}{\hbar}\right]\psi(t_1), \tag{2.18}$$

where the central unitary operator $U(t_2 - t_1)$ is called the time-evolution operator or propagator. As mentioned, the most detailed description of a given scattering process or chemical reaction can only be achieved by the time-dependent formulation of the problem. However, the solution of a realistic scattering problem is far more difficult to find in the time-dependent scheme than in the time-independent one. The difficulties posed by the time-dependent methods favored the adoption of the time-independent approach until the 1970s. The justification of how essentially time-dependent phenomena such as scattering processes and chemical reactions can be treated time-independently is rigorously explained in the more advanced literature on the subject[3, 4]. Here, it can be loosely stated that the eigenfunctions $\psi(E)$ of the time-independent Schrödinger equation for a scattering problem,

$$H\psi(E) = E\psi(E), \tag{2.19}$$

contain in the asymptotic region an incoming component describing the reactants and different outgoing components describing the products. Most of the relevant properties of the scattering process can be calculated from those outgoing components. However, all the intermediate details of a scattering process or a chemical reaction cannot be obtained in this way. This approach is an extension of the most traditional methods for calculating bound states in isolated molecules to the case of several colliding molecules involving some unbound translational states. This connection to the methods of electronic structure theory has also favored the bias toward the time-independent schemes. Because of its historical precedence, the time-independent methods will be presented first.
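For a finite-dimensional (matrix) Hamiltonian, the propagator of Eqn. 2.18 can be built directly by diagonalization. The sketch below is an illustration with a random Hermitian matrix (it is not part of the dissertation's ENDyne code); it verifies the two defining properties of $U$, unitarity and the composition rule $U(t_1 + t_2) = U(t_2)\,U(t_1)$:

```python
import numpy as np

def propagator(H, dt, hbar=1.0):
    """U(dt) = exp(-i H dt / hbar), built from the eigendecomposition of
    the Hermitian matrix H (Eqn. 2.18 for a finite-dimensional system)."""
    E, Vec = np.linalg.eigh(H)
    return Vec @ np.diag(np.exp(-1j * E * dt / hbar)) @ Vec.conj().T

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = 0.5 * (A + A.conj().T)        # a random Hermitian "Hamiltonian"

U1, U2, U12 = propagator(H, 0.3), propagator(H, 0.7), propagator(H, 1.0)

assert np.allclose(U1 @ U1.conj().T, np.eye(4))   # unitarity
assert np.allclose(U2 @ U1, U12)                  # composition rule
```

Diagonalization is exact but scales poorly with basis size, which is one reason practical wave-packet codes use short-time or iterative propagators instead.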
Furthermore, the time-independent approach directly refers to the stationary description of reactants and products, a problem which must necessarily be addressed before attempting a time-dependent solution. It is also important to remember that some of the techniques further developed within END theory have their origins in the time-independent theory.

2.3.2 Fundamentals of the Time-Independent Theory

For the simple case of elastic scattering by a localized potential $V(r)$, the wave function depends only on the interparticle distance $r$. After Merzbacher[5] and Sakurai[2], the motion of the particle is described by the Hamiltonian

$$H = \frac{p^2}{2m} + V = H_0 + V, \tag{2.20}$$

where $H_0$ is the free-particle Hamiltonian. The solution of the time-independent Schrödinger equation is then given by the Lippmann-Schwinger equation[2]

$$|\psi^{(\pm)}\rangle = |\phi\rangle + \frac{1}{E - H_0 \pm i\varepsilon}\,V|\psi^{(\pm)}\rangle, \tag{2.21}$$

where the energy term in the denominator has been made complex with the infinitesimal parameter $\varepsilon$ in order to ensure continuity, and the $\pm$ indicates outgoing and incoming solutions. The outgoing solution to the time-independent Schrödinger equation has the asymptotic dependence

$$\psi^{(+)}(r) \to \phi + f(\theta)\,\frac{e^{ikr}}{r}. \tag{2.22}$$

Here $\phi$ is the state of the system in the absence of the scattering potential, which will be determined by theoretical or experimental considerations. If we make the ansatz that the incident wave function is a plane wave, representing all impact parameters with a given momentum $\hbar k$ in the $z$ direction, then it has the form $\phi = e^{ikz}$. The scattering amplitude $f(\theta)$ governs the angular distribution of the scattering, which is outgoing due to the factor $e^{ikr}$. Other forms of $\phi$ may be applicable in certain circumstances; for example, if the dimension of the incident wave function were small compared to the effective region of the potential, a wave packet would be required. Time-independent wave-packet scattering theory is fully discussed in Goldberger[3] and Newton[6].
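As a concrete illustration of the scattering amplitude just introduced: in the weak-potential limit (the first Born approximation, derived below as Eqn. 2.25), $f(\theta)$ for a central potential reduces to the radial integral $f = -(2m/\hbar^2)(1/q)\int_0^\infty r\,V(r)\sin(qr)\,dr$ with momentum transfer $q = 2k\sin(\theta/2)$. The sketch below is our own illustration (the screened-Coulomb parameters $V_0$, $\mu$, $q$ are arbitrary, and $m = \hbar = 1$); it checks the quadrature against the known closed form for a Yukawa potential $V(r) = V_0\,e^{-\mu r}/r$, namely $f = -(2mV_0/\hbar^2)/(q^2 + \mu^2)$:

```python
import numpy as np

def born_amplitude(V, q, m=1.0, hbar=1.0, rmax=200.0, n=400000):
    """First Born amplitude for a central potential, as the radial integral
    f = -(2m/hbar^2) (1/q) * integral_0^inf r V(r) sin(q r) dr,
    evaluated by the midpoint rule on [0, rmax]."""
    r = (np.arange(n) + 0.5) * rmax / n          # midpoint grid
    integral = np.sum(r * V(r) * np.sin(q * r)) * rmax / n
    return -(2.0 * m / hbar**2) * integral / q

V0, mu, q = -1.0, 0.5, 1.2                        # attractive screened Coulomb
f_num = born_amplitude(lambda r: V0 * np.exp(-mu * r) / r, q)
f_exact = -(2.0 * V0) / (q**2 + mu**2)            # closed form with m = hbar = 1
assert abs(f_num - f_exact) < 1e-3
```

The $1/(q^2 + \mu^2)$ dependence is the familiar screened-Coulomb result: as $\mu \to 0$ it recovers the Rutherford $1/q^2$ behavior.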
The scattering amplitude is related to the DCS through its definition, Eqn. 2.5, and the evaluation of the incident and scattered flux:

$$d\sigma_{ij} = \frac{r^2\,|\mathbf{j}_{\text{scattered}}|}{|\mathbf{j}_{\text{incident}}|}\, d\Omega = |f(\theta)|^2\, d\Omega. \tag{2.23}$$

Using this construction, the problem then becomes to determine $f(\theta)$ for a given potential function $V(r)$. We are required to turn our attention from the asymptotic region to the range where the potential function is localized. This is the launching point for various techniques for determining the form of the scattering amplitude.

2.3.3 Scattering Amplitude

In the time-independent formulation given by Sakurai[2] the scattering amplitude is

$$f(\theta) = -\frac{1}{4\pi}\frac{2m}{\hbar^2}(2\pi)^3\,\langle\vec{k}'|V|\psi^{(+)}\rangle, \tag{2.24}$$

where $\vec{k}'$ represents the propagation vector reaching the observation point. The outgoing wave function $\psi^{(+)}$ and the potential $V$ in Eqn. 2.24 are the fundamental unknowns in this formulation. We will leave the discussion of the calculation of the potential term for later. The determination of $\psi^{(+)}$ is realized through several different approximations applicable in different regimes. Here we will discuss the Born approximation and the method of partial waves, which comprise the main quantum mechanical approximations; then we will address the semiclassical JWKB approximation and the Eikonal approximation.

2.4 Semiclassical Theory of Scattering

2.4.1 Born Approximation

If the assumption is made that the effect of the scattering potential is not very strong, we may replace $\psi^{(+)}$ with the free-particle solution $\phi$. In this way we treat the effect of the potential $V$ as a perturbation of the free Hamiltonian. In the fashion of a perturbation expansion we can calculate the first-order Born amplitude, denoted by $f^{(1)}$:

$$f^{(1)}(\theta) = -\frac{1}{4\pi}\frac{2m}{\hbar^2}\int e^{i(\vec{k}-\vec{k}')\cdot\vec{x}'}\, V(\vec{x}')\, d^3x'. \tag{2.25}$$

We can generate an iterative solution by introducing the transition operator $T$ such that

$$V|\psi^{(+)}\rangle = T|\phi\rangle. \tag{2.26}$$

Using the Lippmann-Schwinger equation, Eqn.
2.21, we obtain an expression for T:

T = V + V\,\frac{1}{E - H_0 + i\varepsilon}\, T.   (2.27)

The scattering amplitude can then be written as

f(\theta) = -\frac{1}{4\pi}\,(2\pi)^3\,\frac{2m}{\hbar^2}\,\langle \mathbf{k}' | T | \mathbf{k} \rangle.   (2.28)

The iterative solution for T is given by

T = V + V\,\frac{1}{E - H_0 + i\varepsilon}\,V + V\,\frac{1}{E - H_0 + i\varepsilon}\,V\,\frac{1}{E - H_0 + i\varepsilon}\,V + \cdots.   (2.29)

Correspondingly, we can expand f as follows:

f(\theta) = \sum_{n=1}^{\infty} f^{(n)}(\theta),   (2.30)

where n indicates the number of times V enters the expansion.

2.4.2 Partial Wave Expansion

In this method we assume the potential V is spherically symmetric, which allows us to expand the scattering amplitude in terms of angular momentum eigenfunctions and the phase. As a consequence of our assumption about the potential, the transition operator T (see Eqn. 2.29) commutes with L^2 and L_z. Therefore, T is a scalar operator. Application of the Wigner-Eckart theorem to a scalar operator immediately gives

\langle E', l', m' | T | E, l, m \rangle = T_l(E)\,\delta_{ll'}\,\delta_{mm'},   (2.31)

where E is the initial energy and l and m are the angular momentum eigenvalues. Clearly T is diagonal in both l and m. Turning our attention to Eqn. 2.28, we can show

f(\theta) = -\frac{1}{4\pi}\,(2\pi)^3\,\frac{2m}{\hbar^2} \sum_{l,m} T_l(E)\big|_{E=\hbar^2k^2/2m}\; Y_{lm}(\hat{\mathbf{k}}')\, Y_{lm}^{*}(\hat{\mathbf{k}}).   (2.32)

Since we have assumed the incident beam is moving in the z direction and \mathbf{k}' is the outgoing direction at the angle \theta, we see that

f(\theta) = \sum_{l=0}^{\infty} (2l+1)\, f_l(E)\, P_l(\cos\theta),   (2.33)

where f_l(E) = -\pi T_l(E)/k. Considering the conservation of angular momentum, and that in our construction the flux of particles is conserved, it is possible to relate the f_l to the phase of the outgoing wave:

f_l = \frac{e^{i\delta_l}\sin\delta_l}{k}.   (2.34)

Then the scattering amplitude is

f(\theta) = \frac{1}{k} \sum_{l=0}^{\infty} (2l+1)\, e^{i\delta_l}\sin\delta_l\, P_l(\cos\theta)   (2.35)

with \delta_l real. The effect of this expansion is to change the problem from calculating the scattering amplitude to evaluating the effect of the potential on the phase, which in some cases may be significantly easier. The differential cross section can be obtained just as in Eqn.
2.23, and the total cross section can be succinctly given by

\sigma_{\mathrm{tot}} = \int |f(\theta)|^2\, d\Omega = \frac{4\pi}{k^2} \sum_{l=0}^{\infty} (2l+1)\,\sin^2\delta_l.   (2.36)

At high energies we may make an additional approximation, which is our first step into the realm of semiclassical approximations: at high energies the spacing between the energy eigenstates becomes quite small. In this case we may regard the angular momentum quantum number l = b|k| as continuous, and replace the sums that appear in Eqn. 2.36 and Eqn. 2.35 by integrals.

2.4.3 JWKB

Now we turn our attention to the semiclassical Jeffreys, Wentzel, Kramers and Brillouin (JWKB) approximation. The wavelength of relative motion of two nuclei in a heavy-ion reaction is small, and semiclassical methods exploit the smallness of this quantity to obtain simple approximate formulae for scattering phase shifts and partial wave amplitudes. Following the discussion of Brink[7], the JWKB approximation begins by recasting the Schrödinger equation as

\frac{d^2\psi}{dx^2} + k^2(x)\,\psi = 0,   (2.37)

where k^2(x) = (2m/\hbar^2)\,(E - V(x)). Hence we can interpret k(x) as the local wave number, and p(x) = \hbar k(x) as the local momentum of the particle. Substituting

\psi(x) = e^{iS(x)/\hbar},   (2.38)

Eqn. 2.37 reduces to

q^2(x) = k^2(x) + i\,\frac{dq}{dx}, \qquad q(x) \equiv \frac{1}{\hbar}\frac{dS}{dx}.   (2.39)

The JWKB approximation depends on the assumption that q is a slowly varying function of x, and Eqn. 2.39 is solved by iteration. The second iteration gives

q(x) \approx \pm k(x) + \frac{i}{2}\,\frac{k'}{k},   (2.40)

and therefore

\psi(x) = C_1\, k^{-1/2} \exp\left( i\!\int\! k(x)\, dx \right) + C_2\, k^{-1/2} \exp\left( -i\!\int\! k(x)\, dx \right).   (2.41)

2.4.4 Eikonal Approximation

The eikonal approximation[2] is applied to cases in which the scattering potential V varies very little over a distance of the order of the wavelength \lambda of the scatterer. Note that V itself need not be small, only that E >> V; this is in contrast to the applicability of the Born approximation (Pg. 16).
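As a numerical illustration of the partial-wave formulas Eqns. 2.35 and 2.36 (our own sketch, not drawn from the thesis), consider scattering from an impenetrable sphere of radius a, for which the phase shifts are known in closed form, \tan\delta_l = j_l(ka)/n_l(ka), and the low-energy cross section tends to 4\pi a^2. The function names below are ours, and the spherical Bessel functions are written out explicitly for l = 0, 1:

```python
import math

def hard_sphere_delta(l, ka):
    """Phase shift for an impenetrable sphere: tan(delta_l) = j_l(ka) / n_l(ka).
    Spherical Bessel functions j_l, n_l written out for l = 0, 1."""
    j = {0: math.sin(ka) / ka,
         1: math.sin(ka) / ka**2 - math.cos(ka) / ka}[l]
    n = {0: -math.cos(ka) / ka,
         1: -math.cos(ka) / ka**2 - math.sin(ka) / ka}[l]
    return math.atan(j / n)

def sigma_tot(k, a, lmax=1):
    """Total cross section, Eqn. 2.36, truncated at l = lmax."""
    return (4 * math.pi / k**2) * sum(
        (2 * l + 1) * math.sin(hard_sphere_delta(l, k * a))**2
        for l in range(lmax + 1))

# Low-energy limit ka << 1: the s-wave dominates and sigma -> 4*pi*a^2.
a, k = 1.0, 0.01
print(sigma_tot(k, a))  # close to 4*pi ~ 12.566
```

At this small ka the l = 1 contribution is already negligible, illustrating why truncating the sum over l is harmless at low energy (and, conversely, why many terms are needed at high energy, where the semiclassical treatment takes over).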
The eikonal condition allows us to replace the exact wave function \psi^{(+)} by a semiclassical wave function

\psi^{(+)} \sim e^{iS(\mathbf{x})/\hbar},   (2.42)

where S(\mathbf{x}) can be interpreted as the phase of the wave function. Using the flux we can observe that

\mathbf{j} \propto \nabla S,   (2.43)

allowing us to interpret \nabla S as the "momentum" of the wave packet. Using Eqn. 2.42 in the time-independent Schrödinger equation leads to the Hamilton-Jacobi equation for S,

\frac{(\nabla S)^2}{2m} + V = E = \frac{\hbar^2 k^2}{2m}.   (2.44)

Here again we have transferred the difficult problem of calculating \psi^{(+)} to the calculation of the phase. In the high-energy regime the computation of S is often made with the additional approximation of straight-line trajectories for the semiclassical path of the scattered wave packet.

2.5 Time-Dependent Scattering Theory

While the time-independent methods have dominated the development of scattering theory for most of the history of quantum mechanics, it must be remembered that scattering is a time-dependent process. Time-dependent theories were not studied in earnest until the advent of modern computers in the 1970s, one of the main reasons being that the time-dependent equations are far more demanding computationally than the time-independent ones. This young branch of scattering theory is not as developed and established as its counterpart, making a concise summary more difficult. Several reviews have been compiled, among them Deumens et al.[8] and Kröger[9]. The time-dependent methods can be roughly organized into two camps. In the first, the time dependence is reserved for the nuclear degrees of freedom only; in the second, the time dependence is given to all the degrees of freedom, nuclear and electronic. The first approach is time-dependent nuclear dynamics on one or more predetermined potential energy surfaces (PES). In calculating the PES, the electronic degrees of freedom have been accounted for before the dynamical equations are solved and propagated.
In the second, the time evolution of electrons and nuclei is performed simultaneously. However, this calculation is very intensive if PES are not used, so practical methods have so far been mostly restricted to classical, semiclassical, or quasiclassical descriptions of the nuclear degrees of freedom.

2.5.1 Wave-Packet Propagation: Exact Methods

Methods for which the numerical integration of the nuclear time-dependent Schrödinger equation is performed directly are called exact[10]. Although these methods contain approximations for the structure of the Hamiltonian and other numerical approximations, they can be carried to any desired level of accuracy. The scattering is simulated using a wave packet either represented on a discretized grid of points or expanded in a convenient basis set.

2.5.2 Approximate Time-Dependent Methods

Time-dependent self-consistent field. For these methods the exact time evolution is replaced with numerically easier approximations and models. One such method is the Time-Dependent Self-Consistent Field (TDSCF) method introduced by Bisseling et al.[11]. The nuclei are represented by Hartree products of wave packets on a grid. Each nucleus then moves in the average field of the others. Further developments of this method add several exit channels and include some correlation effects for the nuclear dynamics by adding a multi-reference scheme of Hartree products. These methods can be used for scattering, photodissociation, atomic transfer and vibrational dissociation.

Trajectory surface hopping. The trajectory surface hopping (TSH) method uses a probabilistic description[12, 13] to make the solution of the time-dependent equations easier. Instead of solving the coupled set of equations at all nuclear configurations, a hopping probability for the electronic configuration is computed at each instant.
Thus the equation for the classical nuclei depends only on the hopping probability between the states involved, usually just two. Direct calculation of the dynamics of the electronic states and phase is ignored in the TSH method[14].

Car-Parrinello. A very popular method for dynamics is the Car-Parrinello (CP) method[15]. The original method was an algorithm to solve minimization problems, which leads to the solution of eigenvalue problems. While this method could be used to solve any number of eigenvalue equations, it has primarily been applied using the equations derived from Density Functional Theory (DFT).

Time-dependent Hartree-Fock. This large class of methods is based on the Hartree-Fock equation for the dynamics[16] and utilizes semiclassical or classical descriptions for the atomic nuclei. These methods consider an explicit dynamical description of the electronic state. Sometimes the full ab initio Hamiltonian is considered; in other cases model Hamiltonians are set up to drive the dynamics. The coupling between the electrons and nuclei in these models is through the average potential energy surface. The nuclei and the electrons affect each other only through their instantaneous positions in the Fock operator. The result of this is that high-energy, non-chemical scattering is not properly handled. The ad hoc solution for this problem is the addition of electron translation factors.

CHAPTER 3
END THEORY OF TIME DEPENDENT MOLECULAR DYNAMICS

Electron Nuclear Dynamics (END)[17] is becoming a complete general theory of molecular processes. It is one of only a few theories in the dynamics community whose foundations rest on the Time-Dependent Variational Principle (TDVP). This novel approach generates a theoretical framework to calculate the full dynamics of molecular collisions without the use of the Born-Oppenheimer approximation, thus allowing the study of nonadiabatic interactions.
The richness of the theory can be implemented in a hierarchy of complexity ranging from the semiclassical to the fully quantum. At all levels of realization of this theory there is fertile ground of interesting physical and chemical problems to investigate. Currently the END theory is implemented in the ENDyne[18] code. The present level of implementation uses classical nuclei in the narrow wave packet limit and a single complex determinant to describe the electrons. Even at this basic level of approximation, several applications have shown that END can capture much of the physics of nonadiabatic molecular processes[8, 19, 20, 21, 22, 23, 24, 25]. It has also become clear that there are limitations to this level of the theory. Further developments in implementation are in progress.

3.1 The END Theory

The starting point for the END theory is the TDVP[26], which is the quantum mechanical analogue of the classical principle of least action, sometimes called "Hamilton's principle". The application of the principle of least action to a physical problem often leads to more computationally tractable equations of motion than, for example, the Schrödinger equation. Here we will introduce the quantum mechanical action A and demonstrate its connection to the Schrödinger equation by introducing a general set of complex variational parameters. We will then derive the equations of motion using the overlap matrix and the energy. Finally we will discuss the implementations of END, specifically the implementation featured in the ENDyne code. The quantum mechanical action is given by

A = \int_{t_1}^{t_2} L\, dt,   (3.1)

with the quantum mechanical Lagrangian defined as (\hbar = 1)

L = \frac{\langle\Psi|\,\tfrac{i}{2}\bigl(\overrightarrow{\partial}_t - \overleftarrow{\partial}_t\bigr) - H\,|\Psi\rangle}{\langle\Psi|\Psi\rangle}.   (3.2)

Here H is the quantum mechanical Hamiltonian of the whole system. Note the "left-acting derivative" \overleftarrow{\partial}_t used in the Lagrangian. It is defined for a differentiable function f as

f\,\overleftarrow{\partial}_t \equiv \dot f.   (3.3)

This definition is necessary if we want to promote Eqn.
3.2 to an operator in Hilbert space, L \to \hat{L}, and ensure that it is Hermitian. We make the ansatz that \Psi, the wave function for the whole system, depends on a set of complex variational parameters \zeta = \{\zeta_1, \zeta_2, \ldots, \zeta_m\} which depend formally on time, \zeta = \zeta(t). We introduce the ket |\zeta\rangle as the representation of \Psi = \Psi(\zeta) = |\zeta\rangle in terms of the complex parameters \{\zeta_i\}. Here we observe that these complex parameters, along with their time derivatives, serve as generalized positions and velocities for our quantum mechanical Lagrangian. Thus

L(\Psi, \dot\Psi) = L(\zeta, \zeta^*, \dot\zeta, \dot\zeta^*),   (3.4)

where \dot\zeta = d\zeta/dt are the generalized velocities. The time evolution of the system is determined using the TDVP, which requires that the action be an extremum with respect to its parameters, so

\delta A = \delta \int_{t_1}^{t_2} L\, dt = 0,   (3.5)

subject to the boundary conditions

\delta|\Psi\rangle = \delta\langle\Psi| = 0   (3.6)

at t = t_1 and t_2[27]. Substituting our expressions, employing the above boundary conditions, and integrating by parts gives

\delta A = \int_{t_1}^{t_2} dt \left[ \frac{\delta\langle\zeta|}{\langle\zeta|\zeta\rangle} \left( \Bigl[ i\frac{\partial}{\partial t} - H \Bigr] |\zeta\rangle - \frac{\langle\zeta|\,\tfrac{i}{2}\bigl(\overrightarrow{\partial}_t - \overleftarrow{\partial}_t\bigr) - H\,|\zeta\rangle}{\langle\zeta|\zeta\rangle}\, |\zeta\rangle \right) + \mathrm{c.c.} \right] = 0.   (3.7)

Since \Psi, and equivalently \zeta, can be varied throughout the full Hilbert space, the integrand of Eqn. 3.7 must satisfy

\Bigl[ i\frac{\partial}{\partial t} - H \Bigr] |\zeta\rangle = \frac{\langle\zeta|\,\tfrac{i}{2}\bigl(\overrightarrow{\partial}_t - \overleftarrow{\partial}_t\bigr) - H\,|\zeta\rangle}{\langle\zeta|\zeta\rangle}\, |\zeta\rangle.   (3.8)

If we explicitly consider the phase \gamma of the wave function by writing |\zeta\rangle \to e^{i\gamma}|\zeta\rangle, we can choose \gamma so as to absorb the right-hand side of Eqn. 3.8:

\dot\gamma = \frac{\langle\zeta|\,\tfrac{i}{2}\bigl(\overrightarrow{\partial}_t - \overleftarrow{\partial}_t\bigr) - H\,|\zeta\rangle}{\langle\zeta|\zeta\rangle}.   (3.9)

Substituting these results into Eqn. 3.8 gives

\Bigl[ i\frac{\partial}{\partial t} - H \Bigr]\, e^{i\gamma}|\zeta\rangle = 0.   (3.10)

This result is the Schrödinger equation. The preceding derivation provides the justification for the form of the action presented in Eqn. 3.4. If we choose to specify a constrained form for the wave function \Psi, then we restrict ourselves to particular regions of the Hilbert space, resulting in a dynamical evolution that is an approximate solution to the Schrödinger equation. This idea forms the kernel from which all realizations of the END theory are derived.
In choosing the set of parameters \zeta and the form of the Hamiltonian H, we are able to derive equations of motion relevant to many different molecular processes. The primary advantage of this theory lies in the fact that we have not constrained the dynamics of the system except through our choice of parameterization. Hence we are not limited to investigating only adiabatic processes, as molecular studies using PES often are. Even before making any assumptions about the form of \Psi we can derive a set of general equations of motion. Let us define the overlap S = S(\zeta^*, \zeta) = \langle\zeta|\zeta\rangle and the energy E = E(\zeta^*, \zeta) = \langle\zeta|H|\zeta\rangle/\langle\zeta|\zeta\rangle. Then Eqn. 3.7 can be expressed in the \zeta representation as

\delta A = \int_{t_1}^{t_2} \sum_\alpha \left\{ \delta\zeta_\alpha^* \left[ \sum_\beta i\,\frac{\partial^2 \ln S}{\partial\zeta_\alpha^*\,\partial\zeta_\beta}\,\dot\zeta_\beta - \frac{\partial E}{\partial\zeta_\alpha^*} \right] + \delta\zeta_\alpha \left[ -\sum_\beta i\,\frac{\partial^2 \ln S}{\partial\zeta_\alpha\,\partial\zeta_\beta^*}\,\dot\zeta_\beta^* - \frac{\partial E}{\partial\zeta_\alpha} \right] \right\} dt = 0.   (3.11)

Since \delta\zeta_\alpha and \delta\zeta_\alpha^* are independent variations, the two sums in the integrand must separately vanish, resulting in

i\sum_\beta C_{\alpha\beta}\,\dot\zeta_\beta = \frac{\partial E}{\partial\zeta_\alpha^*},   (3.12)

where we define the complex Hermitian matrix C = \{C_{\alpha\beta}\} = \partial^2 \ln S / \partial\zeta_\alpha^*\,\partial\zeta_\beta. The general END dynamical equations are given by Eqn. 3.12, and they may be recast in matrix form as

\begin{pmatrix} iC & 0 \\ 0 & -iC^* \end{pmatrix} \begin{pmatrix} \dot\zeta \\ \dot\zeta^* \end{pmatrix} = \begin{pmatrix} \partial E/\partial\zeta^* \\ \partial E/\partial\zeta \end{pmatrix}.   (3.13)

We define a generalized Poisson bracket by

\{f, g\} = -i \sum_{\alpha\beta} \left[ \frac{\partial f}{\partial\zeta_\alpha}\,(C^{-1})_{\alpha\beta}\,\frac{\partial g}{\partial\zeta_\beta^*} - \frac{\partial g}{\partial\zeta_\alpha}\,(C^{-1})_{\alpha\beta}\,\frac{\partial f}{\partial\zeta_\beta^*} \right],   (3.14)

with f(\zeta, \zeta^*) and g(\zeta, \zeta^*) being differentiable functions. It follows that \dot\zeta = \{\zeta, E\} and \dot\zeta^* = \{\zeta^*, E\}, so the generalized Poisson brackets show that the time evolution of the wave function parameters is governed by Hamilton-like equations. Moreover, the additional relations

\{\zeta, \zeta\} = \{\zeta^*, \zeta^*\} = 0 \quad\text{and}\quad \{\zeta, \zeta^*\} = -iC^{-1}   (3.15)

show that \zeta^* and \zeta behave as "classical" coordinates. Observe that if the matrix C were the unit matrix, then Eqn. 3.15 would reduce to the commutation rules for a flat phase space. Our generalization has led to a curved phase space.

3.2 The END Wave Function

At the time of this writing only the first nontrivial level of END theory has been implemented in the code ENDyne[18].
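As a minimal illustration of the dynamical equation 3.12 (our own sketch, not part of ENDyne), consider a two-level system with the single complex parameter \zeta in |\zeta\rangle = |1\rangle + \zeta|2\rangle. Then S = 1 + |\zeta|^2, C = (1 + |\zeta|^2)^{-2}, and for the coupling Hamiltonian H with H_{11} = H_{22} = 0 and H_{12} = H_{21} = V, Eqn. 3.12 reduces to i\dot\zeta = V(1 - \zeta^2). Integrating this with a standard fourth-order Runge-Kutta step reproduces the exact Rabi oscillation |c_2|^2 = \sin^2(Vt):

```python
import math

V = 1.0  # off-diagonal coupling (hypothetical value)

def rhs(z):
    """i C zdot = dE/dzeta* for |zeta> = |1> + zeta|2>, H = [[0,V],[V,0]],
    which reduces to zdot = -i V (1 - zeta^2) after the factors of C cancel."""
    return -1j * V * (1.0 - z * z)

def rk4(z, dt):
    k1 = rhs(z)
    k2 = rhs(z + 0.5 * dt * k1)
    k3 = rhs(z + 0.5 * dt * k2)
    k4 = rhs(z + dt * k3)
    return z + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def excited_population(t, dt=1e-3):
    z = 0.0 + 0.0j                    # start in state |1>
    for _ in range(round(t / dt)):
        z = rk4(z, dt)
    return abs(z)**2 / (1.0 + abs(z)**2)   # |c2|^2 = |zeta|^2/(1+|zeta|^2)

t = 0.7
print(excited_population(t), math.sin(V * t)**2)  # the two agree
```

Because this parameterization spans the whole two-dimensional Hilbert space, the TDVP evolution is exact here; a restricted parameterization would instead give the best approximate trajectory within the chosen manifold, which is exactly the situation END exploits.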
In this model the unnormalized END wave function in the lab frame is written as

\Psi_{\mathrm{END}}(\mathbf{X}, \mathbf{x}, t) = F_{\mathrm{nucl}}(\mathbf{X}; \mathbf{R}(t), \mathbf{P}(t))\; f_{\mathrm{el}}(\mathbf{x}; z(t), \mathbf{R}(t))\; \exp\bigl[ i\gamma_{\mathrm{total}}(t) \bigr],   (3.16)

where \mathbf{X} and \mathbf{x} are the nuclear and electronic coordinates, \mathbf{R}(t), \mathbf{P}(t) and z(t) are the variational parameters, and \gamma_{\mathrm{total}}(t) is the total phase. The time dependence of the wave function comes only through the parameters and the phase. We describe this wave function as the quasiclassical single-determinantal END wave function (QCSD END), and it has three important parts: the nuclear wave function F_{\mathrm{nucl}}(\mathbf{X}; \mathbf{R}(t), \mathbf{P}(t)), the electronic wave function f_{\mathrm{el}}(\mathbf{x}; z(t), \mathbf{R}(t)), and the phase. The nuclear part consists of a simple product of unnormalized, so-called "frozen" Gaussian wave packets \Phi_k(\mathbf{X}_k; \mathbf{R}_k(t), \mathbf{P}_k(t)):

F_{\mathrm{nucl}}(\mathbf{X}; \mathbf{R}(t), \mathbf{P}(t)) = \prod_{k=1}^{N_{\mathrm{nucl}}} \Phi_k(\mathbf{X}_k; \mathbf{R}_k(t), \mathbf{P}_k(t)),   (3.17)

where

\Phi_k(\mathbf{X}_k; \mathbf{R}_k(t), \mathbf{P}_k(t)) = \exp\Bigl\{ -a_k\bigl[\mathbf{X}_k - \mathbf{R}_k(t)\bigr]^2 + i\,\mathbf{P}_k(t)\cdot\bigl[\mathbf{X}_k - \mathbf{R}_k(t)\bigr] \Bigr\}.   (3.18)

The variational parameters \mathbf{R}_k(t) and \mathbf{P}_k(t) and the constants a_k are assumed to be real. The construction of these wave functions establishes the connection of these parameters with the average positions and momenta of the wave packets. The constant a_k can be related to the standard deviation, or width, of the Gaussian functions. The physical meaning of these parameters and constants implies that each nuclear wave packet can be seen as a parametric plane wave

\exp\Bigl\{ i\,\mathbf{P}_k(t)\cdot\bigl[\mathbf{X}_k - \mathbf{R}_k(t)\bigr] \Bigr\}   (3.19)

localized around the position \mathbf{R}_k(t) by a simple Gaussian factor

\exp\Bigl\{ -a_k\bigl[\mathbf{X}_k - \mathbf{R}_k(t)\bigr]^2 \Bigr\}.   (3.20)

Furthermore, since the constants a_k are time-independent and the evolution proceeds through the TDVP, each nuclear Gaussian will remain a Gaussian during the evolution without changing shape. The nuclear wave packets change only in their positions and momenta. Contrasting this method with unconstrained propagation of nuclear wave packets, we observe that the QCSD END nuclear wave functions are not allowed to spread, change shape, or split into separate packets.
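A frozen Gaussian of the form Eqns. 3.18 to 3.20 is simple to evaluate; the one-dimensional sketch below (our own illustration, with arbitrary parameter values) shows that translating R and P changes only where the packet sits and its local phase, never the width of its envelope:

```python
import cmath

def frozen_gaussian(x, R, P, a):
    """Unnormalized 1-D frozen Gaussian packet, Eqn. 3.18 (hbar = 1):
    exp{ -a (x - R)^2 + i P (x - R) }."""
    return cmath.exp(-a * (x - R)**2 + 1j * P * (x - R))

a = 0.5
# The envelope |Phi| depends only on the displacement x - R, not on R or P
# separately: the packet rigidly follows (R(t), P(t)) without changing shape.
w1 = abs(frozen_gaussian(1.3, R=0.0, P=0.0, a=a))
w2 = abs(frozen_gaussian(4.3, R=3.0, P=2.5, a=a))
print(w1, w2)  # identical envelope values at the same displacement
```

This rigidity is precisely the approximation discussed above: the packet carries average position and momentum exactly, at the price of suppressing spreading and tunneling.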
The frozen wave packets are limited in this regard, but they have a great computational advantage over the general method. The use of frozen Gaussian wave packets has been widespread[3] and is considered quite valid if one chooses to ignore tunneling. In most molecular experiments the scattering process is such that the width of the nuclear wave function can be considered negligible. Additionally, this level of wave function for the nuclei allows a description of the internal motion of the molecular components of the scattering process. The electronic function f_{\mathrm{el}}(\mathbf{x}; z(t), \mathbf{R}(t)) is a single-determinant wave function throughout the time evolution. This type of function has long been established in the TDHF approach of nuclear many-body theory[28, 29, 30]. Electron Nuclear Dynamics theory has an attractive feature in its ability to treat both the nuclear and electronic degrees of freedom simultaneously. Additionally, this single determinant is related to either a restricted or an unrestricted Hartree-Fock description of the reagent and product ground states. The difficult problem of constructing a wave function which remains a single determinant while being evolved was solved by Thouless[29, 30]. The electronic QCSD END wave function is presented following the original derivation by Thouless. It is meaningful to divide the set of K spin orbitals into N occupied and K - N unoccupied spin orbitals. The linear array \psi refers to the set of all spin orbitals; then \psi^{\circ} and \psi^{\bullet} refer to the occupied and unoccupied orbitals, respectively. The atomic spin orbitals may also be partitioned into two sets. We write them as

\phi = (\phi^{\circ}\ \phi^{\bullet}).   (3.21)

Similarly, for the matrices in the equations four sub-blocks are identified: the occupied N \times N and unoccupied (K - N) \times (K - N) diagonal blocks, and the upper and lower off-diagonal blocks.
The transformation to the molecular basis is then

(\psi^{\circ}\ \psi^{\bullet}) = (\phi^{\circ}\ \phi^{\bullet}) \begin{pmatrix} W^{\circ} & W^{>} \\ W^{\vee} & W^{\bullet} \end{pmatrix}.   (3.22)

The right-angle > denotes the upper off-diagonal block, and the down-angle \vee denotes the lower off-diagonal block, which has a vertical rectangular shape. The purpose of this notation is to avoid many messy index manipulations. As a mnemonic we reserve p, q, r for indices running over the particle, or unoccupied, range N+1, \ldots, K, and h, g, f for the hole, or occupied, range 1, \ldots, N. Both the atomic \phi and molecular \psi orbitals depend parametrically on the Gaussian parameters \mathbf{R}(t). Introducing a general unitary transformation U of the orthonormal molecular spin orbital basis, one can write

(c^{\dagger}_{\circ}\ c^{\dagger}_{\bullet}) = (b^{\dagger}_{\circ}\ b^{\dagger}_{\bullet}) \begin{pmatrix} U^{\circ} & U^{>} \\ U^{\vee} & U^{\bullet} \end{pmatrix}   (3.23)

for the basis field operators, and obtain for a determinantal state[31]

\prod_{h=1}^{N} c^{\dagger}_{h} |\mathrm{vac}\rangle = a \prod_{h=1}^{N} \Bigl( b^{\dagger}_{h} + \sum_{p=N+1}^{K} b^{\dagger}_{p}\, z_{ph} \Bigr) |\mathrm{vac}\rangle,

where the invariance of a determinantal wave function, up to a constant a, under a linear transformation of its occupied spin orbitals has been used. Defining the Thouless parameters z_{ph} = \sum_{k=1}^{N} U^{\vee}_{pk}\,(U^{\circ\,-1})_{kh} and the reference state

|\Phi_0\rangle = \prod_{h=1}^{N} b^{\dagger}_{h} |\mathrm{vac}\rangle,   (3.24)

one can write the unnormalized Thouless determinant |z, \mathbf{R}\rangle = f_{\mathrm{el}}(\mathbf{x}; z(t), \mathbf{R}(t)) as

|z, \mathbf{R}\rangle = \prod_{h=1}^{N} \Bigl( 1 + \sum_{p=N+1}^{K} b^{\dagger}_{p}\, z_{ph}\, b_{h} \Bigr) |\Phi_0\rangle = \exp\Bigl( \sum_{p=N+1}^{K} \sum_{h=1}^{N} z_{ph}\, b^{\dagger}_{p} b_{h} \Bigr) |\Phi_0\rangle.   (3.25)

Here the nilpotency of the operators b^{\dagger}_{p} b_{h} has been used. The parameters z_{ph} are complex numbers. The determinantal wave function of this state is \det \chi_h(\mathbf{r}_n) in terms of the occupied dynamical orbitals

\chi_h = \psi_h + \sum_{p=N+1}^{K} \psi_p\, z_{ph}.   (3.26)

These orbitals are not orthogonal. The function |z, \mathbf{R}\rangle = f_{\mathrm{el}}(\mathbf{x}; z(t), \mathbf{R}(t)) is the single-determinant wave function of the QCSD END theory. If the reference state |\Phi_0\rangle is selected as the HF ground state of the system, then the Thouless determinant contains this state plus all of its excitations. Finally, the QCSD END total phase \gamma_{\mathrm{total}}(t) is the quantum action corresponding to the QCSD END wave function \Psi_{\mathrm{END}}(\mathbf{X}, \mathbf{x}, t).
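The dynamical orbitals of Eqn. 3.26, and their non-orthogonality, can be made concrete with a small numerical sketch (our own example: K = 3 orthonormal spin orbitals, N = 1 occupied, arbitrary complex Thouless parameters):

```python
# Occupied dynamical orbital chi_h = psi_h + sum_p psi_p z_ph (Eqn. 3.26),
# with the psi_i taken as orthonormal unit vectors in a K = 3 basis.
K, N = 3, 1
z = {2: 0.3 + 0.1j, 3: -0.2j}   # Thouless parameters z_p1 for p = 2, 3

# chi_1 in the psi basis: coefficient 1 on psi_1 and z_p1 on each psi_p.
chi = [1.0 + 0.0j, z[2], z[3]]

# Overlap <chi_1|chi_1> = 1 + sum_p |z_p1|^2 > 1: the dynamical orbital is
# unnormalized (and in general the chi_h are not orthogonal to one another).
overlap = sum(c.conjugate() * c for c in chi).real
print(overlap)  # 1 + |0.3+0.1i|^2 + |-0.2i|^2 = 1.14
```

The fixed coefficient 1 on the occupied reference orbital is what makes the z_{ph} unconstrained complex parameters, which is exactly the property the TDVP machinery of Sec. 3.1 requires.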
CHAPTER 4
OVERVIEW OF ROVIBRATIONAL REACTION DYNAMICS

In this chapter we provide a brief overview of classical and quantum mechanical rovibrational dynamics. First, we review classical vibrations. Second, we discuss the quantum mechanical description of the harmonic oscillator. Third, the classical theory of rotations is examined. Finally, the quantum mechanical description of rotations is presented.

4.1 Classical Theory of Vibrations

A system is said to be in equilibrium when the generalized forces acting on the system vanish:

Q_i = -\frac{\partial V}{\partial q_i} = 0.   (4.1)

The potential energy therefore has an extremum at the equilibrium configuration of the system, q_{01}, q_{02}, \ldots, q_{0n}. If the initial configuration is at the equilibrium position with zero initial velocities \dot q_{0i}, then the system will remain in equilibrium, like a pendulum at rest. In the case of a stable equilibrium, one for which small displacements result in small bounded motion about the resting position, all functions can be expanded in a Taylor series and we retain only the lowest-order terms. We naturally cast our system in terms of deviations \eta_i from equilibrium:

q_i = q_{0i} + \eta_i.   (4.2)

Using the \eta_i as the new generalized coordinates of the motion, we expand the potential energy about q_{0i} to obtain

V(q_1, q_2, \ldots, q_n) = V(q_{01}, q_{02}, \ldots, q_{0n}) + \left( \frac{\partial V}{\partial q_i} \right)_0 \eta_i + \frac{1}{2} \left( \frac{\partial^2 V}{\partial q_i\, \partial q_j} \right)_0 \eta_i \eta_j + \cdots,   (4.3)

where we have imposed the Einstein summation convention. As we see from Eqn. 4.1 the linear term vanishes, and we shift the zero of potential energy to eliminate the first term in the series. We are then left with the quadratic terms as the first approximation to V:

V = \frac{1}{2} \left( \frac{\partial^2 V}{\partial q_i\, \partial q_j} \right)_0 \eta_i \eta_j = \frac{1}{2} V_{ij}\, \eta_i \eta_j,   (4.4)

where the second derivatives of V have been designated by the constants V_{ij}. A series expansion can also be obtained for the kinetic energy. The kinetic energy is a homogeneous quadratic function of the velocities when the generalized coordinates do not explicitly depend on time:

T = \frac{1}{2} m_{ij}\, \dot q_i \dot q_j = \frac{1}{2} m_{ij}\, \dot\eta_i \dot\eta_j.
(4.5)

The coefficients m_{ij} are functions of the coordinates q_k, but they may be expanded in a Taylor series about the equilibrium configuration just as we did for V. Keeping only the quadratic terms, we can therefore write the kinetic energy as

T = \frac{1}{2} T_{ij}\, \dot\eta_i \dot\eta_j.   (4.6)

We should observe that both V_{ij} and T_{ij} are symmetric, since the individual terms are unaffected by an interchange of indices. The Lagrangian is now given by

L = \frac{1}{2} \left( T_{ij}\, \dot\eta_i \dot\eta_j - V_{ij}\, \eta_i \eta_j \right).   (4.7)

Using the \eta_i as the generalized coordinates, the Lagrangian leads to the following n equations of motion:

T_{ij}\, \ddot\eta_j + V_{ij}\, \eta_j = 0,   (4.8)

where explicit use has been made of the symmetry property of the V_{ij} and T_{ij} coefficients. We must then solve this set of simultaneous differential equations involving all of the coordinates \eta_i to obtain the motion near the equilibrium. The equations of motion are linear differential equations with constant coefficients, leading us to make the ansatz that the solution has the form

\eta_i = C a_i\, e^{-i\omega t}.   (4.9)

Here C a_i gives the complex amplitude of the oscillation for each coordinate \eta_i, the factor C being introduced for convenience as a scale factor. Clearly, it is the real part of Eqn. 4.9 that corresponds to the real motion. Substituting our ansatz into the equations of motion leads to

\left( V_{ij} - \omega^2 T_{ij} \right) a_j = 0.   (4.10)

These equations constitute n linear homogeneous equations for the a_i, and consequently can have a solution only if the determinant of the coefficients vanishes:

\begin{vmatrix} V_{11} - \omega^2 T_{11} & V_{12} - \omega^2 T_{12} & \cdots \\ V_{21} - \omega^2 T_{21} & V_{22} - \omega^2 T_{22} & \cdots \\ \vdots & \vdots & \ddots \end{vmatrix} = 0.   (4.11)

We have in effect an algebraic equation of the nth degree for \omega^2, and the roots of the determinant provide the frequencies for which Eqn. 4.9 represents a correct solution to the equations of motion. For each of these values of \omega^2 the equations may be solved for the amplitudes a_j, or more precisely, for n - 1 of the amplitudes in terms of the remaining a_j.
The general solution of the equations of motion may now be written as a summation over an index k:

\eta_i = \sum_k C_k\, a_{ik}\, e^{-i\omega_k t},   (4.12)

there being a complex scale factor C_k for each resonant frequency. We should note that we are only interested in the real part of our equations of motion; consequently we ignore the negative values of \omega. It is possible to transform the \eta_i to a new set of generalized coordinates that are all simple periodic functions of time, a set of variables known as the normal coordinates. We define a new set of coordinates \zeta_j related to the original coordinates \eta_i by the equations

\eta_i = a_{ij}\, \zeta_j,   (4.13)

or in linear algebra terms

\boldsymbol{\eta} = A\boldsymbol{\zeta}.   (4.14)

Importantly, A diagonalizes V by a congruence transformation, and the potential therefore reduces simply to

V = \frac{1}{2} \sum_k \omega_k^2\, \zeta_k^2.   (4.15)

Similarly, the kinetic energy transforms to

T = \frac{1}{2} \sum_k \dot\zeta_k^2.   (4.16)

Returning to the Lagrangian, we now have

L = \frac{1}{2} \sum_i \left( \dot\zeta_i^2 - \omega_i^2\, \zeta_i^2 \right),   (4.17)

so that the Lagrange equations for the \zeta_i are

\ddot\zeta_i + \omega_i^2\, \zeta_i = 0.   (4.18)

The immediate solutions are

\zeta_i = C_i\, e^{-i\omega_i t}.   (4.19)

Each of the new coordinates is thus a simple periodic function involving only one of the resonant frequencies, so it is customary to call the \zeta's the normal coordinates of the system. Individually, each normal coordinate corresponds to a vibration of the system with only one frequency, and these component oscillations are called the normal modes of vibration. All of the particles in each mode vibrate with the same frequency and with the same phase, the relative amplitudes being determined by the matrix elements a_{ik}. The complete motion is then built up out of the sum of the normal modes weighted with the appropriate amplitude and phase factors contained in the C_k's. Harmonics of the fundamental frequencies are absent in the complete motion, essentially because of the stipulation that the amplitude of oscillation be small. We are then allowed to represent the potential as a quadratic form, which is characteristic of simple harmonic motion.
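As a concrete check of the secular equation 4.11 (our own worked example, not from the text): two equal masses m attached to walls and to each other by three springs of constant k have V = [[2k, -k], [-k, 2k]] and T = m times the identity, and the roots of det(V - \omega^2 T) = 0 are the familiar normal-mode frequencies \omega^2 = k/m (in-phase mode) and 3k/m (out-of-phase mode):

```python
import math

def normal_mode_frequencies(V, T):
    """Roots of det(V - w^2 T) = 0 for 2x2 symmetric V and diagonal T,
    solved as a quadratic in lambda = w^2."""
    a = T[0][0] * T[1][1]
    b = -(V[0][0] * T[1][1] + V[1][1] * T[0][0])
    c = V[0][0] * V[1][1] - V[0][1] * V[1][0]
    disc = math.sqrt(b * b - 4 * a * c)
    lam = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
    return [math.sqrt(x) for x in lam]

k, m = 4.0, 1.0
V = [[2 * k, -k], [-k, 2 * k]]
T = [[m, 0.0], [0.0, m]]
print(normal_mode_frequencies(V, T))  # [2.0, 3.464...] = [sqrt(k/m), sqrt(3k/m)]
```

The corresponding amplitude vectors a = (1, 1) and (1, -1) are the columns of the transformation A of Eqn. 4.14, which simultaneously diagonalizes T and V as the text describes.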
The normal coordinate transformation emphasizes the harmonic character of the motion, for the Lagrangian in the normal coordinates is seen to be the sum of Lagrangians for harmonic oscillators of frequencies \omega_k. We can thus consider the complete motion for small oscillations as being obtained by exciting the various harmonic oscillators with different intensities and phases.

4.2 Quantum Mechanical Theory of Vibrations

The quantum mechanical harmonic oscillator is a superb pedagogical tool, as it is a system that can be exactly solved in both classical and quantum theory, but it is also a system of enormous physical relevance. Any system displaced by small amounts near a configuration of stable equilibrium may be described either by an oscillator or by a collection of decoupled harmonic oscillators. The dynamics of a collection of noninteracting oscillators is no more complicated than that of a single oscillator, aside from the N-fold increase in degrees of freedom. In addressing the solution of the oscillator we are actually confronting the general problem of small oscillations near equilibrium of an arbitrary system.

4.2.1 The Quantum Mechanical Harmonic Oscillator

The canonical example of a single harmonic oscillator is a mass m coupled to a spring of force constant k. Small deviations in the position x produce the force given by Hooke's law, F = -kx, and the potential V = \frac{1}{2}kx^2. The Hamiltonian for this system is

H = T + V = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 x^2,   (4.20)

where \omega = (k/m)^{1/2} is the classical frequency of oscillation. Any Hamiltonian that is quadratic in the coordinates and momenta will be called a harmonic oscillator Hamiltonian. The mass-spring system is just one among a family of systems described by the oscillator Hamiltonian. If a particle moving in a potential V(x) is placed at one of the minima x_0, it will remain there in a state of stable, static equilibrium. Consider the dynamics of this particle as it fluctuates by small amounts near x = x_0.
The potential may be expanded as a Taylor series, as in the classical model. For small oscillations, we may again neglect all but the leading term and arrive at the potential of Eqn. 4.20, identifying d^2V/dx^2 evaluated at x_0 with k = m\omega^2. A system of harmonic oscillators can be formed by simple addition of the individual Hamiltonians and decoupling them through a coordinate transformation. In general, a system of N coupled harmonic oscillators can be decoupled into N separate harmonic oscillators.

4.2.2 Quantization of the Harmonic Oscillator

We focus our attention on the time-independent Schrödinger eigenvalue problem:

\left( \frac{P^2}{2m} + \frac{1}{2}m\omega^2 X^2 \right) |E\rangle = E|E\rangle.   (4.21)

A clever method due to Dirac allows us to work in the energy basis without having to know the operators X and P ahead of time. From the basis-independent commutation relation

[X, P] = i\hbar,   (4.22)

we are motivated to introduce the operator

a = \sqrt{\frac{m\omega}{2\hbar}}\, X + \frac{i}{\sqrt{2m\hbar\omega}}\, P   (4.23)

and its adjoint

a^{\dagger} = \sqrt{\frac{m\omega}{2\hbar}}\, X - \frac{i}{\sqrt{2m\hbar\omega}}\, P.   (4.24)

They satisfy the commutation relation

[a, a^{\dagger}] = 1   (4.25)

and are related to the harmonic oscillator Hamiltonian by

H = \left( a^{\dagger}a + \frac{1}{2} \right)\hbar\omega.   (4.26)

For simplification let us define \hat H = H/\hbar\omega and \varepsilon = E/\hbar\omega. Two important relations can be immediately obtained:

[a, \hat H] = a, \qquad [a^{\dagger}, \hat H] = -a^{\dagger}.   (4.27)

The utility of a and a^{\dagger} is that, given a particular eigenstate, they can be used to generate others. For example,

\hat H a|\varepsilon\rangle = \left( a\hat H - [a, \hat H] \right)|\varepsilon\rangle = \left( a\hat H - a \right)|\varepsilon\rangle = (\varepsilon - 1)\, a|\varepsilon\rangle.   (4.28)

We can conclude that a|\varepsilon\rangle is an eigenstate of \hat H with eigenvalue \varepsilon - 1; thus

a|\varepsilon\rangle = C_{\varepsilon}\, |\varepsilon - 1\rangle,   (4.29)

where C_{\varepsilon} is a constant and |\varepsilon\rangle and |\varepsilon - 1\rangle are normalized eigenkets. Similarly we observe that

\hat H a^{\dagger}|\varepsilon\rangle = \left( a^{\dagger}\hat H - [a^{\dagger}, \hat H] \right)|\varepsilon\rangle = \left( a^{\dagger}\hat H + a^{\dagger} \right)|\varepsilon\rangle = (\varepsilon + 1)\, a^{\dagger}|\varepsilon\rangle,   (4.30)

so that

a^{\dagger}|\varepsilon\rangle = C_{\varepsilon+1}\, |\varepsilon + 1\rangle.   (4.31)

Clearly this is why one refers to a^{\dagger} and a as the creation and annihilation (raising and lowering) operators. We are led to conclude that if \varepsilon is an eigenvalue of \hat H, then so are \varepsilon \pm 1, \varepsilon \pm 2, \ldots. However, we must remember that the quadratic terms in H make it a positive operator with positive eigenvalues.
Consequently, there must exist a state |\varepsilon_0\rangle that cannot be lowered further:

a|\varepsilon_0\rangle = 0, \qquad \hat H|\varepsilon_0\rangle = \left( a^{\dagger}a + \frac{1}{2} \right)|\varepsilon_0\rangle = \frac{1}{2}|\varepsilon_0\rangle.   (4.32)

It is convenient to define

\varepsilon_n = n + \frac{1}{2}, \qquad n = 0, 1, 2, \ldots,   (4.33)

so that

E_n = \left( n + \frac{1}{2} \right)\hbar\omega, \qquad n = 0, 1, 2, \ldots   (4.34)

Since there are no degeneracies in one dimension, these constitute all of the eigenstates of H. With some manipulation it can be shown that the coefficients are given by

C_n = \sqrt{n}\; e^{i\phi},   (4.35)

where \phi is an arbitrary phase. The energy eigenstates can be succinctly written in terms of the creation operator a^{\dagger}:

|n\rangle = \frac{(a^{\dagger})^n}{\sqrt{n!}}\, |0\rangle.   (4.36)

Returning to the position basis with X \to x and P \to -i\hbar\, d/dx, we can observe that the inversion of Eqn. 4.23 and Eqn. 4.24 leads to

\psi_n(x) = \left( \frac{m\omega}{\pi\hbar} \right)^{1/4} \frac{1}{\sqrt{2^n\, n!}}\, \exp\left( -\frac{m\omega x^2}{2\hbar} \right) H_n\!\left( \sqrt{\frac{m\omega}{\hbar}}\, x \right),   (4.37)

where the H_n(y) are the Hermite polynomials.

4.3 Classical Theory of Rotations

4.3.1 Rotations

When we discuss rotations we do so in the context of a rigid body. A rigid body is a system of mass points subject to the constraint that the distances between all pairs of points remain constant throughout the motion. This is something of an idealization, especially in the case of molecular motion, but it allows us to discuss the important aspects of rotational kinematics and dynamics. A rigid body of N particles can have at most 3N degrees of freedom, but by definition there exists a set of constraints, and these constraints reduce the number of degrees of freedom greatly. We only need to establish the positions of three non-collinear particles of the body to define its location. This gives nine degrees of freedom; however, since we are not free to alter the distances between these three particles, the total is reduced to just six degrees of freedom, three of which describe the location of the rigid body while the other three describe the rotations.
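The ladder-operator algebra of Sec. 4.2.2 can be verified with truncated matrices (our own sketch; in a finite basis the relation a|n> = sqrt(n)|n-1> of Eqns. 4.29 and 4.35 is represented exactly on the lowest states):

```python
import math

NMAX = 6  # truncated oscillator basis |0>, ..., |NMAX-1>

# Annihilation operator: a|n> = sqrt(n)|n-1>, i.e. a[m][n] = sqrt(n) delta_{m,n-1}.
a = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(NMAX)]
     for m in range(NMAX)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(NMAX)) for j in range(NMAX)]
            for i in range(NMAX)]

adag = [[a[j][i] for j in range(NMAX)] for i in range(NMAX)]  # real transpose
number = matmul(adag, a)                                      # the operator a†a

# The diagonal of a†a is 0, 1, 2, ..., so H/(hbar w) = a†a + 1/2 reproduces
# the spectrum n + 1/2 of Eqn. 4.33.
print([number[n][n] + 0.5 for n in range(NMAX)])
```

The lowest diagonal entries are exact despite the truncation because a†a is lower-triangular in its action on the basis; only states near n = NMAX feel the cutoff.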
Rotations of rigid bodies are thought of as orthogonal transformations in which the coordinates of the positions of the particles in the rigid body are transformed into another set of coordinates. The transformation can be written as

$$x'_i = \sum_{j=1}^{3} a_{ij}\, x_j, \qquad i = 1, 2, 3. \qquad (4.38)$$

Since the transformation must preserve the distances between particles in the rigid body, the coefficients $a_{ij}$ form a special type of matrix, the representation of an orthogonal transformation. This matrix, which will be the representation of an operator A, is orthogonal and therefore must satisfy

$$\sum_i a_{ij}\, a_{ik} = \delta_{jk}. \qquad (4.39)$$

The operator A may be thought of as a change of basis or, for our purposes, as a transformation of the position vector $\vec{r}$ to a different vector $\vec{r}\,'$:

$$\vec{r}\,' = A\, \vec{r}. \qquad (4.40)$$

It is important to note that a product of two orthogonal transformations is also an orthogonal transformation. If we restrict ourselves to transformations with determinant +1, then rotations performed consecutively are equivalent to one single rotation.

4.3.2 Euler Angles

As we observed in the previous section, rotations can be mathematically realized as orthogonal transformations. Here we describe how these transformations are constructed and practically implemented. The simplest rotation is one about a particular axis through an angle $\theta$. For such a rotation the transformation matrix can be easily calculated using basic trigonometry; for example, a rotation about the x-axis in three dimensions:

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix}. \qquad (4.41)$$

In order to describe the motion of rigid bodies in mechanics, three independent parameters are needed. The choice of these parameters is one of convenience and appropriateness for particular problems; here we describe two of the more popular formulations. The most popular set of parameters are the Euler angles. They are defined as three successive rotations performed in a specific sequence through three angles.
Within limits, the choice of rotation angles is arbitrary, but the most common is to rotate the system about the coordinate axes (as in Eqn. 4.41) in a particular order. One such sequence begins by rotating the initial system of axes, xyz, by an angle $\phi$ counterclockwise about the z-axis, giving the new axes x'y'z'. In the second stage the intermediate axes are rotated about the x'-axis counterclockwise by an angle $\theta$ to produce another intermediate set of axes x''y''z''. Finally, these axes are rotated about the z''-axis in a counterclockwise direction by an angle $\psi$. The three angles $\phi$, $\theta$, and $\psi$ constitute the three Euler angles, and they completely specify the orientation, labeled XYZ, of the new system relative to the original coordinates. The elements of the complete transformation A can be obtained by writing the matrix as the triple product of the separate rotations, each of which has a relatively simple matrix form similar to Eqn. 4.41. The complete transformation

$$A = BCD \qquad (4.42)$$

is composed of

$$D = \begin{pmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad (4.43)$$

$$C = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix}, \qquad (4.44)$$

and

$$B = \begin{pmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (4.45)$$

4.3.3 Cayley-Klein Parameters

Although we have seen that only three independent quantities are needed to specify the orientation of a rigid body, there are occasions when it is desirable to use more than the minimum number of quantities. The Euler angles are difficult to use in numerical computation because of the large number of trigonometric functions involved, and four-parameter representations are much better adapted for use on computers. We can write the matrix A in terms of four real parameters $e_0, e_1, e_2, e_3$.
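The triple product of Eqn. 4.42 can be verified to be a proper orthogonal transformation. A short sketch assuming NumPy (the particular angle values are arbitrary):

```python
import numpy as np

def Rz(ang):
    """Rotation about z, in the convention of Eqns. 4.43 and 4.45."""
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(ang):
    """Rotation about x, in the convention of Eqns. 4.41 and 4.44."""
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

phi, theta, psi = 0.3, 1.1, -0.7   # arbitrary Euler angles
A = Rz(psi) @ Rx(theta) @ Rz(phi)  # A = B C D, Eqn. 4.42

# A is a proper orthogonal transformation: A A^T = 1, det A = +1
assert np.allclose(A @ A.T, np.eye(3))
assert np.isclose(np.linalg.det(A), 1.0)
```

The checks at the end are exactly Eqn. 4.39 and the determinant restriction discussed after Eqn. 4.40.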
Figure 4.1: Euler angles

The matrix representation is then

$$A = \begin{pmatrix} e_0^2 + e_1^2 - e_2^2 - e_3^2 & 2(e_1 e_2 + e_0 e_3) & 2(e_1 e_3 - e_0 e_2) \\ 2(e_1 e_2 - e_0 e_3) & e_0^2 - e_1^2 + e_2^2 - e_3^2 & 2(e_2 e_3 + e_0 e_1) \\ 2(e_1 e_3 + e_0 e_2) & 2(e_2 e_3 - e_0 e_1) & e_0^2 - e_1^2 - e_2^2 + e_3^2 \end{pmatrix}. \qquad (4.46)$$

The parameters obey the relation

$$e_0^2 + e_1^2 + e_2^2 + e_3^2 = 1 \qquad (4.47)$$

and are derived from the rotation formula of Rodrigues [32]. This formula represents a single finite rotation about an axis to transform the body to its new orientation. It is given by

$$\vec{r}\,' = \vec{r}\cos\Phi + \hat{n}\,(\hat{n}\cdot\vec{r}\,)\,[1 - \cos\Phi] + (\vec{r}\times\hat{n})\sin\Phi. \qquad (4.48)$$

Here $\vec{r}$ is the original vector, which is rotated through an angle $\Phi$ in a plane defined by a vector normal to the rotation axis $\hat{n}$, as in Fig. 4.2.

Figure 4.2: Vector diagram for the derivation of the rotation formula

The rotation formula can be rewritten using the following relations:

$$e_0 = \cos\frac{\Phi}{2}, \qquad \vec{e} = \hat{n}\sin\frac{\Phi}{2}, \qquad (4.49)$$

where $\vec{e}$ is the vector with components $e_1$, $e_2$, and $e_3$. As a result we have

$$\vec{r}\,' = \vec{r}\,(e_0^2 - e_1^2 - e_2^2 - e_3^2) + 2\vec{e}\,(\vec{e}\cdot\vec{r}\,) + 2\,(\vec{r}\times\vec{e}\,)\,e_0, \qquad (4.50)$$

which can be shown to be equivalent to the orthogonal transformation given by Eqn. 4.46.

4.3.4 Rate of Change of a Vector

The dynamics of the rotational motion of a rigid body is best understood by examining infinitesimal rotations first. Consider some arbitrary vector $\vec{r}$ involved in a mechanical problem, such as the position vector of a point in the body, or the total angular momentum. Usually such a vector will vary in time as the body moves, but the change will often depend on the coordinate system to which the observations are referred. For example, if the vector happens to be the radius vector from the origin of the body set of axes to a point in the rigid body, then such a vector clearly appears constant when measured in the body set of axes. An observer fixed in the space set of axes, however, finds that the components of the vector vary in time when the body is in motion. The change in a time dt of the components of a general vector $\vec{r}$
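The equivalence of the rotation formula (Eqns. 4.48-4.50) with the Euler-parameter matrix of Eqn. 4.46 can be spot-checked numerically. A sketch assuming NumPy (the axis, angle, and test vector are arbitrary choices):

```python
import numpy as np

def euler_param_matrix(e0, e):
    """Orthogonal matrix of Eqn. 4.46 from the four parameters (e0, e1, e2, e3)."""
    e1, e2, e3 = e
    return np.array([
        [e0**2 + e1**2 - e2**2 - e3**2, 2*(e1*e2 + e0*e3), 2*(e1*e3 - e0*e2)],
        [2*(e1*e2 - e0*e3), e0**2 - e1**2 + e2**2 - e3**2, 2*(e2*e3 + e0*e1)],
        [2*(e1*e3 + e0*e2), 2*(e2*e3 - e0*e1), e0**2 - e1**2 - e2**2 + e3**2]])

def rodrigues(r, n, Phi):
    """Rotation formula of Eqn. 4.48."""
    return (r*np.cos(Phi) + n*np.dot(n, r)*(1 - np.cos(Phi))
            + np.cross(r, n)*np.sin(Phi))

n = np.array([1.0, 2.0, 2.0]) / 3.0      # arbitrary unit axis
Phi = 0.8                                 # arbitrary angle
e0, e = np.cos(Phi/2), n*np.sin(Phi/2)    # Eqn. 4.49
assert np.isclose(e0**2 + np.dot(e, e), 1.0)   # Eqn. 4.47

r = np.array([0.3, -1.2, 0.5])            # arbitrary test vector
assert np.allclose(euler_param_matrix(e0, e) @ r, rodrigues(r, n, Phi))
```

Both routes give the same rotated vector, which is the equivalence asserted after Eqn. 4.50.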
as seen by an observer in the body system of axes will differ from the corresponding change as seen by an observer in the space system. We can derive a relation between the two differential changes in $\vec{r}$ based on physical arguments. The only difference between the two is the effect of rotation of the body axes:

$$(d\vec{r}\,)_{\text{space}} = (d\vec{r}\,)_{\text{body}} + (d\vec{r}\,)_{\text{rotation}}. \qquad (4.51)$$

Consider now a vector fixed in the rigid body. As the body rotates, there is of course no change in the components of this vector as observed from the body frame. The only contribution to $(d\vec{r}\,)_{\text{space}}$ is then the effect of the rotation of the body. But since the vector is fixed in the body system, it rotates with it counterclockwise, and the change in the vector as observed in space is given by the infinitesimal rotation

$$(d\vec{r}\,)_{\text{rotation}} = d\vec{\Omega} \times \vec{r}. \qquad (4.52)$$

The time rate of change of the vector $\vec{r}$ as seen by the two observers is then

$$\left( \frac{d\vec{r}}{dt} \right)_{\text{space}} = \left( \frac{d\vec{r}}{dt} \right)_{\text{body}} + \vec{\omega} \times \vec{r}, \qquad (4.53)$$

where $\vec{\omega}$ is the angular velocity of the body defined by the relation $\vec{\omega}\, dt = d\vec{\Omega}$. The vector $\vec{\omega}$ lies along the axis of the infinitesimal rotation occurring between t and t + dt, a direction known as the instantaneous axis of rotation. The magnitude of $\vec{\omega}$ measures the instantaneous rate of rotation of the body. The angular velocity can be expressed in terms of the Euler angles and their derivatives in both the space and body frames. The general infinitesimal rotation associated with $\vec{\omega}$ can be considered as consisting of three successive infinitesimal rotations with angular velocities $\omega_\phi = \dot{\phi}$, $\omega_\theta = \dot{\theta}$, and $\omega_\psi = \dot{\psi}$. In the body frame in Cartesian coordinates these become

$$\begin{aligned} \omega_x &= \dot{\phi}\sin\theta\sin\psi + \dot{\theta}\cos\psi, \\ \omega_y &= \dot{\phi}\sin\theta\cos\psi - \dot{\theta}\sin\psi, \\ \omega_z &= \dot{\phi}\cos\theta + \dot{\psi}. \end{aligned} \qquad (4.54)$$

In the space frame we have

$$\begin{aligned} \omega_x &= \dot{\theta}\cos\phi + \dot{\psi}\sin\theta\sin\phi, \\ \omega_y &= \dot{\theta}\sin\phi - \dot{\psi}\sin\theta\cos\phi, \\ \omega_z &= \dot{\psi}\cos\theta + \dot{\phi}. \end{aligned} \qquad (4.55)$$

4.3.5 Rotational Inertia, Angular Momentum, and the Euler Equations

In the center of mass frame of a rigid body the total angular momentum is
$$\vec{L} = \sum_\alpha m_\alpha \left( \vec{r}_\alpha \times \vec{v}_\alpha \right). \qquad (4.56)$$

Introducing the angular velocity, it is possible to express Eqn. 4.56 as

$$\vec{L} = \mathbf{I}\, \vec{\omega}, \qquad (4.57)$$

where $\mathbf{I}$ is the rotational inertia tensor. In the center of mass frame we may express the rotational inertia tensor as a matrix with elements

$$I_{jk} = \sum_\alpha m_\alpha \left( \vec{r}_\alpha \cdot \vec{r}_\alpha\, \delta_{jk} - r_{\alpha j}\, r_{\alpha k} \right). \qquad (4.58)$$

It is possible to diagonalize $\mathbf{I}$ by solving the appropriate eigenvalue problem, yielding a transformation matrix. The eigenvalues $I_1$, $I_2$, and $I_3$ are commonly referred to as the principal moments of inertia, and the associated eigenvectors define the principal axes. We can transform the coordinate description of the orientation of the rigid body so that $\mathbf{I}$ is diagonal. This simplifies the relevant equations greatly; the angular momentum can then be written in coordinate form as

$$\vec{L} = I_1 \omega_x\, \hat{x} + I_2 \omega_y\, \hat{y} + I_3 \omega_z\, \hat{z}. \qquad (4.59)$$

A set of equations known as Euler's equations can be derived for the rotational motion about a fixed point using a direct Newtonian approach. We consider either an inertial frame whose origin is at the fixed point of the rigid body, or a system of space axes with origin at the center of mass. In these situations we have

$$\left( \frac{d\vec{L}}{dt} \right)_{\text{space}} = \vec{N}, \qquad (4.60)$$

where $\vec{N}$ is the external torque on the system. This equation can also be expressed in terms of the body frame derivatives by

$$\left( \frac{d\vec{L}}{dt} \right)_{\text{space}} = \left( \frac{d\vec{L}}{dt} \right)_{\text{body}} + \vec{\omega} \times \vec{L}. \qquad (4.61)$$

Substituting and expanding the resulting equations gives the Euler equations:

$$\begin{aligned} I_1 \dot{\omega}_1 - \omega_2 \omega_3 (I_2 - I_3) &= N_1, \\ I_2 \dot{\omega}_2 - \omega_3 \omega_1 (I_3 - I_1) &= N_2, \\ I_3 \dot{\omega}_3 - \omega_1 \omega_2 (I_1 - I_2) &= N_3. \end{aligned} \qquad (4.62)$$

For torque-free motion we have $N_i = 0$, so

$$\begin{aligned} I_1 \dot{\omega}_1 &= \omega_2 \omega_3 (I_2 - I_3), \\ I_2 \dot{\omega}_2 &= \omega_3 \omega_1 (I_3 - I_1), \\ I_3 \dot{\omega}_3 &= \omega_1 \omega_2 (I_1 - I_2). \end{aligned} \qquad (4.63)$$

4.4 Quantum Mechanical Theory of Rotations

4.4.1 The Rotationally Invariant System

Consider a problem where $V(\rho, \theta, \phi) = V(\rho)$. A change of basis to spherical coordinates then allows us to write the eigenvalue equation for H as

$$\left[ -\frac{\hbar^2}{2\mu} \nabla^2 + V(\rho) \right] \psi_E(\rho, \theta, \phi) = E\, \psi_E(\rho, \theta, \phi). \qquad (4.64)$$

Here $\mu$ denotes the mass of the system.
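The torque-free Euler equations, Eqn. 4.63, conserve both the rotational kinetic energy and the squared angular momentum, and both invariants survive a numerical integration. A minimal sketch with a standard fourth-order Runge-Kutta step, assuming NumPy (the moments of inertia and initial conditions are arbitrary test values):

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])   # arbitrary principal moments I1, I2, I3

def euler_rhs(w):
    """Torque-free Euler equations, Eqn. 4.63."""
    return np.array([
        w[1]*w[2]*(I[1] - I[2]) / I[0],
        w[2]*w[0]*(I[2] - I[0]) / I[1],
        w[0]*w[1]*(I[0] - I[1]) / I[2]])

def rk4_step(w, dt):
    k1 = euler_rhs(w)
    k2 = euler_rhs(w + 0.5*dt*k1)
    k3 = euler_rhs(w + 0.5*dt*k2)
    k4 = euler_rhs(w + dt*k3)
    return w + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

w = np.array([1.0, 0.2, 0.1])
T0 = 0.5*np.sum(I*w**2)   # rotational kinetic energy
L0 = np.sum((I*w)**2)     # |L|^2 in the body frame
for _ in range(2000):
    w = rk4_step(w, 1e-3)
assert np.isclose(0.5*np.sum(I*w**2), T0)
assert np.isclose(np.sum((I*w)**2), L0)
```

The two conserved quantities constrain $\vec{\omega}$ to the intersection of two ellipsoids, which is the geometric picture behind the stability of rotation about the largest and smallest principal axes.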
The angular momentum operator in three dimensions can be formulated using infinitesimal rotations, which gives us the components

$$L_x = Y P_z - Z P_y, \qquad L_y = Z P_x - X P_z, \qquad L_z = X P_y - Y P_x. \qquad (4.65)$$

These components obey the commutation relations

$$[L_i, L_j] = i\hbar \sum_{k=x,y,z} \epsilon_{ijk}\, L_k, \qquad (4.66)$$

with i, j, and k running over the coordinate indices x, y, and z. Here $\epsilon_{ijk}$ is the Levi-Civita tensor, also known as the antisymmetric tensor of rank 3. With this operator it is possible to rewrite the rotational Hamiltonian in operator notation,

$$H = \frac{L^2}{2I} + V(\rho), \qquad (4.67)$$

where I is the rotational inertia and $L^2$ is the squared norm of the angular momentum operator. Clearly we have

$$[H, L_i] = 0, \qquad [H, L^2] = 0. \qquad (4.68)$$

Consequently, $L^2$ and its components are conserved. We can therefore find a basis common to H, $L^2$, and one (but only one) of the components of $\vec{L}$, since the components do not commute with each other (see Eqn. 4.66).

4.4.2 Eigenvalue Problem of $L^2$ and $L_z$

Following the example of the harmonic oscillator, we now examine the eigenvalue problem of $L^2$ and $L_z$. Let us assume that there exists a basis $|\alpha\beta\rangle$ common to both of these operators:

$$L^2\, |\alpha\beta\rangle = \alpha\, |\alpha\beta\rangle, \qquad L_z\, |\alpha\beta\rangle = \beta\, |\alpha\beta\rangle. \qquad (4.69)$$

We now define raising and lowering operators

$$L_\pm = L_x \pm i L_y, \qquad (4.70)$$

which satisfy

$$[L_z, L_\pm] = \pm\hbar L_\pm \qquad (4.71)$$

and of course

$$[L^2, L_\pm] = 0. \qquad (4.72)$$

The final results of the eigenvalue problem are

$$\begin{aligned} L^2\, |lm\rangle &= l(l+1)\hbar^2\, |lm\rangle, \qquad l = 0, 1, 2, \ldots \\ L_z\, |lm\rangle &= m\hbar\, |lm\rangle, \qquad m = l,\ l-1,\ l-2,\ \ldots,\ -l, \end{aligned} \qquad (4.73)$$

for integral values of the angular momentum. The complete solution of the eigenvalue problem actually includes half-integer angular momentum, but these are not of interest at this time. The parallel construction between harmonic oscillators and angular momentum breaks down here, where the eigenvalue is bounded from above and below. Expanding the operators in the coordinate basis allows us to determine the eigenvectors for the rotationally invariant problem.
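The results of Eqns. 4.66-4.73 can be realized concretely as $(2l+1)$-dimensional matrices. A sketch with $\hbar = 1$, assuming NumPy (the value of `l` is an arbitrary choice):

```python
import numpy as np

l = 2                             # any integer angular momentum
m = np.arange(l, -l - 1, -1)      # m = l, l-1, ..., -l
Lz = np.diag(m.astype(float))

# L+|l m> = sqrt(l(l+1) - m(m+1)) |l m+1>   (ladder operators, Eqn. 4.70)
Lp = np.diag(np.sqrt(l*(l+1) - m[1:]*(m[1:] + 1)), k=1)
Lm = Lp.T
Lx, Ly = (Lp + Lm)/2, (Lp - Lm)/(2j)

# commutation relations, Eqn. 4.66: [Lx, Ly] = i Lz (and cyclic)
assert np.allclose(Lx @ Ly - Ly @ Lx, 1j*Lz)

# [Lz, L+] = +L+, Eqn. 4.71
assert np.allclose(Lz @ Lp - Lp @ Lz, Lp)

# L^2 = l(l+1) times the identity on the 2l+1 states, Eqn. 4.73
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz
assert np.allclose(L2, l*(l+1)*np.eye(2*l + 1))
```

That $L^2$ comes out proportional to the identity makes explicit why every $|lm\rangle$ in the multiplet shares the same $L^2$ eigenvalue.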
The general solution includes a radial part, while the angular part of the function is given by the spherical harmonic functions:

$$\psi_{Elm}(\rho, \theta, \phi) = R_{El}(\rho)\, Y_l^m(\theta, \phi). \qquad (4.74)$$

This solution can then be used for particular forms of the potential to arrive at the specific solution.

CHAPTER 5
COHERENT STATES AND ROVIBRATIONAL DYNAMICS

The minimum definition of a Coherent State (CS), given by Klauder [33], is that they are a set of normalized elements $|\lambda\rangle$ of a Hilbert space $\mathcal{H}$. They share the following two properties: (1) continuity, the states are continuous functions of a parameter set $\lambda$,

$$\lim_{\lambda \to \lambda_0} |\lambda\rangle = |\lambda_0\rangle; \qquad (5.1)$$

(2) resolution of the identity, there exists a positive measure $d\mu > 0$ for which

$$1 = \int d\mu\; |\lambda\rangle\langle\lambda|. \qquad (5.2)$$

Importantly, the set of CS form an overcomplete set of vectors. Hence, the closed linear span of Coherent States is the Hilbert space $\mathcal{H}$. Coherent States have proven very useful in many areas of the physical sciences, and occur in great variety.

5.1 Quasiclassical Coherent States

A vital property of some CS is their quasiclassical (QC) dynamics. A state $|\psi\rangle$ is said to be quasiclassical when the expectation values of the position, momenta, and energy satisfy the classical Hamiltonian equations of motion. Mathematically,

$$x_{qc} = \langle\psi| X |\psi\rangle, \qquad p_{qc} = \langle\psi| P |\psi\rangle, \qquad H_{qc} = \langle\psi| H |\psi\rangle \qquad (5.3)$$

satisfy

$$\dot{x}_{qc} = \frac{\partial H_{qc}}{\partial p_{qc}}, \qquad \dot{p}_{qc} = -\frac{\partial H_{qc}}{\partial x_{qc}}. \qquad (5.4)$$

The average position, momenta, and energy of the quasiclassical state evolve in time as the position and momentum of their classical analogs. It is important to note that this is not the same demand as the semiclassical limit, invoked when $\hbar \to 0$. There is no guarantee that a quasiclassical state even exists for a given Hamiltonian. A method to investigate whether a Hamiltonian allows quasiclassical states is to apply Ehrenfest's theorem and show that the resulting equations reduce to those of the classical case. In this manner, we can show that the canonical CS are quasiclassical.
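The quasiclassical property can be checked numerically for the harmonic oscillator: the average position of an exactly evolved coherent state follows the classical trajectory. A minimal sketch, assuming NumPy, units $\hbar = m = \omega = 1$, and arbitrary values for the truncation `N` and the initial parameter `alpha0`:

```python
import numpy as np
from math import factorial

N = 40                                    # Fock-space truncation
n = np.arange(N)
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
X = (a + a.T)/np.sqrt(2.0)                # X = (a + a^dagger)/sqrt(2)

alpha0 = 1.5 + 0.5j                       # arbitrary coherent-state parameter
fact = np.array([float(factorial(k)) for k in n])
c = np.exp(-abs(alpha0)**2/2) * alpha0**n / np.sqrt(fact)

for t in (0.0, 0.9, 2.4, 5.7):
    ct = np.exp(-1j*(n + 0.5)*t) * c      # exact evolution in the energy basis
    x_qm = np.real(np.vdot(ct, X @ ct))   # <psi(t)| X |psi(t)>
    x_cl = np.sqrt(2.0)*np.real(alpha0*np.exp(-1j*t))  # classical trajectory
    assert np.isclose(x_qm, x_cl, atol=1e-8)
```

The quantum average and the classical orbit agree to truncation accuracy at every sampled time, which is Eqn. 5.4 for this Hamiltonian.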
5.2 Vibrational Coherent States

The Glauber states are considered the canonical Coherent States and are quasiclassical. These states, $|\alpha\rangle$, are associated with the harmonic oscillator Hamiltonian $H_{vib} = \hbar\omega\,(a^\dagger a + \tfrac{1}{2})$, where $\omega$ is the angular frequency. The harmonic oscillator annihilation operator can be expressed in terms of the self-adjoint position X and momentum P operators,

$$a = \sqrt{\frac{m\omega}{2\hbar}}\, X + \frac{i}{\sqrt{2m\omega\hbar}}\, P, \qquad (5.5)$$

where m is the oscillator mass. The complex parameter $\alpha$ can be expressed in terms of the real parameters of average position $x = \langle\alpha| X |\alpha\rangle$ and average momentum $p = \langle\alpha| P |\alpha\rangle$:

$$\alpha = \sqrt{\frac{m\omega}{2\hbar}}\, x + \frac{i}{\sqrt{2m\omega\hbar}}\, p. \qquad (5.6)$$

An expansion in terms of harmonic oscillator energy eigenstates, $\{|n\rangle,\ n = 0, 1, \ldots\}$, exists and is given by

$$|\alpha\rangle = e^{-|\alpha|^2/2} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\, |n\rangle = e^{-|\alpha|^2/2}\, e^{\alpha a^\dagger} |0\rangle. \qquad (5.7)$$

Their probability distribution over the harmonic oscillator states is given by

$$P_n = \exp\left( -|\alpha|^2 \right) \frac{|\alpha|^{2n}}{n!}. \qquad (5.8)$$

The coherent state expectation value of the vibrational energy is

$$E_{vib} = \langle H \rangle = \hbar\omega \left( |\alpha|^2 + \tfrac{1}{2} \right), \qquad (5.9)$$

making the mean vibrational excitation $|\alpha|^2 = E_{vib}/\hbar\omega - \tfrac{1}{2}$. The distribution over the vibrational levels is thus

$$P_n = \exp\left( -\frac{E_{vib}}{\hbar\omega} \right) \frac{(E_{vib}/\hbar\omega)^n}{n!} \qquad (5.10)$$

(neglecting the zero-point term). Fig. 5.1 shows an example for $|\alpha|^2 = E_{vib}/\hbar\omega = 3$.

Figure 5.1: Distribution of a Coherent State over harmonic oscillator energy eigenstates for $|\alpha|^2 = E_{vib}/\hbar\omega = 3$

Notice that the canonical CS are associated with the Hamiltonian of a vibrating system. It has been demonstrated that an a posteriori vibrational analysis in terms of these harmonic oscillator CS can be utilized to obtain vibrationally resolved differential cross sections when the vibrational energy levels of a product wave function are approximately equidistant. A similar derivation can be made for CS associated with the Hamiltonian of a force-free rotor, and these also have the quasiclassical property. Additionally, CS have been constructed which combine rotational and vibrational modes.
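The Poisson distribution of Eqn. 5.8 and the energy average of Eqn. 5.9 can be verified directly. A small sketch, assuming NumPy, with $\hbar\omega = 1$ and $|\alpha|^2 = 3$ as in Fig. 5.1:

```python
import numpy as np
from math import factorial

alpha = np.sqrt(3.0)        # |alpha|^2 = 3, as in Fig. 5.1
nbar = abs(alpha)**2

# P_n = exp(-|alpha|^2) |alpha|^(2n) / n!,  Eqn. 5.8
n = np.arange(60)
P = np.exp(-nbar) * nbar**n / np.array([factorial(k) for k in n], dtype=float)

assert np.isclose(P.sum(), 1.0)          # normalized
assert np.isclose(np.sum(n*P), nbar)     # mean excitation is |alpha|^2
# E_vib = <H> = hbar*omega*(|alpha|^2 + 1/2),  Eqn. 5.9
assert np.isclose(np.sum((n + 0.5)*P), nbar + 0.5)
```

The first 60 terms exhaust the Poisson tail to machine precision for this mean, so the sums stand in for the infinite series.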
Here we have a powerful tool to analyze molecular dynamics, and motivation to develop a method for obtaining the relevant quantities from the classical evolution of a rovibrational molecule. It is our intention to develop a general method of analyzing the classical vibrations and rotations of molecular systems in order to evaluate their wave functions in terms of quasiclassical CS.

5.3 Proposed Rotational Coherent States

A new rotational CS was derived by Morales et al. [34, 27] in the spirit of the Janssen CS formulation. Morales et al. expressed these CS only in terms of the integer rotational states, making them quite appropriate for describing molecular rotors within the Quasiclassical Single-Determinantal END theory. Although these derived CS do not exhibit exact quasiclassical behavior, their departure from the classical equations can be easily remedied. Morales was able to show that the exclusion of the half-integer rotational states from his construction leads to exact quasiclassical rotational CS. The formulation of half-integer-only rotational coherent states remains an open research problem.

Rotational Hamiltonian and Related Operators. The Hamiltonian for a freely rotating molecular system can be written as [35]

$$H_{rot} = \frac{L_x^2}{2I_{xx}} + \frac{L_y^2}{2I_{yy}} + \frac{L_z^2}{2I_{zz}}, \qquad (5.11)$$

where the $I_{ii}$ are the moments of inertia for the principal axes and the $L_i$ are the body-fixed components of the angular momentum. For molecular systems the values of the orbital angular momentum are restricted to integer quantum numbers appropriate for bosons. The space-fixed components of the angular momentum $J_i$ ($J^2 = L^2$) are also important for our discussion. Taken together, the components obey the following commutation relations:

$$[J_i, J_j] = i\epsilon_{ijk}\, J_k, \qquad [L_i, L_j] = -i\epsilon_{ijk}\, L_k, \qquad (5.12)$$

and

$$[J_i, L_j] = [L^2, L_z] = [J^2, J_z] = 0, \qquad (5.13)$$

where the $\epsilon_{ijk}$ are the components of the Levi-Civita tensor. Observe the anomalous (sign-reversed) commutation relationship for the body-fixed components $L_i$.
From these relationships we can construct a complete set of rotational eigenstates $|IMK\rangle$ such that

$$\begin{aligned} L^2\, |IMK\rangle &= I(I+1)\, |IMK\rangle, \qquad I = 0, 1, 2, \ldots \\ L_z\, |IMK\rangle &= K\, |IMK\rangle, \qquad K = 0, \pm 1, \pm 2, \ldots, \pm I, \\ J_z\, |IMK\rangle &= M\, |IMK\rangle, \qquad M = 0, \pm 1, \pm 2, \ldots, \pm I. \end{aligned} \qquad (5.14)$$

These rotational eigenstates can be represented in space by the Wigner D functions,

$$\langle \phi, \theta, \psi\, |\, IMK \rangle = \left[ \frac{2I+1}{8\pi^2} \right]^{1/2} D^{I}_{MK}(\phi, \theta, \psi), \qquad (5.15)$$

with the arguments being the Euler angles. Since the Hamiltonian of Eqn. 5.11 commutes with $J_z$, $J^2$, and $L^2$, the Hamiltonian eigenfunctions $\Psi^{a}_{IM}$ must satisfy

$$\begin{aligned} H_{rot}\, \Psi^{a}_{IM} &= E^{a}_{rot,I}\, \Psi^{a}_{IM}, \\ J^2\, \Psi^{a}_{IM} &= I(I+1)\, \Psi^{a}_{IM}, \qquad I = 0, 1, 2, \ldots, \\ J_z\, \Psi^{a}_{IM} &= M\, \Psi^{a}_{IM}, \qquad M = 0, \pm 1, \pm 2, \ldots, \pm I, \end{aligned} \qquad (5.16)$$

where the superscript a is a third quantum number labeling a rotational eigenstate along with I and M. Additionally, it can be shown that

$$[H_{rot}, L_i] = \frac{i}{2} \sum_{j,k} \epsilon_{ijk}\, \frac{1}{I_{jj}} \left( L_j L_k + L_k L_j \right). \qquad (5.17)$$

We have now established the necessary relationships to represent the rotational Hamiltonian eigenfunctions $\Psi^{a}_{IM}$ in terms of the angular momentum eigenstates, or in mathematical terms,

$$\Psi^{a}_{IM} = \sum_K c^{a}_{IMK}\, |IMK\rangle, \qquad (5.18)$$

where the coefficients $c^{a}_{IMK}$ are to be determined.

Example: Spherical Rotor. In the case of a spherical rotor the moments of inertia are equal, $I_{SR} = I_{xx} = I_{yy} = I_{zz}$, and the eigenvalue problem reduces to

$$H_{rot} = \frac{L^2}{2 I_{SR}}, \qquad H_{rot}\, |IMK\rangle = \frac{I(I+1)}{2 I_{SR}}\, |IMK\rangle, \qquad (5.19)$$

so that every $|IMK\rangle$ is itself a Hamiltonian eigenstate.

5.3.1 Irreducible Representation

The CS under investigation are not group-related but are built up in analogy to the construction used for group-related CS, so we discuss their group structure for completeness. The rotational eigenstates $|IMK\rangle$ span irreducible representations of the semidirect product of SO(3) x SO(3) with an Abelian group. The generators of the SO(3) Lie groups are the $L_i$ and the $J_i$, respectively. Each has an associated Lie algebra given by the commutation relationships, which satisfy

$$[L_i, L_j] = i\epsilon_{ijk}\, L_k \qquad (5.20)$$

and

$$[J_i, J_j] = i\epsilon_{ijk}\, J_k. \qquad (5.21)$$

The generators of the Abelian group belong to the family of operators $T^{\lambda}_{\mu\nu}$
(with $\lambda = 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \ldots$ and $\mu, \nu = -\lambda, \ldots, 0, \ldots, +\lambda$), which satisfy

$$[T^{\lambda}_{\mu\nu}, T^{\lambda'}_{\mu'\nu'}] = 0, \qquad (5.22)$$

$$(T^{\lambda}_{\mu\nu})^\dagger = (-1)^{\mu-\nu}\, T^{\lambda}_{-\mu-\nu}, \qquad (5.23)$$

and

$$\sum_{\mu} (T^{\lambda}_{\mu\nu})^\dagger\, T^{\lambda}_{\mu\nu'} = \delta_{\nu\nu'}, \qquad (5.24)$$

$$\sum_{\nu} T^{\lambda}_{\mu\nu}\, (T^{\lambda}_{\mu'\nu})^\dagger = \delta_{\mu\mu'}. \qquad (5.25)$$

These operators are a generalization of the regular spherical harmonic tensor operators. If $\lambda = \tfrac{1}{2}$ is used, as in Janssen [36], there are four generators for the Abelian group. Unfortunately, those operators connect integral (boson) and half-integral (fermion) states; for molecular physics we should use $\lambda = 1$ to restrict considerations to the bosonic states. With $\lambda = 1$ there are nine generators for the Abelian group. In order to establish the commutation relations among the generators of the elements of the irreducible representation, Morales et al. generalized the properties of the regular two-subscript spherical tensor operators with respect to both $L_i$ and $J_i$. The generalized relationships for arbitrary $\lambda$ are

$$[L_z, T^{\lambda}_{\mu\nu}] = \nu\, T^{\lambda}_{\mu\nu}, \qquad (5.26)$$

$$[J_z, T^{\lambda}_{\mu\nu}] = \mu\, T^{\lambda}_{\mu\nu}, \qquad (5.27)$$

and

$$[L_\pm, T^{\lambda}_{\mu\nu}] = \left[ \lambda(\lambda+1) - \nu(\nu \mp 1) \right]^{1/2} T^{\lambda}_{\mu(\nu\mp1)}, \qquad (5.28)$$

$$[J_\pm, T^{\lambda}_{\mu\nu}] = \left[ \lambda(\lambda+1) - \mu(\mu \pm 1) \right]^{1/2} T^{\lambda}_{(\mu\pm1)\nu}. \qquad (5.29)$$

The action of the generalized $T^{\lambda}_{\mu\nu}$ operators on the angular momentum eigenstates alters the indices and multiplies the states $|IMK\rangle$ by a factor. Moreover, it has been shown that the $|IMK\rangle$ do in fact span the irreducible representations of the product group. It is important to note the action of one particular generator of the Abelian group on one particular angular momentum eigenstate:

$$T^{1}_{-1-1}\, |I, -I, -I\rangle = \left( \frac{2I+1}{2I+3} \right)^{1/2} |I+1, -(I+1), -(I+1)\rangle. \qquad (5.30)$$

5.3.2 Construction of Coherent States

The Perelomov [37] prescription for constructing coherent states associates complex parameters with the generators of the irreducible elements of the product group. In this case his construction would require three complex parameters x, y, and z, one for each quantum number, and the set of CS would be generated by

$$|xyz\rangle \sim e^{x J_+}\, e^{z L_+}\, e^{y T^{1}_{-1-1}}\, |000\rangle. \qquad (5.31)$$

Unfortunately, this set of CS does not exhibit the quasiclassical properties for a rotor. Janssen [36] altered this construction to generate a set of quasiclassical rotational CS.
Morales modified Janssen's construction even further to generate CS for bosonic systems. His proposed CS are given by

$$|xyz\rangle = \exp\left[ -\tfrac{1}{2}\, y y^* (1 + x x^*)^2 (1 + z z^*)^2 \right] e^{x J_+}\, e^{z L_+}\, e^{y f(\hat{I})\, T^{1}_{-1-1}}\, |000\rangle, \qquad (5.32)$$

where the operator $\hat{I}$ is defined by

$$\hat{I}\, |IMK\rangle = I\, |IMK\rangle \qquad (5.33)$$

and the function $f(I)$ is

$$f(I) = \left[ \frac{(I+1)(2I+3)}{2I+1} \right]^{1/2}, \qquad (5.34)$$

chosen to compensate the factor appearing in Eqn. 5.30. The operator $\hat{I}$ admits a representation in terms of the Schwinger boson operators for the angular momentum [36]. It is then shown that

$$e^{y f(\hat{I})\, T^{1}_{-1-1}}\, |000\rangle = \sum_{I=0}^{\infty} \frac{y^I}{\sqrt{I!}}\, |I, -I, -I\rangle \qquad (5.35)$$

and

$$e^{z L_+}\, |I, -I, -I\rangle = \sum_{n=0}^{2I} \frac{z^n}{n!} \left\{ 2I(2I-1)\cdots(2I - [n-1])\, n! \right\}^{1/2} |I, -I, -I+n\rangle \qquad (5.36)$$

$$= \sum_{K=-I}^{I} z^{I+K} \left[ \frac{(2I)!}{(I+K)!\,(I-K)!} \right]^{1/2} |I, -I, K\rangle, \qquad (5.37)$$

where $n = I + K$ has been used from the second to the third line. Exchanging $M \leftrightarrow K$ and $L_+ \to J_+$, an analogous expansion for $e^{x J_+}\, |I, -I, -I\rangle$ is obtained. All of these results are still valid for integer and half-integer angular momentum eigenstates. The set of CS can now be expressed as

$$|xyz\rangle = \exp\left[ -\tfrac{1}{2}\, y y^* (1+xx^*)^2 (1+zz^*)^2 \right] \sum_{IMK} \frac{(2I)!}{\sqrt{I!}}\, \frac{y^I\, x^{I+M}\, z^{I+K}}{\left[ (I+M)!\,(I-M)!\,(I+K)!\,(I-K)! \right]^{1/2}}\; |IMK\rangle. \qquad (5.38)$$

These states are normalized so that $\langle xyz | xyz \rangle = 1$, and they are nonorthogonal. By construction the rotational CS $|xyz\rangle$ satisfy the first condition of the definition of a CS (see Eqn. 5.1). The second condition requires a positive measure that is a resolution of unity; Morales et al. and Janssen were only able to construct a weaker version of the second condition, in which the measure does resolve the identity but is not positive everywhere. The measure they obtained is, up to a constant normalization,

$$d\mu(x, y, z) \propto \left\{ 4\left[ (1+xx^*)(1+zz^*) \right]^4 (yy^*)^2 - 8\left[ (1+xx^*)(1+zz^*) \right]^2 yy^* + 1 \right\} dx\, dy\, dz, \qquad (5.39)$$

which gave

$$|IMK\rangle = \int d\mu(x,y,z)\, \exp\left[ -\tfrac{1}{2}\, y y^*(1+xx^*)^2(1+zz^*)^2 \right] \frac{(2I)!}{\sqrt{I!}}\, \frac{x^{*\,I+M}\, y^{*\,I}\, z^{*\,I+K}}{\left[ (I+M)!\,(I-M)!\,(I+K)!\,(I-K)! \right]^{1/2}}\; |xyz\rangle. \qquad (5.40)$$

The goal of the construction of the rotational CS has been to develop quasiclassical states. Morales et al.
were able to show that the rotational coherent state parameters can be related to the relevant physical parameters for rotations. By evaluating the operator averages and examining their dynamical properties, they derived [38] relationships between the parameters and physical quantities.

Connecting Operator Averages and Physical Parameters. Connecting the coherent state parameters with physical ones is accomplished by employing the relationships

$$x = e^{-i\alpha} \cot\left( \frac{\beta}{2} \right), \qquad z = e^{+i\phi} \cot\left( \frac{\theta}{2} \right),$$

where $0 \le \alpha, \phi < 2\pi$ and $0 \le \beta, \theta \le \pi$. The parameter y is expressed as

$$y = r^{1/2} \sin^2\left( \frac{\beta}{2} \right) \sin^2\left( \frac{\theta}{2} \right) e^{i\gamma}, \qquad (5.41)$$

where $0 \le r < \infty$ and $0 \le \gamma < 2\pi$. The CS then becomes

$$|xyz\rangle = |\alpha\beta\phi\theta\gamma r\rangle = |CS\rangle = e^{-r/2} \sum_{IMK} \frac{(2I)!\; r^{I/2}\, e^{iI\gamma}}{\left[ I!\,(I+M)!\,(I-M)!\,(I+K)!\,(I-K)! \right]^{1/2}} \left[ e^{-iM\alpha} \cos^{I+M}\!\left(\tfrac{\beta}{2}\right) \sin^{I-M}\!\left(\tfrac{\beta}{2}\right) \right] \left[ e^{+iK\phi} \cos^{I+K}\!\left(\tfrac{\theta}{2}\right) \sin^{I-K}\!\left(\tfrac{\theta}{2}\right) \right] |IMK\rangle. \qquad (5.42)$$

Calculating the operator averages reveals the significance of the physical parameters. We observe

$$\begin{aligned} \langle CS|\, L_x\, |CS\rangle &= r \cos\phi \sin\theta, & \langle CS|\, J_x\, |CS\rangle &= r \cos\alpha \sin\beta, \\ \langle CS|\, L_y\, |CS\rangle &= r \sin\phi \sin\theta, & \langle CS|\, J_y\, |CS\rangle &= r \sin\alpha \sin\beta, \\ \langle CS|\, L_z\, |CS\rangle &= r \cos\theta, & \langle CS|\, J_z\, |CS\rangle &= r \cos\beta, \end{aligned} \qquad (5.43)$$

and

$$\langle CS|\, L^2\, |CS\rangle = \langle CS|\, J^2\, |CS\rangle = r(r + 2). \qquad (5.44)$$

It is then possible to interpret these parameters. It follows that the parameter r is the angular momentum modulus; the pairs of angles $\phi, \theta$ and $\alpha, \beta$ are the azimuthal and polar angles of the $\langle CS| \vec{L} |CS\rangle$ and $\langle CS| \vec{J} |CS\rangle$ vectors in the body-fixed and laboratory frames, respectively. The angle $\gamma$ determines the relative orientation of the body-fixed frame with respect to the space-fixed one. Finally, the probability for the CS to be in a rotational state $|IMK\rangle$ is

$$P_{IMK} = \frac{[(2I)!]^2}{I!\,(I+M)!\,(I-M)!\,(I+K)!\,(I-K)!}\; p^{I+M} (1-p)^{I-M}\; q^{I+K} (1-q)^{I-K}\; e^{-r}\, r^I, \qquad (5.45)$$

where $p = \cos^2(\beta/2)$ and $q = \cos^2(\theta/2)$. The probability is the product of binomial distributions in p and q and a Poisson distribution in r. We observe

$$\sum_{I=0}^{\infty} \sum_{M=-I}^{I} \sum_{K=-I}^{I} P_{IMK} = 1. \qquad (5.48)$$
Evolution of the Coherent State. The time evolution of the proposed rotational CS was analyzed using Ehrenfest's theorem [38]. Applied to the operators $L_i$, it gives

$$\frac{d}{dt}\langle CS|\, L_i\, |CS\rangle = i\langle CS|\, [H_{rot}, L_i]\, |CS\rangle = -\langle CS|\, \sum_{j,k} \epsilon_{ijk}\, \frac{1}{2 I_{jj}} \left( L_j L_k + L_k L_j \right) |CS\rangle. \qquad (5.49)$$

A quasiclassical definition for the angular velocity $\omega_i$ was introduced,

$$\omega_i = \frac{\langle CS|\, L_i\, |CS\rangle}{I_{ii}}, \qquad (5.50)$$

and using the previous averages the equations of motion can be obtained:

$$I_{xx}\, \dot{\omega}_x = \left( 1 + \frac{1}{2r} \right) \omega_y\, \omega_z\, (I_{yy} - I_{zz}), \qquad (5.51)$$

together with its cyclic permutations. These equations are very similar to Euler's equations [32] of motion for a torque-free rigid body. They demonstrate that the derived CS are almost quasiclassical. In the limit of high angular momentum ($r \to \infty$) these equations match the classical result.

CHAPTER 6
VIBRATIONAL ANALYSIS

Prony's method [39, 40] (PM) is a technique for modeling sampled data as a linear combination of complex exponentials that has been widely used in engineering and the physical sciences [41, 42, 43, 44, 45]. It is highly regarded for its accuracy and stability. It seeks to fit a deterministic exponential model to the data. The standard PM is used only to analyze one degree of freedom, but we will require an analysis of many more degrees of freedom. We will build an analogous method to analyze our simultaneously rotating and vibrating system, but first we present the modern version of Prony's technique.

6.1 Prony's Method

Assuming one has N complex data points x[n] generated from p exponential terms, the Prony method will estimate the terms using the model

$$\hat{x}[n] = \sum_{k=1}^{p} A_k\, \exp\left[ (\alpha_k + 2\pi i f_k)(n-1)\Delta t + i\theta_k \right] \qquad (6.1)$$

for $1 \le n \le N$, where $\Delta t$ is the sample interval in seconds, $A_k$ is the amplitude of the complex exponential, $\alpha_k$ is the damping factor in 1/seconds, $f_k$ is the sinusoidal frequency in Hz, and $\theta_k$ is the sinusoidal initial phase in radians. In the case of real data samples, the complex exponentials must occur in complex conjugate pairs of equal amplitude. If p is even, there are p/2 damped cosines.
If p is odd, then there are (p-1)/2 damped cosines and a single purely damped exponential. We can rewrite our p-exponent discrete-time function more concisely in the form

$$\hat{x}[n] = \sum_{k=1}^{p} h_k\, z_k^{\,n-1}, \qquad (6.2)$$

where the $z_k$ are the time-dependent factors and the $h_k$ are the time-independent factors. Ideally, one would like to minimize the squared error

$$\sum_{n=1}^{N} \left| x[n] - \hat{x}[n] \right|^2$$

for the N data values with respect to the $\{h_k\}$ and $\{z_k\}$ parameters and the p exponents simultaneously. Evidently, this is a difficult nonlinear problem, even if the value of p is known. Iterative algorithms, such as steepest descent procedures or Newton's method, have been devised to minimize this error with respect to all the parameters. Unfortunately, these algorithms have proven to be computationally expensive. The Prony method embeds the nonlinear aspects of the exponential model into a polynomial factoring, for which reasonably fast solution algorithms are available. Observe that the model may be expressed in matrix form as

$$\begin{pmatrix} z_1^0 & z_2^0 & \cdots & z_p^0 \\ z_1^1 & z_2^1 & \cdots & z_p^1 \\ \vdots & \vdots & & \vdots \\ z_1^{p-1} & z_2^{p-1} & \cdots & z_p^{p-1} \end{pmatrix} \begin{pmatrix} h_1 \\ h_2 \\ \vdots \\ h_p \end{pmatrix} = \begin{pmatrix} x[1] \\ x[2] \\ \vdots \\ x[p] \end{pmatrix}. \qquad (6.3)$$

If a method can be found to separately determine the $\{z_k\}$ elements, then Eqn. 6.3 represents a set of linear simultaneous equations that can be solved to yield the $\{h_k\}$. Prony's keen insight allowed him to develop just such a method. In order to fit a p-exponential model using PM, at least 2p data samples are required. An exact fit to the 2p exponential parameters $\{h_k\}$ and $\{z_k\}$ can be obtained with 2p data samples. The method has been extended to optimize the 2p parameters when more data are available, using least squares fitting. The key to the separation is to recognize that Eqn. 6.3 is a solution to a homogeneous, linear, constant-coefficient difference equation. In order to find the form of this difference equation, first define the polynomial $\phi(z)$ that has the $\{z_1, \ldots, z_p\}$ as its roots,

$$\phi(z) = \prod_{k=1}^{p} (z - z_k). \qquad (6.4)$$
Expressed as a power series, this becomes

$$\phi(z) = \sum_{m=0}^{p} a[m]\, z^{p-m}, \qquad (6.5)$$

with complex coefficients $\{a[m]\}$ such that $a[0] = 1$. Shifting the index in Eqn. 6.2 from n to $n - m$ and multiplying by the parameter $a[m]$ yields

$$a[m]\, x[n-m] = a[m] \sum_{k=1}^{p} h_k\, z_k^{\,n-m-1}, \qquad (6.6)$$

which is valid for $p + 1 \le n \le 2p$. Summing over such products produces

$$\sum_{m=0}^{p} a[m]\, x[n-m] = \sum_{k=1}^{p} h_k \sum_{m=0}^{p} a[m]\, z_k^{\,n-m-1}. \qquad (6.7)$$

Making the substitution $z_k^{\,n-m-1} = z_k^{\,n-p-1}\, z_k^{\,p-m}$, we observe

$$\sum_{m=0}^{p} a[m]\, x[n-m] = \sum_{k=1}^{p} h_k\, z_k^{\,n-p-1} \sum_{m=0}^{p} a[m]\, z_k^{\,p-m} = 0, \qquad (6.8)$$

since the right-hand summation is the previously defined polynomial evaluated at its own roots, $\phi(z_k) = 0$. The p equations for the values of $\{a[m]\}$ in Eqn. 6.8 may be expressed as the $p \times p$ matrix equation

$$\begin{pmatrix} x[p] & x[p-1] & \cdots & x[1] \\ x[p+1] & x[p] & \cdots & x[2] \\ \vdots & \vdots & & \vdots \\ x[2p-1] & x[2p-2] & \cdots & x[p] \end{pmatrix} \begin{pmatrix} a[1] \\ a[2] \\ \vdots \\ a[p] \end{pmatrix} = - \begin{pmatrix} x[p+1] \\ x[p+2] \\ \vdots \\ x[2p] \end{pmatrix}. \qquad (6.9)$$

This equation demonstrates that with 2p complex data samples it is possible to determine the $\{h_k\}$ and $\{z_k\}$ parameters. The complex coefficients $\{a[m]\}$, which are functions only of the time-dependent $\{z_k\}$, form a linear predictive relationship among the time samples. In summary, PM consists of three distinct steps. First, the coefficients $\{a[m]\}$ of the polynomial are obtained by solving Eqn. 6.9. Second, the roots $\{z_k\}$ of the polynomial are calculated. Finally, the matrix equation Eqn. 6.3 is solved for the parameters $\{h_k\}$. The original exponential parameters can then be obtained by the relations

$$\alpha_k = \ln|z_k| / \Delta t, \qquad f_k = \tan^{-1}\left[ \operatorname{Im}\{z_k\} / \operatorname{Re}\{z_k\} \right] / (2\pi\, \Delta t), \qquad (6.10)$$

$$A_k = |h_k|, \qquad \theta_k = \tan^{-1}\left[ \operatorname{Im}\{h_k\} / \operatorname{Re}\{h_k\} \right]. \qquad (6.11)$$

6.2 Generalized Prony's Method (GPM)

Given a set of nuclear trajectories for a general molecule in motion, a method analogous to PM for estimating the vibrational and rotational parameters will be presented. A computer algorithm implementing this method is used to extract normal coordinates and rotational motion from ENDyne molecular simulations.
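The three steps of PM summarized above can be collected into a short routine. A sketch, assuming NumPy; the two synthetic damped modes at the bottom are arbitrary test data, not results from this work:

```python
import numpy as np

def prony(x, p, dt):
    """Classic Prony fit of p complex exponentials to 2p samples.

    Returns damping alpha_k (1/s), frequency f_k (Hz), amplitude A_k, phase theta_k.
    """
    x = np.asarray(x, dtype=complex)
    # Step 1: solve Eqn. 6.9 for the linear-prediction coefficients a[1..p]
    T = np.array([[x[p - 1 + i - m] for m in range(p)] for i in range(p)])
    a = np.linalg.solve(T, -x[p:2*p])
    # Step 2: roots z_k of phi(z) = z^p + a[1] z^(p-1) + ... + a[p]
    z = np.roots(np.concatenate(([1.0], a)))
    # Step 3: solve the Vandermonde system of Eqn. 6.3 for h_k
    V = np.vander(z, p, increasing=True).T   # rows z_k^0 ... z_k^(p-1)
    h = np.linalg.solve(V, x[:p])
    # Recover the model parameters, Eqns. 6.10 and 6.11
    alpha = np.log(np.abs(z)) / dt
    f = np.angle(z) / (2*np.pi*dt)
    return alpha, f, np.abs(h), np.angle(h)

# synthetic signal: two damped complex exponentials, sampled exactly
dt, p = 0.01, 2
n = np.arange(2*p)
z1 = np.exp((-1.0 + 2j*np.pi*5.0)*dt)   # alpha = -1.0 /s, f = 5 Hz
z2 = np.exp((-0.5 - 2j*np.pi*3.0)*dt)   # alpha = -0.5 /s, f = -3 Hz
x = 2.0*z1**n + 0.7*z2**n
alpha, f, A, theta = prony(x, p, dt)
assert np.allclose(sorted(alpha), [-1.0, -0.5])
assert np.allclose(sorted(f), [-3.0, 5.0])
```

With exactly 2p noiseless samples the recovery is exact, which is the "exact fit" property noted above; for longer, noisy records the same steps are applied in a least-squares sense.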
In the regime of low energies and small vibrations, it is possible to separate the rotational and vibrational motion. The following expression is a model for the center of mass frame nuclear positions of an N-atom molecule in the semi-rigid approximation, for which we consider vibrational and rotational motion to be decoupled:

$$\vec{R}_{t,\nu} = \hat{\Theta}_t \left( \vec{r}_\nu + \sum_{j=1}^{p} \vec{\tau}_{\nu,j}\, c_j\, e^{\left[ 2\pi i \Omega_j (t-1)\Delta t + i\varphi_j \right]} \right), \qquad (6.12)$$

with $\nu = 1 \ldots N$ and $j = 1 \ldots p$. The time step between data points is given by $\Delta t$, and p represents the number of vibrational modes present. Here $\vec{R}_{t,\nu}$ is the position of the $\nu$th atom at time index t, which corresponds to time $\Delta t\,(t-1)$; $\hat{\Theta}_t$ is the rotation matrix, with $\hat{\Theta}_0 = 1$; $\vec{r}_\nu$ is the equilibrium position of the $\nu$th atom. The $\Omega_j$ and $\varphi_j$ are the jth normal frequencies and phases, respectively. Additionally, $\vec{\tau}_{\nu,j}$ are the normal mode amplitudes for the jth mode and $\nu$th particle, and the $c_j$ are the normal mode weights. In the case of a classical molecule p is constrained by

$$p \le \begin{cases} 3N - 6, & N \ge 3, \\ 3N - 5, & N < 3. \end{cases}$$

If we assume that the time steps $\Delta t$ are small, so that the rotations are infinitesimal from time index t to t+1, a useful result from classical mechanics is

$$\frac{d\vec{R}_{t,\nu}}{dt} = \vec{\omega}_t \times \vec{R}_{t,\nu} + \hat{\Theta}_t \sum_{j=1}^{p} \vec{\tau}_{\nu,j}\, c_j\, (2\pi i \Omega_j)\, e^{\left[ 2\pi i \Omega_j (t-1)\Delta t + i\varphi_j \right]}, \qquad (6.13)$$

where $\vec{\omega}_t$ is the infinitesimal angular velocity. Rearranging gives the expression

$$\frac{d\vec{R}_{t,\nu}}{dt} - \vec{\omega}_t \times \vec{R}_{t,\nu} = \hat{\Theta}_t \sum_{j=1}^{p} \vec{\tau}_{\nu,j}\, c_j\, (2\pi i \Omega_j)\, e^{\left[ 2\pi i \Omega_j (t-1)\Delta t + i\varphi_j \right]}. \qquad (6.14)$$

The vibrational part of this variable is obvious, but it is modulated by the rotational motion, which will interfere with the separation of vibrational and rotational motion. So we choose as our fundamental variable

$$\vec{X}_{t,\nu} = \hat{\Theta}_t^{-1} \left( \frac{d\vec{R}_{t,\nu}}{dt} - \vec{\omega}_t \times \vec{R}_{t,\nu} \right) = \sum_{j=1}^{p} \vec{\tau}_{\nu,j}\, c_j\, (2\pi i \Omega_j)\, e^{\left[ 2\pi i \Omega_j (t-1)\Delta t + i\varphi_j \right]}. \qquad (6.15)$$

It proves more convenient to transform the variable $\{\vec{X}_{t,\nu}\} \to \{X_{t,q}\}$ from a list of N vectors to a list of 3N scalars, so the index q runs from 1 to 3N. The $X_{t,q}$ variable is precisely of the form prescribed in PM, which allows the Prony procedure to extract the desired information for this model of several degrees of freedom.
For our case of real-valued data there is a modified version of Prony's method. The modified method replaces the linear prediction error with the conjugate-symmetric linear smoothing error,

\rho = \sum_{q=1}^{3N} \sum_{m=p+1}^{M-p} \left| X_{m,q} + \sum_{n=1}^{p} \left( g_q[n] X_{m+n,q} + g_q^*[n] X_{m-n,q} \right) \right|^2, \qquad (6.16)

where M is the number of time samples. The fundamental variable of the model is quite similar to that of PM, except that we seek to determine more than the smoothing coefficients {g_q[n]}: we also seek the elements of the rotation matrix, and under the assumption of infinitesimal rotations this is equivalent to determining the angular velocity {\vec{\omega}_t}. Instead of determining the smoothing coefficients by solving a matrix equation similar to Eqn. 6.9, we minimize the smoothing error of Eqn. 6.16 with respect to the smoothing coefficients {g_q[n]} and the angular velocity {\vec{\omega}_t}. Once the smoothing parameters are determined we proceed by forming the polynomial

\phi_q(z) = g_q[p] z^{2p} + \cdots + g_q[1] z^{p+1} + z^p + g_q^*[1] z^{p-1} + \cdots + g_q^*[p] = 0. \qquad (6.17)

The roots of this polynomial are of exactly the same nature as those in the description of Prony's method, allowing us to calculate the frequencies \Omega_j according to Eqn. 6.10. Using these roots we solve the following matrix equation for the h_{q,j} = T_{q,j} c_j (2\pi i\Omega_j) e^{i\varphi_j}:

\begin{pmatrix} z_1^0 & z_2^0 & \cdots & z_p^0 \\ z_1^1 & z_2^1 & \cdots & z_p^1 \\ \vdots & & & \vdots \\ z_1^{p-1} & z_2^{p-1} & \cdots & z_p^{p-1} \end{pmatrix} \begin{pmatrix} h_{q,1} \\ h_{q,2} \\ \vdots \\ h_{q,p} \end{pmatrix} = \begin{pmatrix} X_{1,q} \\ X_{2,q} \\ \vdots \\ X_{p,q} \end{pmatrix}. \qquad (6.18)

T_{q,j} can be recognized as the inverse of the orthogonal transformation to the normal coordinates:

\sum_{q=1}^{3N} T_{q,j} T_{q,k} = \delta_{j,k}, \quad j, k = 1..3N, \qquad (6.19)

\sum_{q=1}^{3N} h_{q,j} h_{q,k}^* = c_j^2 \, 4\pi^2 \Omega_j^2 \, \delta_{j,k}. \qquad (6.20)

Employing these relationships we can determine c_j and T_{q,j}.

6.3 Implementation

The software implementation is displayed in Fig. 6.1 using the Class-Responsibility-Collaborator diagram model [46]; it is meant to give an overview of the variables and their interactions. Fundamental to utilizing the GPM scheme is the minimization of Eqn. 6.16.
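The smoothing error of Eqn. 6.16 is cheap to evaluate directly. The following sketch uses explicit loops for clarity; the function name and array layout are our own choices, not those of the ENDyne code.

```python
import numpy as np

def smoothing_error(X, g):
    """Conjugate-symmetric linear smoothing error, Eqn. 6.16.

    X: (M, Q) complex Prony variables X_{m,q} (Q = 3N scalar channels)
    g: (Q, p) complex smoothing coefficients g_q[n]
    """
    M, Q = X.shape
    p = g.shape[1]
    rho = 0.0
    for q in range(Q):
        for m in range(p, M - p):          # interior samples only
            e = X[m, q]
            for n in range(1, p + 1):
                e += g[q, n - 1] * X[m + n, q] + np.conj(g[q, n - 1]) * X[m - n, q]
            rho += abs(e) ** 2
    return rho
```

For a single complex exponential X_m = e^{i\omega m} and p = 1, the choice g = -1/(2 cos \omega) annihilates the smoothing error exactly, since 1 + 2g cos \omega = 0; any other g leaves a nonzero residual.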
The minimization method we have currently implemented is the Fletcher-Reeves-Polak-Ribière conjugate gradient method [47]. This requires calculation of the gradient of the function to be minimized. Additionally, it requires that the user supply reasonable initial values of the undetermined parameters {g_q[n]} and {\vec{\omega}_t}.

6.3.1 Angular Velocity and Rotation Matrix

An initial estimate of the angular velocity is given by solving the classical angular momentum equation. The angular momentum is given by

\vec{L}_t = \sum_{\nu=1}^{N} \left( \vec{x}_{t,\nu} \times m_\nu \vec{v}_{t,\nu} \right). \qquad (6.21)

The moment of inertia tensor is given by

I_{jk} = \sum_{\nu} m_\nu \left[ \delta_{jk} R_\nu^2 - R_{\nu,j} R_{\nu,k} \right]. \qquad (6.22)

We solve the angular momentum equation \vec{L}_t = I \vec{\omega}_t and obtain the infinitesimal rotation vector \vec{\omega}_t. Determination of the angular velocity allows us to calculate the rotation matrix, which can be written as

\Theta_t = \prod_{\tau=1}^{t} S_\tau, \qquad (6.23)

where S_t is the infinitesimal rotation matrix with the property S_t \vec{x}_{t,\nu} \approx \vec{x}_{t+1,\nu}, represented in the lab frame by

S_t = \begin{pmatrix} 1 & -\omega_{t,3}\Delta t & \omega_{t,2}\Delta t \\ \omega_{t,3}\Delta t & 1 & -\omega_{t,1}\Delta t \\ -\omega_{t,2}\Delta t & \omega_{t,1}\Delta t & 1 \end{pmatrix}. \qquad (6.24)

Unfortunately, the resulting matrix \Theta_t is not orthogonal. Since we are interested in the inverse of the rotation matrix, we redefine the rotation matrix so that it is orthogonal, to ensure a well defined process. It is possible to represent finite and infinitesimal rotations by the rotation formula presented by Goldstein [32],

\vec{x}_{t+1,\nu} = \vec{x}_{t,\nu} \cos\Phi + \hat{n} (\hat{n} \cdot \vec{x}_{t,\nu}) [1 - \cos\Phi] + (\vec{x}_{t,\nu} \times \hat{n}) \sin\Phi, \qquad (6.25)

where \Phi is the angle of rotation about the normal \hat{n} to the plane of rotation. We can reformulate Eqn. 6.25 into a more useful form by introducing a scalar e_0 and a vector \vec{e} defined as

\vec{e} = \hat{n} \sin\frac{\Phi}{2}, \qquad (6.26)

e_0 = \cos\frac{\Phi}{2}, \qquad (6.27)

\Phi = \pm |\vec{\omega}_t| \Delta t, \qquad (6.28)

where the sign \pm is given by the right-hand rule. With further manipulation, Eqn. 6.25 can be rewritten as

\vec{x}_{t+1,\nu} = \vec{x}_{t,\nu} \left( e_0^2 - \vec{e} \cdot \vec{e} \right) + 2\vec{e} \left( \vec{e} \cdot \vec{x}_{t,\nu} \right) + 2 \left( \vec{x}_{t,\nu} \times \vec{e} \right) e_0. \qquad (6.29)

The elements of the rotation matrix can be calculated by expanding Eqn. 6.25.
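The initial angular velocity estimate of Eqns. 6.21-6.22 amounts to one 3x3 linear solve per time step. A minimal sketch (function name is our own):

```python
import numpy as np

def initial_omega(pos, vel, masses):
    """Estimate omega from L = I omega.

    pos, vel: (N, 3) positions and velocities; masses: (N,).
    L from Eqn. 6.21, inertia tensor I from Eqn. 6.22.
    """
    # angular momentum L_t = sum_nu x_nu x (m_nu v_nu)
    L = np.sum(masses[:, None] * np.cross(pos, vel), axis=0)
    # inertia tensor I_jk = sum_nu m_nu (delta_jk r^2 - r_j r_k)
    I = np.zeros((3, 3))
    for m, r in zip(masses, pos):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return np.linalg.solve(I, L)
```

For a purely rigid rotation, where every velocity is exactly omega x r, the solve recovers the generating angular velocity to machine precision.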
Hence, the resulting infinitesimal rotation matrix is redefined as

S_t = \begin{pmatrix} e_0^2 + e_1^2 - e_2^2 - e_3^2 & 2(e_2 e_1 + e_0 e_3) & 2(e_3 e_1 - e_0 e_2) \\ 2(e_2 e_1 - e_0 e_3) & e_0^2 - e_1^2 + e_2^2 - e_3^2 & 2(e_3 e_2 + e_0 e_1) \\ 2(e_3 e_1 + e_0 e_2) & 2(e_3 e_2 - e_0 e_1) & e_0^2 - e_1^2 - e_2^2 + e_3^2 \end{pmatrix}. \qquad (6.30)

Then the rotation matrix \Theta_t is orthogonal. The inverse of this matrix is a rotation through an angle -\Phi, i.e., the transpose, obtained by

e_0 \rightarrow e_0, \qquad (6.31)

\vec{e} \rightarrow -\vec{e}, \qquad (6.32)

S_t^{-1} = S_t^{T} = \begin{pmatrix} e_0^2 + e_1^2 - e_2^2 - e_3^2 & 2(e_2 e_1 - e_0 e_3) & 2(e_3 e_1 + e_0 e_2) \\ 2(e_2 e_1 + e_0 e_3) & e_0^2 - e_1^2 + e_2^2 - e_3^2 & 2(e_3 e_2 - e_0 e_1) \\ 2(e_3 e_1 - e_0 e_2) & 2(e_3 e_2 + e_0 e_1) & e_0^2 - e_1^2 - e_2^2 + e_3^2 \end{pmatrix}, \qquad (6.33)

with \Phi expressed in terms of e_0 as

\Phi = 2 \arccos(e_0). \qquad (6.34)

6.3.2 Smoothing Coefficients

The initial estimate of the smoothing coefficients is given in matrix form by

R_{2p} \, \vec{g}_{2p} = \begin{pmatrix} 0_p \\ 2\rho \\ 0_p \end{pmatrix}, \qquad (6.35)

where the matrix R_{2p} and vector \vec{g}_{2p} are

R_{2p} = \begin{pmatrix} r_{2p}[0,0] & \cdots & r_{2p}[0,2p] \\ \vdots & & \vdots \\ r_{2p}[2p,0] & \cdots & r_{2p}[2p,2p] \end{pmatrix}, \qquad (6.36)

\vec{g}_{2p} = \left( g[p], \ldots, g[1], 1, g^*[1], \ldots, g^*[p] \right)^{T}. \qquad (6.37)

The elements of R_{2p} are

r_{2p}[j,k] = \sum_{q=1}^{3N} \sum_{n=2p+1}^{M} \left( X^*_{n-p+j,q} X_{n-p+k,q} + X_{n-p-j,q} X^*_{n-p-k,q} \right). \qquad (6.38)

A fast algorithm has been provided by Marple [39] which can solve these equations using the symmetric covariance method.

6.3.3 Gradient

For the minimization of Eqn. 6.16 we have chosen the Fletcher-Reeves-Polak-Ribière conjugate gradient method [47], which requires that we calculate the gradient with respect to the adjustable parameters. A few intermediate results that we will find useful are

\Theta_t = \prod_{\tau=1}^{t} S_{t-\tau+1}, \qquad (6.39)

\frac{\partial \Theta_t}{\partial e_{l,i}} = S_t \cdots S_{l+1} \, \frac{\partial S_l}{\partial e_{l,i}} \, S_{l-1} \cdots S_1 \quad \text{for } l \le t, \qquad (6.40)

\frac{\partial \Theta_t}{\partial e_{l,i}} = 0 \quad \text{for } l > t. \qquad (6.41)

The functional dependence of \vec{X}_{t,\nu} on the parameters \vec{\omega}_t is given by the following derivation:

\vec{x}_{t,\nu} = \Theta_{t-1} \left( \vec{R}_\nu + \vec{V}_{t,\nu} \right), \qquad (6.42)

where \vec{V} represents the vibrational terms of the model. Substituting into

\vec{X}_{t,\nu} = \Theta_t^{-1} \left[ \frac{\vec{x}_{t,\nu} - \vec{x}_{t-1,\nu}}{\Delta t} - \vec{\omega}_t \times \vec{x}_{t,\nu} \right], \qquad (6.43)

we obtain the result

\vec{X}_{t,\nu} = \Theta_t^{-1} \left[ \frac{\Theta_t (\vec{R}_\nu + \vec{V}_{t+1,\nu}) - \Theta_{t-1} (\vec{R}_\nu + \vec{V}_{t,\nu})}{\Delta t} - \vec{\omega}_t \times \Theta_{t-1} (\vec{R}_\nu + \vec{V}_{t,\nu}) \right]. \qquad (6.44)

The properties of the infinitesimal rotation matrix yield the following result:

\Theta_t (\vec{R}_\nu + \vec{V}_{t,\nu}) = \Delta t \left( \vec{\omega}_t \times \Theta_{t-1} (\vec{R}_\nu + \vec{V}_{t,\nu}) \right) + \Theta_{t-1} (\vec{R}_\nu + \vec{V}_{t,\nu}).
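The orthogonality of the redefined step matrix is easy to confirm numerically. Below is a sketch that builds S_t from an axis and angle via the Euler parameters of Eqns. 6.26-6.27 and the elements of Eqn. 6.30; the function name is our own, and the sign convention follows the (x x n) sin Phi term of Eqn. 6.25.

```python
import numpy as np

def step_rotation(n, phi):
    """Orthogonal step matrix from Euler parameters (Eqns. 6.26-6.30)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)            # unit rotation axis
    e0 = np.cos(phi / 2.0)               # Eqn. 6.27
    e1, e2, e3 = np.sin(phi / 2.0) * n   # Eqn. 6.26
    return np.array([                    # Eqn. 6.30
        [e0*e0 + e1*e1 - e2*e2 - e3*e3, 2*(e2*e1 + e0*e3),             2*(e3*e1 - e0*e2)],
        [2*(e2*e1 - e0*e3),             e0*e0 - e1*e1 + e2*e2 - e3*e3, 2*(e3*e2 + e0*e1)],
        [2*(e3*e1 + e0*e2),             2*(e3*e2 - e0*e1),             e0*e0 - e1*e1 - e2*e2 + e3*e3],
    ])
```

For n along z and x = (1, 0, 0), Eqn. 6.25 gives x' = (cos Phi, -sin Phi, 0), which the matrix reproduces; the product S S^T is the identity to machine precision, so the transpose really is the inverse.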
(6.45)

Employing this result, we note that the recorded positions and the vibrational amplitudes do not depend on the rotational parameters,

\frac{\partial \vec{x}_{t,\nu}}{\partial \vec{\omega}_l} = 0, \qquad (6.46)

\frac{\partial \vec{V}_{t,\nu}}{\partial \vec{\omega}_l} = 0. \qquad (6.47)

Consequently, the angular velocity at each step is a function of its Euler parameters,

\vec{\omega}_t = \vec{\omega}_t \left( e_{t,1}, e_{t,2}, e_{t,3}, e_{t,0}(\vec{e}_t) \right), \qquad (6.48)

so the gradient of the Prony-like variable \vec{X}_{t,\nu} is given by

\frac{\partial \vec{X}_{t,\nu}}{\partial e_{l,i}} = \frac{\partial \Theta_t^{-1}}{\partial e_{l,i}} \left[ \frac{\vec{x}_{t,\nu} - \vec{x}_{t-1,\nu}}{\Delta t} - \vec{\omega}_t \times \vec{x}_{t,\nu} \right] - \delta_{l,t} \, \Theta_t^{-1} \left( \frac{\partial \vec{\omega}_t}{\partial e_{t,i}} \times \vec{x}_{t,\nu} \right), \qquad (6.49)

where e_i = e_{t,i}. The first set of derivatives of the smoothing error is

\frac{\partial \rho}{\partial g_\nu[k]} = 2 \sum_{m} \mathrm{Re} \left\{ \left[ \vec{X}_{m,\nu} + \sum_{n=1}^{p} \left( g_\nu[n] \vec{X}_{m+n,\nu} + g_\nu^*[n] \vec{X}_{m-n,\nu} \right) \right]^* \cdot \left[ \vec{X}_{m+k,\nu} + \vec{X}_{m-k,\nu} \right] \right\} \qquad (6.50)

for k = 1..p and \nu = 1..N. The second set of derivatives is

\frac{\partial \rho}{\partial e_{l,i}} = 2 \sum_{m} \mathrm{Re} \left\{ \left[ \vec{X}_{m,\nu} + \sum_{n=1}^{p} \left( g_\nu[n] \vec{X}_{m+n,\nu} + g_\nu^*[n] \vec{X}_{m-n,\nu} \right) \right]^* \cdot \left[ \frac{\partial \vec{X}_{m,\nu}}{\partial e_{l,i}} + \sum_{k=1}^{p} \left( g_\nu[k] \frac{\partial \vec{X}_{m+k,\nu}}{\partial e_{l,i}} + g_\nu^*[k] \frac{\partial \vec{X}_{m-k,\nu}}{\partial e_{l,i}} \right) \right] \right\}. \qquad (6.51, 6.52)

[Figure 6.1: Class-Responsibility-Collaborator diagram of the GPM implementation: trajectory data and the time step are read, initial values of the angular velocities and smoothing parameters are calculated, the Prony variable is generated, the smoothing error is minimized with respect to these parameters, and the resulting frequencies, amplitudes, and error estimates are written to output files.]

CHAPTER 7
VIBRATIONAL CALCULATION RESULTS

In order to investigate the efficacy of our GPM we applied it to an interesting nontrivial system. Using ENDyne [8] we calculated the dynamical evolution of water for various magnitudes of vibrational excitation. Our study of water was motivated chiefly by the large amount of experimental data available for this chemically and biologically important molecule. Its three atoms and bent geometry provide a theoretical model which is not simple, but also not very complex. The bent shape of the H2O molecule is a result of the character of its molecular orbitals [48].

[Figure 7.1: Normal modes of H2O.]

Table 7.1: Experimental vibrational frequencies for H2O.

Normal Mode | Freq. (cm^-1) | Sym. Species
nu_1        | 3656.65       | a1
nu_2        | 1594.59       | a1
nu_3        | 3755.79       | b1
Source: Shimanouchi [49].

Table 7.2: Values for the vibrational frequencies of H2O using different levels of theory. Theoretical numbers indicate the deviation from experiment, i.e., SCF = Expt + value. Source: Bartlett [50].

H2O          | nu_1          | nu_2          | nu_3
Expt         | 3656.65 cm^-1 | 1594.59 cm^-1 | 3755.79 cm^-1
SCF          | +332          | +102          | +344
CISD         | +135          | +44           | +147
MBPT(2)      | +77           | +15           | +112
SDQ-MBPT(4)  | +89           | +33           | +107
CCSD         | +80           | +34           | +98
CCSD(T)      | +73           | +26           | +92

7.1 Vibrationally Excited H2O

The 3N - 6 = 3(3) - 6 = 3 normal modes of water are represented visually in Fig. 7.1, where nu_1 is the symmetric stretching mode, nu_2 the bending mode, and nu_3 the asymmetric stretching mode. The current experimental values are given in Table 7.1, and the theoretical harmonic vibrational frequencies given by various electronic structure calculations for these modes in Table 7.2. It is important to realize that neither theory nor experiment represents the situation we are presented with in the dynamics of nuclei in molecules. These calculations rely on the assumption that the nuclei experience a truly harmonic potential, specifically that the restoring force they encounter is proportional to their displacement from equilibrium. In real molecules this can only be considered an approximation, especially as the vibrations lead to large displacements from equilibrium. The effect of this breakdown of the harmonic assumption is commonly called anharmonicity and is represented visually in Fig. 7.2.

[Figure 7.2: Comparison of a truly harmonic potential and a representation of a realistic intramolecular potential.]

Several important features of the realistic intramolecular potential V_r should be noted. First, near the equilibrium distance the differences from the harmonic potential are small, breaking down at greater displacements. Importantly, unlike the harmonic model, V_r exhibits a dissociation energy D_e at which the intramolecular bond will break [51].
The potential V_r also exhibits an asymmetry about the equilibrium distance. All of these features represent a breakdown of the harmonic potential. The deficiencies of the harmonic approximation posed a problem for our analysis: realistic intramolecular potentials deviate from the harmonic potential more strongly for higher vibrational excitations, so the molecular motion will not consist strictly of harmonic vibrations, especially for highly excited molecules. Realistic potentials are often examined using empirical and semi-empirical approximations such as the Lennard-Jones potential or the Morse potential. We chose the Morse potential to represent the bond stretching potential of H2O, which is given by

U(r) = D_e \left[ 1 - e^{-a(r - r_{eq})} \right]^2. \qquad (7.1)

In Fig. 7.3 we see an abstract representation of the Morse potential and its associated quantum mechanical energy levels.

[Figure 7.3: Morse potential: energy levels for the Morse potential.]

Here we again see the effects of the realistic potential compared to the harmonic potential. Since the potential allows for bound states the energy levels are discrete, but the energy levels are not evenly spaced as they are for the harmonic approximation. In fact, the spacing between sequential energy levels becomes increasingly smaller for larger energy states. Additionally, the dissociation energy limits the number of energy levels, as opposed to the infinite number of levels available to the harmonic oscillator. To obtain an expression for the energy levels we employed the Dunham [52] expansion of the Morse potential. The Dunham expansion of the vibrational energy levels is

E_n = \left( n + \tfrac{1}{2} \right) \omega_e - \left( n + \tfrac{1}{2} \right)^2 x_e \omega_e + \cdots, \qquad (7.2)

where \omega_e is the frequency given by the harmonic potential, i.e., by the quadratic term in the Taylor expansion of Eqn. 7.1, and

x_e = \frac{\omega_e}{4 D_e} \qquad (7.3)

is referred to as the anharmonic constant.
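Eqns. 7.1-7.2 can be sketched directly; the function names below are our own, and the levels are kept to first anharmonic order only.

```python
import numpy as np

def morse(r, de, a, req):
    """Morse potential, Eqn. 7.1: U(r) = D_e [1 - exp(-a (r - r_eq))]^2."""
    return de * (1.0 - np.exp(-a * (r - req))) ** 2

def dunham_energy(n, we, xe):
    """First-order Dunham levels, Eqn. 7.2: E_n = (n + 1/2) we - (n + 1/2)^2 xe we."""
    v = n + 0.5
    return v * we - v * v * xe * we
```

The sketch reproduces the qualitative features discussed above: the potential vanishes at r_eq and saturates at D_e, and the level spacing E_{n+1} - E_n shrinks steadily with n.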
Immediately we can identify the frequency of vibration given by the Morse potential by making an analogy with the structure of the harmonic energy levels. The Morse frequency is given to first order by

\omega_n = \left[ 1 - \left( n + \tfrac{1}{2} \right) x_e \right] \omega_e. \qquad (7.4)

Unlike the equilibrium harmonic frequency \omega_e, the Morse frequency decreases with increasing quantum number n, and the difference between the frequencies of neighboring energy levels also decreases. Employing conservation of energy it is possible to derive the first-order correction to the intramolecular motion,

\Delta r(t) \approx A \sin(\omega_e t) + B \cos(2\omega_e t), \qquad (7.5)

where A is the amplitude of the fundamental and B is a small amplitude proportional to the anharmonicity. This result shows that the first-order correction to the harmonic motion given by the Morse potential generates motion which is simply a sum of sinusoidal functions. This is precisely the type of signal required to perform the Prony analysis. Since we are only concerned with the low-lying vibrational states we leave the discussion of higher-order corrections, which are also expressible as sums of sinusoids, for a later time. Examining Eqn. 7.5 reveals the effect of the Morse potential on the displacement of the internuclear distance for a particular vibration. The second term, B cos(2\omega_e t), oscillates at twice the fundamental frequency of the main component of the motion. As a result the motion appears asymmetric, with the peaks of the main component more spread out and the troughs more narrow, as in Fig. 7.4.

[Figure 7.4: Comparison of the motion in a harmonic potential versus the first order correction due to an anharmonic potential.]

Armed with this knowledge of the effect of anharmonicity on the vibrational motion of a molecule, we then investigated the ability of GPM to analyze realistic molecular motion.
We did this by performing a series of Electron Nuclear Dynamics trajectories for H2O with a single vibrational mode excited to varying degrees. The results of the trajectories of the symmetric stretching mode of water from equilibrium are given in Fig. 7.5. Here the effects described previously are plainly seen: the frequency of vibration decreases with increasing excitation energy, and there is a noticeable difference between the peaks and troughs of the motion.

[Figure 7.5: ENDyne trajectories: excitation of the stretching mode of water with frequency 4140 cm^-1, using STO-3G, for several excitation energies.]

In order to account for the anharmonicities we applied the GPM with twice as many anticipated signals, motivated by the observation of the additional sinusoidal terms predicted by our Morse potential model. Applying the GPM in this way resulted in the data in Table 7.3. The additional sinusoidal terms were clearly measured by our GPM and found to have magnitudes as predicted by Eqn. 7.5. Moreover, the frequencies decline with increasing energy as predicted by Eqn. 7.4.

Table 7.3: GPM analysis: predominant frequencies calculated by GPM for ENDyne trajectories of varying excitations.

E (hbar omega) | omega_0 (cm^-1) | omega_1 (cm^-1) | ratio
0.05           | 4137            | 8274            | 2.0000
0.1            | 4134            | 8269            | 2.0002
0.5            | 4115            | 8233            | 2.0007
2.5            | 4006            | 8194            | 2.0500
5.0            | 3920            | 7833            | 1.9987
Harmonic: omega_e = 4140

We used this decrease to calculate the dissociation energy through the anharmonic constant. Performing a linear regression using the data from Table 7.3 (see Fig. 7.6), we obtain a value of x_e ≈ 0.012, which corresponds to a dissociation energy of D_e ≈ 0.41 hartree, or n ≈ 20. It should be kept in mind that these results are the product of a first-order correction, and that the dissociation energy corresponds to a physical parameter far away from equilibrium.
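The regression against Table 7.3 is easy to reproduce. The sketch below fits the straight line implied by Eqn. 7.4 to the (E, omega_0) pairs from the table; the exact regression details behind Fig. 7.6 are not specified here, so the fitted values only approximate the quoted x_e ≈ 0.012.

```python
import numpy as np

# (E / hbar omega, omega_0) pairs from Table 7.3
E = np.array([0.05, 0.1, 0.5, 2.5, 5.0])
w = np.array([4137.0, 4134.0, 4115.0, 4006.0, 3920.0])

# Eqn. 7.4: omega_n ~ omega_e [1 - (n + 1/2) x_e], with n + 1/2 ~ E / (hbar omega),
# is a straight line in E with slope -omega_e x_e and intercept omega_e.
slope, we_fit = np.polyfit(E, w, 1)
xe = -slope / we_fit
# Eqn. 7.3 inverted: D_e = omega_e / (4 x_e), converted from cm^-1 to hartree
de_hartree = we_fit / (4.0 * xe) / 219474.0
```

With these five points the fit lands near x_e ≈ 0.011 and D_e ≈ 0.4 hartree, consistent with the values quoted above.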
We observed that these values demonstrate that our Morse potential approximation is consistent with physical reality and provides a useful tool for analyzing the effects of anharmonicity on our GPM. Importantly, the analysis demonstrates that the anharmonicities do not cripple our ability to use the methods we developed.

7.2 Prony's Method, Condition Numbers and Numerical Stability

Our development of the Generalized Prony's Method is for the purposes of numerical calculation and analysis. Whenever computers and numerical methods are used we must remain conscious of numerical instability and loss of precision. In the case of the Generalized Prony's Method we are presented with two sources of computational interest.

[Figure 7.6: Best fit for the anharmonic constant.]

The first is the solution of a polynomial \phi(z), but here we focus on the second, which is the solution of the matrix equation given by

\left( Z^H Z \right) \vec{y} = Z^H \vec{b}, \qquad (7.6)

where Z is a matrix with Vandermonde structure [39]. This matrix Z is a function of the roots of \phi(z):

Z = \begin{pmatrix} z_1^0 & z_2^0 & \cdots & z_p^0 \\ z_1^1 & z_2^1 & \cdots & z_p^1 \\ \vdots & & & \vdots \\ z_1^{n-1} & z_2^{n-1} & \cdots & z_p^{n-1} \end{pmatrix}, \qquad (7.7)

where p is the number of exponentials in the Prony signal as well as the order of z in \phi, and n >= 2p is the number of data points used. The solutions of \phi are related to the frequencies present in the signal by

z_k = e^{2\pi i \Omega_k \Delta t}, \qquad (7.8)

where \Omega is the frequency and \Delta t is the sampling period. Of course, we are primarily interested in the matrix

A = Z^H Z = \begin{pmatrix} \gamma_{11} & \cdots & \gamma_{1p} \\ \vdots & & \vdots \\ \gamma_{p1} & \cdots & \gamma_{pp} \end{pmatrix}, \qquad (7.9)

where

\gamma_{jk} = \sum_{m=0}^{n-1} (z_j^* z_k)^m = \begin{cases} \dfrac{(z_j^* z_k)^n - 1}{z_j^* z_k - 1} & \text{if } z_j \ne z_k \\ n & \text{if } z_j = z_k \end{cases}. \qquad (7.10)

7.2.1 Method

The numerical stability of the solution of Eqn. 7.6 is related to the condition number [47]. In its simplest terms, the condition number is a measure of the sensitivity of the solution of a linear algebraic system A \vec{x} = \vec{b} with respect to changes in the vector \vec{b} and in the matrix A.
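The matrix of Eqn. 7.9 and the condition number defined next are quick to compute; below is a sketch (function names are our own) that builds A = Z^H Z from conjugate-pair roots and evaluates the row-sum condition number.

```python
import numpy as np

def cond_inf(A):
    """Condition number with the row-sum (infinity) norm, Eqns. 7.11-7.12."""
    norm = lambda M: np.max(np.sum(np.abs(M), axis=1))
    return norm(A) * norm(np.linalg.inv(A))

def prony_normal_matrix(freqs, dt, n):
    """A = Z^H Z (Eqn. 7.9) for conjugate-pair roots z = exp(+/- 2 pi i Omega dt), Eqn. 7.8."""
    z = np.exp(2j * np.pi * dt * np.concatenate([freqs, -freqs]))
    Z = np.vander(z, n, increasing=True).T   # n x 2p Vandermonde, Eqn. 7.7
    return Z.conj().T @ Z
```

With the three water frequencies (in au^-1, as converted below in Eqn. 7.15) and a short sampling period, the roots cluster near z = 1 and the conditioning is far worse than for well-separated roots, which is the effect studied in this section.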
It is calculated as the product of the norm of A and the norm of A^{-1}, or more explicitly

\kappa = \|A\| \, \|A^{-1}\|. \qquad (7.11)

The norm \|A\| in this instance is given by the maximum of the sum of magnitudes of each row, or formally

\|A\| = \max_i \sum_{j=1}^{N} |A_{ij}|. \qquad (7.12)

The main computational importance of the condition number is as an estimate of the precision required to solve the algebraic system. An ill-conditioned matrix is one whose condition number is greater than or of the order of the reciprocal of the machine precision, i.e., 10^6 for single precision and 10^12 for double precision. We will consider a matrix to be poorly conditioned if the square of its condition number is greater than the reciprocal of the machine precision. While a general result relating the condition number to the parameters of our particular implementation is desirable, we first investigate only our application of GPM to the H2O molecule. This molecule has three normal mode frequencies. For the Prony analysis this indicates that the number of exponential parameters is six, consisting of three complex conjugate pairs. The free parameters in this analysis are the sampling period \Delta t and the number of data points n. We will evaluate the condition number for several reasonable values of these parameters. The range of the sampling period is \Delta t = [1 au, 320 au], with the upper limit chosen to be half of the period of the smallest angular frequency, since the periodicity of the signal makes any larger values redundant. The number of data points ranges over n = [2p, 50p] with p = 6; 2p is the minimum number of points needed to complete the full Prony analysis, and the upper bound was chosen to be a reasonably large number of data points. Additionally, we will investigate the effect of small errors in the frequency \Omega.

7.2.2 Results

The parameters obtained from the first step of the Prony analysis are the solutions z of the polynomial \phi. The location of the roots in the complex plane can be observed in Fig. 7.7.
In the presence of noise in the Prony signal the roots no longer lie on the unit circle in the complex plane [39].

[Figure 7.7: Location of the roots of \phi(z) in the complex plane, distributed along the unit circle.]

For simplicity and convenience of our argument we disregard the phases of the roots, so the {z_j} expressed in terms of the angular frequencies are given by

z_j = e^{2\pi i \Omega_j \Delta t}, \qquad (7.13)

with

\Omega_1 = 4140 \text{ cm}^{-1}, \qquad \Omega_2 = 2170 \text{ cm}^{-1}, \qquad \Omega_3 = 4390 \text{ cm}^{-1}. \qquad (7.14)

The conversion factor to atomic units is c = 219474 cm^{-1}/au, resulting in

\Omega_1 = 0.01886 \text{ au}^{-1}, \qquad \Omega_2 = 0.009887 \text{ au}^{-1}, \qquad \Omega_3 = 0.0200 \text{ au}^{-1}. \qquad (7.15)
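The conversion of Eqn. 7.15 is easy to check:

```python
import numpy as np

# Eqns. 7.14-7.15: wavenumbers to au^-1 using 1 hartree = 219474 cm^-1
freqs_cm = np.array([4140.0, 2170.0, 4390.0])
freqs_au = freqs_cm / 219474.0
```

The result matches the quoted values to the precision given in the text.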


QUASICLASSICAL AND SEMICLASSICAL METHODS IN MOLECULAR SCATTERING DYNAMICS

By

ANATOL BLASS

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

ACKNOWLEDGMENTS

I would like to deeply thank Prof. N. Yngve Öhrn and Dr. Erik Deumens for their guidance and patience in my studies with them. They gently opened the doors of quantum chemistry to me and showed me the wonder and beauty of their field of study. A boundless amount of gratitude must be expressed to my colleagues in the Electron Nuclear Dynamics group: to Dr. Mauricio Coutinho-Neto for his time, guidance and humor; to Dr. Magnus Hedström for his advice and love of film; to Dr. Remigio Cabrera-Trujillo for his insight and leadership; to David Masiello and Benjamin Killian for their enthusiasm and friendship. I would like to thank all of the members of the Quantum Theory Project for their collegiality and for creating a vibrant and fertile atmosphere for research. I am deeply indebted to all of the support staff here at QTP, especially Judy Parker and Coralu Clements, for everything, but mostly for their patience. I need to express and acknowledge my profound appreciation of my family: David, Oscar and Piotr Blass; Jack and Margot Khavinson; and my immensely gifted wife Jill. Their love and support have been the fuel of my pursuit of learning and knowledge. Along with them I would like to thank Minky and Mozu for their constant affection and for providing pleasant and fanciful distractions.

TABLE OF CONTENTS

ACKNOWLEDGMENTS ii
ABSTRACT v

CHAPTERS

1 INTRODUCTION 1
2 OVERVIEW OF SCATTERING THEORY 4
2.1 Theoretical Considerations 5
2.2 Classical Theory of Scattering 8
2.3 Quantum Mechanical Scattering Theory 11
2.3.1 Time-Independent vs.
Time-Dependent Scattering 12
2.3.2 Fundamentals of the Time-Independent Theory 14
2.3.3 Scattering Amplitude 16
2.4 Semiclassical Theory of Scattering 16
2.4.1 Born Approximation 16
2.4.2 Partial Wave Expansion 17
2.4.3 JWKB 19
2.4.4 Eikonal Approximation 20
2.5 Time-Dependent Scattering Theory 21
2.5.1 Wave-Packet Propagation: Exact Methods 22
2.5.2 Approximate Time-Dependent Methods 22
3 END THEORY OF TIME DEPENDENT MOLECULAR DYNAMICS 24
3.1 The END Theory 24
3.2 The END Wave Function 29
4 OVERVIEW OF ROVIBRATIONAL REACTION DYNAMICS 34
4.1 Classical Theory of Vibrations 34
4.2 Quantum Mechanical Theory of Vibrations 39
4.2.1 The Quantum Mechanical Harmonic Oscillator 39
4.2.2 Quantization of the Harmonic Oscillator 40
4.3 Classical Theory of Rotations 43
4.3.1 Rotations 43
4.3.2 Euler Angles 45
4.3.3 Cayley-Klein Parameters 46
4.3.4 Rate of Change of a Vector 49
4.3.5 Rotational Inertia, Angular Momentum and the Euler Equations 51
4.4 Quantum Mechanical Theory of Rotations 53
4.4.1 The Rotationally Invariant System 53
4.4.2 Eigenvalue Problem of L^2 and L_z 54
5 COHERENT STATES AND ROVIBRATIONAL DYNAMICS 56
5.1 Quasiclassical Coherent States 56
5.2 Vibrational Coherent States 57
5.3 Proposed Rotational Coherent States 59
5.3.1 Irreducible Representation 61
5.3.2 Construction of Coherent States 63
6 VIBRATIONAL ANALYSIS 69
6.1 Prony's Method 69
6.2 Generalized (GPM) Prony's Method 72
6.3 Implementation 75
6.3.1 Angular Velocity and Rotation Matrix 76
6.3.2 Smoothing Coefficients 78
6.3.3 Gradient 79
7 VIBRATIONAL CALCULATION RESULTS 83
7.1 Vibrationally Excited H2O 84
7.2 Prony's Method, Condition Numbers and Numerical Stability 90
7.2.1 Method 92
7.2.2 Results 93
7.2.3 Discussion 95
7.2.4 Conclusion 97
7.3 Application: Vibrational Analysis of END Trajectories 97
7.3.1 Prony Analysis of H2O 98
7.4 Anharmonicity 103
7.5 State Resolved Scattering: H+ + H2O 106
7.5.1 Experimental DCS 110
7.5.2 Results and Discussion 113
8 ROTATIONAL
CALCULATION RESULTS 121
9 CONCLUSION 126
REFERENCES 128
BIOGRAPHICAL SKETCH 132

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

QUASICLASSICAL AND SEMICLASSICAL METHODS IN MOLECULAR SCATTERING DYNAMICS

By Anatol Blass

December 2001

Chairman: N. Yngve Öhrn
Major Department: Physics

The exponential increase in computing power that began in the '60s has allowed new methods to evolve that would make time dependent dynamical studies possible. The theoretical calculation of cross sections of chemical reactions provides an understanding of the details at the molecular level. While experimental studies yield important data, a significant fraction of the interesting elementary reactions in nature involve short-lived molecular species that are difficult to isolate and study in the laboratory. The prediction of the rates and cross sections of such chemical reactions is a primary goal of computational chemistry and molecular physics. These data, in addition to their scientific interest, are important for the modeling of complex chemical systems such as combustion, plasmas, and the atmosphere. In this dissertation we present a method developed to take advantage of current theoretical techniques by calculating quantum mechanically resolved results using classical nuclei. Utilizing the nearly classical behavior of nuclei in chemical processes, we were able to devise a system to map quasi-classical coherent states to trajectories calculated
from the semi-classical approximation of the Electron Nuclear Dynamics theory. Using this new method allows us to analyze the results of Electron Nuclear Dynamics molecular scattering trajectories, calculated using our computer code ENDyne, and to extract state resolved results. In this way we are able to compare our results to experimental data and reveal their underlying chemical and mechanical processes. We present just such an application to the vibrationally resolved differential cross sections of the reaction H+ + H2O, for which we were able to determine the mechanics of the non-adiabatic inelastic scattering channel. Additionally, this result served to verify the ability of the Electron Nuclear Dynamics theory to investigate non-adiabatic molecular scattering.

CHAPTER 1
INTRODUCTION

Physics cannot be reduced to a few trite formulae. The complexity of real world processes quickly overwhelms the security we find in fundamental equations. Schrödinger's equation is one possible beginning of understanding molecular systems and has not put a single quantum chemist on unemployment. In fact, early quantum mechanics struggled with applying this equation to even the simplest chemical processes. Without the benefit of computers, novel solutions and approximations arose to apply a new understanding of nature to chemistry and molecular scattering dynamics. Time-dependent problems were difficult to solve, and even then they were difficult to calculate quantitatively. A whole industry of activity arose around solving time-independent problems and devising schemes to solve dynamical problems to some degree or another. Many of these methods suffered from an incomplete description of the dynamical processes and were chiefly restricted to adiabatic processes. The exponential increase in computing power that began in the '60s has allowed new methods to evolve that make time-dependent dynamical studies tractable. The ab initio calculation of cross sections of chemical reactions provides an understanding of the details at the molecular level. While experimental studies yield important data, a significant fraction of the interesting elementary reactions in nature involve short-lived systems that are difficult to isolate and study in the laboratory.
Prediction of the rates and cross sections of such chemical reactions is one primary goal of computational chemistry and molecular physics. These data, in addition to their scientific interest, are important for modeling complex chemical systems such as combustion, plasmas, and the atmosphere. Though highly desirable, theoretical analysis of even the simplest cases of these dynamical experiments can pose serious roadblocks to the scientist. The numerical solutions can use massive amounts of computer time and produce overwhelming amounts of data. Theoretical molecular dynamics strives to identify important parameters and mechanisms involved in the chemical or collision processes for analysis and prediction. With this in mind, a process of identifying and refining theoretical methods leads to the creation of efficient tools for the study of chemical systems. In our group at the University of Florida, led by Prof. N. Yngve Öhrn and Dr. E. Deumens, we have striven to create new and efficient methods for both analysis and prediction to apply to difficult problems in reactive dynamics. Electron Nuclear Dynamics (END) theory, developed here, provides approximate yet accurate solutions to time-dependent problems. Using the time-dependent variational principle (TDVP) allows us to simultaneously calculate both the nuclear and the electronic dynamics without the use of predetermined potential energy surfaces (PES). Currently this theory is implemented in the ENDyne code at its minimal level of approximation, described as quasi-classical single-determinantal electron dynamics (QCSD END). At this level, using classical nuclei and electrons described by a single-determinantal wave function, we have had success in studying nonadiabatic dynamics. While we continue to develop the END theory and improve its implementation, we must pause and consider whether a more complex theory or more computer time is the only correct way of pushing forward.
This reflection brings us to the overriding principle of this dissertation: using coherent states and a posteriori analysis we can extract state-resolved results from a semiclassical description using classical nuclei. Specifically, we strove to study rovibrational dynamics and the state-resolved differential cross sections in molecular scattering. Building a technique on three separate pillars, namely coherent states, analysis of classical dynamics, and molecular trajectories calculated using QCSD END theory, we successfully studied state resolved dynamics of several interesting systems. Here we present this method and its results. This dissertation is organized in the following way. Chapter 2 introduces the scattering process we study. Time-independent and time-dependent quantum methods are discussed. Traditional semiclassical methods are explained and their relationship to END is elucidated. Chapter 3 is a review of the END theory and its implementation. In Chapter 4, rovibrational dynamics, both classical and quantum, are reviewed in the light of scattering theory. We turn our attention to coherent states for vibrations and rotation in Chapter 5. The methods developed for vibrational analysis are presented in Chapter 7, and their application to specific systems is the subject of Chapter 8. Chapters 9 and 10 do the same for rotational analysis. Finally, we offer some concluding remarks in Chapter 11.

CHAPTER 2
OVERVIEW OF SCATTERING THEORY

Scattering experiments and theory have been a cornerstone of the development of modern physics. From the experiments of Rutherford to modern high-energy particle accelerators, the use of particle collisions has allowed scientists to infer the structure and properties of solids, gases, molecules, atoms and subatomic particles. Well designed and extremely precise experiments have led to a wealth of data.
Unfortunately, most scattering experiments involve many collisions and complex dynamics, leading to experimental observables that are stochastic in nature. Furthermore, any experimental data obtained are limited to an overall change in the state of the system before and after the event. For the theorist this poses a two-fold challenge: to explain the physics of the individual collision processes and then to corroborate those findings with the experimental data. The theoretical quantity for comparison with experiment is the cross section, \sigma_{ij}(E), the notional area through which a representative incident particle must pass at a given energy to achieve a given change i -> j in the system. In molecular systems and processes of chemical interest one must recognize that the breakdown of classical mechanics in these regimes demands the use of quantum mechanics. Quantum mechanical effects may come from interference, effects of the uncertainty principle and the quantum nature of many chemical processes. On the molecular scale, an appropriate theoretical basis is that of a semiclassical theory, allowing the use of our experience with classical mechanics while including the relevant quantum mechanical effects. Our goal in this chapter is to present the essence of molecular collision theory and its use as an introduction to our own theoretical developments in studying state resolved reaction dynamics and scattering. After the discussion of Child [1] and Sakurai [2] we separate the theory into three broad classes. The three are termed:

Elastic. Used in cases of simple momentum transfer, for studying mean intermolecular potential functions averaged over internal states.

Inelastic. Used if there is also a change in internal energy, yielding the dependence of the potential functions on the internal coordinates corresponding to that state.

Reactive.
Used when scattering is accompanied by the change in chemical composition which reveals the nature of the potential energy surface in the neighborhood of the reaction coordinate. The simplest molecular collision experiments involve two beams with defined velocities vi and v 2 and a detector configured to observe a given transition from state i Â— > j of the system. Considerable simplification of subsequent calculations can be made by transforming from the laboratory frame to the center of mass coordinate system (CM). Transforming the positions f\, r 2 and velocities v\, v 2 of the collision particles, with Â— ^ Â— * mass mi and m 2 respectively, to the CM coordinate, R, velocity, V and internal coordinate, r, velocity, v; 2.1 Theoretical Considerations R m 2 Â„ r n r 2 (2.1) and v Vi V 2 . ( 2 . 2 ) PAGE 12 6 Then the classical kinetic energy T and the angular momentum L may be written 1111 T = -mivj + -m 2 v% = -MV 2 + -mv 2 , L = m\ (f*i x iq) + m 2 (r 2 x v 2 ) = M (^R x V^j + m (r x v ) , (2.3) where Â„ mim 2 .. M = mi+m 2 , m = . (2.4) m\ + m2 With the potential energy independent of R, and of the absolute orientation of the system, the center of mass motion may be factored out of all equations. The remaining problem that we confine ourselves to is equivalent to a single particle with reduced mass 777, in a given initial internal state i moving with translational energy, \mv 2 , and orbital PAGE 13 7 angular momentum L about a scattering center at r = 0. The inverse transformation between the laboratory and center of mass frames is easily derived from the preceding equations. 
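As an aside, the separation in Eqns. 2.1-2.4 is easy to verify numerically. The following sketch (the function name and the sample values are ours, not from the text) checks that the kinetic energy and angular momentum computed in the lab frame equal their CM-separated forms of Eqn. 2.3:

```python
import numpy as np

def to_center_of_mass(m1, m2, r1, v1, r2, v2):
    """Transform lab-frame positions/velocities of two particles to the
    CM coordinates R, V and internal coordinates r, v (Eqns. 2.1-2.2)."""
    M = m1 + m2                    # total mass (Eqn. 2.4)
    m = m1 * m2 / M                # reduced mass (Eqn. 2.4)
    R = (m1 * r1 + m2 * r2) / M
    V = (m1 * v1 + m2 * v2) / M
    return M, m, R, V, r1 - r2, v1 - v2

# arbitrary sample configuration
m1, m2 = 1.0, 4.0
r1, v1 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])
r2, v2 = np.array([-1.0, 0.0, 0.0]), np.array([0.0, -0.5, 0.0])
M, m, R, V, r, v = to_center_of_mass(m1, m2, r1, v1, r2, v2)

# both sides of Eqn. 2.3
T_lab = 0.5 * m1 * v1 @ v1 + 0.5 * m2 * v2 @ v2
T_cm = 0.5 * M * V @ V + 0.5 * m * v @ v
L_lab = m1 * np.cross(r1, v1) + m2 * np.cross(r2, v2)
L_cm = M * np.cross(R, V) + m * np.cross(r, v)
```

Both invariants agree to machine precision, confirming the factorization of the CM motion.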
The collision of molecules leads to scattering in all directions, and the resulting intensity distribution in the CM frame gives the differential cross section (DCS), defined by the ratio

\frac{d\sigma_{ij}}{d\Omega} = \frac{\text{number of scattered particles per unit time per unit solid angle}}{\text{number of incident particles per unit time per unit area}}.  (2.5)

The DCS is the fundamental observable quantity in scattering experiments, to which all other quantities may be related. The first derived quantity is the integral cross section,

\sigma_{ij}(E) = \int_0^{2\pi}\!\!\int_0^{\pi} \frac{d\sigma_{ij}}{d\Omega}\,\sin\theta\,d\theta\,d\phi,  (2.6)

and bulk phase measurements involve thermal averages built from a weighted cross section \sigma_v(v) (Eqn. 2.9). In such bulk phase measurements much of the detail of the individual collision processes is lost in the averaging involved in the observations. The goal of molecular collision theory, according to Child[1], is to identify the dominant characteristics of a given model potential function and to extract from them all possible observable consequences.

2.2 Classical Theory of Scattering

Classical mechanics provides the roots of modern physics. It provides a familiar basis from which to begin a more intricate investigation, and quantum mechanics itself still depends on classical physics through the construction of Hamiltonians and Lagrangians. The purpose of this section is to provide a foundation for our investigation into quantum mechanical scattering. The discussion leads from the classical trajectory to the collision cross sections, and concludes with some comments about their validity.

In the CM system the conservation of energy E and angular momentum L given by Eqn. 2.3 fixes the conditions of a given event, defined by the incident relative velocity \mathbf{v} and the impact parameter b in Fig. 2.2. The trajectory must lie in a plane since \mathbf{L} is conserved. The equations of motion may be written in polar coordinates:

E = \tfrac{1}{2}m\bigl(\dot{r}^2 + r^2\dot{\theta}^2\bigr) + V(r),  (2.10)

L = m r^2 \dot{\theta} = m v b.  (2.11)

Elimination of the time derivatives gives

\frac{d\theta}{dr} = \pm\frac{b}{r^2\,\bigl[1 - b^2/r^2 - V(r)/E\bigr]^{1/2}},  (2.12)

the sign given by the radial velocity: positive for outward and negative for inward motion, respectively. The complete classical trajectory may now be determined by integrating Eqn. 2.12 from \infty in to the turning point a and out again. For our interests, only the overall effects of the collision are observed after the event. The relevant quantity is termed the deflection function (DF), \Theta, related to E and L by

\Theta(E, L) = \pi - 2\int_a^{\infty} \frac{b\,dr}{r^2\,\bigl[1 - b^2/r^2 - V(r)/E\bigr]^{1/2}},  (2.13)

which gives the functional dependence needed to calculate the deflection angle of a single collision trajectory.

We now move our focus from a single trajectory specified by an initial energy and impact parameter, to reflect the inability of experiments on a molecular scale to specify a single impact parameter. As we have discussed, the experimental results are reported in terms of the differential cross section, d\sigma_{ij}/d\Omega, and the collision cross section, \sigma_{ij}, which are notional areas leading to a particular angle and/or state. In experiments it is not possible to distinguish between positive and negative deflection angles, so we introduce the scattering angle \theta = |\Theta|, which is the magnitude of the DF and is limited to 0 \le \theta \le \pi. We observe that, apart from discontinuities at \theta = 0, \theta varies smoothly with b, and hence trajectories within an impact parameter range from b to b + db, defining an area 2\pi b\,db, are deflected into a corresponding solid angle d\Omega = 2\pi\sin\theta\,d\theta. The DCS is therefore

\frac{d\sigma}{d\Omega} = \frac{b}{\sin\theta}\left|\frac{db}{d\theta}\right|.  (2.14)

If more than one value of b leads to the same angle, we can appeal to the experiment and observe that the classical intensity I is increased; Eqn. 2.14 is then replaced by an appropriate sum,

\frac{d\sigma}{d\Omega} = \sum_k \frac{b_k}{\sin\theta\,\bigl|d\theta/db\bigr|_k},  (2.15)

where k is an index over all values of b leading to the same deflection angle \theta. The total elastic cross section is then defined as the integral of the differential form:

\sigma(E) = \int \frac{d\sigma}{d\Omega}\,d\Omega = 2\pi\int_0^{\pi} I(\theta, E)\,\sin\theta\,d\theta.  (2.16)

We observe that the expression for d\sigma/d\Omega displays two possible singularities, the so-called glory and rainbow effects in the classical differential cross section. The glory effect, which may occur at \theta = 0 (forward) or \theta = \pi (backward), is due to the \sin\theta in the denominator of Eqn. 2.15 evaluating to zero. The rainbow singularity, on the other hand, corresponds to an extremum of the deflection function, at which d\theta/db = 0. At this point the density of classical trajectories rises to infinity, in analogy to the phenomenon in optics which leads to rainbows. The principal deficiency of the classical theory is of course that no particle trajectory can ever be precisely defined, because of the inherent uncertainties associated with it. The fascinating effect of interference reveals the failure of classical mechanics at the rainbow angle, where many trajectories contribute to eliminate the singularity.

2.3 Quantum Mechanical Scattering Theory

The standard treatment of quantum mechanical scattering theory employs a stationary wave function and its associated interference pattern in place of the limiting deflection of a classical trajectory. The wave function represents the whole system and is independent of time, describing the behavior of the system over a long period. This is in contrast to time-dependent scattering theory, which we discuss later in this chapter. The starting point for this formulation is to determine the asymptotic form of the time-independent scattered wave function and relate its angular dependence to the effect of the scattering potential.

Formal scattering theory has been in development since the beginning of the formulation of quantum mechanics through the 1970's[3, 4].
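Before turning to the formal theory, the classical deflection function of Eqn. 2.13 can be made concrete. The sketch below (our own illustration, not from the text) evaluates Eqn. 2.13 by quadrature, using the substitution r = a + s^2 to tame the integrable singularity at the turning point, and checks the result against the classic Rutherford formula \Theta = 2\arctan(k/2Eb) for a repulsive Coulomb potential V(r) = k/r:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def deflection(E, b, V, r_max=1e3):
    """Deflection function Theta(E, b) of Eqn. 2.13 for a central
    potential V(r) with a single (outermost) turning point, e.g. a
    purely repulsive potential."""
    F = lambda r: 1.0 - (b / r)**2 - V(r) / E
    a = brentq(F, 1e-6, r_max)       # classical turning point, F(a) = 0
    def integrand(s):
        # r = a + s**2 removes the 1/sqrt(r - a) endpoint singularity
        r = a + s * s
        return 2.0 * s * (b / r**2) / np.sqrt(max(F(r), 1e-300))
    integral, _ = quad(integrand, 1e-8, np.sqrt(r_max), limit=200)
    return np.pi - 2.0 * integral

# Repulsive Coulomb V(r) = k/r: Rutherford gives Theta = 2 arctan(k/(2 E b))
k, E, b = 1.0, 1.0, 1.0
theta_num = deflection(E, b, lambda r: k / r)
theta_exact = 2.0 * np.arctan(k / (2.0 * E * b))
```

Truncating the radial integral at r_max introduces an error of order b/r_max, so the agreement here is at the few-times-10^{-3} level; a glory or rainbow search would simply scan this function over b.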
The main concern of the discipline is to rigorously define a scattering process in mathematical terms, to attempt a formal, exact solution of the time-dependent and the time-independent Schrödinger equations, and to define the formal tools with which to treat a scattering process. The purely formal solutions are usually obtained from an integral equation reformulation of the original Schrödinger equation, in both the time-dependent and time-independent cases. These developments are of high theoretical value but of little practical consequence, since no definite algorithm to solve the scattering problem can be devised in this way. Therefore, a complete presentation of the formal theory is beyond the scope of our discussion. Rather, the main concepts of the formal theory are discussed along with an introduction to the practical methods.

2.3.1 Time-Independent vs. Time-Dependent Scattering

A scattering process is a real time-dependent phenomenon, which can only be described in full detail by time-dependent scattering theory. This implies the solution of the time-dependent Schrödinger equation of the whole system,

H\psi(t) = i\hbar\,\frac{\partial\psi(t)}{\partial t},  (2.17)

when the Schrödinger picture is adopted. At the initial time, both the projectile and the target are described by the wave function \psi(t = -\infty). In the usual case of a state-to-state scattering problem, the initial wave function is a stationary eigenfunction of the whole Hamiltonian, with a trivial time dependence through a phase. However, it is usual in some types of simulations to employ not a single state but a linear combination of states forming a wave packet. In either case, at a remote final time the evolved wave function \psi(t = \infty) can be decomposed into a linear combination of the stationary eigenfunctions of the products. The coefficients of that combination then again have a trivial time dependence through their phases.
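The propagation in Eqn. 2.17 can be illustrated on a one-dimensional grid. The sketch below uses a split-operator scheme, our choice for illustration (the text does not prescribe an integrator), in units with \hbar = m = 1; the unitarity of the evolution conserves the norm of the packet:

```python
import numpy as np

hbar = m = 1.0   # illustrative units (an assumption, not from the text)

def split_operator_step(psi, V, dx, dt):
    """One step psi(t) -> psi(t+dt) for Eqn. 2.17 with H = T + V:
    half-kick in V, full drift in T (diagonal in k-space), half-kick in V."""
    k = 2.0 * np.pi * np.fft.fftfreq(len(psi), d=dx)
    psi = np.exp(-0.5j * V * dt / hbar) * psi
    psi = np.fft.ifft(np.exp(-0.5j * hbar * k**2 * dt / m) * np.fft.fft(psi))
    return np.exp(-0.5j * V * dt / hbar) * psi

# A Gaussian packet launched toward a soft barrier.
x = np.linspace(-40.0, 40.0, 1024)
dx = x[1] - x[0]
V = 0.5 * np.exp(-x**2)
psi = np.exp(-(x + 10.0)**2 / 4.0 + 1j * x)
norm0 = (np.abs(psi)**2).sum() * dx
for _ in range(200):
    psi = split_operator_step(psi, V, dx, dt=0.05)
norm1 = (np.abs(psi)**2).sum() * dx
```

This is the kernel of the grid-based "exact" wave-packet methods discussed later in the chapter; accuracy is controlled by the grid spacing and the time step.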
At any time of the evolution, -\infty < t < \infty, the time-dependent wave function carries the full description of the process. The stationary solutions of the time-independent Schrödinger equation for a scattering problem,

H\psi(E) = E\psi(E),  (2.19)

contain in the asymptotic region an incoming component describing the reactants and different outgoing components describing the products. Most of the relevant properties of the scattering process can be calculated from those outgoing components. However, all the intermediate details of a scattering process or a chemical reaction cannot be obtained in this way. This approach is an extension of the most traditional methods for calculating bound states in isolated molecules to the case of several colliding molecules involving some unbound translational states. This connection to the methods of electronic structure theory has also favored the bias toward time-independent schemes. Because of its historical precedence, the time-independent methods will be presented first. Furthermore, the time-independent approach directly addresses the stationary description of reactants and products, a problem which must necessarily be solved before attempting a time-dependent solution. It is also important to remember that some of the techniques further developed within END theory have their origins in the time-independent theory.

2.3.2 Fundamentals of the Time-Independent Theory

For the simple case of elastic scattering by a localized potential V(r), the wave function depends only on the interparticle distance r. Following Merzbacher[5] and Sakurai[2], the motion of the particle is described by the Hamiltonian

H = \frac{p^2}{2m} + V = H_0 + V,  (2.20)

where H_0 is the free particle Hamiltonian. The solution to the time-independent Schrödinger equation is then given by the Lippmann-Schwinger equation[2],

|\psi^{(\pm)}\rangle = |\phi\rangle + \frac{1}{E - H_0 \pm i\varepsilon}\,V\,|\psi^{(\pm)}\rangle,  (2.21)

which in the position representation takes the asymptotic form of a plane wave plus an outgoing spherical wave modulated by the scattering amplitude f(\theta) (Eqn. 2.22), so that the differential cross section is

\frac{d\sigma}{d\Omega} = |f(\theta)|^2.  (2.23)

Using this construction, the problem becomes that of determining f(\theta) for a given potential function V(r). We are required to turn our attention from the asymptotic region to the range where the potential function is localized.
This is the launching point for various techniques of determining the form of the scattering amplitude.

2.3.3 Scattering Amplitude

In the time-independent formulation given by Sakurai[2], the scattering amplitude is

f(\theta) = -\frac{1}{4\pi}\,\frac{2m}{\hbar^2}\,(2\pi)^3\,\langle\mathbf{k}'|V|\psi^{(+)}\rangle,  (2.24)

where \mathbf{k}' represents the propagation vector reaching the observation point. The outgoing wave function \psi^{(+)} and the potential V in Eqn. 2.24 are the fundamental unknowns of this formulation. We leave the discussion of the calculation of the potential term for later. The determination of \psi^{(+)} is realized through several different approximations applicable in different regimes. Here we will discuss the Born approximation and the method of partial waves, which comprise the main quantum mechanical approximations; then we will address the semiclassical JWKB and Eikonal approximations.

2.4 Semiclassical Theory of Scattering

2.4.1 Born Approximation

If the assumption is made that the effect of the scattering potential is not very strong, we may approximate \psi^{(+)} by the free particle solution \phi. In this way we treat the effect of the potential V as a perturbation of the free Hamiltonian. In the fashion of a perturbation expansion we can calculate the first-order Born amplitude, which is denoted by

f^{(1)}(\theta) = -\frac{m}{2\pi\hbar^2}\int e^{i(\mathbf{k} - \mathbf{k}')\cdot\mathbf{r}'}\,V(\mathbf{r}')\,d^3r'.  (2.25)

We can generate an iterative solution by introducing the transition operator T such that

V|\psi^{(+)}\rangle = T|\phi\rangle.  (2.26)

Using the Lippmann-Schwinger equation, Eqn. 2.21, we obtain an expression for T:

T = V + V\,\frac{1}{E - H_0 + i\varepsilon}\,T.  (2.27)

The scattering amplitude can then be written as

f(\theta) = -\frac{1}{4\pi}\,\frac{2m}{\hbar^2}\,(2\pi)^3\,\langle\mathbf{k}'|T|\mathbf{k}\rangle.  (2.28)

The iterative solution for T is given by

T = V + V\,\frac{1}{E - H_0 + i\varepsilon}\,V + V\,\frac{1}{E - H_0 + i\varepsilon}\,V\,\frac{1}{E - H_0 + i\varepsilon}\,V + \cdots.  (2.29)

Correspondingly, we can expand f as follows:

f(\theta) = \sum_{n=1}^{\infty} f^{(n)}(\theta),  (2.30)

where n indicates the number of times V enters the expansion.
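For a central potential, the Fourier transform in Eqn. 2.25 reduces to a one-dimensional radial integral in the momentum transfer q = |\mathbf{k} - \mathbf{k}'| = 2k\sin(\theta/2). The following sketch (our illustration, in units with \hbar = m = 1) checks the quadrature against the closed-form result for a Yukawa potential:

```python
import numpy as np
from scipy.integrate import quad

hbar = m = 1.0   # illustrative units (an assumption, not from the text)

def born_amplitude(V, q):
    """First-order Born amplitude (Eqn. 2.25) for a central potential,
    reduced to f1(q) = -(2m/hbar^2) (1/q) * integral_0^inf r sin(qr) V(r) dr."""
    integrand = lambda r: r * np.sin(q * r) * V(r)
    I, _ = quad(integrand, 0.0, np.inf, limit=200)
    return -(2.0 * m / hbar**2) * I / q

# Yukawa test: V(r) = V0 exp(-mu r)/r has the closed form
# f1 = -(2 m V0 / hbar^2) / (q^2 + mu^2)
V0, mu, q = 0.5, 1.3, 0.7
f_num = born_amplitude(lambda r: V0 * np.exp(-mu * r) / r, q)
f_exact = -2.0 * V0 / (q**2 + mu**2)
```

The q-dependence of f^{(1)} through 1/(q^2 + \mu^2) is the familiar screened-Coulomb analogue of Rutherford scattering.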
2.4.2 Partial Wave Expansion

In this method we assume the potential V is spherically symmetric, which allows us to expand the scattering amplitude in terms of angular momentum eigenfunctions and phases. As a consequence of our assumption about the potential, the transition operator T (see Eqn. 2.29) commutes with L^2 and \mathbf{L}; therefore, T is a scalar operator. Application of the Wigner-Eckart theorem to a scalar operator immediately gives

\langle E', l', m'|T|E, l, m\rangle = T_l(E)\,\delta_{ll'}\,\delta_{mm'},  (2.31)

where E is the initial energy and l and m are the angular momentum eigenvalues. Clearly T is diagonal in both l and m. Turning our attention to Eqn. 2.28 we can show

f(\mathbf{k}', \mathbf{k}) = -\frac{4\pi^2}{|\mathbf{k}|}\sum_l\sum_m T_l(E)\,Y_l^m(\hat{\mathbf{k}}')\,Y_l^{m*}(\hat{\mathbf{k}}).  (2.32)

Since we have assumed the incident beam is moving in the z direction and \mathbf{k}' is the outgoing wave vector at the angle \theta, we see that

f(\theta) = \sum_l (2l+1)\,f_l(E)\,P_l(\cos\theta),  (2.33)

where f_l(E) = -\pi T_l(E)/|\mathbf{k}|. Considering the conservation of angular momentum, and that in our construction the flux of particles is conserved, it is possible to relate f_l to the phase of the outgoing wave,

f_l = \frac{e^{i\delta_l}\sin\delta_l}{k}.  (2.34)

Then the scattering amplitude is

f(\theta) = \frac{1}{k}\sum_{l=0}^{\infty}(2l+1)\,e^{i\delta_l}\sin\delta_l\,P_l(\cos\theta),  (2.35)

with \delta_l real. The effect of this expansion is to change the problem from calculating the scattering amplitude to evaluating the effect of the potential on the phase, which in some cases may be significantly easier. The differential cross section can be obtained just as in Eqn. 2.23, and the total cross section can be succinctly given by

\sigma_{tot} = \int |f(\theta)|^2\,d\Omega = \frac{4\pi}{k^2}\sum_l (2l+1)\sin^2\delta_l.  (2.36)

2.4.3 JWKB Approximation

Writing the one-dimensional wave function as \psi(x) = e^{i\eta(x)}, the Schrödinger equation (Eqn. 2.37) reduces to

i\eta'' - (\eta')^2 + k^2(x) = 0, \qquad k^2(x) = \frac{2m\,[E - V(x)]}{\hbar^2}.  (2.39)

The JWKB approximation depends on the assumption that \eta is a slowly varying function of x, and Eqn. 2.39 is solved by iteration; the second iteration gives the familiar phase integral with a slowly varying amplitude correction (Eqns. 2.40 and 2.41).

2.4.4 Eikonal Approximation

The Eikonal approximation[6] is applied to cases in which the scattering potential V varies very little over a distance of the order of the wavelength \lambda of the scatterer.
Note that V itself need not be small, only slowly varying, which is in contrast to the applicability of the Born approximation (Sec. 2.4.1). This condition allows us to replace the exact wave function \psi^{(+)} by a semiclassical wave function,

\psi^{(+)} \sim e^{iS(\mathbf{x})/\hbar},  (2.42)

where S(\mathbf{x}) can be interpreted as the phase of the wave function. From the probability flux we can observe that

\mathbf{j} \propto \nabla S,  (2.43)

allowing us to interpret \nabla S as the "momentum" of the wave packet. Using Eqn. 2.42 in the time-independent Schrödinger equation leads to the Hamilton-Jacobi equation for S,

\frac{(\nabla S)^2}{2m} + V = E = \frac{\hbar^2 k^2}{2m}.  (2.44)

Here again we have transferred the difficult problem of calculating \psi^{(+)} to the calculation of a phase. In the high energy regime the computation of S is often made with the additional approximation of straight-line trajectories for the semiclassical path of the scattered wave packet.

2.5 Time-Dependent Scattering Theory

While the time-independent methods have dominated the development of scattering theory for most of the history of quantum mechanics, it must be remembered that scattering is a time-dependent process. Time-dependent theories were not studied in earnest until the advent of modern computers in the 1970s, one of the main reasons being that the time-dependent equations are far more demanding computationally than the time-independent ones. This young branch of scattering theory is not as developed and established as its counterpart, making a concise summary more difficult. Several reviews have been compiled, among them Deumens et al.[8] and Kroger[9]. The time-dependent methods can be roughly organized into two camps. In the first, the time dependence is reserved for the nuclear degrees of freedom only. In the second, the time dependence is given to all the degrees of freedom, nuclear and electronic.
The first approach is time-dependent nuclear dynamics on one or more predetermined potential energy surfaces (PES). In calculating the PES, the electronic degrees of freedom have been accounted for before the dynamical equations are solved and propagated. In the second approach, the time evolution of electrons and nuclei is performed simultaneously. However, this calculation is very intensive if PES are not used, so practical methods have so far been mostly restricted to classical, semiclassical, or quasiclassical descriptions of the nuclear degrees of freedom.

2.5.1 Wave-Packet Propagation: Exact Methods

Methods in which the numerical integration of the nuclear time-dependent Schrödinger equation is performed directly are called exact[10]. Although these methods contain approximations to the structure of the Hamiltonian and other numerical approximations, they can be carried to any desired level of accuracy. The scattering is simulated using a wave packet, either represented on a discretized grid of points or expanded in a convenient basis set.

2.5.2 Approximate Time-Dependent Methods

Time-dependent self-consistent field. In these methods the exact time evolution is replaced with numerically easier approximations and models. One such method is the Time-Dependent Self-Consistent Field (TDSCF) method introduced by Bisseling et al.[11]. The nuclei are represented by Hartree products of wave packets on a grid; each nucleus then moves in the average field of the others. Further developments of this method add several exit channels and include some correlation effects in the nuclear dynamics through a multi-reference scheme of Hartree products. These methods can be used for scattering, photodissociation, atomic transfer and vibrational dissociation.

Trajectory surface hopping. The trajectory surface hopping (TSH) method uses a probabilistic description[12, 13] to make the solution of the time-dependent equations easier.
Instead of solving the coupled set of equations at all nuclear configurations, a hopping probability for the electronic configuration is computed at each instant. Thus the equation for the classical nuclei depends only on the hopping probability between the states involved, usually just two. Direct calculation of the dynamics of the electronic states and phases is ignored in the TSH method[14].

Car-Parrinello. A very popular method for dynamics is the Car-Parrinello (CP) method[15]. The original method was an algorithm for solving minimization problems, which leads to the solution of eigenvalue problems. While this method could be used to solve any number of eigenvalue equations, it has primarily been applied using the equations derived from Density Functional Theory (DFT).

Time-dependent Hartree-Fock. This large class of methods is based on the Hartree-Fock equation for the dynamics[16] and utilizes semiclassical or classical descriptions of the atomic nuclei. These methods consider an explicit dynamical description of the electronic state. Sometimes the full ab initio Hamiltonian is considered; in other cases model Hamiltonians are set up to drive the dynamics. The coupling between the electrons and nuclei in these models is through the average potential energy surface: the nuclei and the electrons affect each other only through their instantaneous positions in the Fock operator. The result is that high-energy, non-chemical scattering is not properly handled. The ad hoc solution to this problem is the addition of electron translation factors.

CHAPTER 3
END THEORY OF TIME DEPENDENT MOLECULAR DYNAMICS

Electron Nuclear Dynamics (END)[17] is developing into a complete, general theory of molecular processes. It is one of only a few theories in the dynamics community whose foundations rest on the Time Dependent Variational Principle (TDVP).
This novel approach generates a theoretical framework for calculating the full dynamics of molecular collisions without the use of the Born-Oppenheimer approximation, thus allowing the study of nonadiabatic interactions. The richness of the theory can be implemented in a hierarchy of complexity ranging from the semiclassical to the fully quantum, and at every level of realization there is fertile ground of interesting physical and chemical problems to investigate. Currently the END theory is implemented in the ENDyne[18] code. This level of implementation uses classical nuclei in the narrow wave packet limit and a single complex determinant to describe the electrons. Even at this basic level of approximation, several applications have shown that END can capture much of the physics of nonadiabatic molecular processes[8, 19, 20, 21, 22, 23, 24, 25]. It has also become clear that there are limitations to this level of the theory; further developments of the implementation are in progress.

3.1 The END Theory

The starting point for the END theory is the TDVP[26], which is the quantum mechanical analogue of the classical principle of least action, sometimes called "Hamilton's Principle". The application of the principle of least action to a physical problem often leads to more computationally tractable equations of motion than, for example, the Schrödinger equation. Here we will introduce the quantum mechanical action A and demonstrate its connection to the Schrödinger equation by introducing a general set of complex variational parameters. We will then derive the equations of motion using the overlap matrix and the energy. Finally we will discuss the implementations of END, specifically the implementation featured in the ENDyne code. The quantum mechanical action is given by

A = \int_{t_1}^{t_2} L\,dt,  (3.1)

with the quantum mechanical Lagrangian defined as (\hbar = 1)

L = \frac{\langle\Psi|\,\tfrac{i}{2}\bigl(\overrightarrow{\partial}_t - \overleftarrow{\partial}_t\bigr) - H\,|\Psi\rangle}{\langle\Psi|\Psi\rangle}.  (3.2)

Here H is the quantum mechanical Hamiltonian of the whole system.
Note the "left-acting derivative" used in the Lagrangian; it is defined for a differentiable function f by

f\,\overleftarrow{\partial}_t = \frac{\partial f}{\partial t}.  (3.3)

This definition is necessary if we want to promote the Lagrangian of Eqn. 3.2 to an operator in Hilbert space, L \to \Theta, and ensure that it is Hermitian. We make the ansatz that \Psi, the wave function for the whole system, depends on a set of complex variational parameters \zeta = \{\zeta_1, \zeta_2, \zeta_3, \ldots, \zeta_M\} which depend formally on time, \zeta_i = \zeta_i(t). We introduce the column vector |\zeta\rangle as the representation of \Psi = \Psi(\zeta) = |\zeta\rangle in terms of the complex parameters \{\zeta_i\}. Here we observe that these complex parameters, along with their time derivatives, serve as generalized positions and velocities for our quantum mechanical Lagrangian. Thus

L = L(\zeta, \zeta^*, \dot\zeta, \dot\zeta^*),  (3.4)

where \dot\zeta = \frac{d\zeta}{dt} are the generalized velocities. The time evolution of the system is determined using the TDVP, which requires that the action be an extremum with respect to its parameters, so

\delta A = \delta\int_{t_1}^{t_2} L\,dt = 0,  (3.5)

subject to the boundary conditions

\delta|\Psi\rangle = \delta\langle\Psi| = 0  (3.6)

at t = t_1 and t_2[21]. Substituting our expressions and employing the above boundary conditions and integration by parts gives us

\delta A = \delta\int_{t_1}^{t_2} dt\,\frac{\langle\zeta|\Theta|\zeta\rangle}{\langle\zeta|\zeta\rangle} = 0.  (3.7)

Since \Psi, and equivalently \zeta, can be varied throughout the full Hilbert space, the integrand of Eqn. 3.7 must satisfy

\frac{\Theta|\zeta\rangle}{\langle\zeta|\zeta\rangle} = 0.  (3.8)

If we explicitly consider the phase \gamma of the wave function by writing |\zeta\rangle \to e^{i\gamma}|\zeta\rangle, we can derive the following expression from Eqn. 3.8:

\langle\zeta|e^{-i\gamma}\,\Theta\,e^{i\gamma}|\zeta\rangle = 0 \;\Longrightarrow\; \langle\zeta|\zeta\rangle\,\dot\gamma + \langle\zeta|\Theta|\zeta\rangle = 0.  (3.9)

Substituting these results into Eqn. 3.8 gives

\Bigl[i\hbar\frac{\partial}{\partial t} - H\Bigr]|\zeta\rangle = \dot\gamma\,|\zeta\rangle \;\Longrightarrow\; \Bigl[i\hbar\frac{\partial}{\partial t} - H\Bigr]e^{i\gamma}|\zeta\rangle = 0.  (3.10)

This result is the Schrödinger equation. The preceding derivation provides the justification for the form of the action presented in Eqn. 3.4.
If we choose to specify a constrained form for the wave function \Psi, then we restrict ourselves to particular regions of the Hilbert space, resulting in a dynamical evolution that is an approximate solution of the Schrödinger equation. This idea forms the kernel from which all realizations of the END theory are derived. In choosing the set of parameters \zeta and the form of the Hamiltonian H, we are able to derive equations of motion that are relevant to many different molecular processes. The primary advantage of this theory lies in the fact that we have not constrained the dynamics of the system except through our choice of parameterization. Hence we are not limited to investigating only adiabatic processes, as molecular studies using PES often are. Even before making any assumptions about the form of \Psi we can derive a set of general equations of motion. Let us define the overlap S = S(\zeta, \zeta^*) = \langle\zeta|\zeta\rangle and the energy E = E(\zeta, \zeta^*) = \langle\zeta|H|\zeta\rangle/\langle\zeta|\zeta\rangle. Then Eqn. 3.7 can be expressed in the \zeta representation as

L = \frac{i}{2}\sum_\alpha\Bigl(\frac{\partial\ln S}{\partial\zeta_\alpha}\,\dot\zeta_\alpha - \frac{\partial\ln S}{\partial\zeta_\alpha^*}\,\dot\zeta_\alpha^*\Bigr) - E.  (3.11)

Since \delta\zeta_\alpha^* and \delta\zeta_\alpha are independent variations, the two sums in the integrand must be separately zero, resulting in

i\sum_\beta C_{\alpha\beta}\,\dot\zeta_\beta = \frac{\partial E}{\partial\zeta_\alpha^*}, \qquad -i\sum_\beta C^*_{\alpha\beta}\,\dot\zeta_\beta^* = \frac{\partial E}{\partial\zeta_\alpha},  (3.12)

where we define the complex Hermitian matrix C = \{C_{\alpha\beta}\} = \partial^2\ln S/\partial\zeta_\alpha^*\partial\zeta_\beta. The general END dynamical equations are given by Eqn. 3.12, and they may be recast in matrix form as

\begin{pmatrix} i\mathbf{C} & \mathbf{0} \\ \mathbf{0} & -i\mathbf{C}^* \end{pmatrix}\begin{pmatrix} \dot{\boldsymbol\zeta} \\ \dot{\boldsymbol\zeta}^* \end{pmatrix} = \begin{pmatrix} \partial E/\partial\boldsymbol\zeta^* \\ \partial E/\partial\boldsymbol\zeta \end{pmatrix}.  (3.13)

We define a generalized Poisson bracket by

\{f, g\} = -i\sum_{\alpha,\beta}\Bigl(\frac{\partial f}{\partial\zeta_\alpha}\,C^{-1}_{\alpha\beta}\,\frac{\partial g}{\partial\zeta_\beta^*} - \frac{\partial g}{\partial\zeta_\alpha}\,C^{-1}_{\alpha\beta}\,\frac{\partial f}{\partial\zeta_\beta^*}\Bigr),  (3.14)

with f(\zeta, \zeta^*) and g(\zeta, \zeta^*) differentiable functions. It follows that \dot\zeta = \{\zeta, E\} and \dot\zeta^* = \{\zeta^*, E\}. So we can state that the generalized Poisson brackets show that the time evolution of the wave function parameters is governed by Hamilton-like equations. Moreover, the additional relations

\{\zeta_\alpha, \zeta_\beta\} = \{\zeta_\alpha^*, \zeta_\beta^*\} = 0 \quad\text{and}\quad \{\zeta_\alpha, \zeta_\beta^*\} = -i\,C^{-1}_{\alpha\beta}  (3.15)

show that \zeta^* and \zeta behave as "classical" coordinates. Observe that if the matrix C were the unit matrix, then Eqn. 3.15 would reduce to the commutation rules for a flat phase space. Our generalization has led to a curved phase space.

3.2 The END Wave Function

At the time of this writing, only the first nontrivial level of END theory has been implemented, in the code ENDyne[18]. In this model the unnormalized END wave function in the lab frame is written as

\Psi_{END}(\mathbf{X}, \mathbf{x}, t) = F_{nucl}\bigl(\mathbf{X}; \mathbf{R}(t), \mathbf{P}(t)\bigr)\,f_{el}\bigl(\mathbf{x}; z(t), \mathbf{R}(t)\bigr)\,\exp\Bigl[\frac{i}{\hbar}\,\gamma_{total}(t)\Bigr],  (3.16)

where \mathbf{X} and \mathbf{x} are the nuclear and electronic coordinates, \mathbf{R}(t), \mathbf{P}(t) and z(t) are the variational parameters, and \gamma_{total}(t) is the total phase. The time dependence of the wave function enters only through the parameters and the phase. We describe this wave function as the quasiclassical single-determinantal END wave function (QCSD END); it has three important parts: the nuclear wave function F_{nucl}(\mathbf{X}; \mathbf{R}(t), \mathbf{P}(t)), the electronic wave function f_{el}(\mathbf{x}; z(t), \mathbf{R}(t)), and the phase. The nuclear part consists of a simple product of unnormalized, so-called "frozen" Gaussian wave packets \Phi(\mathbf{X}_k; \mathbf{R}_k(t), \mathbf{P}_k(t)),

F_{nucl}\bigl(\mathbf{X}; \mathbf{R}(t), \mathbf{P}(t)\bigr) = \prod_{k=1}^{N_{nucl}}\Phi\bigl(\mathbf{X}_k; \mathbf{R}_k(t), \mathbf{P}_k(t)\bigr),  (3.17)

where

\Phi\bigl(\mathbf{X}_k; \mathbf{R}_k(t), \mathbf{P}_k(t)\bigr) = \exp\Bigl[-a_k\bigl(\mathbf{X}_k - \mathbf{R}_k(t)\bigr)^2 + \frac{i}{\hbar}\,\mathbf{P}_k(t)\cdot\bigl(\mathbf{X}_k - \mathbf{R}_k(t)\bigr)\Bigr].  (3.18)

The variational parameters \mathbf{R}(t), \mathbf{P}(t) and the constants a_k are assumed to be real. The construction of these wave functions establishes the connection of these parameters with the average positions and momenta of the wave packets, and the constant a_k can be related to the standard deviation, or width, of the Gaussian functions. The physical meaning of these parameters and constants implies that each nuclear wave packet can be seen as a parametric plane wave,

\exp\Bigl[\frac{i}{\hbar}\,\mathbf{P}_k(t)\cdot\bigl(\mathbf{X}_k - \mathbf{R}_k(t)\bigr)\Bigr],  (3.19)

localized around the position \mathbf{R}_k(t) by a simple Gaussian factor,

\exp\Bigl[-a_k\bigl(\mathbf{X}_k - \mathbf{R}_k(t)\bigr)^2\Bigr].  (3.20)

Furthermore, since the constants a_k are time-independent and the evolution is through the TDVP, each nuclear Gaussian remains a Gaussian during the evolution without changing shape. The nuclear wave packets change only in their positions and momenta.
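A minimal one-dimensional sketch of the frozen Gaussian of Eqn. 3.18 (our own illustration, with \hbar = 1) confirms that R and P do act as the average position and momentum of the packet:

```python
import numpy as np

hbar = 1.0   # illustrative units (an assumption, not from the text)

def frozen_gaussian(X, R, P, a):
    """Unnormalized 1-D frozen Gaussian of Eqn. 3.18:
    exp[-a (X-R)^2 + (i/hbar) P (X-R)].  The width a is fixed; only
    R(t) and P(t) would evolve under the TDVP."""
    return np.exp(-a * (X - R)**2 + 1j * P * (X - R) / hbar)

X = np.linspace(-12.0, 12.0, 4001)
dx = X[1] - X[0]
psi = frozen_gaussian(X, R=1.5, P=2.0, a=0.5)
rho = np.abs(psi)**2
x_mean = (X * rho).sum() / rho.sum()                 # expect <X> = R
p_dens = (np.conj(psi) * np.gradient(psi, dx)).imag  # Im(psi* dpsi/dx)
p_mean = hbar * p_dens.sum() / rho.sum()             # expect <P> = P
```

Because a is held fixed, propagating this packet amounts to updating the two numbers R and P, which is the computational advantage of the frozen-Gaussian ansatz.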
Contrasting this method with unconstrained propagation of nuclear wave packets, we observe that the QCSD END nuclear wave function is not allowed to spread, change shape, or split into separate packets. The frozen wave packets are limited in this regard but have a great computational advantage over the general method. The use of frozen Gaussian wave packets has been widespread[3] and is considered quite valid if one chooses to ignore tunneling. In most molecular experiments the scattering process is such that the width of the nuclear wave function can be considered negligible. Additionally, this level of wave function for the nuclei allows a description of the internal motion of the molecular components of the scattering process.

The electronic function f_{el}(\mathbf{x}; z(t), \mathbf{R}(t)) is a single-determinant wave function throughout the time evolution. This type of function has long been established in the TDHF treatment of nuclear many-body theory[28, 29, 30]. Electron Nuclear Dynamics theory has an attractive feature in its ability to treat both the nuclear and electronic degrees of freedom simultaneously. Additionally, this single determinant is related to either a restricted or an unrestricted Hartree-Fock description of the reagent and product ground states. The difficult problem of constructing a wave function which remains a single determinant while being evolved was solved by Thouless[29, 30]. The electronic QCSD END wave function is presented here following the original derivation by Thouless. It is meaningful to divide the set of K spin orbitals into N occupied and K - N unoccupied spin orbitals. The linear array q refers to the set of all spin orbitals; then q^\bullet and q^\circ refer to the occupied and unoccupied orbitals, respectively. The atomic spin orbitals may also be partitioned into two sets.
We write them in a correspondingly partitioned form (Eqns. 3.21 and 3.22). Introducing a general unitary transformation U of the orthonormal molecular spin orbital basis, one can write for the basis field operators

\bigl(c^{\bullet\dagger}\;\; c^{\circ\dagger}\bigr) = \bigl(b^{\bullet\dagger}\;\; b^{\circ\dagger}\bigr)\begin{pmatrix} U^{\bullet\bullet} & U^{\bullet\circ} \\ U^{\circ\bullet} & U^{\circ\circ} \end{pmatrix},  (3.23)

and obtain for a determinantal state[31] an expansion in which each occupied orbital acquires admixtures of the unoccupied orbitals weighted by the blocks of U, where the invariance of a determinantal wave function under a linear transformation of its occupied spin orbitals, up to a constant a, has been used. Defining the Thouless parameters z_{ph} from the blocks of U and the reference state

|\Psi_0\rangle = \prod_{h=1}^{N} b_h^{\dagger}\,|\mathrm{vac}\rangle,  (3.24)

one can write the unnormalized Thouless determinant |z, \mathbf{R}\rangle = f_{el}(\mathbf{x}; z(t), \mathbf{R}(t)) as

|z, \mathbf{R}\rangle = \prod_{h=1}^{N}\Bigl(1 + \sum_{p=N+1}^{K} b_p^{\dagger}\,z_{ph}\,b_h\Bigr)|\Psi_0\rangle = \exp\Bigl(\sum_{h=1}^{N}\sum_{p=N+1}^{K} z_{ph}\,b_p^{\dagger} b_h\Bigr)|\Psi_0\rangle.  (3.25)

Here the nilpotency of the operators b_p^{\dagger}b_h has been used. The parameters z_{ph} are complex numbers. The determinantal wave function of this state is \det\chi_h(\mathbf{x}_n) in terms of the occupied dynamical orbitals

\chi_h = \psi_h + \sum_{p=N+1}^{K} \psi_p\,z_{ph}.  (3.26)

These orbitals are not orthogonal. The function |z, \mathbf{R}\rangle = f_{el}(\mathbf{x}; z(t), \mathbf{R}(t)) is the single-determinant wave function of the QCSD END theory. If the arbitrary reference state |\Psi_0\rangle is selected as the HF ground state of the system, then the Thouless determinant contains this state plus all its excitations. Finally, the QCSD END total phase \gamma_{total}(t) is the quantum action corresponding to the QCSD END wave function \Psi_{END}(\mathbf{X}, \mathbf{x}, t).

CHAPTER 4
OVERVIEW OF ROVIBRATIONAL REACTION DYNAMICS

In this chapter we provide a brief overview of classical and quantum mechanical rovibrational dynamics. First, we review classical vibrations. Second, we discuss the quantum mechanical description of the harmonic oscillator. Third, the classical theory of rotations is examined. Finally, the quantum mechanical description of rotations is presented.
4.1 Classical Theory of Vibrations

A system is said to be in equilibrium when the generalized forces acting on the system vanish,

    Q_i = -\left(\frac{\partial V}{\partial q_i}\right)_0 = 0.    (4.1)

The potential energy therefore has an extremum at the equilibrium configuration of the system, q_{01}, q_{02}, \ldots, q_{0n}. If the initial configuration is at the equilibrium position with zero initial velocities \dot{q}_i, then the system will remain in equilibrium, like a pendulum at rest. In the case of a stable equilibrium, one for which small displacements result in small bounded motion about the resting position, all functions can be expanded in a Taylor series and we retain only the lowest order terms. We naturally cast our system in terms of the deviations \eta_i from equilibrium:

    \eta_i = q_i - q_{0i}.    (4.2)

Using the \eta_i as the new generalized coordinates of the motion and expanding the potential energy about q_{0i}, we obtain

    V(q_1, q_2, \ldots, q_n) = V(q_{01}, q_{02}, \ldots, q_{0n}) + \left(\frac{\partial V}{\partial q_i}\right)_0 \eta_i + \frac{1}{2}\left(\frac{\partial^2 V}{\partial q_i \partial q_j}\right)_0 \eta_i \eta_j + \cdots,    (4.3)

where we have imposed the Einstein summation convention. As we see from Eqn. 4.1 the linear term vanishes, and we shift the zero of potential energy to eliminate the first term in the series. We are then left with the quadratic terms as the first approximation to V:

    V = \frac{1}{2}\left(\frac{\partial^2 V}{\partial q_i \partial q_j}\right)_0 \eta_i \eta_j = \frac{1}{2} V_{ij}\,\eta_i \eta_j,    (4.4)

where the second derivatives of V have been designated by the constants V_{ij}. A series expansion can also be obtained for the kinetic energy. The kinetic energy is a homogeneous quadratic function of the velocities when the generalized coordinates do not explicitly depend on time:

    T = \frac{1}{2} m_{ij}\,\dot{q}_i \dot{q}_j = \frac{1}{2} m_{ij}\,\dot{\eta}_i \dot{\eta}_j.    (4.5)

The coefficients m_{ij} are functions of the coordinates q_k, but they may be expanded in a Taylor series about the equilibrium configuration just as we did for V. Keeping only the quadratic terms, we can therefore write the kinetic energy as

    T = \frac{1}{2} T_{ij}\,\dot{\eta}_i \dot{\eta}_j.    (4.6)

We should observe that both V_{ij} and T_{ij} are symmetric, since the individual terms are unaffected by an interchange of indices.
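The quadratic expansion above can be checked numerically. The sketch below uses an illustrative one-dimensional potential V(q) = 1 - cos(q), which is not from the text, to show that the linear term vanishes at a stable equilibrium and that the quadratic form of Eqn. 4.4 reproduces the potential for small displacements.

```python
import numpy as np

# A one-dimensional check of the expansion in Eqns. (4.3)-(4.4) for the toy
# potential V(q) = 1 - cos(q), which has a stable equilibrium at q0 = 0.
def V(q):
    return 1.0 - np.cos(q)

q0, h = 0.0, 1e-5
dV = (V(q0 + h) - V(q0 - h)) / (2 * h)            # linear term: vanishes at q0
V2 = (V(q0 + h) - 2 * V(q0) + V(q0 - h)) / h**2   # the constant V_ij analogue

eta = 0.1
quad = 0.5 * V2 * eta**2    # quadratic approximation, Eqn. (4.4)
print(dV)                   # ~0: the linear term vanishes at equilibrium
print(quad, V(q0 + eta))    # quadratic form tracks the exact potential
```

For larger displacements the neglected higher-order terms become visible, which is exactly the small-oscillation restriction discussed in the text.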
Using the \eta_i as the generalized coordinates, the Lagrangian is now given by

    L = \frac{1}{2}\left(T_{ij}\,\dot{\eta}_i \dot{\eta}_j - V_{ij}\,\eta_i \eta_j\right)    (4.7)

and leads to the following n equations of motion:

    T_{ij}\,\ddot{\eta}_j + V_{ij}\,\eta_j = 0,    (4.8)

where explicit use has been made of the symmetry property of the V_{ij} and T_{ij} coefficients. We must then solve this set of simultaneous differential equations involving all of the coordinates \eta_i to obtain the motion near the equilibrium. The equations of motion are linear differential equations with constant coefficients, leading us to make the ansatz that the solution has the form

    \eta_i = C a_i e^{-i\omega t}.    (4.9)

Here C a_i gives the complex amplitude of the oscillation for each coordinate \eta_i, the factor C being introduced for convenience as a scale factor. Clearly, it is the real part of Eqn. 4.9 that corresponds to the real motion. Substituting our ansatz into the equations of motion leads to

    \left(V_{ij} - \omega^2 T_{ij}\right) a_j = 0.    (4.10)

These equations constitute n linear homogeneous equations for the a_i's and consequently can have a solution only if the determinant of the coefficients vanishes:

    \begin{vmatrix} V_{11} - \omega^2 T_{11} & V_{12} - \omega^2 T_{12} & \cdots \\ V_{21} - \omega^2 T_{21} & V_{22} - \omega^2 T_{22} & \cdots \\ \vdots & \vdots & \ddots \end{vmatrix} = 0.    (4.11)

We have in effect an algebraic equation of the nth degree for \omega^2, and the roots of the determinant provide the frequencies for which Eqn. 4.9 represents a correct solution to the equations of motion. For each of these values of \omega^2 the equations may be solved for the amplitudes a_i, or more precisely, for n - 1 of the amplitudes in terms of the remaining a_i. The general solution of the equations of motion may now be written as a summation over an index k:

    \eta_i = C_k a_{ik} e^{-i\omega_k t},    (4.12)

there being a complex scale factor C_k for each resonant frequency. We should note that we are only interested in the real part of our equations of motion; consequently we ignore the negative values of \omega.
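The secular condition of Eqn. 4.11 can be solved numerically. The sketch below uses an illustrative system not taken from the text: two equal masses m attached to fixed walls and to each other by springs of constant k, for which the normal frequencies are known to be sqrt(k/m) and sqrt(3k/m).

```python
import numpy as np

# Solving the secular problem of Eqns. (4.10)-(4.11) for two equal masses m
# joined to walls and to each other by springs of constant k, so that
# T = m * identity and V is the familiar tridiagonal matrix.
m, k = 1.0, 1.0
T = m * np.eye(2)
V = np.array([[2.0 * k, -k],
              [-k, 2.0 * k]])

# The roots of det(V - w^2 T) = 0 are the eigenvalues of T^{-1} V.
w2 = np.sort(np.linalg.eigvals(np.linalg.solve(T, V)).real)
print(np.sqrt(w2))   # normal frequencies sqrt(k/m) and sqrt(3k/m)
```

The lower frequency corresponds to the two masses swinging in phase, the higher to the out-of-phase motion in which the middle spring is stretched.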
It is possible to transform the \eta_i to a new set of generalized coordinates that are all simple periodic functions of time, a set of variables known as the normal coordinates. We define a new set of coordinates \zeta_j related to the original coordinates \eta_i by the equations

    \eta_i = a_{ij}\,\zeta_j,    (4.13)

or in matrix terms

    \boldsymbol{\eta} = A\,\boldsymbol{\zeta}.    (4.14)

Importantly, A diagonalizes V by a congruence transformation, and the potential therefore reduces simply to

    V = \frac{1}{2}\,\omega_k^2\,\zeta_k^2.    (4.15)

Similarly, the kinetic energy transforms to

    T = \frac{1}{2}\,\dot{\zeta}_k \dot{\zeta}_k.    (4.16)

Returning to the Lagrangian, we now have

    L = \frac{1}{2}\left(\dot{\zeta}_k \dot{\zeta}_k - \omega_k^2\,\zeta_k^2\right),    (4.17)

so that the Lagrange equations for the \zeta_k are

    \ddot{\zeta}_k + \omega_k^2\,\zeta_k = 0.    (4.18)

The immediate solutions are

    \zeta_k = C_k e^{-i\omega_k t}.    (4.19)

Each of the new coordinates is thus a simple periodic function involving only one of the resonant frequencies, so it is customary to call the \zeta_k the normal coordinates of the system. Individually, each normal coordinate corresponds to a vibration of the system with only one frequency, and these component oscillations are called the normal modes of vibration. All of the particles in each mode vibrate with the same frequency and with the same phase, the relative amplitudes being determined by the matrix elements a_{ik}. The complete motion is then built up out of the sum of the normal modes weighted with the appropriate amplitude and phase factors contained in the C_k's. Harmonics of the fundamental frequencies are absent in the complete motion essentially because of the stipulation that the amplitude of oscillation be small. We are then allowed to represent the potential as a quadratic form, which is characteristic of simple harmonic motion. The normal coordinate transformation emphasizes this point, for the Lagrangian in the normal coordinates is seen to be the sum of Lagrangians for harmonic oscillators of frequencies \omega_k.
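The simultaneous diagonalization of Eqns. 4.15 and 4.16 can be verified directly. The sketch below uses an illustrative pair of identical coupled oscillators (T equal to the identity, V tridiagonal), values chosen here for illustration rather than taken from the text.

```python
import numpy as np

# The normal-coordinate transformation of Eqns. (4.13)-(4.16): the columns
# of A bring T and V simultaneously to diagonal form, decoupling the system
# into independent harmonic oscillators.
T = np.eye(2)
V = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
w2, A = np.linalg.eigh(V)    # T is the identity here, so eigh of V suffices

Tn = A.T @ T @ A             # kinetic-energy matrix in normal coordinates
Vn = A.T @ V @ A             # potential-energy matrix in normal coordinates
print(Tn)                    # identity: T = (1/2) sum_k zeta_dot_k^2
print(Vn)                    # diag(w_k^2): V = (1/2) sum_k w_k^2 zeta_k^2
```

In these coordinates the Lagrangian is visibly a sum of one-dimensional oscillator Lagrangians, one per normal frequency.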
We can thus consider the complete motion for small oscillations as being obtained by exciting the various harmonic oscillators with different intensities and phases.

4.2 Quantum Mechanical Theory of Vibrations

The quantum mechanical harmonic oscillator is a superb pedagogical tool, as it is a system that can be solved exactly in both classical and quantum theory, but it is also a system of enormous physical relevance. Any system displaced by small amounts near a configuration of stable equilibrium may be described either by a single oscillator or by a collection of decoupled harmonic oscillators. The dynamics of a collection of noninteracting oscillators is no more complicated than that of a single oscillator aside from the N-fold increase in degrees of freedom. While addressing the solution of the oscillator we are actually confronting the general problem of small oscillations near equilibrium of an arbitrary system.

4.2.1 The Quantum Mechanical Harmonic Oscillator

The canonical example of a single harmonic oscillator is a mass m coupled to a spring of force constant k. Small deviations in the position x will exert the force given by Hooke's law, F = -kx, and produce a potential V = \frac{1}{2}kx^2. The Hamiltonian for this system is

    H = T + V = \frac{p^2}{2m} + \frac{1}{2} m\omega^2 x^2,    (4.20)

where \omega = (k/m)^{1/2} is the classical frequency of oscillation. Any Hamiltonian that is quadratic in the coordinates and momenta will be called a harmonic oscillator Hamiltonian. The mass-spring system is just one member of the family of systems described by the oscillator Hamiltonian. If a particle moving in a potential V(x) is placed at one of its minima x_0, it will remain there in a state of stable, static equilibrium. Consider the dynamics of this particle as it fluctuates by small amounts near x = x_0. The potential may be expanded in a Taylor series as in the classical model. For small oscillations, we may again neglect all but the leading term and arrive at the potential of Eqn. 4.20.
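The reduction of an arbitrary potential minimum to Eqn. 4.20 can be illustrated numerically. The Morse-type potential below is an illustrative choice, not a system from the text; its curvature at the minimum supplies the effective force constant k and hence the small-oscillation frequency.

```python
import numpy as np

# Small oscillations near a minimum, as in the expansion leading to Eqn. (4.20):
# for the illustrative potential V(x) = D (1 - exp(-a x))^2 the curvature at
# the minimum x0 = 0 gives k = d2V/dx2 = 2 D a^2 and hence w = sqrt(k/m).
D, a, m = 4.0, 1.0, 1.0

def V(x):
    return D * (1.0 - np.exp(-a * x)) ** 2

h = 1e-5
k = (V(h) - 2.0 * V(0.0) + V(-h)) / h**2   # numerical curvature at the minimum
w = np.sqrt(k / m)
print(w)   # close to sqrt(2*D)*a = 2*sqrt(2)
```

Near the bottom of the well the particle therefore behaves as a harmonic oscillator of frequency w, regardless of the anharmonic shape of the full potential.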
Therefore, we identify d^2V/dx^2 with k = m\omega^2. A system of harmonic oscillators can be formed by simple addition of the individual Hamiltonians and decoupling them through a coordinate transformation. In general, a system of N coupled harmonic oscillators can be decoupled into N separate harmonic oscillators.

4.2.2 Quantization of the Harmonic Oscillator

We focus our attention on the time-independent Schrodinger equation eigenvalue problem:

    H\,|E\rangle = E\,|E\rangle.    (4.21)

A clever method due to Dirac allows us to work in the energy basis without having to know ahead of time the operators X and P. From the basis-independent commutation relation

    [X, P] = i\hbar,    (4.22)

we are motivated to introduce the operator

    a = \left(\frac{m\omega}{2\hbar}\right)^{1/2} X + i\left(\frac{1}{2m\omega\hbar}\right)^{1/2} P    (4.23)

and its adjoint

    a^{\dagger} = \left(\frac{m\omega}{2\hbar}\right)^{1/2} X - i\left(\frac{1}{2m\omega\hbar}\right)^{1/2} P.    (4.24)

They satisfy the commutation relation

    [a, a^{\dagger}] = 1    (4.25)

and are related to the harmonic oscillator Hamiltonian by

    H = \left(a^{\dagger}a + \frac{1}{2}\right)\hbar\omega.    (4.26)

For simplification let us define \hat{H} = H/\hbar\omega and \varepsilon = E/\hbar\omega. Two important relations can be immediately obtained:

    [a, \hat{H}] = a, \qquad [a^{\dagger}, \hat{H}] = -a^{\dagger}.    (4.27)

The utility of a and a^{\dagger} is that, given a particular eigenstate, they can be used to generate others. For example,

    \hat{H} a\,|\varepsilon\rangle = \left(a\hat{H} + [\hat{H}, a]\right)|\varepsilon\rangle = \left(a\hat{H} - a\right)|\varepsilon\rangle = (\varepsilon - 1)\,a\,|\varepsilon\rangle.    (4.28)

We can conclude that a|\varepsilon\rangle is an eigenstate of \hat{H} with eigenvalue \varepsilon - 1; thus

    a\,|\varepsilon\rangle = C_{\varepsilon}\,|\varepsilon - 1\rangle,    (4.29)

where C_{\varepsilon} is a constant and |\varepsilon\rangle and |\varepsilon - 1\rangle are normalized eigenkets. Similarly we observe that

    \hat{H} a^{\dagger}\,|\varepsilon\rangle = \left(a^{\dagger}\hat{H} + a^{\dagger}\right)|\varepsilon\rangle = (\varepsilon + 1)\,a^{\dagger}\,|\varepsilon\rangle,    (4.30)

so that

    a^{\dagger}\,|\varepsilon\rangle = C_{\varepsilon+1}\,|\varepsilon + 1\rangle.    (4.31)

Clearly, this is why one refers to a and a^{\dagger} as the annihilation and creation operators. We are led to conclude that if \varepsilon is an eigenvalue of \hat{H}, so are \varepsilon \pm 1, \varepsilon \pm 2, \ldots. However, we must remember that the quadratic terms in \hat{H} make it a positive operator with positive eigenvalues. Consequently, there must exist a state |\varepsilon_0\rangle that cannot be lowered further:

    a\,|\varepsilon_0\rangle = 0    (4.32)

and

    \hat{H}\,|\varepsilon_0\rangle = \frac{1}{2}\,|\varepsilon_0\rangle.    (4.33)
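The ladder algebra above can be made concrete with finite matrices. The sketch below represents a in a truncated number basis; the truncation size nmax is an illustrative choice, and the commutation relation necessarily fails in the last truncated row.

```python
import numpy as np

# Ladder operators in a truncated number basis: a|n> = sqrt(n)|n-1> puts
# sqrt(1), sqrt(2), ... on the superdiagonal. Being a finite sketch, the
# commutator of Eqn. (4.25) holds everywhere except the last truncated row.
nmax = 6
a = np.diag(np.sqrt(np.arange(1.0, nmax)), k=1)
adag = a.T

comm = a @ adag - adag @ a
print(np.diag(comm)[:-1])   # [a, a^dagger] = 1, Eqn. (4.25)

# H/hw = a^dagger a + 1/2, Eqn. (4.26): diagonal entries are n + 1/2,
# reproducing the ladder of eigenvalues built from Eqns. (4.28)-(4.31).
H = adag @ a + 0.5 * np.eye(nmax)
print(np.diag(H))           # 0.5, 1.5, 2.5, ...
```

The diagonal of H exhibits the evenly spaced spectrum obtained below from the ladder construction.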
It is convenient to define

    \varepsilon_n = n + \frac{1}{2}, \qquad E_n = \left(n + \frac{1}{2}\right)\hbar\omega, \qquad n = 0, 1, 2, \ldots    (4.34)

Since there are no degeneracies in one dimension, these constitute all of the eigenstates of \hat{H}. With some manipulation it can be shown that the coefficients are given by

    C_n = \sqrt{n}\,e^{i\phi},    (4.35)

where \phi is an arbitrary phase. The energy eigenstates can be succinctly written in terms of the creation operator:

    |n\rangle = \frac{\left(a^{\dagger}\right)^n}{\sqrt{n!}}\,|0\rangle.    (4.36)

Returning to the position basis with X \to x and P \to -i\hbar\,d/dx, we can observe that the inversion of Eqn. 4.23 and Eqn. 4.24 leads to

    \psi_n(x) = \langle x | n \rangle = A_n\,H_n(y)\,e^{-y^2/2}, \qquad y = \left(\frac{m\omega}{\hbar}\right)^{1/2} x,    (4.37)

where the H_n(y) are the Hermite polynomials and A_n is a normalization constant.

4.3 Classical Theory of Rotations

4.3.1 Rotations

When we discuss rotations we do so in the context of a rigid body. A rigid body is a system of mass points subject to the constraints that the distances between all pairs of points remain constant throughout the motion. This is something of an idealization, especially in the case of molecular motion, but it allows us to discuss the important aspects of rotational kinematics and dynamics. A rigid body of N particles can have at most 3N degrees of freedom, although, by definition, there exists a set of constraints. These constraints serve to reduce the number of degrees of freedom greatly. We only need to establish the positions of three non-collinear particles of the body to define its location. This gives us nine degrees of freedom, except we are not free to alter the distances between these particles, which reduces the total to just six degrees of freedom: three describe the location of the rigid body and the other three describe the rotations. Rotations of rigid bodies are thought of as orthogonal transformations, where the coordinates of the positions of the particles in the rigid body are transformed into another set of coordinates. The transformation can be written as

    x_i' = \sum_{j=1}^{3} a_{ij} x_j, \qquad i = 1, 2, 3.    (4.38)
Since the transformation must preserve the distances between particles in the rigid body, the coefficients form a special type of matrix, which is the representation of an orthogonal transformation. This matrix, which will be the representation of an operator A, is orthogonal and therefore must satisfy

    \sum_{i} a_{ij}\, a_{ik} = \delta_{jk}.    (4.39)

The operator A may be thought of as a change of basis or, for our purposes, as a transformation of the position vector r to a different vector r':

    \mathbf{r}' = A\,\mathbf{r}.    (4.40)

It is important to note that a product of two orthogonal transformations is also an orthogonal transformation. If we restrict ourselves to transformations with determinant +1, then rotations done consecutively are equivalent to one single rotation.

4.3.2 Euler Angles

As we observed in the previous section, rotations can be mathematically realized as orthogonal transformations. Here we describe how these transformations are constructed and practically implemented. The simplest rotation is one about a particular axis through an angle \theta. For such a rotation the transformation matrix can be easily calculated using basic trigonometry; for example, for a rotation about the x-axis in three dimensions:

    A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix}.    (4.41)

In order to describe the motion of rigid bodies in mechanics, three independent parameters are needed. The choice of these parameters is one made of convenience and appropriateness for particular problems; here we will describe two of the more popular formulations. The most popular set of parameters are the Euler angles. They are defined as three successive rotations performed in a specific sequence through three angles. Within limits, the choice of rotation angles is arbitrary, but the most common is to rotate the system about the coordinate axes (as in Eqn. 4.41) in a particular order.
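The properties stated above, orthogonality with determinant +1 and closure of consecutive rotations, can be checked numerically. The rotation angle values below are arbitrary test inputs.

```python
import numpy as np

# Checks on the rotation of Eqn. (4.41): the matrix satisfies the orthogonality
# condition of Eqn. (4.39) with det = +1, and two consecutive rotations about
# the same axis compose into a single rotation.
def rot_x(t):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(t), np.sin(t)],
                     [0.0, -np.sin(t), np.cos(t)]])

A = rot_x(0.4)
print(np.allclose(A.T @ A, np.eye(3)))   # sum_i a_ij a_ik = delta_jk
print(np.linalg.det(A))                  # +1 for a proper rotation

B = rot_x(0.3) @ rot_x(0.4)
print(np.allclose(B, rot_x(0.7)))        # product is a single rotation
```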
One such sequence is started by rotating the initial system of axes, xyz, by an angle \phi counterclockwise about the z-axis, giving the new axes as x'y'z. In the second stage the intermediate axes are rotated about the x'-axis counterclockwise by an angle \theta to produce another intermediate set of axes x'y''z'. Finally, these axes are rotated about the z'-axis in a counterclockwise direction by an angle \psi. The three angles \phi, \theta and \psi constitute the three Euler angles, and they completely specify the orientation, labeled XYZ, of the new system relative to the original coordinates. The elements of the complete transformation A can be obtained by writing the matrix as the triple product of the separate rotations, each of which has a relatively simple matrix form similar to Eqn. 4.41. The complete transformation

    A = BCD    (4.42)

is composed of

    D = \begin{pmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix},    (4.43)

    C = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix},    (4.44)

    B = \begin{pmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{pmatrix}.    (4.45)

4.3.3 Cayley-Klein Parameters

Although we have seen that only three independent quantities are needed to specify the orientation of a rigid body, there are occasions when it is desirable to use more than the minimum number of quantities. The Euler angles are difficult to use in numerical computation because of the large number of trigonometric functions involved, and four-parameter representations are much better adapted for use on computers. We can write the matrix A in terms of four real parameters e_0, e_1, e_2, and e_3. The matrix representation is then

    A = \begin{pmatrix} e_0^2 + e_1^2 - e_2^2 - e_3^2 & 2(e_1 e_2 + e_0 e_3) & 2(e_1 e_3 - e_0 e_2) \\ 2(e_1 e_2 - e_0 e_3) & e_0^2 - e_1^2 + e_2^2 - e_3^2 & 2(e_2 e_3 + e_0 e_1) \\ 2(e_1 e_3 + e_0 e_2) & 2(e_2 e_3 - e_0 e_1) & e_0^2 - e_1^2 - e_2^2 + e_3^2 \end{pmatrix}.    (4.46)

The parameters obey the relation
    e_0^2 + e_1^2 + e_2^2 + e_3^2 = 1    (4.47)

and are derived from the rotation formula of Rodrigues[32]. This formula represents a single finite rotation about an axis to transform the body to its new orientation. It is given by

    \mathbf{r}' = \mathbf{r}\cos\Phi + \mathbf{n}\,(\mathbf{n}\cdot\mathbf{r})(1 - \cos\Phi) + (\mathbf{r}\times\mathbf{n})\sin\Phi.    (4.48)

Here r is the original vector, which is rotated through an angle \Phi in a plane defined by a vector normal to the rotation axis n, as in Fig. 4.2.

Figure 4.2: Vector diagram for derivation of the rotation formula

The rotation formula can be rewritten using the following relations:

    e_0 = \cos\frac{\Phi}{2},    (4.49)

    (e_1, e_2, e_3) = \mathbf{n}\sin\frac{\Phi}{2},    (4.50)

which can be shown to be equivalent to the orthogonal transformation given by Eqn. 4.46.

4.3.4 Rate of Change of a Vector

The dynamics of the rotational motion of a rigid body is best understood by examining infinitesimal rotations first. Consider some arbitrary vector r involved in a mechanical problem, such as the position vector of a point in the body or the total angular momentum. Usually such a vector will vary in time as the body moves, but the change will often depend on the coordinate system to which the observations are referred. For example, if the vector happens to be the radius vector from the origin of the body set of axes to a point in the rigid body then, clearly, such a vector appears constant when measured in the body set of axes. An observer fixed in the space set of axes, however, finds that the components of the vector vary in time when the body is in motion. The change in a time dt of the components of a general vector r as seen by an observer in the body system of axes will differ from the corresponding change as seen by an observer in the space system. We can derive a relation between the two differential changes in r based on physical arguments. We can write that the only difference between the two is the effect of rotation of the body axes:

    (d\mathbf{r})_{\mathrm{space}} = (d\mathbf{r})_{\mathrm{body}} + (d\mathbf{r})_{\mathrm{rotation}}.    (4.51)

Consider now a vector fixed in the rigid body.
As the body rotates there is of course no change in the components of this vector as observed from the body frame. The only contribution to (d\mathbf{r})_{\mathrm{space}} is then the effect of the rotation of the body. But since the vector is fixed in the body system, it rotates with it counterclockwise, and the change in the vector as observed in space is given by the infinitesimal version of Eqn. 4.46, which can be written as

    (d\mathbf{r})_{\mathrm{rotation}} = d\boldsymbol{\Omega} \times \mathbf{r}.    (4.52)

The time rate of change of the vector r as seen by the two observers is then

    \left(\frac{d\mathbf{r}}{dt}\right)_{\mathrm{space}} = \left(\frac{d\mathbf{r}}{dt}\right)_{\mathrm{body}} + \boldsymbol{\omega} \times \mathbf{r},    (4.53)

where \boldsymbol{\omega} is the angular velocity of the body defined by the relation \boldsymbol{\omega}\,dt = d\boldsymbol{\Omega}. The vector \boldsymbol{\omega} lies along the axis of the infinitesimal rotation occurring between t and t + dt, a direction known as the instantaneous axis of rotation. The magnitude of \boldsymbol{\omega} measures the instantaneous rate of rotation of the body. The angular velocity can be expressed in terms of the Euler angles and their derivatives in both the space and body frames. In the body frame the general infinitesimal rotation associated with \boldsymbol{\omega} can be considered as consisting of three successive infinitesimal rotations with angular velocities \omega_{\phi} = \dot{\phi}, \omega_{\theta} = \dot{\theta}, and \omega_{\psi} = \dot{\psi}. In the body frame, in Cartesian coordinates, these become

    \omega_x' = \dot{\phi}\sin\theta\sin\psi + \dot{\theta}\cos\psi,
    \omega_y' = \dot{\phi}\sin\theta\cos\psi - \dot{\theta}\sin\psi,
    \omega_z' = \dot{\phi}\cos\theta + \dot{\psi}.    (4.54)

In the space frame we have

    \omega_x = \dot{\theta}\cos\phi + \dot{\psi}\sin\theta\sin\phi,
    \omega_y = \dot{\theta}\sin\phi - \dot{\psi}\sin\theta\cos\phi,
    \omega_z = \dot{\psi}\cos\theta + \dot{\phi}.
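A quick consistency check on the component formulas of Eqn. 4.54: the body-frame and space-frame expressions describe the same angular-velocity vector referred to two different frames, so their magnitudes must agree for any angles and rates. The space-frame set used below is the standard counterpart assumed from the text; all numerical values are arbitrary test inputs.

```python
import numpy as np

# The body-frame components of Eqn. (4.54) and the standard space-frame set
# must have equal norms, since a rotation preserves vector magnitudes.
phi, theta, psi = 0.7, 1.2, -0.4       # Euler angles (arbitrary test values)
dphi, dtheta, dpsi = 0.5, -0.3, 0.9    # their time derivatives

w_body = np.array([
    dphi * np.sin(theta) * np.sin(psi) + dtheta * np.cos(psi),
    dphi * np.sin(theta) * np.cos(psi) - dtheta * np.sin(psi),
    dphi * np.cos(theta) + dpsi,
])
w_space = np.array([
    dtheta * np.cos(phi) + dpsi * np.sin(theta) * np.sin(phi),
    dtheta * np.sin(phi) - dpsi * np.sin(theta) * np.cos(phi),
    dpsi * np.cos(theta) + dphi,
])
print(np.linalg.norm(w_body), np.linalg.norm(w_space))  # equal magnitudes
```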
The expectation values of the angular momentum components in the rotational coherent state |CS\rangle are

    \langle CS | J_x | CS \rangle = r\cos\alpha\sin\beta, \qquad \langle CS | J_y | CS \rangle = r\sin\alpha\sin\beta, \qquad \langle CS | J_z | CS \rangle = r\cos\beta,    (5.43)

and

    \langle CS | L^2 | CS \rangle = \langle CS | J^2 | CS \rangle = r(r + 2).    (5.44)

It is then possible to interpret these parameters. It follows that the parameter r is the angular momentum modulus, and that the pairs of angles (\psi, \theta) and (\alpha, \beta) are the azimuthal and polar angles of the \langle CS | \mathbf{L} | CS \rangle and \langle CS | \mathbf{J} | CS \rangle vectors in the body-fixed and laboratory frames, respectively. The angle \gamma determines the relative orientation of the body-fixed frame with respect to the space-fixed one. Finally, the probability for the CS to be in a rotational state |IMK\rangle is

    P_{IMK} = \frac{(2I)!}{(I+M)!\,(I-M)!\,(I+K)!\,(I-K)!} \left(\cos\frac{\beta}{2}\right)^{2I+M} \left(\sin\frac{\beta}{2}\right)^{2I-M} \left(\cos\frac{\theta}{2}\right)^{2I+K} \left(\sin\frac{\theta}{2}\right)^{2I-K} \frac{r^{2I}}{(2I)!}\,e^{-r},    (5.45)

which can be written in terms of binomial distributions of the form

    P_M = \frac{(2I)!}{(I+M)!\,(I-M)!}\, p^{(I+M)} (1-p)^{(I-M)},    (5.46)

with

    p = \cos^2\frac{\beta}{2}.    (5.47)
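The binomial structure suggested by Eqns. 5.46 and 5.47 can be checked numerically. In the sketch below, I and beta are arbitrary test values: the weights sum to one, and their mean I(2p - 1) = I cos(beta) tracks the expectation value of J_z in Eqn. 5.43.

```python
import math

# Binomial weights P_M = C(2I, I+M) p^(I+M) (1-p)^(I-M) over M = -I..I,
# with p = cos^2(beta/2) as in Eqns. (5.46)-(5.47).
I = 10
beta = 0.9
p = math.cos(beta / 2.0) ** 2

P = {M: math.comb(2 * I, I + M) * p ** (I + M) * (1.0 - p) ** (I - M)
     for M in range(-I, I + 1)}

total = sum(P.values())
mean_M = sum(M * w for M, w in P.items())
print(total)                      # 1.0: the distribution is normalized
print(mean_M, I * math.cos(beta)) # the mean tracks I cos(beta)
```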