## Citation

- Permanent Link: https://ufdc.ufl.edu/UFE0015671/00001
## Material Information
- Title: Search for Heavy Resonances Decaying into tt Pairs
- Creator: NECULA, VALENTIN (*Author, Primary*)
- Copyright Date: 2008
## Subjects
- Subjects / Keywords: Average linear density, Calorimeters, Cumulative distribution functions, Electromagnetism, Electrons, Leptons, Muons, Neutrinos, Quarks, Signals (jstor)
## Record Information
- Source Institution: University of Florida
- Holding Location: University of Florida
- Rights Management: Copyright Valentin Necula. Permission granted to University of Florida to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
- Embargo Date: 8/31/2006
- Resource Identifier: 649810160 (OCLC)

## Full Text

SEARCH FOR HEAVY RESONANCES DECAYING INTO tt PAIRS

By

VALENTIN NECULA

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2006

Copyright 2006 by Valentin Necula

I dedicate this work to my parents, Maria-Doina and Eugen Necula.

ACKNOWLEDGMENTS

I take this opportunity to express my deepest thanks to my advisors, Prof. Guenakh Mitselmakher and Prof. Jacobo Konigsberg, for their guidance, continuous support and patience, which played a crucial role in the successful completion of this work and will continue to be a source of inspiration in the future. I would also like to thank Dr. Roberto Rossin for his important contribution to the success of this analysis, from writing code to running jobs and writing documentation, and not least for all the interesting little chats we had, be it politics, history, finance or sports. I am also grateful for the advice I received and the discussions I had with Prof. Andrey Korytov, Prof. Konstantin Matchev, Dr. Sergey Klimenko and Prof. John Yelton. Last but not least, I would like to thank Prof. Richard P. Woodard for making my first years at the University of Florida very exciting and rewarding. Sometimes I just miss those exams.

My stay at CDF benefited from the interaction I had with many people; without attempting an exhaustive list, I would mention Dr. Florencia Canelli, Dr. Mircea Coca, Dr. Adam Gibson, Dr. Alexander Sukhanov, Dr. Song Ming Wang, Dr. Daniel Whiteson, Dr. Kohei Yorita, Prof. John Conway, Prof. Eva Halkiadakis, Dr. Douglas Glenzinski, Prof. Takasumi Maruyama, Prof. Evelyn Thomson, and Prof. Young-Kee Kim. Special thanks go to Dr. Alexandre Pronko, who was my officemate in my early days at CDF, and with whom I had quite interesting discussions and played far fewer chess games than I should have.
The more relaxing moments I enjoyed in the company of Gheorghe Lungu and Dr. Gavril A. Giurgiu were very welcome as well, and I would like to thank them both.

TABLE OF CONTENTS

ACKNOWLEDGMENTS ... iv
LIST OF TABLES ... ix
LIST OF FIGURES ... xi
ABSTRACT ... xv

CHAPTER

1 INTRODUCTION ... 1
1.1 Historical Perspective ... 1
1.2 The Standard Model of Elementary Particles ... 3
1.2.1 Leptons ... 3
1.2.2 Quarks ... 5
1.3 Beyond the Standard Model ... 6

2 NEW PHYSICS AND THE TOP QUARK ... 8

3 EXPERIMENTAL APPARATUS ... 12
3.1 Tevatron Overview ... 12
3.2 CDF Overview and Design ... 15
3.2.1 Calorimetry ... 17
3.2.2 Tracking System ... 21
3.2.3 The Muon System ... 25
3.2.4 The Trigger System ... 26

4 EVENT RECONSTRUCTION ... 29
4.1 Quarks and Gluons ... 29
4.1.1 Jet Clustering Algorithm ... 30
4.1.2 Jet Energy Corrections ... 31
4.2 Electrons ... 33
4.3 Muons ... 34
4.4 Neutrinos ... 35

5 EVENT SELECTION AND SAMPLE COMPOSITION ... 37
5.1 Choice of Decay Channel ... 38
5.2 Data Samples ... 39
5.3 Event Selection ... 40
5.4 Sample Composition ... 41

6 GENERAL OVERVIEW OF THE METHOD AND PRELIMINARY TESTS ... 44
6.1 Top Mass Measurement Algorithm ... 45
6.1.1 The Matrix Elements (MEs) ... 48
6.1.2 Approximations: Change of Integration Variables ... 50
6.2 Monte Carlo Generators ... 51
6.3 Basic Checks at Parton Level ... 52
6.4 Tests on Smeared Partons ... 54
6.5 Tests on Simulated Events with Realistic Transfer Functions ... 55
6.5.1 Samples and Event Selection ... 55
6.5.2 Transfer Functions ... 55

7 Mtt RECONSTRUCTION ... 58
7.1 Standard Model tt Reconstruction ... 58
7.2 Signal and Other Mtt Backgrounds ... 63

8 SENSITIVITY STUDIES ... 77
8.1 General Presentation of the Limit Setting Methodology ... 77
8.2 Application to This Analysis ... 78
8.2.1 Templates ... 79
8.2.2 Template Weighting ... 81
8.2.3 Implementation ... 82
8.2.4 Cross Section Measurement and Limits Calculation ... 83
8.2.5 Expected Sensitivity and Discovery Potential ... 85

9 SYSTEMATICS ... 87
9.1 Shape Systematics ... 87
9.1.1 Jet Energy Scale ... 87
9.1.2 Initial and Final State Radiation ... 88
9.1.3 W-Q2 Scale ... 89
9.1.4 Parton Distribution Function Uncertainties ... 91
9.1.5 Overall Shape Systematic Uncertainties ... 91
9.2 Effect of Shape Systematics ... 92
9.3 Expected Sensitivity with Shape Systematics ... 94

10 RESULTS ... 96
10.1 First Results ... 96
10.2 Final Results ... 99
10.3 Conclusions ... 101

APPENDIX: CHANGE OF VARIABLES AND JACOBIAN CALCULATION SKETCH ... 107

REFERENCES ... 111

BIOGRAPHICAL SKETCH ... 113

LIST OF TABLES

1-1 Properties of leptons. Antiparticles are not listed. ... 4
1-2 Properties of quarks. Additionally, each quark can also carry one of three color charges. ... 5
3-1 Summary of CDF calorimeters. X0 and λ0 refer to the radiation length for the electromagnetic calorimeters and the interaction length for the hadronic calorimeters, respectively. Energy resolutions correspond to a single incident particle. ... 18
5-1 tt decays ... 38
5-2 Event selection. ... 40
5-3 Cross sections and acceptance ... 42
5-4 Signal acceptance ... 43
8-1 Acceptances for background samples. ... 81
8-2 Acceptances for resonance samples ... 82
9-1 Linear fit parameters describing the uncertainty due to the JES systematic; the JES- and JES+ labels designate a +1σ or -1σ variation in the jet energy scale. The uncertainty on the cross section is parametrized as δσX0 = a0 + a1·σX0. ... 89
9-2 Linear fit parameters describing the uncertainty due to ISR modeling. The uncertainty on the cross section is parametrized as δσX0 = a0 + a1·σX0. ... 90
9-3 Linear fit parameters describing the uncertainty due to FSR modeling. The uncertainty on the cross section is parametrized as δσX0 = a0 + a1·σX0. ... 90
9-4 Linear fit parameters describing the uncertainty due to the W-Q2 scale. The uncertainty on the cross section is parametrized as δσX0 = a0 + a1·σX0. ... 90
10-1 Expected number of events assuming no signal. WW and QCD numbers are derived from the total number of events observed in the search region above 400 GeV/c2. ... 97
10-2 Expected number of events assuming no signal. WW and QCD numbers are derived from the total number of events observed in the search region above 400 GeV/c2. ... 99
10-3 Expected and observed upper limits on the signal cross section derived from a dataset with an integrated luminosity of 680 pb-1. ... 104

LIST OF FIGURES

2-1 The CDF Run 1 tt invariant mass spectrum. ... 10
2-2 The CDF Run 1 upper limits on the resonance production cross section times branching ratio. ... 11
3-1 Overview of the Fermilab accelerator complex. The pp collisions at a center-of-mass energy of 1.96 TeV are produced by a sequence of five individual accelerators: the Cockcroft-Walton, Linac, Booster, Main Injector, and Tevatron. ... 13
3-2 Drawing of the CDF detector. One-quarter view. ... 16
3-3 The r-z view of the new Run II end plug calorimeter ... 21
3-4 Longitudinal view of the CDF II tracking system ... 22
3-5 Isometric view of the three-barrel structure of the CDF silicon vertex detector ... 23
3-6 One sixth of the COT in end view; odd superlayers are small-angle stereo layers and even superlayers are axial. ... 25
3-7 CDF II data flow. ... 27
6-1 Main leading order contribution to tt production in pp collisions at √s = 1.96 TeV ... 48
6-2 Gluon-gluon leading order contribution to tt production in pp collisions at √s = 1.96 TeV ... 49
6-3 Reconstructed top mass from 250 pseudoexperiments of 20 events at parton level with mt = 175 GeV/c2. The left plot is derived using only the correct combination, while the right plot uses all combinations. ... 52
6-4 Reconstructed top mass vs. true top mass from pseudoexperiments of 20 events using all 24 combinations, at parton level ... 53
6-5 Reconstructed top mass vs. true top mass from pseudoexperiments of 20 events with smearing. The left plot is derived using only the correct combination, while the right plot uses all combinations. ... 54
6-6 Light-quark transfer functions (x = 1 - Ejet/Eparton), binned in three absolute pseudorapidity regions [0, 0.7], [0.7, 1.3] and [1.3, 2.0] ... 56
6-7 b-quark transfer functions (x = 1 - Ejet/Eparton), binned in three absolute pseudorapidity regions [0, 0.7], [0.7, 1.3] and [1.3, 2.0] ... 57
7-1 Mtt reconstruction for the correct combination and for events with exactly four matched tight jets. ... 59
7-2 Mtt reconstruction including all events ... 60
7-3 Examples of Mtt reconstruction, event by event. ... 61
7-4 Mtt template for Standard Model tt events. ... 62
7-5 Reconstructed invariant mass for a resonance with MX0 = 650 GeV/c2. The left plot shows all events passing event selection, while the right plot shows only matched events. ... 64
7-6 Reconstructed invariant mass for a resonance with MX0 = 650 GeV/c2. The left plot shows mismatched events and the right plot shows non-matched events. ... 65
7-7 W+4p template (electron sample) ... 67
7-8 W+4p template (muon sample) ... 68
7-9 QCD template ... 69
7-10 WW template ... 70
7-11 W+2b+2p template (electron sample) ... 71
7-12 W+2b+2p template (muon sample) ... 72
7-13 W+4p template with alternative Q2 scale (electron sample) ... 73
7-14 All Standard Model background templates used in the analysis ... 74
7-15 W+2b+2p template vs. W+4p template. W+2b+2p was ignored since the expected contribution is at the level of 1-2% and the template is very similar to the W+4p template. ... 75
7-16 Signal templates ... 76
8-1 Signal and background examples. The signal spectrum on the left (MX0 = 600 GeV/c2) has been fit with a triple Gaussian. The background spectrum from Standard Model tt has been fit with the exponential-like function. The fit range starts at 400 GeV/c2. ... 80
8-2 Linearity tests on fake (left) and real (right) templates. As fake test signal templates we used Gaussians with 60 GeV/c2 widths and means of 800 and 900 GeV/c2. We also used real templates with masses from 450 to 900 GeV/c2.
The top plots show the input versus the reconstructed cross section after 1000 pseudoexperiments at an integrated luminosity of ∫L dt = 1000 pb-1. The bottom plots show the deviation from linearity on an expanded scale, with red dotted lines representing a 2% deviation. ... 83
8-3 Example posterior probability function for the signal cross section, for a pseudoexperiment with an input signal of 2 pb and a resonance mass of 900 GeV/c2. The most probable value estimates the cross section, and 95% confidence level (CL) upper and lower limits are extracted. The red arrow and the quoted value correspond to the 95% CL upper limit. ... 84
8-4 Upper limits at 95% CL. Only acceptance systematics are considered in this plot. ... 86
8-5 Probability of observing a non-zero lower limit versus input signal cross section at ∫L dt = 1000 pb-1. Only acceptance systematics are included in this plot. ... 86
9-1 Cross section shift due to the JES uncertainty for ∫L dt = 1000 pb-1. The shift represents the uncertainty on the cross section due to JES, as a function of cross section. ... 88
9-2 Cross section shift due to ISR (left) and FSR (right) uncertainties for ∫L dt = 1000 pb-1. ... 89
9-3 Cross section shift due to the W-Q2 scale uncertainty for ∫L dt = 1000 pb-1 ... 91
9-4 Total shape systematic uncertainty versus signal cross section. ... 92
9-5 Posterior probability function for the signal cross section. The smeared (convoluted) probability in green, which includes shape systematics, shows a longer tail than the original (black) distribution. As a consequence, the upper limit quoted on the plot is shifted to higher values with respect to the one calculated from the original posterior. ... 93
9-6 Upper limits at 95% CL. The plots show the results for two luminosity scenarios, including or excluding the contribution from shape systematic uncertainties. ... 94
9-7 Probability of observing a non-zero lower limit (LL) versus input signal cross section for ∫L dt = 1000 pb-1. ... 95
10-1 Reconstructed Mtt in 320 pb-1 of CDF Run 2 data. The plot on the right shows events with at least one SECVTX tag. ... 96
10-2 Reconstructed Mtt in 320 pb-1 of CDF Run 2 data, after the 400 GeV cut ... 97
10-3 Resonant production upper limits from 320 pb-1 of CDF Run 2 data ... 98
10-4 Kolmogorov-Smirnov (KS) test assuming only the Standard Model. The KS distance distribution from pseudoexperiments is shown in the right plot; the arrow indicates the KS distance between data and the Standard Model template. ... 100
10-5 Kolmogorov-Smirnov (KS) test assuming a signal with a mass of 500 GeV/c2 and a cross section equal to the most likely value from the posterior probability. The KS distribution from pseudoexperiments is shown in the right plot; the arrow indicates the KS distance between data and the Standard Model + signal template. ... 100
10-6 Mtt spectrum in data vs. the Standard Model + a 2 pb signal contribution from a resonance with a mass of 500 GeV/c2 ... 101
10-7 Reconstructed Mtt in CDF Run 2 data, 680 pb-1 ... 102
10-8 Resonant production upper limits in CDF Run 2 data, 680 pb-1 ... 102
10-9 Kolmogorov-Smirnov test results are shown together with the reconstructed Mtt using 680 pb-1 and the corresponding Standard Model expectation template ... 103
10-10 Posterior probability distributions for CDF data and masses between 450 and 700 GeV ... 105
10-11 Posterior probability distributions for CDF data and masses between 750 and 900 GeV. ... 106
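The Kolmogorov-Smirnov comparison used in Figures 10-4 and 10-5 — the KS distance between data and a template, calibrated with pseudoexperiments drawn from the template — can be sketched generically as follows. This is an illustrative toy with exponential stand-in spectra and arbitrary sample sizes, not the analysis code.

```python
import bisect
import random

def ks_distance(sample, template):
    """Maximum absolute distance between the empirical CDFs of two samples."""
    s, t = sorted(sample), sorted(template)
    d = 0.0
    for x in s + t:  # the ECDF difference is extremal at sample points
        cdf_s = bisect.bisect_right(s, x) / len(s)
        cdf_t = bisect.bisect_right(t, x) / len(t)
        d = max(d, abs(cdf_s - cdf_t))
    return d

random.seed(1)
template = [random.expovariate(1.0) for _ in range(2000)]  # stand-in for the SM template
data = [random.expovariate(1.0) for _ in range(100)]       # stand-in for the observed spectrum
d_obs = ks_distance(data, template)

# Calibrate the observed distance with pseudoexperiments drawn from the
# template: the p-value is the fraction with a KS distance at least as large.
pseudo = [ks_distance(random.sample(template, 100), template) for _ in range(100)]
p_value = sum(d >= d_obs for d in pseudo) / len(pseudo)
print(f"KS distance = {d_obs:.3f}, p-value = {p_value:.2f}")
```

A small p-value would indicate that the data spectrum is unlikely to be a fluctuation of the template alone, which is the logic behind comparing the arrow to the pseudoexperiment distribution in those figures.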
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

SEARCH FOR HEAVY RESONANCES DECAYING INTO tt PAIRS

By Valentin Necula

August 2006

Chair: Guenakh Mitselmakher
Cochair: Jacobo Konigsberg
Major Department: Physics

We performed a search for narrow-width vector particles decaying into top-antitop pairs using 680 pb-1 of data collected by the CDF experiment during 2002-2005 in Run 2 of the Tevatron. The center-of-mass energy of the pp collisions was 1.96 TeV. Model-independent upper limits on the production cross section times branching ratio are derived at the 95% confidence level. We exclude the existence of a leptophobic Z' boson in a topcolor-assisted technicolor model with a mass MZ' < 725 GeV/c2, and our results can be used to constrain any other relevant theoretical model.

CHAPTER 1
INTRODUCTION

1.1 Historical Perspective

The science of physics investigates the laws governing the behavior of matter, from the smallest subnuclear scales to the largest astronomical space-time regions, and even the nature of the universe as a whole, as in cosmology. In high energy physics we are concerned with understanding the so-called fundamental "bricks" of matter, or elementary particles, and their interactions. It is not easy to ascertain elementariness, in fact it is quite impossible, and history shows us that more often than not what was considered elementary at one point was later found to be a composite system: molecules, which are the smallest units of a substance possessing its specific physical and chemical properties, were found to be made up of smaller units, atoms. A huge variety of organic matter with quite different physical and chemical properties is composed of just three atoms: hydrogen, carbon and oxygen.
For some time, atoms were considered to live up to their ancient meaning of indivisible units of matter, until the end of the 19th century, when the mysterious cathode rays puzzled physicists with their properties. As J.J. Thomson correctly predicted, the cathode rays were actually streams of subatomic particles known today as electrons. It wasn't long until Rutherford proved in his famous scattering experiments that the positive charge inside atoms is confined to a pointlike core, or nucleus, a discovery which led to the classic planetary model of the atom. The elementariness of the atom vanished, and the focus moved to the structure of the nucleus. At first it was thought that the nucleus contained electrons and protons, but eventually the neutron (postulated by Rutherford) was discovered, and the picture of matter was simplified even further: just three particles, the proton, the neutron and the electron, were enough to build all known atoms. They were the new elementary particles; however, they were soon joined by a large number of new particles with strange names like pions, kaons, eta and rho particles. The simple and maybe beautiful picture of three elementary particles at the basis of all matter had to be abandoned. Both experimental and theoretical breakthroughs led to the understanding that protons, neutrons and the vast majority of other particles are composed of smaller and stranger units, called quarks.

Two different developments took place during this time, though. First, one of the most brilliant physicists of all time, P.A.M. Dirac, predicted in 1928, solely on theoretical grounds, the existence of a new particle which was later called the positron. It was supposed to be just like the electron, but positively charged: an antielectron. Amazingly, positrons were in fact observed only four years later, and then it was found that other particles had antiparticles as well. It was a universal phenomenon.
Secondly, searching for a particle postulated in the Yukawa theory of nuclear forces, experimentalists found something else, as is often the case: a new negatively charged particle which behaved just like an electron except that it had a much higher mass and was unstable. It was called a muon. This phenomenon was found to have its own kind of universality and led to the classification of elementary particles into three generations, as will be detailed later.

Particle physics also investigates the interactions or forces between the elementary constituents of matter. By the mid 20th century physicists counted four distinct forces: the gravitational force, the electromagnetic force, the strong nuclear force, responsible for holding quarks together inside a proton or neutron for instance, and the weak nuclear force, responsible for β decays and other phenomena. The early picture of classical "force" fields mediating the interactions was abandoned after Dirac successfully quantized Maxwell's equations, laying the foundation for quantum field theory and introducing the idea that interactions are mediated by exchanges of virtual particles. Later it was discovered that the strong and weak nuclear forces are indeed mediated by virtual particles: the gluon, and the massive W+, W- and Z bosons, respectively. However, even though we have a classical set of equations describing gravitation and powerful formalisms for quantizing fields, all attempts at quantum gravity have failed. Delving into that mystery is not the purpose of this dissertation though, and we now proceed to a more formal presentation of the theoretical framework underlying our current understanding of elementary particles and their interactions.

1.2 The Standard Model of Elementary Particles

The Standard Model is a quantum field theory based on the gauge symmetry SU(3)C x SU(2)L x U(1)Y [1].
This gauge group includes the symmetry group of the strong interaction, SU(3)C, and the symmetry group of the unified electroweak interaction, SU(2)L x U(1)Y. As pointed out earlier, gravitation does not fit the scheme and is not part of the Standard Model. All the variety of phenomena is the result of the interactions of a small number of elementary particles, classified as leptons, quarks and force carriers or mediators. They are also classified in three generations with similar properties.

1.2.1 Leptons

All leptons and quarks have spin 1/2, and all force mediators have spin 1. There are six charged leptons: the electron (e-), the muon (μ-), the tauon (τ-) and their positively charged antiparticles. For each charged lepton there is a corresponding neutral lepton, called a neutrino (ν). Even though neutrinos do not carry electric charge, they have distinct antiparticles because they possess a property called lepton number. There are three lepton numbers: electronic, muonic and tauonic. An electron carries a +1 electronic lepton number, and an electron neutrino (νe) also carries a +1 electronic lepton number. Similarly, a muon and a muon neutrino (νμ) carry a +1 muonic lepton number, and a tauon and a tau neutrino (ντ) carry a +1 tauonic lepton number. The antiparticles of these particles carry -1 lepton numbers, and in the Standard Model each lepton number is conserved, such that in any reaction the total lepton numbers of the initial state particles must equal the total lepton numbers of the final state particles. It should be noted that significant evidence has been gathered during the last decade indicating that neutrinos oscillate, thus violating lepton number conservation.

Table 1-1: Properties of leptons. Antiparticles are not listed.
Particle | Spin | Charge | Mass
1st generation
e- | 1/2 | -1 | 0.51099892 ± 0.00000004 MeV/c2
νe | 1/2 | 0 | < 3 eV/c2
2nd generation
μ- | 1/2 | -1 | 105.658369 ± 0.000009 MeV/c2
νμ | 1/2 | 0 | < 0.19 MeV/c2
3rd generation
τ- | 1/2 | -1 | 1776.99 +0.29/-0.26 MeV/c2
ντ | 1/2 | 0 | < 18.2 MeV/c2

The interactions of leptons are described by the electroweak theory, which unifies electromagnetism and the weak force. In this gauge theory there are three massive force carriers, the W+, W- and Z bosons, and one massless force carrier, the photon (γ). In fact, a pure gauge theory of leptons and gauge bosons would lead to massless particles, so in order for the particles to "acquire" mass the spontaneous symmetry breaking mechanism was proposed. This adds an extra spin-0 boson to the picture, the Higgs boson, by which all gauge bosons except one (the photon) acquire mass, and leptons can acquire mass simply by coupling to the scalar Higgs field. Even though the massive bosons [2, 3, 4, 5] were discovered at CERN more than 20 years ago, the Higgs boson has not been discovered. It is also possible that the mass problem is solved by some other mechanism.

1.2.2 Quarks

There are six types of quarks, along with their antiparticles, commonly referred to as the up (u), down (d), strange (s), charm (c), bottom (b) and top (t) quarks. They carry fractional electric charges and a new property called color, which is responsible for the strong interactions of quarks. Each quark can carry one of three colors: red, blue and green. The antiquarks carry anticolors: antired, antiblue and antigreen. The quarks' properties are summarized in Table 1-2.

Quarks also take part in electroweak processes, and that led to some remarkable predictions. It was found that in order to be able to renormalize the electroweak theory an equal number of generations of quarks and leptons was needed, but when these ideas appeared only three quarks were known: the u, d and s. A few years later, in 1974, the c quark was discovered, thus completing the second quark generation as expected.
Another three years later a third generation charged lepton was discovered, the τ, and in the same year a third generation quark was discovered, the b. The interesting part is that the massive bosons themselves were not discovered until 1983. The quest for the last missing pieces in the generation picture ended with the top quark discovery in 1994 at Fermilab and the ντ discovery in 2000, also at Fermilab.

Table 1-2: Properties of quarks. Additionally, each quark can also carry one of three color charges.

Particle | Spin | Charge | Mass
1st generation
u | 1/2 | +2/3 | 1.5-4 MeV/c2
d | 1/2 | -1/3 | 4-8 MeV/c2
2nd generation
c | 1/2 | +2/3 | 1.15-1.35 GeV/c2
s | 1/2 | -1/3 | 80-130 MeV/c2
3rd generation
t | 1/2 | +2/3 | 178.0 ± 4.3 GeV/c2
b | 1/2 | -1/3 | 4.1-4.4 GeV/c2

The strong interactions of quarks are mediated by eight massless gluons (g), which carry a double color charge and are thus able to interact among themselves. The theory of strong interactions is known as Quantum Chromodynamics (QCD), and it is a gauge theory based on the SU(3) Lie group. It has two characteristics not found in the electroweak theory, called color confinement and asymptotic freedom. The interaction between colored particles is found to increase in strength with the distance between them; therefore quarks do not appear as free particles. Instead they form color singlet states, either by combining three quarks with different colors (baryons) or by combining a quark and an antiquark (mesons). This is "color confinement". Conversely, at smaller and smaller distances the interaction strength decreases, and the coupling constant αs becomes small enough for perturbative methods to work. This feature is known as "asymptotic freedom."
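The decrease of the strong coupling with energy can be illustrated with the standard one-loop textbook formula αs(Q²) = 12π / ((33 - 2nf) ln(Q²/Λ²)). This is a generic sketch, not code from the analysis; the scale Λ = 0.2 GeV and nf = 5 active flavors are illustrative assumptions.

```python
import math

def alpha_s(q_gev, lam_gev=0.2, n_flavors=5):
    """One-loop running strong coupling:
    alpha_s(Q^2) = 12*pi / ((33 - 2*nf) * ln(Q^2 / Lambda^2)).
    Lambda = 0.2 GeV is an illustrative QCD scale, not a fitted value."""
    return 12.0 * math.pi / ((33.0 - 2.0 * n_flavors)
                             * math.log(q_gev**2 / lam_gev**2))

# Asymptotic freedom: the coupling shrinks as the probe energy grows.
for q in (2.0, 20.0, 200.0):
    print(f"alpha_s({q:6.1f} GeV) = {alpha_s(q):.3f}")
```

With these inputs the coupling falls from roughly 0.36 at 2 GeV to about 0.12 at 200 GeV, the qualitative behavior behind both confinement at large distances and asymptotic freedom at short ones.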
1.3 Beyond the Standard Model

The Standard Model has managed to explain very well a vast amount of experimental data; however, there are reasons to believe it is an incomplete theory:

- As mentioned earlier, gravity is left out altogether.
- Possibly connected to the previous point, the observed masses of particles are completely unexplained. The Higgs mechanism is just a way by which particles, both bosons and fermions, would "acquire" mass, but it does not predict their values.
- The gauge anomaly of the electroweak theory is canceled only if we have an equal number of quark and lepton generations, and the charges of the particles within one generation obey a certain constraint equation. This implies that there is some deeper connection between quarks and leptons, which might also explain why we have only three generations.
- Besides the particles' masses, there are still quite a few arbitrary parameters in the Standard Model, like the relative strengths of the interactions, the Weinberg angle sin θW, and the elements of the Cabibbo-Kobayashi-Maskawa matrix, which describe the strength of cross-generation direct coupling of quarks via charged currents.
- There are significant indications that neutrinos oscillate.
- The amount of known matter in the Universe is less than what would be necessary to produce a flat geometry as observed, and it is believed that there must exist other types of matter (dark matter), besides a non-zero cosmological constant or dark energy, which would explain the discrepancy. But these conclusions rely on the validity of General Relativity in describing the Universe as a whole, which is not quite obvious.

Many theories beyond the Standard Model have been proposed, like Supersymmetry, String theories, Grand Unified Theories (GUTs), extra dimension theories, Technicolor, quark compositeness theories and others.
Some are basically impossible to test at currently available energies, but most have a large parameter space and it is difficult to rule them out completely. In this work we decided to adopt a model-independent approach to our search for physics beyond the Standard Model, at least as much as possible.

CHAPTER 2
NEW PHYSICS AND THE TOP QUARK

The top quark is so much heavier than the other quarks, including its 3rd generation sibling the b quark, that it is natural to ask whether this fact is related to its possible coupling to New Physics. This idea was explored in a theory called "topcolor-assisted technicolor" [6, 7] which introduces new strong dynamics coupling preferentially to the third generation, thus making the tt and bb final states of particular interest. This theory introduces a topcolor heavy Z' and "topgluons", both decaying into tt and bb pairs. There are other theoretical avenues for producing heavy resonances, like Universal Extra Dimension (UED) models [8, 9, 10]. The simpler versions [8, 9] assume only one extra dimension of size R, and lead to new particles via the Kaluza-Klein (KK) mechanism. In the minimal UED model [9] only one more parameter is needed in the theory, the cutoff scale Λ. An interesting feature is the conservation of the KK number at tree level, and in general the conservation of the KK parity defined as (-1)^n, where n is the KK number. As a consequence the lightest KK partner at level 1 has negative KK parity and is stable; therefore possible candidates for our search are level 2 KK partners. These can couple to Standard Model particles only through loop diagrams, given the need to conserve KK parity. Another UED model [10] assumes that all known particles propagate in two small extra dimensions, also leading to new states via the Kaluza-Klein mechanism. Resonance states below 1 TeV are predicted in this model, and they have significant couplings to tt pairs.
From a purely experimental point of view the tt production mechanism is an interesting process in which to search for New Physics, since the full compatibility of tt candidate events with the Standard Model is not known with great precision due to quite limited statistics. There is room to explore for possible non-Standard-Model sources within such an event sample. In this dissertation we focus on the search for a heavy resonance produced in pp collisions at √s = 1.96 TeV which decays into tt pairs. The basic idea is to compute the tt invariant mass spectrum and search for indications of unexpected resonance peaks. We will implement the tools needed to set lower and upper limits on the resonance production cross-section times branching ratio at any given confidence level. A discovery would amount to a non-zero lower limit at a significant confidence level. A similar search was carried out at the Tevatron by the CDF [11] and D0 [12] collaborations on the data gathered in "Run 1", the period of operation between 1992-1995. The tt invariant mass as reconstructed by the CDF analysis in the "lepton plus jets" channel is shown in Figure 2-1. There are only 63 events for the entire Run 1 dataset, which corresponds to an integrated luminosity of 110 pb^-1. About half of them were tt events. Based on this distribution the 95% confidence level upper limits on tt resonant production cross-section times branching ratio were computed, as a function of resonance mass (Figure 2-2). The main challenge of this analysis is the reconstruction of the tt invariant mass spectrum. In this analysis we use an innovative approach which includes matrix element information to help with the reconstruction, as will be explained in later chapters.
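The quantity at the heart of the search is the invariant mass of the reconstructed system, M = √((ΣE)² − |Σp|²). A minimal sketch with made-up four-vectors (illustrative numbers, not CDF data):

```python
import math

def invariant_mass(p4s):
    """Invariant mass of a system of 4-vectors (E, px, py, pz), all in GeV."""
    E = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Toy configuration: two back-to-back top quarks (m_t ~ 175 GeV),
# each with |p| = 200 GeV along the z-axis.
mt = 175.0
E_top = math.sqrt(mt**2 + 200.0**2)
t1 = (E_top, 0.0, 0.0, 200.0)
t2 = (E_top, 0.0, 0.0, -200.0)
print(invariant_mass([t1, t2]))  # equals 2*E_top, about 531.5 GeV
```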
Figure 2-1: The CDF Run 1 tt invariant mass spectrum in the "lepton plus jets" channel: CDF data (63 events), tt and W+jets simulations normalized to 63 events, and the W+jets simulation alone (31.1 events), with an overlaid X → tt simulation at M_X = 500 GeV/c^2 and width Γ = 0.012 M_X. The horizontal axis is the reconstructed M_tt in GeV/c^2.

Figure 2-2: The CDF Run 1 upper limits for resonance production cross-section times branching ratio, as a function of M_X (GeV/c^2).

CHAPTER 3
EXPERIMENTAL APPARATUS

The Fermi National Accelerator Laboratory (FNAL, Fermilab) has been a leading facility in experimental particle physics for the last 30 years. The hadron collider, called the Tevatron, is the world's most powerful accelerator, where proton-antiproton collisions are investigated. While many measurements and searches have been carried out, probably the most famous results out of the Tevatron program are the discovery of the bottom quark in 1977 and the discovery of the top quark in 1995, during the 1992-1995 Tevatron operation period known as "Run I". At the moment of this writing we are in the middle of Run 2, the second Tevatron operation period, which started in the spring of 2001. Record instantaneous luminosities (~1.7 × 10^32 cm^-2 s^-1) have been achieved recently, which makes the search for new particles, including the last missing block of the Standard Model, the Higgs boson, a lot more interesting. The Collider Detector at Fermilab (CDF) and D0 are two general-purpose detectors built at almost opposite collision points along the accelerator. In this analysis we use data collected by the CDF collaboration during the period 2002-2005. The center-of-mass energy in Run 2 is √s = 1.96 TeV, the highest collision energy ever achieved.

3.1 Tevatron Overview

The Fermilab accelerator complex is shown on a schematic drawing in Fig. 3-1. In order to produce such high energy pp collisions a sequence of five individual accelerators is needed.
Figure 3-1: Overview of the Fermilab accelerator complex.

The pp collisions at the center-of-mass energy of 1.96 TeV are produced by a sequence of five individual accelerators: the Cockcroft-Walton, Linac, Booster, Main Injector, and Tevatron. First, the Cockcroft-Walton accelerator boosts negative hydrogen ions to an energy of 750 keV. Then the ions are directed to the second stage of the process, the 145 m long linear accelerator (Linac), which further increases the energy of the ions up to about 400 MeV. Before the next stage the ions are stripped of their electrons as they pass through a carbon foil, leaving a pure proton beam. These protons move to the next stage, the Booster, which is a synchrotron accelerator about 150 m in diameter. At the end of this stage the protons reach an energy of 8 GeV. Next, protons are injected into another circular accelerator called the Main Injector. The Main Injector serves two functions. It provides a source of 120 GeV protons needed to produce anti-protons. It also boosts protons and anti-protons from 8 GeV up to 150 GeV before injecting them into the Tevatron. In order to produce anti-protons, 120 GeV protons are transported from the Main Injector to a nickel target. From the interaction, sprays of secondary particles are produced, including anti-protons. Those anti-protons are selected and stored in the Debuncher ring, where they are stochastically cooled to reduce the momentum spread. At the end of this process the anti-protons are stored in the Accumulator until they are needed in the Tevatron. The Tevatron is a proton-antiproton synchrotron collider situated in a 1 km radius tunnel. It accelerates 150 GeV protons and anti-protons up to 980 GeV, leading to a pp collision center-of-mass energy of 1.96 TeV.
Inside the Tevatron the beams are split into 36 "bunches", which are organized in three groups of 12. Within each group the bunches are separated in time by 396 ns. Collisions take place bunch by bunch, when a proton bunch meets an antiproton bunch at the interaction point. Just for clarity we should add that the beams are injected bunch by bunch. The collisions do not take place at the exact same location each time but are spread in space, according to a Gaussian distribution with a sigma of about 28 cm along the beam direction, and also extending in the transverse plane with a circular cross-section defined by a radius of about 25 μm. The instantaneous luminosity of the Tevatron is given by

    L_inst = N_p N_p̄ f / A,    (3-1)

where N_p and N_p̄ are the numbers of protons and anti-protons per bunch, f is the frequency of bunch crossings and A is the effective area of the crossing beams. A compact period of time during which collisions take place in the Tevatron is called a "store" and it can last from a few hours to over 24 hours. During a store the instantaneous luminosity decreases exponentially due to collisions and to transverse spreading of the beams, which leads to losses of protons and anti-protons. The instantaneous luminosity can drop by one order of magnitude during one store. Run 2 initial instantaneous luminosity ranged from about 5 × 10^30 cm^-2 s^-1 in 2002 to the record 1.7 × 10^32 cm^-2 s^-1 in 2006, and there are hopes for even higher values in the future.

3.2 CDF Overview and Design

The Collider Detector at Fermilab (CDF) is a general purpose detector located at one of the two beam collision points along the Tevatron known as "B0". The idea of a general purpose detector is to allow the study of a wide range of processes occurring in pp collisions. For that purpose CDF is designed such that it can identify electrons, muons, photons and jets. It is indirectly sensitive to particles which escape detection, like the neutrinos. A schematic drawing of the CDF detector is shown in Fig. 3-2.
It is cylindrically symmetric about the beam direction, with a radius of about 5 m and a length of 27 m from end to end, and weighs over 5000 metric tons. The CDF collaboration uses a right-handed Cartesian coordinate system with its origin in the center of the detector, the positive z-axis along the proton beam direction, the positive x-axis towards the center of the Tevatron ring and the positive y-axis pointing upward. The azimuthal angle φ is defined counterclockwise around the beam axis starting from the positive x-axis. The polar angle θ is defined with respect to the positive z-axis. However, another quantity is widely used instead of the polar angle. It is called pseudo-rapidity and it is defined by the formula η = -ln(tan(θ/2)). The reason is that in the massless approximation, which is a very good one at these energies, relativistic boosts along the z-axis are additive in the pseudo-rapidity variable, and this property is important, for instance, in the consistent definition of jet cones. The pseudo-rapidity can also be defined with respect to the actual position of the interaction vertex, in which case it is called event pseudo-rapidity.

Figure 3-2: Drawing of the CDF detector. One quarter view. Labeled components include the central drift chamber, the electromagnetic and hadronic calorimeters, the muon drift chambers, steel shielding, muon scintillator counters, the ISL and SVX silicon detectors, the interaction point (B0), the solenoid coil, and the preshower and shower-max detectors.

The detector is composed of a series of subdetectors. Closest to the beam are the silicon vertex detectors, which are surrounded by charged particle tracking chambers. The silicon vertex detectors are used to reconstruct the position of the collision vertex and particle momenta. Next are the electromagnetic and hadronic calorimeters used for energy measurements, and finally the muon chambers.
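The boost property of the pseudo-rapidity defined above can be checked numerically: for a massless particle η equals the rapidity, so differences in η are invariant under boosts along z. A small sketch (generic kinematics, not CDF code) using the relativistic aberration formula cos θ' = (cos θ + β)/(1 + β cos θ):

```python
import math

def eta(theta):
    """Pseudo-rapidity: eta = -ln(tan(theta/2))."""
    return -math.log(math.tan(theta / 2.0))

def theta_boosted(theta, beta):
    """Polar angle of a massless particle after a boost beta along +z
    (relativistic aberration)."""
    ct = math.cos(theta)
    return math.acos((ct + beta) / (1 + beta * ct))

# For massless particles the boost shifts every eta by the same amount,
# so eta differences (e.g. jet-cone sizes) are preserved.
t1, t2 = 0.8, 1.2
beta = 0.5
d_eta_before = eta(t1) - eta(t2)
d_eta_after = eta(theta_boosted(t1, beta)) - eta(theta_boosted(t2, beta))
print(d_eta_before, d_eta_after)  # equal up to floating point
```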
There is also a time-of-flight system used for charged hadron identification, and the Cherenkov Luminosity Counters (CLC), which measure the luminosity. For this analysis we use all major parts of the detector. The calorimetry is necessary for jet reconstruction, energy measurements for electrons, muon identification and also for the calculation of missing transverse energy. The tracking system plays a major role in electron and muon identification and in momentum measurement, and the muon chambers are important for muon identification. In this section we will provide a general description of the major components of the detector, mainly emphasizing the parts used for this analysis. A more comprehensive description can be found in the published literature [13].

3.2.1 Calorimetry

The purpose of the calorimeters is to measure the energy depositions of particles passing through them. However, not all particles interact in the same way. Neutrinos escape without any interaction at all, and high energy muons also escape the calorimeters without losing much energy. Apart from that, the rest of the particles leave their entire energy in the calorimeter, with some exceptions in the case of pions, for instance, which can rarely travel beyond the calorimeter. Even though neutrinos do not interact with the calorimeter, by applying the conservation of momentum in the transverse plane one can calculate the total transverse momentum of the neutrinos. Since the calorimeter measures energy, this inferred quantity is known as missing transverse energy. In case the event contained high energy muons it needs further corrections before it can be identified as neutrino transverse momentum since, as mentioned before, the muons also do not leave much energy in the calorimeter. The electromagnetic calorimeter is designed such that it can measure well the energy of photons and electrons (positrons).
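The missing transverse energy inference described above, the magnitude of the negative vector sum of the transverse energy deposits, can be sketched in a few lines (toy towers, not the CDF reconstruction):

```python
import math

def missing_et(towers):
    """Missing transverse energy: magnitude of the negative vector sum of
    tower E_T in the transverse plane. towers = [(Et, phi), ...]."""
    mex = -sum(et * math.cos(phi) for et, phi in towers)
    mey = -sum(et * math.sin(phi) for et, phi in towers)
    return math.hypot(mex, mey)

# Toy event: two visible deposits; the imbalance points opposite to them.
towers = [(40.0, 0.0), (30.0, math.pi / 2)]
print(missing_et(towers))  # 50.0
```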
Electrons above 100 MeV lose their energy mostly through bremsstrahlung, or photon radiation. High energy photons produce electron-positron pairs in the electromagnetic fields of the nuclei of the material, thus restarting the cycle and leading to the development of an electromagnetic shower of electrons, positrons and photons. At the last stage, low energy photons unable to create electron-positron pairs lose their energy by Compton scattering and photoelectric processes, while low energy electrons lose their energy by ionization. For simplicity we will assume that the initial particle moves perpendicular to the detector. Then, as the shower develops in the calorimeter, more and more energy is deposited, but at different depths, or in different layers, of the detector. However, at some point the number of new shower particles starts to decrease, and eventually no new particles are created. After this point the energy deposited per layer starts to decrease exponentially. The depth of the maximum energy deposition layer is called the shower maximum and can be used for particle identification. Other charged particles like muons behave differently, because the energy loss via radiation starts to dominate the energy loss via ionization at much higher energies, higher by a factor of approximately (m_μ/m_e)^2. Given the energy scale at the Tevatron, a typical muon leaves roughly 10% of its energy in the electromagnetic calorimeter, and thus it is not possible to identify and measure muon momenta using the calorimeter.

Table 3-1: Summary of CDF calorimeters. X_0 and λ_0 refer to the radiation length for the electromagnetic calorimeters and the interaction length for the hadronic calorimeters, respectively. Energy resolutions correspond to a single incident particle.
Calorimeter   η coverage        Depth      Energy resolution σ(E)/E
CEM           |η| < 1.1         18 X_0     13.5%/√E_T ⊕ 2%
PEM           1.1 < |η| < 3.6   21 X_0     16%/√E ⊕ 1%
CHA           |η| < 0.9         4.5 λ_0    75%/√E_T ⊕ 3%
WHA           0.7 < |η| < 1.3   4.5 λ_0    75%/√E_T ⊕ 3%

The hadronic calorimeter functions on similar principles: it is designed to interact strongly with hadrons, thus making it possible to measure their energy by measuring the deposited energy. In this case the incoming particle interacts with the nuclei of the material in the detector, leading to a similar shower development. The CDF calorimeter system covers the full azimuthal range and extends up to 5.2 in |η|. Its components are the Central Electromagnetic Calorimeter (CEM) and the Central Hadronic Calorimeter (CHA), which cover the central region as the name suggests; the Plug Electromagnetic Calorimeter (PEM) and the Plug Hadronic Calorimeter (PHA), which extend the |η| coverage further; the Endwall Hadronic Calorimeter (WHA), which is located in between the central and plug regions; and finally the Miniplug (MNP), which is a forward electromagnetic calorimeter not used in this analysis. Some technical details are listed in Table 3-1. Each calorimeter subsystem is divided into smaller units called towers and has a projective geometry, which means that all towers point to the center of the detector.

Central Calorimeter. Each tower of the central calorimeters covers 15° in Δφ and 0.11 in Δη and is composed of alternating layers of absorber and active material. When a particle passes through the dense absorber material it produces a shower of secondary particles which interact with the active material and produce light. The light is collected and converted into a measurement of energy deposition. The CEM is made of 0.5 cm thick polystyrene scintillator active layers which are separated by 0.32 cm thick lead absorber layers. The CEM extends from a radius of 173 cm up to 208 cm from the beam line, and the total thickness of the CEM material is about 18 radiation lengths.
The CEM is divided into two identical halves at η = 0, and both have a one inch thick iron plate at η = 0. This kind of uninstrumented region is commonly referred to as a "crack". An important parameter is the energy resolution. The CEM resolution for electrons or photons between 10 and 100 GeV is given by

    σ(E)/E = 13.5%/√E_T ⊕ 2% (CEM),    (3-2)

where E_T (in GeV) is the transverse energy of the electron or photon and the symbol ⊕ indicates that the two independent terms are added in quadrature. Inside the CEM, at a depth of about six radiation lengths, or 184 cm away from the beam line, there is the Central Electromagnetic Shower Maximum detector (CES). Its position corresponds to the location of the maximum development of the electromagnetic shower which was described earlier. The CES determines the shower position and its transverse development using a set of orthogonal strips and wires. Cathode strips are aligned in the azimuthal direction, providing z-view information, and anode wires are arranged along the z direction, providing the r-φ view information. The position measurement using this detector has a resolution of 0.2 cm for 50 GeV electrons. The CHA is located right after the CEM and its pseudorapidity coverage is |η| < 0.9, while the WHA calorimeter extends this coverage up to |η| < 1.3. The CHA has a depth of about 4.5 interaction lengths and consists of 1 cm thick acrylic scintillator layers interleaved with 2.5 cm thick steel layers. The end wall calorimeter uses 5 cm thick absorber layers. The electromagnetic and hadronic calorimeters were calibrated using electron and pion test beams, respectively, of 50 GeV. Their performance is described by the energy resolution. For charged pions between 10 and 150 GeV it is given by

    σ(E)/E = 75%/√E_T ⊕ 3% (CHA, WHA),    (3-3)

Plug Calorimeter. The PEM and PHA calorimeters cover an |η| range between 1.1 and 3.6 and employ the same principles. The PEM is a lead/scintillator calorimeter with 0.4 cm thick active layers and 0.45 cm thick lead layers.
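The quadrature combination denoted by ⊕ in Eqs. 3-2 and 3-3 can be made concrete with a short sketch (the function name `cem_resolution` is illustrative):

```python
import math

def cem_resolution(et):
    """Eq. 3-2: sigma(E)/E = 13.5%/sqrt(E_T) (+) 2%, terms added in quadrature."""
    stochastic = 0.135 / math.sqrt(et)
    constant = 0.02
    return math.hypot(stochastic, constant)  # sqrt(a^2 + b^2)

# At E_T = 50 GeV the stochastic term (~1.9%) is comparable to the
# constant term, giving a total of about 2.8%:
print(f"{cem_resolution(50.0):.4f}")  # 0.0276
```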
The PEM also includes a shower maximum detector at a depth of about 6 radiation lengths, the PES, but it is not used in this analysis. The PHA contains 0.6 cm thick scintillator layers and 5 cm thick iron layers. An r-z cross-section view of the CDF plug calorimeters is shown in Figure 3-3.

Figure 3-3: The r-z view of the new Run II end plug calorimeter.

In this analysis the calorimeters were used to determine the momentum and direction of electrons and jets.

3.2.2 Tracking System

The purpose of the tracking system is to reconstruct the trajectories and momenta of charged particles and to find the location of the primary and secondary vertices. A primary vertex is the location where a pp interaction occurred. A secondary vertex is the location where a decay took place. For instance, charm and bottom hadrons have a longer lifetime than light-quark hadrons, long enough that they can travel and decay at a location experimentally discernible from the primary vertex location. Such distances are of the order of hundreds of microns, and this feature is exploited in heavy flavor tagging algorithms. The components of the tracking system are the following: a superconducting solenoid, the silicon detectors and a large open-cell drift chamber known as the Central Outer Tracker (COT).

Figure 3-4: Longitudinal view of the CDF II Tracking System, showing the COT, the end wall hadron calorimeter, the SVX II and the intermediate silicon layers.

A diagram is shown in Figure 3-4. As can be seen, the COT isn't very useful for |η| > 1, so CDF can rely only on the silicon detectors in that region. But for the |η| < 1 range both silicon and COT information is used and a full 3D track reconstruction is possible.

The Solenoid. This is a superconducting magnet which produces a 1.4 T uniform magnetic field oriented along the z-axis.
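The field strength sets the momentum scale of the tracker: a unit-charge track of bending radius R in a field B carries p_T [GeV/c] ≈ 0.3 B[T] R[m], the textbook relation (0.3 is the rounded e·c conversion factor; this is an illustration, not CDF reconstruction code):

```python
def pt_from_radius(r_m, b_tesla=1.4):
    """p_T [GeV/c] ~ 0.3 * B[T] * R[m] for a unit-charge track of bending
    radius R; 0.3 is the rounded e*c conversion factor."""
    return 0.3 * b_tesla * r_m

# A track bending with a 5 m radius in the 1.4 T CDF field carries
# p_T of about 2.1 GeV/c (with the rounded constant, exactly 2.1):
print(pt_from_radius(5.0))
```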
The solenoid is 5 m long and 3 m in diameter, and it allows for the determination of the momentum and charge sign of charged particles.

Silicon Detectors. The silicon system is composed of three separate parts: Layer 00 (L00), the Silicon Vertex Detector (SVX) and the Intermediate Silicon Layers (ISL).

Layer 00. This is the innermost part of the silicon detectors and is made up of a single layer of radiation-hard silicon attached to the beam pipe [14]. Its purpose is to improve the impact parameter resolution for low momentum particles, which suffer multiple scattering in the materials and readout electronics found in front of the other tracking system components. It can also help extend the lifetime of the tracking system in general, given that the inner layers of the SVX will degrade due to radiation damage.

Silicon Vertex Detector. The SVX is segmented into three barrels along the z-axis and has a total length of 96 cm. Each barrel is divided into 12 wedges in φ, which contain five layers of silicon microstrip detectors. All layers are double-sided (Figure 3-5).

Figure 3-5: Isometric view of the three barrel structure of the CDF Silicon Vertex Detector.

The SVX is located outside L00, from 2.4 cm to 10.7 cm in the radial coordinate. Both r-z and r-φ coordinates are determined. This subsystem is used to trigger on displaced vertices, which are an indication of heavy flavor content, and it helps with the track reconstruction. It is a complex system involving a total of 405,504 channels, and unfortunately it is impossible to present it in any detail without going into too many technicalities.

Intermediate Silicon Layers. The ISL is composed of three layers of double-sided silicon with axial and small-angle stereo sides, and it is placed just outside the SVX. The geometry is less intuitive but it can be seen in Figure 3-4: there is one layer in the central region (|η| < 1), at a radius of 22 cm. In the plug region (1 < |η| < 2) two layers of silicon are placed at radii of 20 and 28 cm, respectively.
The SVX and ISL form a single functional system which provides stand-alone silicon tracking and heavy flavor tagging over the full region |η| < 2.0.

Central Outer Tracker. The COT is a large open-cell drift chamber which provides tracking at relatively large radii, between 44 cm and 132 cm, and it covers the region |η| < 1.0. It consists of four axial and four small-angle (3°) stereo superlayers. The superlayers are divided into small cells in φ, and each cell contains 12 sense wires. The end-view of the COT detector is shown in Figure 3-6. The cells are filled with a gas mixture of Ar-Ethane-CF4 in the proportions 50:35:15. Charged particles passing through the chamber ionize the gas, and the produced electrons are attracted to the sense wires. When they arrive in the vicinity of a wire a process of avalanche ionization occurs, and more electrons are produced and then collected by the wire. The location of the initial electron can be calculated based on the sense wire which was hit and the drift velocity. This only describes how one 'point' of the trajectory is determined, but the process repeats in other cells, and based on the location of many such hits a track trajectory is reconstructed. The important parameter to be reconstructed is the track curvature, from which the particle momentum is obtained. The COT has a curvature resolution of about 0.7 × 10^-4 cm^-1, which leads to a momentum resolution of δp_T/p_T ~ 0.15% · p_T (GeV/c)^-1. The typical drift velocity is about 100 μm/ns.

Figure 3-6: One sixth of the COT in end-view; odd superlayers are small-angle stereo layers and even superlayers are axial.

The COT allows for the reconstruction of tracks of charged particles in the r-φ and r-z planes.

3.2.3 The Muon System

The Muon System is positioned farthest from the beam line and is composed of four systems of scintillators and proportional chambers. They cover the region up to |η| < 2.
In this analysis we use only muons detected by the three central muon detectors, known as the Central Muon Detector (CMU), the Central Muon Upgrade (CMP) and the Central Muon Extension (CMX). Since these systems are placed behind the calorimeter and behind the return yoke of the magnet, most other particles are absorbed before reaching them. In addition, an extra layer of 60 cm of steel is added in front of the CMP for the same purpose of absorbing other particles. These three systems cover the region |η| < 1.0. The 1.0 < |η| < 2.0 range is covered by the Intermediate Muon System (IMU), but we don't use it in this analysis.

3.2.4 The Trigger System

As mentioned earlier, in Run II bunches of protons and antiprotons collide every 396 ns. The average number of pp collisions per bunch crossing depends on the instantaneous luminosity, but for typical luminosities in Run II we expect one pp collision or more per bunch crossing; therefore if we were to record all events we would need to save 1.7 million events per second. The typical event size is about 250 kB, so at such a rate we would need to save about 435 GB of data per second. However, most pp collisions are diffractive inelastic collisions, in which the proton or antiproton is broken into hadrons before the two are close enough for a "hard core" interaction between partons to occur. These types of collisions are not of much interest and therefore there is no need to record them. The purpose of the trigger system is to filter out these less interesting events, and to categorize and save the remaining ones. This is achieved through the 3-tier architecture shown in Fig. 3-7. The Level-1 (L1) and Level-2 (L2) trigger systems use only part of the entire event to make a decision regarding the event. They use dedicated hardware to perform a partial event reconstruction. At Level-1 all events are considered. They are stored in a pipeline, since the L1 logic needs 4 μs to reach a decision, much longer than the 396 ns between two consecutive events.
So while the decision-making algorithm is executed by the L1 hardware, the event is pushed down the pipeline, which serves the purpose of temporary memory. When the event reaches the end of the pipeline the decision is made and the event is either ignored or allowed to move on to Level-2.

Figure 3-7: CDF II data flow. The L1 storage pipeline is 42 clock cycles deep; the L1 trigger is a 7.6 MHz synchronous pipeline with 5544 ns latency and an accept rate below 50 kHz; the L2 trigger is an asynchronous two-stage pipeline with ~20 μs latency and a 300 Hz accept rate, for a combined L1+L2 rejection of 20,000:1.

It is important to bear in mind that the L1 trigger is a synchronous pipeline, with decision making pipelined such that many events are present in the L1 trigger logic simultaneously, yet at different stages. Even though it takes 4 μs to reach a decision and even though events come every 396 ns, the trigger analyzes them all, just not one at a time. The L1 trigger reduces the initial rate of about 1.7 MHz to below 20 kHz. The Level-2 trigger is an asynchronous system with an average decision time of 20 μs. The events passing L1 are stored in one of the four L2 buffers while waiting for a L2 decision. If an event arrives from L1 and all the L2 buffers are full, the system incurs dead time, which is recorded during the run. The L2 trigger has an acceptance rate of about 300 Hz, another significant reduction. An event that passes L2 is transferred to the data acquisition (DAQ) buffers and then via a network switch to a Level-3 (L3) CPU node. L3 uses full event reconstruction to make a decision whether to write the event to tape or not. It consists of a "farm" of commercial CPUs, each processing one event at a time. If the event passes this level as well, it is sent for writing to tape. The maximum output rate at L3 is 75 Hz, the main limitation being the data-logging rate, with a typical value of 18 MB/s.
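The bandwidth figures quoted in this section follow from simple arithmetic on the round numbers given in the text (variable names are illustrative):

```python
# Back-of-envelope data-rate arithmetic from the numbers in the text.
crossing_rate = 1.7e6   # bunch crossings per second
event_size_kb = 250     # typical raw event size, kB

raw_rate_gb_s = crossing_rate * event_size_kb / 1e6  # kB/s -> GB/s
print(raw_rate_gb_s)    # 425.0, i.e. the ~435 GB/s scale quoted in the text

l3_output_hz = 75       # maximum L3 accept rate
tape_rate_mb_s = l3_output_hz * event_size_kb / 1000  # kB/s -> MB/s
print(tape_rate_mb_s)   # 18.75, near the typical 18 MB/s data-logging rate
```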
Events are classified according to their characteristics and separated into different trigger paths. Some of these classes of events are produced copiously, and in order to leave enough bandwidth for less abundant event types a prescale mechanism is put in place. For example, a prescale of 1:20 keeps only one event out of every 20 that passed the trigger requirements.

CHAPTER 4
EVENT RECONSTRUCTION

The raw data out of the many subdetectors contain a wealth of information which is not always relevant from a physics analysis point of view. For instance, in this analysis we need to know the momenta of electrons, among other things. But what we do have in terms of raw data is a series of hits in the tracking system and energy depositions in the electromagnetic and hadronic calorimeters, and these readings could be caused by other particles, or may not be compatible with the trajectory of an electron in the magnetic field of the detector. Therefore detailed studies are necessary in order to find an efficient way of identifying raw data patterns compatible with those produced by an electron passing through the detector, and at the same time to reject as many fakes as possible. In short, the task of the event reconstruction is to identify the particles which were present in the event and measure their 4-momenta as well as possible. We will investigate this process in more detail for each kind of particle involved.

4.1 Quarks and Gluons

Quarks and gluons produce a spray of particles via parton showering, hadronization and decay. Therefore they do not interact with the detector directly, but appear as a more or less compact set of tracks and calorimeter towers in which energy has been deposited. By "compact" we mean compact in the η-φ plane. Such a detector pattern is called a jet, and in this case the purpose of the reconstruction is to identify jets consistent with quark or gluon origins and estimate their overall energy and momentum.
4.1.1 Jet Clustering Algorithm

There are a couple of algorithms to identify these jets and estimate their energy. In this analysis we used an iterative "fixed cone" algorithm (JETCLU) for jet identification [15]. The idea is to find something like the center of the jet and then assign all towers within a given radius R in the η-φ plane around this center to that jet. The algorithm begins by creating a list of all seed towers, i.e., the towers with transverse energy above some fixed threshold (1 GeV). Then, for each of the seed towers, starting with the highest-E_T tower, a precluster is formed from all seed towers within radius R of the seed tower. In this iterative process the seed towers already assigned to a precluster are removed from the list of available seed towers. For each precluster a new center is found by taking an E_T-weighted average of the η-φ positions of the towers pertaining to the precluster. This is called the "centroid". Now, using the centroids as origins, we can recluster the towers, this time allowing for the inclusion of towers with energy above a lower threshold (100 MeV). Again we compute the centroid, and the process is repeated until it converges, that is, when the latest centroid is very close to the previous centroid. In the iterative procedure it is possible to have one tower belonging to two jets. But this would lead to inconsistencies, because the total energy of the jets would not be equal to the total energy of the towers. Therefore, after the iterative procedure is finished, we have to resolve this double counting issue. One way is to merge the clusters that share towers; this happens if the overlapping towers' energy is more than 75% of the energy of the smaller cluster. If this requirement is not satisfied, each shared tower is assigned to the closest cluster. In order to find the 4-momenta of the particles we assign a massless 4-momentum to each electromagnetic and hadronic tower based on the measured energy in the tower.
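The precluster step and the massless-tower 4-momentum assignment can be sketched as follows. This is a simplified illustration (a single seed-threshold pass, all towers placed at one assumed polar angle), not the actual JETCLU implementation:

```python
import math

def delta_r(t1, t2):
    """Distance in the eta-phi plane between towers (Et, eta, phi)."""
    deta = t1[1] - t2[1]
    dphi = (t1[2] - t2[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def precluster(towers, r=0.4, seed_et=1.0):
    """First pass of a fixed-cone algorithm: starting from the highest-E_T
    unused seed tower (E_T > seed_et), group all unused seeds within R."""
    seeds = sorted((t for t in towers if t[0] > seed_et), reverse=True)
    clusters, used = [], set()
    for s in seeds:
        if id(s) in used:
            continue
        members = [t for t in seeds if id(t) not in used and delta_r(s, t) < r]
        used.update(id(t) for t in members)
        clusters.append(members)
    return clusters

def cluster_p4(cluster, theta=math.pi / 2):
    """Massless 4-momentum sum over towers; every tower is placed at the
    same illustrative polar angle theta (a central tower)."""
    E = sum(et for et, _, _ in cluster)
    px = sum(et * math.sin(theta) * math.cos(phi) for et, _, phi in cluster)
    py = sum(et * math.sin(theta) * math.sin(phi) for et, _, phi in cluster)
    pz = sum(et * math.cos(theta) for et, _, _ in cluster)
    return E, px, py, pz

towers = [(20.0, 0.0, 0.0), (5.0, 0.1, 0.1), (8.0, 2.0, 1.5), (0.5, 2.1, 1.5)]
clusters = precluster(towers)
print(len(clusters))            # 2: the 0.5 GeV tower is below the seed threshold
print(cluster_p4(clusters[0]))  # energy 25.0; pz ~ 0 at theta = pi/2
```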
The direction is given by the unit vector pointing from the event vertex to the center of the calorimeter tower at the depth that corresponds to the shower maximum. The total jet 4-momentum is defined by summing over all towers in the cluster in the following way:

E = \sum_{i=1}^{N} (E_i^{em} + E_i^{had})   (4-1)

p_x = \sum_{i=1}^{N} (E_i^{em} \sin\theta_i^{em} \cos\phi_i^{em} + E_i^{had} \sin\theta_i^{had} \cos\phi_i^{had})   (4-2)

p_y = \sum_{i=1}^{N} (E_i^{em} \sin\theta_i^{em} \sin\phi_i^{em} + E_i^{had} \sin\theta_i^{had} \sin\phi_i^{had})   (4-3)

p_z = \sum_{i=1}^{N} (E_i^{em} \cos\theta_i^{em} + E_i^{had} \cos\theta_i^{had})   (4-4)

where E_i^{em}, E_i^{had}, \theta_i^{em}, \theta_i^{had}, \phi_i^{em}, \phi_i^{had} are the electromagnetic and hadronic tower energies, polar and azimuthal angles for the ith tower in the cluster. The jet 4-momentum depends on the choice of R. For small values, towers pertaining to the original parton are not included in the cluster, while for large values we risk merging jets pertaining to separate partons. A compromise used in many CDF analyses is R = 0.4, and this is the value used here as well. 4.1.2 Jet Energy Corrections The algorithm just presented returns an energy value that needs further corrections in order to reflect, on average, the parton energy. The reasons for the discrepancy are many, some instrumental and some due to underlying physical processes. A few important instrumental effects are listed below: Jets in less instrumented regions, like in between calorimeter wedges or in the η = 0 region, will naturally measure less energy. It is known that for low energy charged pions (ET < 10 GeV) the calorimeter response is non-linear, while in the energy measurement procedure it is assumed linear. Charged particles with transverse momenta below 0.5 GeV/c are bent by the magnetic field and never get to the calorimeter. Fluctuations intrinsic to the calorimeter response. Important physical effects are the following: The jet can contain muons, which leave little energy in the calorimeter, and neutrinos, which escape undetected. Therefore the cluster energy underestimates the parton energy.
Choosing a radius R = 0.4 in the clustering algorithm, we lose all towers rightfully pertaining to the jet but lying outside that radius. Extra particles can hit the same towers, coming either from other interactions present in the event or from the underlying event (the interaction of the proton and antiproton remnants, i.e., the quarks that did not take part in the hard process). CDF developed a standard procedure [16] to correct for such effects. The user can choose to correct only for certain effects using the standard corrections and correct other effects with more analysis-specific corrections. This is the case for this analysis: we use the standard corrections only for the instrumental effects. From there we use Monte Carlo simulations to map the correlation between the parton energy and the (partially) corrected measured jet energy. 4.2 Electrons In this analysis we are using only electrons detected in the central calorimeter. Most if not all of an electron's energy is deposited in the electromagnetic calorimeter, therefore the reconstruction algorithm starts by identifying the list of seed towers, which are towers with electromagnetic energy greater than 2 GeV. Then, towers adjacent to the seed towers are added to the cluster if they have non-zero electromagnetic or hadronic energy and are located in the same φ wedge, nearest in the η direction. At the end, only clusters with electromagnetic ET greater than 2 GeV and hadronic to electromagnetic energy ratio smaller than 0.125 are kept. However, this last requirement regarding the ratio is ignored for very energetic electrons with energy greater than 100 GeV. What has been described above is just an "electromagnetic object" candidate. It serves as the basis for identifying both electrons and photons. Further selection criteria [17] are necessary to identify electrons and separate them from photons or isolated charged hadrons, π mesons and jets faking leptons.
These other criteria are listed below: A quality COT track with a direction matching the location of the calorimeter cluster must be present. The ratio of hadronic to electromagnetic energy (HADEM) satisfies HADEM < 0.055 + 0.00045 E, where E is the energy. Compatibility of the lateral shower profile of the candidate with that of test beam electrons. Compatibility between the CES shower profile and that of test beam electrons. The associated track's z position should be in the luminous region of the beam, which is within 60 cm of the nominal interaction point. The ratio of additional calorimeter transverse energy found in a cone of radius R = 0.4 to the transverse energy of the candidate electron is less than 0.1 (isolation requirement). 4.3 Muons Muons leave little energy in the calorimeter, but they can be identified by extrapolating the COT tracks to the muon chambers and looking for matching stubs there [18]. A stub is a collection of hits in the muon chambers that form a track segment. The muon candidates are preselected by requiring rather loose matching criteria between the COT track and the stubs. As for electrons, we apply a set of identification cuts [17] to separate muons from cosmic rays and hadrons penetrating the calorimeter: Energy deposition in the calorimeter consistent with a minimum ionizing particle, usually hadronic energy less than 6 GeV and electromagnetic energy less than 2 GeV. Small energy-dependent terms are added for very energetic muons with track momentum greater than 100 GeV. The distance between the extrapolated track and the stub is small, compatible with a muon trajectory. The actual value depends on the particular muon detector involved (CMP, CMU, CMX) but it is around 5 cm. The distance of closest approach of the reconstructed track to the beam line (d0) is less than 0.2 cm for tracks containing no silicon hits and less than 0.02 cm for tracks containing silicon hits (which provide better resolution).
As for electrons, the associated track's z position should be in the luminous region of the beam, within 60 cm of the nominal interaction point. The ratio of additional transverse energy in a cone of radius R = 0.4 around the track direction is less than 0.1. 4.4 Neutrinos Neutrinos escape detection entirely, but since the transverse momentum of the event is zero, and that includes neutrinos, we can indirectly measure their total pT by summing all the transverse energy (momentum) measured in the detector and assigning any imbalance to neutrinos or other (undiscovered) long lived neutral particles escaping detection. This quantity is called missing transverse energy and it is defined by

\not{E}_x = -\sum_{i=1}^{N} (E_i^{em} \sin\theta_i^{em} + E_i^{had} \sin\theta_i^{had}) \cos\phi_i   (4-5)

\not{E}_y = -\sum_{i=1}^{N} (E_i^{em} \sin\theta_i^{em} + E_i^{had} \sin\theta_i^{had}) \sin\phi_i   (4-6)

where E_i^{had}, E_i^{em} are the hadronic and electromagnetic energies of the ith calorimeter tower, \theta_i is the polar angle of the line connecting the event vertex to the center of the ith tower and \phi_i is a weighted average defined by

\phi_i = \frac{E_i^{em} \sin\theta_i^{em}\, \phi_i^{em} + E_i^{had} \sin\theta_i^{had}\, \phi_i^{had}}{E_i^{em} \sin\theta_i^{em} + E_i^{had} \sin\theta_i^{had}}   (4-7)

with \phi_i^{em}, \phi_i^{had} being weighted averages themselves, but intratower. In the calculation of \not{E}_T using the formulae above, only towers with energy above 0.1 GeV are used. This requirement is applied individually to the hadronic and electromagnetic components. The magnitude \not{E}_T is given by

\not{E}_T = \sqrt{\not{E}_x^2 + \not{E}_y^2}   (4-8)

Since muons do not leave much energy in the calorimeter and raw jet energy measurements are systematically low, it follows that the above quantity is only a first order approximation for the neutrinos' pT and needs further corrections. The first correction is directly related to jet corrections. If we scale the energy of jets by some factor because that is a better match to the parton energy, then in computing the total measured ET we should replace the raw jet energy measured by the calorimeter with the corrected energy as given by the jet energy corrections.
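The raw missing-ET sums of Eqs. 4-5 to 4-8 can be sketched as follows. This is a simplified sketch: a single per-tower φ is used in place of the intra-tower weighted average of Eq. 4-7, and the tower tuple format is a hypothetical stand-in for the real data.

```python
import math

def raw_met(towers, min_e=0.1):
    """Raw missing-ET components (Eqs. 4-5, 4-6, 4-8 sketch).
    Hypothetical tower format: (E_em, theta_em, E_had, theta_had, phi).
    Components below min_e (0.1 GeV) are dropped individually, as in the text."""
    mex = mey = 0.0
    for e_em, th_em, e_had, th_had, phi in towers:
        et = (e_em if e_em > min_e else 0.0) * math.sin(th_em) \
           + (e_had if e_had > min_e else 0.0) * math.sin(th_had)
        mex -= et * math.cos(phi)
        mey -= et * math.sin(phi)
    return mex, mey, math.hypot(mex, mey)
```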
These corrections are applied only to jets with ET above 8 GeV, and therefore all calorimeter towers not included within such jets do not receive any correction. The second correction is related to muons being minimum ionizing particles, leaving little energy in the calorimeter. Therefore a better estimate of the total missing ET of the event is obtained by removing calorimeter towers associated with muons from the above calculations and replacing their contribution with the measured pT of the muons. In this analysis we use the missing ET value only for event selection. It plays no role in the reconstruction of the invariant mass, and therefore more detailed studies on missing ET resolution are not included here. CHAPTER 5 EVENT SELECTION AND SAMPLE COMPOSITION The top quark decays so quickly that it does not have time to form any top hadrons, and therefore a tt final state appears under different signatures based on the decay chain of the top quark:

t \to W^+ b   (5-1)

W^+ \to l^+ \nu_l \quad \text{or} \quad W^+ \to q \bar{q}'   (5-2)

where l stands for one of the charged lepton types e, μ or τ, q stands for u or c, and q' for one of the "down" quarks d, s or b. The top quark can also decay to either a d or an s quark instead of b, but the combined branching ratios for these two processes are below 1% and generally ignored. Based on these decay modes we can see that a tt pair decay can appear under three different experimental signatures: Six jets, or sometimes more due to radiation, when both W bosons decay hadronically. This is the "hadronic" channel. Four jets or more, a charged lepton and missing ET when only one W boson decays hadronically. This is the "lepton+jets" channel. Two jets or more, two charged leptons of opposite sign and missing ET when both W bosons decay leptonically. This is the "dilepton" channel. The scheme is complicated a bit because the τ lepton also decays before detection and it can either "transform" into a jet, if it decays hadronically, or produce an electron or a muon and more neutrinos, if it decays leptonically. However, regardless of the τ decay mode, these events are difficult to identify and we decided to develop an algorithm which should work well with non-τ events only. The branching ratios are defined essentially by the W branching ratios and lead to the following numbers:

Table 5-1: tt decays
Category | Branching Ratio
Dilepton (excluding τ) | 5%
Dilepton (at least one τ) | 6%
Lepton+Jets (excluding τ) | 30%
τ+Jets | 15%
Hadronic | 44%

5.1 Choice of Decay Channel The choice of decay channel has to take into account two more factors, the intrinsic M_tt reconstruction resolution and the signal to background ratio (S/B). The reconstruction resolution is worse when more information is missing. Let us take a look at each channel individually: In the dilepton channel we measure well the lepton momenta, we have some uncertainty on the two b quark momenta due to various effects described in the previous chapter, and we don't measure at all the momenta of the two neutrinos (6 variables). In the lepton+jets channel we measure well the lepton momentum, we have some uncertainty on the four quark momenta and we don't measure at all the neutrino momentum (3 variables). In the hadronic channel we have some uncertainty on the six quark momenta. In each case we can reduce the number of unknown variables by applying transverse momentum conservation, which yields two constraints, but since this is the same across the channels we can just compare them based on the facts stated above. If non-tt backgrounds were absent we would certainly pick the hadronic channel, since it has the highest branching ratio and the least loss of information because no neutrinos escape detection. However the S/B ratio for Standard Model tt in the hadronic channel, without any b-tagging requirement, is about 1:20, while the S/B ratio for the lepton+jets channel is roughly 1:2 with a branching ratio (2/3) comparable to the hadronic channel. Even though the resolution analysis would also favor the hadronic channel, with such a large background it has, most probably, less potential than the lepton+jets channel. The dilepton channel has the most unknown variables, leading to the poorest reconstruction resolution, and a significantly lower branching ratio, even though it enjoys the best S/B, around 3:1. This qualitative analysis led us to pick the lepton+jets channel as the best candidate for this analysis at the beginning of Run 2, when we expected less than 1 fb-1 of integrated luminosity available for this dissertation. The final dataset on which this analysis is performed corresponds to 680 pb-1 of data. 5.2 Data Samples The data used in this analysis was collected between February 2002 and September 2005. A preselection of the data is carried out by the collaboration and bad runs in which various components of the detector malfunctioned are removed. The remaining good data corresponds to a total integrated luminosity of 680 pb-1. Two distinct datasets were used, the high-pT central electron dataset and the high-pT muon dataset. The electron dataset is selected by a trigger path that requires a Level-3 electron candidate with CEM ET > 18 GeV, Ehad/Eem < 0.125 and a COT track with pT > 9 GeV/c. The muon dataset is selected by a trigger path that requires a Level-3 muon candidate with pT > 18 GeV/c. We use only CMX muons or muons with stubs in both CMU and CMP subdetectors. Dilepton eμ events can appear in both datasets and one has to be careful not to double count them. 5.3 Event Selection In order to select tt events in the lepton+jets channel we have to require that each event contains at least four jets, an electron or a muon, and missing ET consistent with the presence of a neutrino, that is, a missing ET value well above the fluctuations around the null measurement.
Certainly this leaves a lot of room for maneuver with respect to the pT range and the minimum ET threshold required for each object. An exhaustive study for optimizing the cuts has not been done independently; however, we adopted the widely used cuts for Standard Model tt selection in the lepton+jets channel, which can be found in most CDF top analyses. These cuts are the result of a great amount of work throughout Run 1 and Run 2 and do a fine job at separating signal (Standard Model tt in this case) from backgrounds. There could be better cuts that improve the resonant tt S/B, but further studies would be necessary to understand the overall effect on sensitivity, and what would be an optimum for a 400 GeV/c2 mass resonance may not be so for an 800 GeV/c2 resonance. The task of studying in detail the impact of selection criteria on sensitivity will have to be addressed in a later version of the analysis. However, we did compare the sensitivity among three versions of jet selections and chose the best, as will be explained later.

Table 5-2: Event Selection
Object | Requirements
Electron | CEM, fiducial, not from a conversion; ET > 20 GeV + ID cuts
Muon | CMX or (CMU and CMP) detectors, not cosmics; pT > 20 GeV + ID cuts
Missing ET | Corrected missing ET > 20 GeV
Tight Jets | Corrected ET > 15 GeV, |η| < 2.0; at least four tight jets
Loose Jets | Corrected ET > 8 GeV, |η| < 2.4; not used for selection per se, but counted as jets

In Table 5-2 we present in succinct form the requirements [19] for the selection of electrons, muons, jets and the missing ET cut used. Positrons and antimuons follow the same selections, of course. By "fiduciality" of electrons it is meant that they are located in well instrumented areas of the towers, not near tower edges for instance. Conversion removal algorithms are used to remove electrons or positrons that come from photons hitting the various materials found before the calorimeter and producing e-e+ pairs. We are not interested in such electrons.
The removal per se is done by a standard CDF algorithm [20]. There is also an algorithm for eliminating cosmic ray muons [21], and it is used to veto such muons in our selection. We also require one and only one lepton and that the distance between the lepton track's z0 coordinate and the jets' vertex position is less than 5 cm, since consistency with tt production requires that all our objects come from the same interaction point. The identification criteria complete the event selection rules and were discussed in the previous chapter, together with the corrections for missing ET and jets. A simple study was performed in which we compared the sensitivities of three jet selection criteria: exactly four tight jets; four tight jets + extra jets (or none); three tight jets + extra jets (> 0). The first option provided the best sensitivity and we adopted it for our selection. 5.4 Sample Composition The leading Standard Model processes that can produce events passing these selection criteria are the following: W production associated with jets (W+jets), where the W decays leptonically producing a lepton and missing ET; tt events; multijet events where one jet fakes an electron (we will refer to these generically as QCD); diboson events such as WW, WZ and ZZ. The relative contribution of these processes can be derived if we know the theoretical cross-section and the acceptance for each of them.

Table 5-3: Cross-sections and acceptance
Process | Cross-section | Acceptance
SM tt | 6.7 pb | 4.5%
WW | 12.4 pb | 0.14%
WZ | 3.7 pb | (I 11' .
ZZ | 1.4 pb | 0.02%
W+jets | ? | 0.7%
QCD | ? | 0.7%

However, the W+jets and QCD cross-sections are not known theoretically with good precision; in other CDF top analyses the number of events from these processes is extracted from the data.
For this analysis we decided to use only the ratio of the expected number of events as derived by those analyses, and to fit for the absolute normalization, since in those analyses no room was left for any non-Standard Model process, and that could bias our search. The constraint used is given below:

\frac{N_{QCD}}{N_W} = 0.1   (5-3)

where N represents the expected number of events. Resonant tt acceptances are listed for comparison in Table 5-4. The search algorithm finds the most likely values for N_W and the signal cross-section as a function of resonance mass, and it is also able to compute the statistical relevance of the most likely signal cross-section value. We will explore it in detail in the next chapters.

Table 5-4: Signal acceptance
M_X0 (GeV/c2) | 450 | 500 | 550 | 600 | 650 | 700 | 750 | 800 | 850 | 900
Acceptance | 0.047 | 0.051 | 0.055 | 0.057 | 0.059 | 0.062 | 0.062 | 0.063 | 0.063 | 0.061

CHAPTER 6 GENERAL OVERVIEW OF THE METHOD AND PRELIMINARY TESTS This analysis contains two major pieces: one is the tt invariant mass (M_tt) reconstruction, and the second is the search for a non-Standard Model component in that spectrum, in particular a resonance contribution. The reconstruction is complicated because our parton level final state, after the top decay chain, is composed of two b-quarks, two light quarks, a neutrino and a charged lepton. Experimentally, we measure accurately only the lepton, which makes the task of reconstructing the tt invariant mass spectrum with good precision non-trivial. There are a total of seven poorly measured or unmeasured variables: four quark energies and three components of the neutrino momentum. In fact the jet direction is also smeared compared to the parton direction, but this is considered a second order effect compared to the above mentioned effects. Throughout the remainder of this dissertation we will always assume that the jet direction is a good approximation for the parton direction.
In the CDF Run 1 analysis [11] a somewhat straightforward approach was used to reconstruct the invariant mass spectrum. A χ2 fit was constructed based on jet resolutions and the knowledge of the W and t masses, and it was used to weight the unknown parton values. Minimizing the χ2 with respect to the free parameters (the unknowns listed above) provided an estimate for their most probable values. Then those values were used to compute the invariant mass of the system, M_tt. In this dissertation we use an innovative approach that employs matrix element information to reconstruct the tt invariant mass spectrum. The maximum information about any given process is contained in its differential cross-section, and it is therefore natural to think that by making use of more information in the analysis one can improve resolution and therefore sensitivity. Since we decided to pursue a model independent search, we will not be able to use any resonance matrix elements. We will use the Standard Model tt matrix element to help with weighting the various possible parton level configurations and extract an average value for the invariant mass, event by event. The invariant mass distribution obtained in such a way follows closely the Standard Model tt spectrum at parton level, and it is also a good estimator for the resonant tt events, as will be shown later. In order to validate the matrix element machinery we performed a series of tests by implementing a conceptually simpler matrix element analysis, namely the top mass measurement using matrix elements. Our tests include only Monte Carlo simulation studies, but they played a crucial role in pushing this analysis forward since our results were very similar to those of groups actually working on the top mass measurement using matrix element information. The remainder of this chapter will present these studies, which will also familiarize the reader with the technical details common to both analyses.
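Schematically, the event-by-event estimator just described is a weighted mean: each candidate parton-level configuration receives a weight derived from the Standard Model tt matrix element (together with PDFs and transfer functions), and the event's invariant mass estimate is the weighted average over configurations. A minimal sketch, with toy weights standing in for the real ME-based ones:

```python
def event_mtt(configs):
    """Weighted average of M_tt over sampled parton-level configurations.
    Each config is a (weight, mtt) pair; in the real analysis the weight
    comes from the SM ttbar matrix element machinery (toy values here)."""
    wsum = sum(w for w, _ in configs)
    return sum(w * m for w, m in configs) / wsum
```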
In the next chapter we will show how to extend the algorithm in order to reconstruct the M_tt spectrum. 6.1 Top Mass Measurement Algorithm The purpose of this algorithm is to build a top mass dependent likelihood for each event using the differential cross-section for the SM tt process. We will use the leading order (LO) term in the Standard Model tt cross-section formula. The final state is made up of the 6 decay products of the tt system. Let \vec{p}_i be their 3-momenta. We have the following equation representing the conservation of the transverse momentum of the system:

\sum_{i=1}^{6} \vec{p}_T^{\,i} = 0   (6-1)

This is a constraint on the seven unknown variables mentioned in the previous chapter, and it will be used in all the top mass tests we will show in this chapter. In reality we have initial and final state radiation (ISR and FSR), which leads to a non-zero pT value. Still, the average pT is null, so constraining it to 0 should not bias the result for the top mass but may only increase the statistical error. For the resonance search analysis, though, we will use the pT distribution from Monte Carlo simulation and integrate over it, since this helps narrow the reconstructed resonance peak. The probability of a given parton level final state configuration \vec{p}_i relative to other configurations is given by:

dP(\{\vec{p}_i\}|m_{top}) = \frac{1}{\sigma_{tot}(m_{top})} \sum_{k,l} \int dz_a\, dz_b\, f_k(z_a)\, f_l(z_b)\, d\sigma_{kl}(\{\vec{p}_i\};\, m_{top},\, z_a P,\, z_b \bar{P})   (6-2)

or, in short,

dP(\{\vec{p}_i\}|m_{top}) = \pi_{part}(\{\vec{p}_i\}|m_{top}) \prod_i d^3 p_i   (6-3)

Indices k, l cover the parton types in the proton and antiproton respectively. Summation over both indices is implied. The parton distribution functions (PDFs) are given by f_k(z), and P, \bar{P} designate the proton and antiproton momenta. Plugging in the differential cross-section formula

d\sigma_{kl} = \frac{|M_{kl}|^2}{4 E_k E_l |\vec{v}_k - \vec{v}_l|} (2\pi)^4 \delta^4\Big(\sum p\Big) \prod_i \frac{d^3 p_i}{(2\pi)^3 2 E_i}   (6-4)

one can obtain an explicit form for \pi_{part}(\{\vec{p}_i\}|m_{top}). The top mass (m_{top}) enters as a parameter.
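The structure of Eq. 6-2, a PDF-weighted sum of the differential cross-section over parton momentum fractions, can be illustrated with a toy Monte Carlo integration. Everything below is an illustrative stand-in: the "PDF" is a simple falling function and the "differential cross-section" is just a threshold factor, not the real integrand.

```python
import random

S = 1.96e3 ** 2  # squared Tevatron center-of-mass energy, GeV^2

def toy_pdf(z):
    """Toy falling parton density (not a real PDF)."""
    return (1 - z) ** 3 / z if 0 < z < 1 else 0.0

def toy_dsigma(shat, m_top):
    """Toy ttbar 'matrix element': zero below the 2*m_top threshold."""
    return 1.0 / shat if shat > (2 * m_top) ** 2 else 0.0

def toy_density(m_top, n=20000, seed=7):
    """MC estimate of sum over z_a, z_b of f(z_a) f(z_b) dsigma
    (the structure of Eq. 6-2), for a given top-mass hypothesis."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        za, zb = rng.uniform(1e-3, 1.0), rng.uniform(1e-3, 1.0)
        acc += toy_pdf(za) * toy_pdf(zb) * toy_dsigma(za * zb * S, m_top)
    return acc / n
```

With falling PDFs, a heavier mass hypothesis leaves less phase space above threshold, so the density decreases with the hypothesized mass.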
We combine the probability densities (\pi) of all events in the sample into a joint likelihood which is a function of m_{top}:

L(m_{top}) = \pi_1 \pi_2 \ldots \pi_N   (6-5)

We expect that maximizing this likelihood with respect to the parameter m_{top} yields its correct (input) value, as it should. The algorithm presented above is only a first step, since it assumes we know the parton level momenta, which is not true experimentally. But the treatment of more realistic situations, in which we don't measure the final state completely or accurately enough, follows the same line of thought: basically, we compute the probability density of observing a lepton+jets event:

\pi_{obs}(\vec{j}_1, \vec{j}_2, \vec{j}_3, \vec{j}_4, \vec{l}, \not{E}_T | m_{top}) = \sum_{p \in S_4} \int \pi_{part}(\vec{p}_{p(1)}, \vec{p}_{p(2)}, \vec{p}_{p(3)}, \vec{p}_{p(4)}, \vec{l}\, |\, m_{top}) \prod_{i=1}^{4} T_i(\vec{j}_i | \vec{p}_{p(i)})\, d^3 p_{p(i)}   (6-6)

In this formula we assume that the first two arguments of the parton density (\pi_{part}) function represent the b-quark momenta, the jet 3-momenta are denoted by \vec{j} and the parton 3-momenta by \vec{p}. T_i(\vec{j}|\vec{p}) is the probability density that a parton with 3-momentum \vec{p} is measured as a jet with 3-momentum \vec{j}. These functions are called parton-to-jet transfer functions. We use different transfer functions for b quarks and lighter quarks, so we added an index to differentiate the two. With our conventions T_1 = T_2 = T_b and T_3 = T_4 = T_{light}. In practice we approximate the parton direction with the jet direction, as mentioned earlier, which simplifies the calculations a bit. Even with b-tagging information available, there is no unique assignment of jets to partons. This indistinguishability is addressed by summing over all allowed permutations using the p ∈ S_4 permutation variable. A permutation is allowed if it doesn't contradict available b-tagging information. The procedure to extract the top mass is the same as in the idealized case of a perfect measurement of the final state discussed before, that is, combine all events in a joint likelihood and maximize it with respect to the parameter m_{top}.
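The joint-likelihood maximization of Eq. 6-5 can be sketched with a toy per-event density, where a Gaussian response stands in for the ME-based density; the scan grid and σ are illustrative:

```python
import math

def best_mass(events, density, scan):
    """Scan mass hypotheses, maximizing the joint log-likelihood
    sum_i log pi(event_i | m) -- the log of Eq. 6-5."""
    best, best_ll = None, -math.inf
    for m in scan:
        ll = sum(math.log(density(ev, m)) for ev in events)
        if ll > best_ll:
            best, best_ll = m, ll
    return best

def gauss_density(ev, m, sigma=10.0):
    """Toy per-event density: Gaussian response around the hypothesized mass."""
    return math.exp(-0.5 * ((ev - m) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
```

For a Gaussian density the joint likelihood peaks at the sample mean, so the scan recovers the average of the per-event mass estimates.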
Figure 6-1: Main leading order contribution to tt production in p\bar{p} collisions at \sqrt{s} = 1.96 TeV 6.1.1 The Matrix Elements (ME) The leading order matrix element for the process q\bar{q} \to t\bar{t} \to W^+ b\, W^- \bar{b} \to q\bar{q}' b\, l \nu \bar{b} (Fig. 6-1) is not easily calculable analytically without making any approximation. We found it useful to compute the ME directly using explicit spinors and Dirac matrices because this allows us to compute new, non-Standard Model matrix elements very easily in case we wanted to incorporate them in the algorithm later on. Dedicated searches for specific models (spin 0 resonance, spin 1 resonance, color octet resonance) would be interesting as well, but we will not address them in this dissertation. Ignoring numerical factors, the quark annihilation diagram amplitude is given by

M_{q\bar{q}} \sim \frac{\bar{v}(p_{\bar{q}}) \gamma_\mu u(p_q)}{(p_q + p_{\bar{q}})^2} \left[ \bar{u}(p_b) \gamma^\nu (1-\gamma^5) \frac{\not{p}_t + m_t}{p_t^2 - m_t^2 + i m_t \Gamma_t} \gamma^\mu \frac{-\not{p}_{\bar{t}} + m_t}{p_{\bar{t}}^2 - m_t^2 + i m_t \Gamma_t} \gamma^\rho (1-\gamma^5) v(p_{\bar{b}}) \right] \frac{-g_{\nu\alpha} + p_{W^+ \nu}\, p_{W^+ \alpha}/m_W^2}{p_{W^+}^2 - m_W^2 + i m_W \Gamma_W}\, \bar{u}(p_u) \gamma^\alpha (1-\gamma^5) v(p_d)\; \frac{-g_{\rho\beta} + p_{W^- \rho}\, p_{W^- \beta}/m_W^2}{p_{W^-}^2 - m_W^2 + i m_W \Gamma_W}\, \bar{u}(p_l) \gamma^\beta (1-\gamma^5) v(p_\nu)   (6-7)

If we consider the masses of the light quarks and leptons negligible, we can simplify the expression of the W propagators (the p_W p_W terms drop against the massless currents), so the ME reads

M_{q\bar{q}} \sim \frac{\bar{v}(p_{\bar{q}}) \gamma_\mu u(p_q)}{(p_q + p_{\bar{q}})^2} \left[ \bar{u}(p_b) \gamma^\nu (1-\gamma^5) \frac{\not{p}_t + m_t}{p_t^2 - m_t^2 + i m_t \Gamma_t} \gamma^\mu \frac{-\not{p}_{\bar{t}} + m_t}{p_{\bar{t}}^2 - m_t^2 + i m_t \Gamma_t} \gamma^\rho (1-\gamma^5) v(p_{\bar{b}}) \right] \frac{-g_{\nu\alpha}}{p_{W^+}^2 - m_W^2 + i m_W \Gamma_W}\, \bar{u}(p_u) \gamma^\alpha (1-\gamma^5) v(p_d)\; \frac{-g_{\rho\beta}}{p_{W^-}^2 - m_W^2 + i m_W \Gamma_W}\, \bar{u}(p_l) \gamma^\beta (1-\gamma^5) v(p_\nu)   (6-8)

We tested our numerical calculation using explicit Dirac matrices and spinors against the analytical calculation for the squared amplitude by Barger [22], and we found the two calculations in good agreement. That calculation uses the narrow width approximation (NWA) in treating the top quark propagators, and therefore the two methods are not equivalent when one or both of the top quarks are off-shell. We also tested our implementation on simpler QED matrix element calculations and it produced results identical with their exact analytical expressions. Figure 6-2: Gluon-gluon leading order contributions to tt production in p\bar{p} collisions at \sqrt{s} = 1.96 TeV The gluon-gluon production mechanism is described by three diagrams in Fig.
6-2, in which the top decays have not been depicted explicitly. The matrix element needed in the cross-section formula for the gluon-gluon production mechanism has the structure

|M_{gg}|^2 = \sum_{color} |A_1 + A_2 + A_3|^2   (6-9)

where A_i are the amplitudes corresponding to the three diagrams. The color sum covers all possible color configurations for the gluons and quarks. This expression is not optimal with regard to CPU time if we were to do these sums as they stand. We can rewrite it as

|M_{gg}|^2 = \sum_{color} \left( |A_1|^2 + |A_2|^2 + |A_3|^2 + 2\,\mathrm{Re}\{A_1 A_2^*\} + 2\,\mathrm{Re}\{A_1 A_3^*\} + 2\,\mathrm{Re}\{A_2 A_3^*\} \right)   (6-10)

This form is very convenient: the color sums can be evaluated for each individual term regardless of the kinematics, because the amplitudes factorize as A_i = A_i^{kin} A_i^{color}. We can write again

|M_{gg}|^2 = f_1 |A_1^{kin}|^2 + f_2 |A_2^{kin}|^2 + f_3 |A_3^{kin}|^2 + 2\,\mathrm{Re}\{ f_{12} A_1^{kin} A_2^{kin*} + f_{13} A_1^{kin} A_3^{kin*} + f_{23} A_2^{kin} A_3^{kin*} \}   (6-11)

All the color summing is encoded in the six constants f_i, f_{ij}. We found these to be 3/16, 1/12, 1/12, -3i/16, 3i/16 and -1/48 respectively. We cross-checked against the analytical formula available for the 2 → 2 process described in the diagrams above (ignoring the top decays) and found them in perfect agreement. The procedure just presented works as well for the 2 → 6 process, and this is how we compute it. 6.1.2 Approximations: Change of Integration Variables The method as presented involves seven integrals (three over the neutrino 3-momentum and four over the quark momenta) and summing over combinatorics. If, for instance, we choose to set the tt transverse momentum to zero, that amounts to two constraints reducing the number of integrals by two. Or we could choose to set the W or top on shell, depending on the level of precision and speed desired. Even from a purely numerical point of view, it would be easier to integrate only around the top and W mass poles rather than over the large range of the original variables mentioned before. For all these reasons a change of variables was performed.
The new variables are the tt transverse momentum and the intermediate particle masses m_{W1}, m_{W2}, m_{t1}, m_{t2}. This is a set of only six new variables, which means we need to keep one of the initial variables unchanged (one of the light quarks' energy). The change of variables and the associated Jacobian calculations are detailed in the Appendix. Since the calculations are a bit lengthy, we wanted to make sure no mistake was made, so we used simulated events, where all variables are available and any change of variables can be readily checked. We found that the change of variables implementation works very well. In the implementation of the algorithm we always use these variables, both for these preliminary top mass tests and for the M_tt reconstruction. 6.2 Monte Carlo Generators For some of the top mass tests we used CompHep 4.4 [23], which is a matrix element based event generator. One can select explicitly which diagrams to use for event generation. CompHep preserves all spin correlations and off-shell contributions since it doesn't attempt to simplify the diagrams in any way. CompHep generates events separately for each diagram: u\bar{u} \to t\bar{t}, d\bar{d} \to t\bar{t} and gg \to t\bar{t}. We also used Pythia [24] and Herwig [25] official CDF samples ("Gen5"), but the first tests for top mass were done with parton level CompHep events and then with Gaussian smeared partons. The Gaussian smearing of parton energies is meant to simulate the relationship between the jet and parton energies.
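The Gaussian smearing of parton energies can be sketched as follows (a 20% relative resolution is the value used in the smeared-parton tests below; the seed and the truncation at zero are illustrative choices):

```python
import random

def smear_partons(parton_energies, resolution=0.20, seed=42):
    """Apply Gaussian relative smearing to parton energies to mimic the
    jet-parton energy relationship (sketch; negative energies truncated)."""
    rng = random.Random(seed)
    return [max(0.0, e * (1.0 + rng.gauss(0.0, resolution))) for e in parton_energies]
```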
6.3 Basic Checks at Parton Level Figure 6-3: Reconstructed top mass from 250 pseudoexperiments of 20 events at parton level with m_t = 175 GeV/c2. The left plot is derived using only the correct combination, while the right plot uses all combinations. Finding the top mass when the final state is known or measured perfectly is straightforward, so we expect our method to produce the correct answer without any bias. Using u\bar{u} \to t\bar{t} CompHep events, we performed 250 pseudoexperiments of 20 events each, which means that we extracted the top mass from a joint likelihood of 20 events each time. We repeated this exercise for various generator level top masses to make sure there is no mass dependent bias. First, we used only the correct combination in the likelihood, that is, we not only assumed to have measured the parton 3-momenta ideally, but also identified the quark flavors. For m_t = 175 GeV the reconstructed mass is shown in the left plot of Figure 6-3. As can be seen, we get back the exact input mass. Similarly good results were obtained for other masses. Next we let all 24 combinations contribute to the event likelihood by summing over all permutations and repeated the same exercise. The reconstructed top mass is barely modified by the inclusion of all combinations, as shown in the second plot of Figure 6-3. Again, tests on other samples with different top masses didn't produce any surprise. These results are summarized in Figure 6-4, showing the output (reconstructed) mass vs input mass when using all combinations.
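A toy version of this pseudoexperiment machinery (ensembles of 20 events, a maximum-likelihood fit per ensemble, and a pull for each fit) can be sketched as follows. With a Gaussian event density the ML estimate reduces to the sample mean with error σ/√n; all parameter values are illustrative.

```python
import random

def pseudoexperiments(m_true=175.0, sigma=10.0, n_events=20, n_pe=250, seed=3):
    """Toy pseudoexperiments: draw ensembles of n_events 'reconstructed
    masses', take the Gaussian ML estimate (the sample mean, with error
    sigma/sqrt(n)), and return the pull (m_fit - m_true)/err per ensemble."""
    rng = random.Random(seed)
    err = sigma / n_events ** 0.5
    pulls = []
    for _ in range(n_pe):
        sample = [rng.gauss(m_true, sigma) for _ in range(n_events)]
        m_fit = sum(sample) / n_events
        pulls.append((m_fit - m_true) / err)
    return pulls
```

An unbiased fit with correctly estimated errors gives pulls with mean 0 and width 1, which is the check quoted for the real analysis.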
The slope is consistent with 1.0 and the intercept is consistent with 0, which shows that there are no mass dependent effects, at least not in the mass range of interest. It is perhaps useful to remind the reader that the purpose of these studies is to establish the validity of the matrix element calculations and the overall correctness of the implementation of a non-trivial algorithm; otherwise they are quite simple. We also looked at the rms of the pull distributions for each mass and found it to be 1.0 within errors, which is a more compelling indication that we are modeling these events very well with our likelihood.

Figure 6-4: Reconstructed top mass vs. true top mass from pseudoexperiments of 20 events using all 24 combinations, at parton level

6.4 Tests on Smeared Partons

A more realistic test involves a rudimentary simulation of the calorimeter response, obtained by smearing the parton energies (the four final state quarks' energies). Also, the neutrino 3-momentum information is ignored in reconstruction. We used 20% Gaussian smearing, which is quite realistic when compared to the rms of the parton-to-jet transfer functions. The tt transverse momentum was taken to be zero and the top quark was forced on shell, reducing the number of integrals to just three. We used the same uu -> tt CompHep events for these tests, but later we checked with Herwig events and the results were similar. The same pseudoexperiments of 20 events were performed, and in Figure 6-5 we show the reconstructed mass vs. the true mass for the right combination and for all 24 combinations.
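The pull check can be illustrated with a toy pseudoexperiment loop: 20% Gaussian smearing, 250 pseudoexperiments of 20 events, and pull = (estimate − truth)/error, whose width should come out near 1 when the errors are modeled correctly. The simple mean estimator below is a stand-in for the actual likelihood fit:

```python
import random
import statistics

random.seed(42)
TRUE_E = 100.0           # toy "parton energy" (hypothetical value)
SMEAR = 0.20 * TRUE_E    # 20% Gaussian smearing, as used in the text

pulls = []
for _ in range(250):                                           # 250 pseudoexperiments...
    sample = [random.gauss(TRUE_E, SMEAR) for _ in range(20)]  # ...of 20 events each
    est = statistics.mean(sample)       # toy estimator standing in for the fit
    err = SMEAR / 20 ** 0.5             # expected statistical error on the mean
    pulls.append((est - TRUE_E) / err)

pull_width = statistics.pstdev(pulls)   # should be ~1 if errors are modeled well
```

A pull width significantly different from 1 would signal that the quoted per-pseudoexperiment errors are over- or under-estimated.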
Figure 6-5: Reconstructed top mass vs. true top mass from pseudoexperiments of 20 events with smearing. The left plot is derived using only the correct combination, while the right plot uses all combinations.

We fit the pulls from the pseudoexperiments with a Gaussian and the returned width was 1.09 ± 0.07 for the 175 GeV sample, again consistent with 1. We observed similar pulls for other masses as well. The purpose of this set of tests was to validate the new additions to the algorithm implementation: transfer functions, the transformation of variables and the integration over unmeasured quantities. The success of these tests gives us confidence that the more realistic version of the algorithm is well designed and well implemented.

6.5 Tests on Simulated Events with Realistic Transfer Functions

6.5.1 Samples and Event Selection

We used CDF official tt samples generated with the Pythia and Herwig event generators. We apply the event reconstruction and event selection described in the previous chapters, requiring each event to contain one and only one reconstructed charged lepton, at least four tight jets and missing ET > 20 GeV.

6.5.2 Transfer Functions

Transfer functions are necessary when we run over simulated events or data in order to describe the relationship between the final state quark momenta and the jet momenta. In this case we are interested in the probability distribution of the jet energy given the parton energy. This distribution varies with the energy and pseudorapidity of the parton, so we bin it with respect to these variables. Since the detector is forward-backward symmetric we only need to bin in absolute pseudorapidity.
We have only three bins in absolute pseudorapidity, with boundaries at 0.7, 1.3 and 2.0. The parton energy bins are determined based on the statistics available, requiring a minimum of 3000 parton-jet pairs per energy bin. This allows for a rather smooth function which can be fit well. For example, the central region b-quark energy bin boundaries are chosen to be 10 GeV, 37 GeV, 47 GeV, 57 GeV, 67 GeV, 77 GeV, 87 GeV, 97 GeV, 107 GeV, 117 GeV, 128 GeV, 145 GeV, and 182 GeV. Anything above 182 GeV is considered part of one more bin. We should perhaps emphasize that these are parton energy bins. In order to derive the transfer functions we need to match jets to partons first. For matching purposes we require that all four final state quarks are matched uniquely to jets in a cone of 0.4, that is, the ΔR distance between the parton direction and the jet direction is less than 0.4. If this requirement is not met, we do not use the event for deriving transfer functions. The direction smearing is considered a second order effect and ignored, which amounts to identifying the quark direction with the jet direction. This approximation can be corrected to some degree by using "effective widths" for the W and top instead of the theoretical values. In other words, the smearing in direction leads to a smearing of the mass peak even when there is no energy smearing. The effect can be quantified based on simulation and a correspondingly larger width can be employed in the analysis. In fact we do use such a larger width (4 GeV) for the hadronic W mass in our resonance search analysis. Our studies showed that it narrows the resonance peak a bit, but no such tests were performed for the top mass.
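The adaptive parton-energy binning described above (at least 3000 parton-jet pairs per bin, with an overflow bin at the top) can be sketched as follows; the edge-placement convention is an assumption for illustration:

```python
def energy_bin_edges(parton_energies, min_pairs=3000):
    """Choose parton-energy bin edges so that each bin holds at least
    `min_pairs` matched parton-jet pairs; a sparsely populated overflow
    is merged into the last bin (mirroring the '182 GeV and above' bin
    described in the text)."""
    es = sorted(parton_energies)
    edges = [es[0]]
    count = 0
    for e in es:
        count += 1
        if count >= min_pairs and e > edges[-1]:
            edges.append(e)   # close the current bin here
            count = 0
    # if the overflow has too few pairs, absorb it into the previous bin
    if count < min_pairs and len(edges) > 1:
        edges.pop()
    return edges
```

Each bin then has enough pairs to produce a smooth, fittable distribution, at the cost of bins of unequal width.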
Figure 6-6: Light quark transfer functions (in x = 1 - Ejet/Eparton), binned in three absolute pseudorapidity regions [0, 0.7], [0.7, 1.3] and [1.3, 2.0]

In Figures 6-6 and 6-7 we show examples of transfer functions for light quarks and b-quarks, respectively. We fit the shape with a sum of three Gaussians, which works fine. The variable plotted is 1 - Ejet/Eparton, since it varies less with the parton energy.

Figure 6-7: b-quark transfer functions (in x = 1 - Ejet/Eparton), binned in three absolute pseudorapidity regions [0, 0.7], [0.7, 1.3] and [1.3, 2.0]

It is related to the distribution we introduced as the "transfer function" via a simple change of variable. Our transfer functions are between the parton energy and the corrected jet energy, as explained in Chapter 4. With these tools in place we ran similar pseudoexperiments on the Herwig sample. The returned mtop value was 178.1 ± 0.4 GeV/c2 and the pulls' width was 1.05 ± 0.09. The correct (generated) mass for this sample is 178 GeV/c2. We did not run any other tests because the only change we made in the algorithm at this stage was to plug in realistic transfer functions and run it over fully simulated events. As such, the only new thing that needed testing was the derivation of the realistic transfer functions based on Monte Carlo simulation. This is by far a simpler business than the implementation of the matrix element calculations and the change of variables, together with the rest of the machinery. Based on the results presented above we concluded that our transfer functions' implementation is fine and that the algorithm as a whole works very well, and is properly constructed and implemented.
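The change of variable relating the fitted shape in x = 1 - Ejet/Eparton to the density p(Ejet|Eparton) is just |dx/dEjet| = 1/Eparton. A sketch with a single Gaussian (the analysis fits a sum of three; the mean and width below are hypothetical):

```python
import math

def tf_x(x, mean=0.02, sigma=0.15):
    # shape fitted in x = 1 - E_jet/E_parton (one Gaussian here as a
    # stand-in for the triple-Gaussian fit; parameters are hypothetical)
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_jet_given_parton(e_jet, e_parton):
    # change of variable x -> E_jet brings in |dx/dE_jet| = 1/E_parton
    x = 1.0 - e_jet / e_parton
    return tf_x(x) / e_parton

# normalization check: integrate p(E_jet | E_parton = 80 GeV) numerically
e_parton = 80.0
step = 0.05
norm = sum(p_jet_given_parton(e, e_parton) * step
           for e in [i * step for i in range(1, 4000)])
```

The numerical integral over the jet energy coming out equal to 1 confirms that the Jacobian factor is applied consistently.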
Also, our top mass results on Monte Carlo were very similar to those of analyses performing the top mass measurement using matrix elements. In the next chapter we will show how the top mass matrix element algorithm can be extended to compute the tt invariant mass, Mtt.

CHAPTER 7
MTT RECONSTRUCTION

7.1 Standard Model tt Reconstruction

All the tools developed for the top mass can be turned around to reconstruct any kinematical variable of interest, in particular Mtt. Let us assume, for simplicity of presentation, that we know which is the right combination, that is, we know how to match jets to partons. In that case

P({p}, {j}) = P_parton({p}) T({j}|{p}) (7-1)

defines the probability that an event has the parton momenta {p} and is observed with the jet momenta {j}. In our notation {p} and {j} refer to the sets of all parton and jet 3-momenta. Integrating over the parton variables, given the observed jets, we obtain the probability used for the top mass measurement. However, the expression provides a weight for any parton configuration once the jets are measured. Any quantity that is a function of the parton momenta can be assigned a probability distribution based on the "master" distribution above, Mtt included, and this is our approach. Technically this amounts to the following integration:

p(x|{j}) = ∫ P_parton({p}) T({j}|{p}) δ(x - Mtt({p})) {dp} (7-2)

with p(x|{j}) being the Mtt probability distribution given the observed jet momenta. It should be noted that if we remove the delta function we retrieve the event probability formula used for the top mass measurement method presented before, and therefore all the validation tests presented before are just as relevant for Mtt reconstruction. The modifications to the algorithm are also minimal; there is nothing much to be added except histogramming Mtt during the integration. In other words, we obtain an invariant mass distribution per event. We will use the mean of this Mtt distribution as our event Mtt value.
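In practice, the delta function in Eq. 7-2 amounts to filling a weighted histogram of Mtt while integrating and taking its mean as the event value. A toy sketch, with a Gaussian stand-in for the real integrand P_parton · T:

```python
import math
import random

random.seed(7)

def pseudo_weight(m, jets_mtt, resolution):
    # stand-in for P_parton({p}) * T({j}|{p}): a Gaussian in Mtt around the
    # jet-level value (hypothetical; the analysis uses the full matrix
    # element and the fitted transfer functions here)
    return math.exp(-0.5 * ((m - jets_mtt) / resolution) ** 2)

def event_mtt(jets_mtt, resolution=60.0, n_samples=20000):
    """Toy version of Eq. 7-2: sample parton-level Mtt hypotheses, weight
    each one, and take the weighted mean as the event Mtt value."""
    total_w = 0.0
    total_wm = 0.0
    for _ in range(n_samples):
        m = random.uniform(300.0, 1000.0)           # scanned parton-level Mtt
        w = pseudo_weight(m, jets_mtt, resolution)  # weight of this configuration
        total_w += w
        total_wm += w * m
    return total_wm / total_w
```

The same accumulators produce the full per-event distribution if the weights are binned in m instead of only averaged.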
Before running on all events in our various samples and producing templates, we want to make sure the Mtt reconstruction algorithm works well. We selected events in which we could match partons uniquely to jets and which contained exactly four tight jets. These are the circumstances that allow full consistency between the reconstruction algorithm and the events reconstructed, and thus a self-consistent test of the method, which is what we intend to show here.

Figure 7-1: Mtt reconstruction for the correct combination and for events with exactly four matched tight jets.

We ran the algorithm on these selected events and were able to reconstruct Mtt back at the parton level, as can be seen in the left plot of Figure 7-1. Both plots are produced after running on events selected from the CDF official Pythia sample. Since we use the Standard Model tt matrix element we do expect to reconstruct these events very well, and that seems to be the case indeed, as is also shown in the right plot of Figure 7-1. There the difference between the reconstructed value and the true value is histogrammed in order to see the intrinsic resolution and check for any bias. The results are very good and we consider the testing and validation part of the analysis concluded.

Figure 7-2: Mtt reconstruction including all events

Since in reality we do not know which is the correct combination, we adopt the top mass method approach and sum over all allowed combinations in formula 7-2. We expect the right combination to contribute more than the others, as happens for the top mass analysis.
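The sum over the allowed combinations amounts to looping over jet-parton assignments; with four jets assigned to the four final-state quarks there are 24 of them. A sketch, with a placeholder weight function:

```python
from itertools import permutations

def event_weight(jets, combination_weight):
    """Sum the per-combination weights over all 24 assignments of the four
    jets to the partons (b_lep, b_had, q1, q2), as in formula 7-2 when the
    correct combination is unknown. `combination_weight` is a stand-in for
    the integrated matrix-element weight of one assignment."""
    total = 0.0
    n_comb = 0
    for assignment in permutations(jets, 4):
        total += combination_weight(assignment)
        n_comb += 1
    return total, n_comb
```

With the real weight, the correct assignment dominates the sum, which is why including all 24 combinations changes the result so little.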
The Mtt as reconstructed for all events, without any of the requirements mentioned above, is shown in Figure 7-2. This is what we expect to be the Standard Model contribution to the Mtt spectrum in the data. Some examples of event by event reconstruction are shown in Figure 7-3. The 4th event is a dilepton event and the 8th is a hadronic event.

Figure 7-3: Examples of reconstruction, event by event.

Interestingly, they have larger widths than the others, which are all lepton+jets events. Adding combinations together can lead to double or multiple peaks. The top mass used on data is mtop = 175 GeV/c2; therefore this is the value used in our algorithm when producing the Mtt templates corresponding to the various processes. Figure 7-4 shows the actual template used for fitting the data, derived by fitting 5000 reconstructed events.

Figure 7-4: Mtt template for Standard Model tt events.

Certain approximations were made, since we cannot perform all the integrals which appear in the formal presentation: the CPU time involved would be astronomical, even using the computing farms commonly available to CDF users. This is so because we need to model the Mtt spectrum for 10 signal samples and a couple of backgrounds, and then perform the systematics studies, which require recomputing the templates each time.
As mentioned in the previous chapter, the implementation uses a different set of variables for integration, namely the masses of the two W bosons, the masses of the two top quarks, the total transverse momentum of the tt system and one "W" quark energy. Studies showed that the best approach, given the CPU time limitations, is to set the two top quarks' masses on shell and also set on shell the mass of the W which decays leptonically, leaving us with four integrals to perform. Even so, for the systematics studies we needed about 100,000 CPU hours and we used the CDF computing farms extensively.

7.2 Signal and Other Standard Model Backgrounds

The Monte Carlo samples for the signal and all other Standard Model backgrounds (besides tt) are run through the same algorithm, thus producing new distributions corresponding to signal and backgrounds respectively. Even though the signal is not 100% correctly modeled by the Standard Model tt matrix element, we expect the reconstruction to work quite well, since a significant part of the matrix element is concerned with the top and W decays, and that does not depend on the specific tt production mechanism. Especially in the case of a spin 1 resonance the differences between the correct resonance matrix element and the Standard Model matrix element are minimal, since the gluon is a spin 1 particle after all. Even though the methods presented in this dissertation can be applied to more general cases, the actual limits we derive at the end are valid for vector resonances, because the Monte Carlo signal samples were generated with a vector resonance model. We want to remind the reader that it was our initial decision to do a model independent search anyway. The results are not completely model independent only because of the Monte Carlo generators used to produce the signal samples.
Applying the reconstruction to non-tt events does not produce any particularly meaningful distributions, but these are backgrounds needed to model the data. In what follows we briefly describe the results obtained when running this reconstruction method on the various backgrounds needed in our analysis and presented in a previous chapter.

Signal samples

We generated signal samples with resonance masses from 450 GeV/c2 up to 900 GeV/c2, every 50 GeV/c2, using Pythia [24]. The reconstructed Mtt for all of them is shown in Figure 7-16. The peaks match the true value of the resonance mass very well.

Figure 7-5: Reconstructed invariant mass for a resonance with MX0 = 650 GeV. The left plot shows all events passing the event selection, while the right plot shows only matched events.

In order to better understand the low mass shoulder we split these events into three orthogonal subsamples: events with all four jets matched to partons, mismatched events and fake lepton+jets events (dilepton or hadronic events passing the lepton+jets event selection). The method is expected to work well on matched events, and indeed this is what we see in Figures 7-5 and 7-6. The shoulder is given by the superposition of mismatched events and fake lepton+jets events on top of the nice peak from matched events. The generated width for the resonance was 1.2% of the resonance mass. As can be seen, the reconstructed resonance mass is much wider, due to the relatively large uncertainties in the jet measurements and to not measuring the neutrino z component at all. However, the peak remains prominent enough to be easily distinguished from the exponentially dropping Standard Model processes.

W+jets samples

We use the CDF official W + 4 partons ALPGEN [26] samples, which are then run through Herwig for parton showering.
Figure 7-6: Reconstructed invariant mass for a resonance with MX0 = 650 GeV. The left plot shows mismatched lepton+jets events and the right plot shows non-lepton+jets events.

We also looked at W + 2b + 2 partons, but decided not to include it explicitly, since the shape is very similar and the expected contribution is at the level of 1-2%, compared to 60% or more for W + 4 partons. These can be seen in Figures 7-7, 7-11 and 7-12, and a direct comparison of the fit templates is shown in Figure 7-15. So all W+jets events are modeled by the W + 4 partons sample.

QCD

For QCD we used the data to extract the shape. Multijet data is scanned for jets with a high electromagnetic fraction, which are reinterpreted as electrons, based on the assumption that the jets that do fake an electron are very similar to the ones just mentioned. With that done, the usual event selection is applied and the events are reconstructed just like the others. This process produces the template shown in Figure 7-9. The shape is not much different from W + 4 partons; in fact they are quite close, as was assumed in the CDF Run 1 analysis, where the QCD template was ignored altogether.

Dibosons WW, WZ and ZZ

The cross-sections for the WW, WZ and ZZ processes are 12.4 pb, 3.7 pb and 1.4 pb. The acceptances follow the same trend, with 0.14%, 0.11% and 0.02% respectively. Moreover, the WZ and ZZ official samples have fewer events left after event selection and the fits have larger errors. Given that WW dominates anyway, we decided to use only that template but to increase the acceptance such that the expected number of events covers the small WZ and ZZ contributions. Since the whole diboson contribution is almost negligible, this procedure is not expected to have any impact other than simplifying the analysis.
It can be added that the WW template, which is shown in Figure 7-10, is also very similar to the Standard Model tt, W + jets and QCD templates. We put all of them on top of each other for easy comparison in Figure 7-14. All these templates are used to fit the data and extract limits. The procedure is explained in the next chapter.

Figure 7-7: W+4p template (electron sample)

Figure 7-8: W+4p template (muon sample)

Figure 7-9: QCD template

Figure 7-10: WW template

Figure 7-11: W+2b+2p template (electron sample)

Figure 7-12: W+2b+2p template (muon sample)
Figure 7-13: W+4p template with alternative Q2 scale (electron sample)

Figure 7-14: All Standard Model background templates used in the analysis

Figure 7-15: W+2b+2p template vs. W+4p template. W+2b+2p was ignored since the expected contribution is at the level of 1-2% and the template is very similar to the W+4p template

Figure 7-16: Signal templates

CHAPTER 8
SENSITIVITY STUDIES

In this chapter we present the algorithm used for establishing lower and upper limits on the signal cross-section times branching ratio at any desired confidence level (CL). We used a Bayesian approach which was shared with other CDF analyses. The main idea and suggestions for the implementation can be found in [27, 28].

8.1 General Presentation of the Limit Setting Methodology

For generality we will assume that the observed data quantities are contained in a vector n = (n1, n2, ..., n_nbins), which in our case corresponds to the bin contents of the Mtt histogram.
The modeling of the data contains one unknown parameter, and we want to be able to make a probabilistic statement about that parameter once we look at the data. In other words, we would like to obtain a posterior probability distribution for the parameter. We will call this parameter σ, because in our particular case it corresponds to the signal cross-section times branching ratio. It is often the case that other parameters are involved, and their values are known with some uncertainty. We will assume their values are normally distributed, with the uncertainty being the standard deviation. We will denote these parameters ν = (ν1, ν2, ...) and call them nuisance parameters. We formalize our prior knowledge of the nuisance parameters and σ by introducing the prior probability density π(σ, ν). In our case this can be factorized as a product of Gaussians for the nuisance parameters and a flat distribution for σ. Bayes' theorem connects the likelihood of the measurement to the posterior density of σ and ν after the measurement:

p(σ, ν|n) = L(n|σ, ν) π(σ, ν) / p(n) (8-1)

where p(n) is the marginal probability density of the data,

p(n) = ∫ dν ∫ dσ L(n|σ, ν) π(σ, ν) (8-2)

In these equations p(σ, ν|n) stands for the posterior density and L(n|σ, ν) for the likelihood. We are not interested in the nuisance parameters, so we integrate over them,

p(σ|n) = ∫ dν p(σ, ν|n) (8-3)

to obtain the sought posterior probability density for the parameter of interest σ. From this posterior p(σ|n) we can extract the information we need: the most probable value, upper and lower limits at any confidence level, etc.

8.2 Application to This Analysis

In our analysis the data n we observe is the binned Mtt spectrum; the parameter of interest σ is the resonant tt production cross section times branching ratio, σX0 · BR(X0 -> tt); and the nuisance parameters are the integrated luminosity, acceptances, and cross-sections.
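On a discrete grid, the chain from Eq. 8-1 to Eq. 8-3 can be sketched directly. A toy Gaussian likelihood and hypothetical numbers stand in for the real ingredients (the analysis itself uses the binned Poisson likelihood of the next section and a Monte Carlo integration):

```python
import math

def marginal_posterior(like, sig_grid, nu_grid, nu0, dnu):
    """Grid sketch of Eqs. 8-1 to 8-3: p(sigma, nu | n) is proportional to
    L(n|sigma, nu) * pi(sigma, nu), with a flat prior in sigma and a
    Gaussian prior in the nuisance parameter nu; summing over nu gives the
    marginal posterior p(sigma | n)."""
    post = []
    for s in sig_grid:
        total = 0.0
        for nu in nu_grid:
            prior_nu = math.exp(-0.5 * ((nu - nu0) / dnu) ** 2)
            total += like(s, nu) * prior_nu
        post.append(total)
    norm = sum(post)            # global normalization, applied at the end
    return [p / norm for p in post]

# toy likelihood: the "data" prefer sigma + nu = 12 (hypothetical numbers)
def toy_like(s, nu):
    return math.exp(-0.5 * (s + nu - 12.0) ** 2)

sig_grid = [0.1 * i for i in range(201)]   # sigma in [0, 20]
nu_grid = [0.1 * i for i in range(201)]    # nu in [0, 20]
post = marginal_posterior(toy_like, sig_grid, nu_grid, nu0=10.0, dnu=1.0)
best = sig_grid[post.index(max(post))]     # most probable sigma, near 2
```

Marginalizing broadens the posterior in σ relative to fixing ν at its central value, which is exactly how the nuisance uncertainties propagate into the limits.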
In order to build the likelihood we need normalized Mtt templates for each process. We will use the notation Tj, with j ∈ {s, b}, for the binned signal and background templates, and Tji for the ith bin of the jth template. Given the above definitions, we can write the expected number of events in the ith bin of the spectrum as

μi = ∫Ldt · Σ_{j∈{s,b}} σj εj Tji = σs As Tsi + Σ_{j∈{b}} Nj Tji (8-4)

where we separated the signal contribution from the backgrounds and defined the auxiliary variables As = ∫Ldt · εs (also called the effective acceptance) and Nj = ∫Ldt · σj εj with j ∈ {b}, the total expected number of events for each background after event selection. The likelihood can then be written

L(n|σs, ν) = Π_{i∈{nbins}} (μi^ni / ni!) e^(−μi) (8-5)

with μi = σs As Tsi + Σ_{j∈{b}} Nj Tji. As we already pointed out, we may not know exactly As and the expected number of events from each background. It is customary to take as priors for these parameters a Gaussian truncated to positive values, representing our prior knowledge.¹ For the signal cross section σs we use a flat prior.

8.2.1 Templates

As pointed out in Eq. 8-5, in order to build the likelihood function we need to know the template distributions for the signal and for the backgrounds. Given the limited statistics available for the samples, we decided to fit them and use the smoothed fit distributions as templates; this procedure removes statistical artifacts such as empty bins or bumps. As already mentioned in Chapter 5, we consider as possible background contributions the following processes:

¹ Given that the total efficiency is often the product of several efficiencies, the log-normal prior is often used too.
Standard Model tt; W -> eν + 4 partons; W -> μν + 4 partons; W -> eν + 2 partons + 2b; W -> μν + 2 partons + 2b; dibosons WW, WZ, ZZ; and QCD (from data).

Figure 8-1: Signal and background examples. The signal spectrum on the left (MX0 = 600 GeV/c2) has been fit with a triple Gaussian. The background spectrum from Standard Model tt has been fit with the exponential-like function. The fit range starts at 400 GeV/c2.

The Mtt histograms are fit with an exponential-like function f(x) = a·e^(b·x^c) in the region above 400 GeV/c2. The signal histograms are fit with a double or triple Gaussian, or a truncated double Gaussian plus a truncated exponential distribution.² An example is shown in Figure 8-1. All templates can be found at the end of the previous chapter.

² This set of fitting functions guarantees a fit with good χ2 probability.

We discussed the backgrounds in Chapter 5; we remind the reader that we decided it is safe to absorb the small W + 2 partons + 2b contributions into the W + 4 partons templates. Similarly, the WZ and ZZ contributions are absorbed into the WW template by increasing the nominal WW cross section by 20%.

8.2.2 Template Weighting

Equation 8-4 shows that in order to build the likelihood we need to know the number of background events Nj for each background type.

Table 8-1: Acceptances for background samples.
Sample   Event selection   Reconstruction and 400 GeV/c2 cut   Total acceptance
tt       0.045    0.72   0.032
WW       0.0014   0.60   0.0008
W(eν)    0.0076   0.66   0.0050
W(μν)    0.0072   0.65   0.0047
QCD      0.0070   0.71   0.0050

In general we would estimate the cross-section, acceptance and integrated luminosity in order to get this number, but since the cross sections for the processes pp -> W + n jets and multijets (QCD) are not known with good precision, we decided to estimate the number of events from these backgrounds based on the total number of events seen in the data:

N = ∫Ldt · (σs As + σtt Att + σWW AWW) + NWe4p + NWμ4p + NQCD (8-6)

with the constraints

NWe4p/AWe4p = NWμ4p/AWμ4p,  NWe4p + NWμ4p = 10 · NQCD (8-7)

The relative weights for the We4p and Wμ4p backgrounds have been set such that they have the same number of events before the event selection and reconstruction, because the (unknown) cross sections are considered to be the same. The relative weight between QCD and W+4p has been set to 10%, as discussed in Chapter 5 and established in this analysis [29]. The acceptances used in the calculations are listed in Tables 8-1 and 8-2. Cross-sections are listed in Section 5.4, Table 5-3.

Table 8-2: Acceptances for resonance samples.
MX0 (GeV/c2)   Event selection   Reconstruction and 400 GeV/c2 cut   Total
450   0.047   0.86   0.040
500   0.051   0.93   0.048
550   0.055   0.94   0.051
600   0.057   0.97   0.055
650   0.059   0.97   0.057
700   0.062   0.97   0.060
750   0.062   0.98   0.060
800   0.063   0.98   0.061
850   0.063   0.97   0.061
900   0.061   0.98   0.059

8.2.3 Implementation

After building the likelihood for a given observation n according to Eq. 8-5, we need to calculate the posterior density for σs according to Equations 8-1, 8-2 and 8-3. In practice we do not divide by p(n) in Eq. 8-1, since that is only a global normalization factor we can apply at the end. In this way we no longer need Eq. 8-2, and we can rewrite Eq. 8-1 in a simplified and more explicit form:

p(σs; As, Nb|n) ∝ L(n|σs; As, Nb) π(σs; As, Nb) (8-8)

To obtain the posterior probability density for σs only, we carry out the integration over the nuisance parameters As and Nb using a Monte Carlo method.
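This Monte Carlo marginalization can be sketched as follows: draw the nuisance parameters from truncated Gaussian priors, scan σs on a grid, and accumulate the likelihood. A toy one-bin Poisson likelihood stands in for Eq. 8-5, and all numbers are hypothetical:

```python
import math
import random

random.seed(1)

def sample_and_scan(loglike, a_s0, da_s, nb0, dnb, sig_max,
                    n_prior=1000, n_scan=400):
    """Marginalize the posterior over the nuisance parameters by sampling:
    draw (A_s, N_b) from truncated-to-positive Gaussian priors, scan
    sigma_s on a grid, and accumulate the likelihood in a histogram.
    Because the nuisances are drawn from their priors, summing L alone
    integrates L * pi over them; the flat sigma_s prior is a constant."""
    grid = [sig_max * (i + 0.5) / n_scan for i in range(n_scan)]
    post = [0.0] * n_scan
    for _ in range(n_prior):
        a_s = nb = -1.0
        while a_s <= 0.0:                 # truncated Gaussian prior on A_s
            a_s = random.gauss(a_s0, da_s)
        while nb <= 0.0:                  # truncated Gaussian prior on N_b
            nb = random.gauss(nb0, dnb)
        for i, s in enumerate(grid):
            post[i] += math.exp(loglike(s, a_s, nb))
    norm = sum(post)
    return grid, [p / norm for p in post]

def toy_loglike(s, a_s, nb):
    # one-bin Poisson log-likelihood with n_obs = 20 and mu = s*A_s + N_b,
    # constant terms dropped (a hypothetical stand-in for Eq. 8-5)
    mu = s * a_s + nb
    return 20.0 * math.log(mu) - mu

grid, post = sample_and_scan(toy_loglike, a_s0=2.0, da_s=0.2,
                             nb0=10.0, dnb=2.0, sig_max=20.0)
peak = grid[post.index(max(post))]   # most probable sigma_s, near (20-10)/2
```

The sample counts (1000 prior draws, 400 scan bins) mirror those quoted for the analysis; the resulting histogram is the σs posterior up to its overall normalization.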
Following the suggestions in [28] on page 20, we implement the "Sample & Scan" method. We repeatedly (1000 times) sample the priors π(As) and π(Nj), which are truncated Gaussians with respective widths δAs and δNj. Then we scan (400 bins) σs up to some value where the posterior is negligible. At each scan point we add to the corresponding bin of a histogram of σs a weight equal to L(n|σs, As, Nb) π(σs, As, Nb). This yields the posterior density for σs.

8.2.4 Cross Section Measurement and Limits Calculation

Having calculated the signal cross section posterior density, we can extract limits and "measure" the cross section. We define as our estimator for the cross section, and therefore as our measurement, the most probable value of the distribution. This choice is supported by many linearity tests we ran, both with fake signal templates (simple Gaussians) and with real X0 templates.

Figure 8-2: Linearity tests on fake (left) and real (right) templates. As fake signal test templates we used Gaussians with 60 GeV/c2 widths and means of 800 and 900 GeV/c2. We also used real templates with masses from 450 to 900 GeV/c2. The top plots show the input versus the reconstructed cross section after 1000 pseudoexperiments at an integrated luminosity ∫L = 1000 pb-1. The bottom plots show the deviation from linearity on an expanded scale, with the red dotted lines representing a 2% deviation.

Figure 8-2 shows the results of the tests with fake Gaussian signal templates of 800 and 900 GeV/c2 masses and 60 GeV/c2 width, and with real Mtt templates for X0 masses from 450 to 900 GeV/c2, at an integrated luminosity of ∫L = 1000 pb-1. The reconstructed cross section agrees very well with the input value, showing only a small relative shift.
However, our measurement is meaningless as long as it is consistent with the null hypothesis, being then only a statistical fluctuation. Therefore the key quantities to extract are the upper and lower limits (UL, LL) on the cross section at a given confidence level. This is done by finding an interval defined by limits LL and UL which satisfies

    [ ∫_LL^UL p(σ|n) dσ ] / [ ∫_0^∞ p(σ|n) dσ ] = α    (8-9)

and

    p(LL|n) = p(UL|n)    (8-10)

with α the desired confidence level, for example 0.95 for 95% CL.

[Figure 8-3 appears here: the cross-section posterior p.d.f. for one pseudoexperiment, with the quoted limit σ_s < 3.225 pb at 95% CL.]

Figure 8-3: Example posterior probability function for the signal cross section, for a pseudoexperiment with an input signal of 2 pb and a resonance mass of 900 GeV/c^2. The most probable value estimates the cross section, and 95% confidence level (CL) upper and lower limits are extracted. The red arrow and the quoted value correspond to the 95% CL upper limit.

In this way we can extract LL and UL for each pseudoexperiment, or for the data. Figure 8-3 shows an example posterior for a pseudoexperiment with an input signal of 2 pb, M_X0 = 900 GeV/c^2 and a total integrated luminosity of ∫L = 1000 pb^-1. Before looking at the data, we need to know the expected limits in the absence of any signal and their fluctuations for given integrated luminosities. For this purpose we ran many (1000) pseudoexperiments for each M_X0 and integrated luminosity, and filled histograms with the most likely value, LL and UL from each pseudoexperiment. The median of the UL histogram is taken as the expected upper limit in the absence of any signal. We also define 68% and 95% CL intervals around the central value to gauge the expected fluctuations of the upper limits. We also ran similar series of pseudoexperiments with signal injected, in order to determine our chances of observing a non-zero LL in a given scenario.
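For a unimodal posterior, Eqs. 8-9 and 8-10 together define a highest-posterior-density interval, which can be found numerically by keeping the highest-density bins until the requested probability is enclosed. A minimal sketch over a binned posterior (illustrative names, not the analysis code):

```python
def hpd_interval(posterior, bin_width, cl=0.95):
    """Interval [LL, UL] in the spirit of Eqs. 8-9 and 8-10 for a
    unimodal binned posterior: keep bins in decreasing order of density
    until a fraction `cl` of the total probability is enclosed. Equal
    posterior height at the two edges is automatic for a unimodal
    density, and the kept bins then form one contiguous interval."""
    total = sum(posterior) * bin_width
    order = sorted(range(len(posterior)),
                   key=lambda i: posterior[i], reverse=True)
    kept, enclosed = [], 0.0
    for i in order:
        kept.append(i)
        enclosed += posterior[i] * bin_width / total
        if enclosed >= cl:
            break
    return min(kept) * bin_width, (max(kept) + 1) * bin_width
```

A lower edge at zero (LL = 0) then corresponds to a pure upper limit, while a non-zero LL signals incompatibility with the background-only hypothesis at the chosen CL.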
More specifically, we computed the probability of observing a non-zero LL for a given resonance mass, integrated luminosity and signal cross section. This quantity is very useful in assessing the power of the algorithm and which signal cross sections could realistically be observed at a given integrated luminosity.

8.2.5 Expected Sensitivity and Discovery Potential

Figure 8-4 shows the distribution of the expected upper limit (UL) at 95% CL for various masses and two integrated luminosity scenarios, ∫L = 319 and 1000 pb^-1. Figure 8-5 shows the power of the algorithm in distinguishing signal from background: the x axis gives the input signal cross section, and the y axis the probability of observing a non-zero LL at 95% CL for ∫L = 1000 pb^-1. These plots do not include shape systematics, i.e., systematic effects that change the shape of the templates. We explore the treatment of shape systematics in the next chapter.
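The pseudoexperiment machinery behind these expected limits can be sketched as follows; `upper_limit` stands in for whatever procedure maps an observed count (or spectrum) to a 95% CL upper limit, and all names are illustrative:

```python
import math
import random

def poisson_draw(mu, rng):
    """Draw a Poisson-distributed count (Knuth's method; fine for small mu)."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def expected_upper_limits(n_bkg_exp, upper_limit, n_pe=1000, seed=7):
    """Background-only pseudoexperiments: the median upper limit over
    many pseudo-datasets is the expected limit in the absence of signal,
    and the 16th-84th percentiles give the 68% fluctuation band."""
    rng = random.Random(seed)
    uls = sorted(upper_limit(poisson_draw(n_bkg_exp, rng))
                 for _ in range(n_pe))
    return uls[n_pe // 2], (uls[int(0.16 * n_pe)], uls[int(0.84 * n_pe)])
```

Running the same pseudoexperiments with a signal contribution injected, and counting how often the extracted LL is non-zero, gives the discovery-probability curves of the kind shown in Figure 8-5.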


PAGE 1

SEARCH FOR HEAVY RESONANCES DECAYING INTO tt PAIRS

By

VALENTIN NECULA

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2006

PAGE 2

Copyright 2006 by Valentin Necula

PAGE 3

I dedicate this work to my parents, Maria-Doina and Eugen Necula.

PAGE 4

ACKNOWLEDGMENTS

I take this opportunity to express my deepest thanks to my advisors, Prof. Guenakh Mitselmakher and Prof. Jacobo Konigsberg, for their guidance, continuous support and patience, which played a crucial role in the successful completion of this work and will continue to be a source of inspiration in the future.

I would like to thank Dr. Roberto Rossin for his important contribution to the success of this analysis, from writing code to running jobs and writing documentation, and nonetheless for all the interesting little chats we had, be it politics, history, finance or sports. I am also grateful for the advice I received and the discussions I had with Prof. Andrey Korytov, Prof. Konstantin Matchev, Dr. Sergey Klimenko and Prof. John Yelton. Last but not least, I would like to thank Prof. Richard P. Woodard for making my first years at the University of Florida very exciting and rewarding. Sometimes I just miss those exams.

My stay at CDF benefited from the interaction I had with many people; without making any attempt at an exhaustive list, I would mention Dr. Florencia Canelli, Dr. Mircea Coca, Dr. Adam Gibson, Dr. Alexander Sukhanov, Dr. Song Ming Wang, Dr. Daniel Whiteson, Dr. Kohei Yorita, Prof. John Conway, Prof. Eva Halkiadakis, Dr. Douglas Glenzinski, Prof. Takasumi Maruyama, Prof. Evelyn Thomson, and Prof. Young-Kee Kim.
Special thanks go to Dr. Alexandre Pronko, who was my officemate in my early days at CDF, with whom I had quite interesting discussions and played far fewer chess games than I should have.

PAGE 5

The more relaxing moments I enjoyed in the company of Gheorghe Lungu and Dr. Gavril A. Giurgiu were very useful as well, and I would like to thank them both.

PAGE 6

TABLE OF CONTENTS

ACKNOWLEDGMENTS ... iv
LIST OF TABLES ... ix
LIST OF FIGURES ... xi
ABSTRACT ... xv

CHAPTER
1 INTRODUCTION ... 1
  1.1 Historical Perspective ... 1
  1.2 The Standard Model of Elementary Particles ... 3
    1.2.1 Leptons ... 3
    1.2.2 Quarks ... 5
  1.3 Beyond the Standard Model ... 6
2 NEW PHYSICS AND THE TOP QUARK ... 8
3 EXPERIMENTAL APPARATUS ... 12
  3.1 Tevatron Overview ... 12
  3.2 CDF Overview and Design ... 15
    3.2.1 Calorimetry ... 17
    3.2.2 Tracking System ... 21
    3.2.3 The Muon System ... 25
    3.2.4 The Trigger System ... 26
4 EVENT RECONSTRUCTION ... 29
  4.1 Quarks and Gluons ... 29
    4.1.1 Jet Clustering Algorithm ... 30
    4.1.2 Jet Energy Corrections ... 31
  4.2 Electrons ... 33
  4.3 Muons ... 34
  4.4 Neutrinos ... 35
5 EVENT SELECTION AND SAMPLE COMPOSITION ... 37
  5.1 Choice of Decay Channel ... 38
  5.2 Data Samples ... 39

PAGE 7

  5.3 Event Selection ... 40
  5.4 Sample Composition ... 41
6 GENERAL OVERVIEW OF THE METHOD AND PRELIMINARY TESTS ... 44
  6.1 Top Mass Measurement Algorithm ... 45
    6.1.1 The Matrix Elements (ME) ... 48
    6.1.2 Approximations: Change of Integration Variables ... 50
  6.2 Monte Carlo Generators ... 51
  6.3 Basic Checks at Parton Level ... 52
  6.4 Tests on Smeared Partons ... 54
  6.5 Tests on Simulated Events with Realistic Transfer Functions ... 55
    6.5.1 Samples and Event Selection ... 55
    6.5.2 Transfer Functions ... 55
7 Mtt RECONSTRUCTION ... 58
  7.1 Standard Model tt Reconstruction ... 58
  7.2 Signal and Other SM Backgrounds ... 63
8 SENSITIVITY STUDIES ... 77
  8.1 General Presentation of the Limit Setting Methodology ... 77
  8.2 Application to This Analysis ... 78
    8.2.1 Templates ... 79
    8.2.2 Template Weighting ... 81
    8.2.3 Implementation ... 82
    8.2.4 Cross Section Measurement and Limits Calculation ... 83
    8.2.5 Expected Sensitivity and Discovery Potential ... 85
9 SYSTEMATICS ... 87
  9.1 Shape Systematics ... 87
    9.1.1 Jet Energy Scale ... 87
    9.1.2 Initial and Final State Radiation ... 88
    9.1.3 W Q^2 Scale ... 89
    9.1.4 Parton Distribution Functions Uncertainty ... 91
    9.1.5 Overall Shape Systematic Uncertainties ... 91
  9.2 Effect of Shape Systematics ... 92
  9.3 Expected Sensitivity with Shape Systematics ... 94
10 RESULTS ... 96
  10.1 First Results ... 96
  10.2 Final Results ... 99
  10.3 Conclusions ... 101
PAGE 8

APPENDIX: CHANGE OF VARIABLES AND JACOBIAN CALCULATION SKETCH ... 107
REFERENCES ... 111
BIOGRAPHICAL SKETCH ... 113

PAGE 9

LIST OF TABLES

1-1 Properties of leptons. Antiparticles are not listed. ... 4
1-2 Properties of quarks. Additionally, each quark can also carry one of three color charges. ... 5
3-1 Summary of CDF calorimeters. X_0 and λ_0 refer to the radiation length for the electromagnetic calorimeter and the interaction length for the hadronic calorimeter, respectively. Energy resolutions correspond to a single incident particle. ... 18
5-1 tt decays ... 38
5-2 Event Selection ... 40
5-3 Cross-sections and acceptances ... 42
5-4 Signal acceptances ... 43
8-1 Acceptances for background samples. ... 81
8-2 Acceptances for resonance samples. ... 82
9-1 Linear fit parameters describing the uncertainty due to the JES systematic; the JES− and JES+ labels designate a − or + variation in energy scale. The uncertainty on the cross-section is parametrized as σ = α_0 + α_1 · M_X0. ... 89
9-2 Linear fit parameters describing the uncertainty due to ISR modeling. The uncertainty in cross section is parametrized as σ = α_0 + α_1 · M_X0. ... 90
9-3 Linear fit parameters describing the uncertainty due to FSR modeling. The uncertainty in cross section is parametrized as σ = α_0 + α_1 · M_X0. ... 90
9-4 Linear fit parameters describing the uncertainty due to the W Q^2 scale. The uncertainty in cross section is parametrized as σ = α_0 + α_1 · M_X0. ... 90
10-1 Expected number of events assuming no signal.
WW and QCD numbers are derived based on the total number of events observed in the search region above 400 GeV/c^2. ... 97

PAGE 10

10-2 Expected number of events assuming no signal. WW and QCD numbers are derived based on the total number of events observed in the search region above 400 GeV/c^2. ... 99
10-3 Expected and observed upper limits on the signal cross-section, derived from a dataset with an integrated luminosity of 680 pb^-1 ... 104

PAGE 11

LIST OF FIGURES

2-1 The CDF Run 1 tt invariant mass spectrum. ... 10
2-2 The CDF Run 1 upper limits for resonance production cross-section times branching ratio. ... 11
3-1 Overview of the Fermilab accelerator complex. The pp̄ collisions at the center-of-mass energy of 1.96 TeV are produced by a sequence of five individual accelerators: the Cockcroft-Walton, Linac, Booster, Main Injector, and Tevatron. ... 13
3-2 Drawing of the CDF detector. One quarter view. ... 16
3-3 The r−z view of the new Run II end plug calorimeter ... 21
3-4 Longitudinal view of the CDF II Tracking System. ... 22
3-5 Isometric view of the three-barrel structure of the CDF Silicon Vertex Detector. ... 23
3-6 One sixth of the COT in end view; odd superlayers are small-angle stereo layers and even superlayers are axial. ... 25
3-7 CDF II data flow. ... 27
6-1 Main leading order contribution to tt production in pp̄ collisions at √s = 1.96 TeV ... 48
6-2 Gluon-gluon leading order contribution to tt production in pp̄ collisions at √s = 1.96 TeV ... 49
6-3 Reconstructed top mass from 250 pseudoexperiments of 20 events at parton level with m_t = 175 GeV/c^2. The left plot is derived using only the correct combination, while the right plot uses all combinations ... 52
6-4 Reconstructed top mass vs. true top mass from pseudoexperiments of 20 events using all 24 combinations, at parton level ... 53
6-5 Reconstructed top mass vs. true top mass from pseudoexperiments of 20 events with smearing. The left plot is derived using only the correct combination, while the right plot uses all combinations ... 54

PAGE 12

6-6 Light-quark transfer functions (x = 1 − E_jet/E_parton), binned in three absolute pseudorapidity regions: [0, 0.7], [0.7, 1.3] and [1.3, 2.0] ... 56
6-7 b-quark transfer functions (x = 1 − E_jet/E_parton), binned in three absolute pseudorapidity regions: [0, 0.7], [0.7, 1.3] and [1.3, 2.0] ... 57
7-1 Mtt reconstruction for the correct combination and for events with exactly four matched tight jets. ... 59
7-2 Mtt reconstruction including all events ... 60
7-3 Examples of Mtt reconstruction, event by event. ... 61
7-4 Mtt template for Standard Model tt events. ... 62
7-5 Reconstructed invariant mass for a resonance with M_X0 = 650 GeV. The left plot shows all events passing event selection, while the right plot shows only matched events ... 64
7-6 Reconstructed invariant mass for a resonance with M_X0 = 650 GeV. The left plot shows mismatched lepton+jets events and the right plot shows non-lepton+jets events ... 65
7-7 W+4p template (electron sample) ... 67
7-8 W+4p template (muon sample) ... 68
7-9 QCD template ... 69
7-10 WW template ... 70
7-11 W+2b+2p template (electron sample) ... 71
7-12 W+2b+2p template (muon sample) ... 72
7-13 W+4p template with alternative Q^2 scale (electron sample) ... 73
7-14 All Standard Model background templates used in the analysis ... 74
7-15 W+2b+2p template vs. W+4p template. W+2b+2p was ignored, since the expected contribution is at the level of 1-2% and the template is very similar to the W+4p template ... 75
7-16 Signal templates ... 76
8-1 Signal and background examples. The signal spectrum on the left (M_X0 = 600 GeV/c^2) has been fit with a triple Gaussian. The background spectrum from Standard Model tt has been fit with the exponential-like function. The fit range starts at 400 GeV/c^2 ... 80

PAGE 13

8-2 Linearity tests on fake (left) and real (right) templates. As fake test signal templates we used Gaussians with 60 GeV/c^2 widths and means of 800 and 900 GeV/c^2. We also used real templates with masses from 450 to 900 GeV/c^2. The top plots show the input versus the reconstructed cross section after 1000 pseudoexperiments at integrated luminosity ∫L = 1000 pb^-1. The bottom plots show the deviation from linearity on an expanded scale, with red dotted lines representing a 2% deviation ... 83
8-3 Example posterior probability function for the signal cross section, for a pseudoexperiment with an input signal of 2 pb and a resonance mass of 900 GeV/c^2. The most probable value estimates the cross section, and 95% confidence level (CL) upper and lower limits are extracted. The red arrow and the quoted value correspond to the 95% CL upper limit ... 84
8-4 Upper limits at 95% CL. Only acceptance systematics are considered in this plot. ... 86
8-5 Probability of observing a non-zero lower limit versus input signal cross section at ∫L = 1000 pb^-1. Only acceptance systematics are included in this plot ... 86
9-1 Cross section shift due to the JES uncertainty for ∫L = 1000 pb^-1. The shift represents the uncertainty on the cross section due to JES, as a function of cross-section ... 88
9-2 Cross section shifts due to ISR (left) and FSR (right) uncertainties for ∫L = 1000 pb^-1 ... 89
9-3 Cross section shift due to the W Q^2 scale uncertainty for ∫L = 1000 pb^-1 ... 91
9-4 Total shape systematic uncertainty versus signal cross section. ... 92
9-5 Posterior probability function for the signal cross section. The smeared (convoluted) probability in green, including shape systematics, shows a longer tail than the original (black) distribution. As a consequence the UL quoted on the plot is shifted to higher values with respect to the one calculated from the original posterior ... 93
9-6 Upper limits at 95% CL. The plots show the results for two luminosity scenarios, including or excluding the contribution from shape systematic uncertainties ... 94
9-7 Probability of observing a non-zero lower limit (LL) versus input signal cross section for ∫L = 1000 pb^-1 ... 95

PAGE 14

10-1 Reconstructed Mtt in 320 pb^-1 of CDF Run 2 data. The plot on the right shows events with at least one SECVTX tag ... 96
10-2 Reconstructed Mtt in 320 pb^-1 of CDF Run 2 data, after the 400 GeV cut ... 97
10-3 Resonant production upper limits from 320 pb^-1 of CDF Run 2 data ... 98
10-4 Kolmogorov-Smirnov (KS) test assuming only the Standard Model. The KS distance distribution from pseudoexperiments is shown in the right plot; the arrow indicates the KS distance between data and the Standard Model template ... 100
10-5 Kolmogorov-Smirnov (KS) test assuming a signal with a mass of 500 GeV/c^2 and a cross-section equal to the most likely value from the posterior probability. The KS distribution from pseudoexperiments is shown in the right plot; the arrow indicates the KS distance between data and the Standard Model + signal template. ... 100
10-6 Mtt spectrum in data vs. Standard Model + 2 pb signal contribution from a resonance with a mass of 500 GeV/c^2 ... 101
10-7 Reconstructed Mtt in CDF Run 2 data, 680 pb^-1 ... 102
10-8 Resonant production upper limits in CDF Run 2 data, 680 pb^-1 ... 102
10-9 Kolmogorov-Smirnov test results are shown together with the reconstructed Mtt using 680 pb^-1 and the corresponding Standard Model expectation template ... 103
10-10 Posterior probability distributions for CDF data and masses between 450 and 700 GeV. ... 105
10-11 Posterior probability distributions for CDF data and masses between 750 and 900 GeV. ... 106

PAGE 15

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

SEARCH FOR HEAVY RESONANCES DECAYING INTO tt PAIRS

By Valentin Necula

August 2006

Chair: Guenakh Mitselmakher
Cochair: Jacobo Konigsberg
Major Department: Physics

We performed a search for narrow-width vector particles decaying into top-antitop pairs using 680 pb^-1 of data collected by the CDF experiment during the 2002-2005 Run 2 of the Tevatron. The center-of-mass energy of the pp̄ collisions was 1.96 TeV. Model-independent upper limits on the production cross-section times branching ratio are derived at 95% confidence level.
We exclude the existence of a leptophobic Z' boson in a topcolor-assisted technicolor model with a mass M_Z' < 725 GeV/c^2, and our results can be used to constrain any other relevant theoretical model.

PAGE 16

CHAPTER 1
INTRODUCTION

1.1 Historical Perspective

The science of Physics investigates the laws governing the behavior of matter, from the smallest subnuclear scales to the largest astronomical space-time regions, and even the nature of the universe as a whole, as in cosmology. In High Energy Physics we are concerned with understanding the so-called fundamental "bricks" of matter, or elementary particles, and their interactions. It is not easy to ascertain elementariness; in fact it is quite impossible, and history shows us that, more often than not, what was considered elementary at one point was later found to be a composite system: molecules, which are the smallest units of substance possessing specific physical and chemical properties, were found to be made up of smaller units, atoms. A huge variety of organic matter with quite different physicochemical properties is composed of just three kinds of atoms: hydrogen, carbon and oxygen. For some time atoms were considered to live up to their ancient meaning of indivisible units of matter, until the end of the 19th century, when the mysterious cathode rays puzzled physicists with their properties. As J.J. Thomson correctly predicted, the cathode rays were actually streams of subatomic particles known today as electrons. It wasn't long until Rutherford proved, in his famous scattering experiments, that the positive charge inside atoms is confined to a pointlike core, or nucleus, a discovery which led to the classic planetary model of the atom. The elementariness of the atom vanished, and the focus moved to the structure of the nucleus.
At first it was thought that the nucleus contained electrons and protons, but eventually the neutron (postulated by Rutherford) was discovered, and the picture of matter had been simplified even more: just three

PAGE 17

particles, the proton, the neutron and the electron, were enough to build all known atoms. They were the new elementary particles; however, they were soon joined by a large number of new particles with strange names like pions, kaons, eta and rho particles. The simple and maybe beautiful picture of three elementary particles at the basis of all matter had to be abandoned. Both experimental and theoretical breakthroughs led to the understanding that protons, neutrons and the vast majority of other particles are composed of smaller and stranger units, called quarks. Two different developments took place during this time, though. First, one of the most brilliant physicists of all times, P.A.M. Dirac, predicted in 1928, solely on theoretical grounds, the existence of a new particle which was later called the positron. It was supposed to be just like the electron, but positively charged: an antielectron. Amazingly, positrons were in fact observed only four years later, and it was then found that other particles had antiparticles as well. It was a universal phenomenon. Secondly, searching for a particle postulated in the Yukawa theory of nuclear forces, experimentalists found something else, as is often the case: a new negatively charged particle which behaved just like an electron, except that it had a much higher mass and was unstable. It was called the muon. This phenomenon was found to have its own kind of universality, and it led to the classification of elementary particles into three generations, as will be detailed later. Particle physics also investigates the interactions or forces between the elementary constituents of matter.
By the mid-20th century physicists counted four distinct forces: the gravitational force, the electromagnetic force, the strong nuclear force (responsible, for instance, for holding quarks together inside a proton or neutron), and the weak nuclear force (responsible for decays and other phenomena). The early picture of classical "force" fields mediating the interactions was abandoned after Dirac successfully quantized Maxwell's equations, laying

PAGE 18

the foundation for quantum field theory and introducing the idea that interactions are mediated by exchanges of virtual particles. Later it was discovered that the strong and weak nuclear forces are indeed mediated by virtual particles: the gluon, and the massive W+, W− and Z bosons, respectively. However, even though we have a classical set of equations describing gravitation and powerful formalisms for quantizing fields, all attempts at quantum gravity have failed. Delving into that mystery is not the purpose of this dissertation, though, and we now proceed to a more formal presentation of the theoretical framework underlying our current understanding of elementary particles and their interactions.

1.2 The Standard Model of Elementary Particles

The Standard Model is a quantum field theory based on the gauge symmetry SU(3)_C × SU(2)_L × U(1)_Y [1]. This gauge group includes the symmetry group of the strong interaction, SU(3)_C, and the symmetry group of the unified electroweak interaction, SU(2)_L × U(1)_Y. As pointed out earlier, gravitation does not fit the scheme and is not part of the Standard Model. All the variety of phenomena is the result of the interactions of a small number of elementary particles, classified as leptons, quarks, and force carriers or mediators. They are also classified in three generations with similar properties.

1.2.1 Leptons

All leptons and quarks have spin 1/2, and all force mediators have spin 1.
There are six charged leptons: the electron (e−), the muon (μ−), the tauon (τ−), and their positively charged antiparticles. For each charged lepton there is a corresponding neutral lepton, called a neutrino (ν). Even though neutrinos do not carry electric charge, they have distinct antiparticles, due to the fact that they possess a property called lepton number. There are three lepton numbers: the electronic lepton number, the muonic lepton number and the tauonic lepton number. An electron carries a +1 electronic lepton number, and an electron

PAGE 19

neutrino (ν_e) also carries a +1 electronic lepton number. Similarly, a muon and a muon neutrino (ν_μ) carry a +1 muonic lepton number, and a tauon and a tau neutrino (ν_τ) carry a +1 tauonic lepton number. The antiparticles of these particles carry −1 lepton numbers, and in the Standard Model each lepton number is conserved, such that in any reaction the total lepton numbers of the initial-state particles equal the total lepton numbers of the final-state particles. It should be noted that significant evidence has been gathered during the last decade indicating that neutrinos oscillate, thus violating lepton number conservation.

Table 1-1: Properties of leptons. Antiparticles are not listed.

                 Particle   Spin   Charge   Mass
1st generation   e−         1/2    −1       0.51099892 ± 0.00000004 MeV/c^2
                 ν_e        1/2    0        < 3 eV/c^2
2nd generation   μ−         1/2    −1       105.658369 ± 0.000009 MeV/c^2
                 ν_μ        1/2    0        < 0.19 MeV/c^2
3rd generation   τ−         1/2    −1       1776.99 +0.29/−0.26 MeV/c^2
                 ν_τ        1/2    0        < 18.2 MeV/c^2

The interactions of leptons are described by the electroweak theory, which unifies electromagnetism and the weak force. In this gauge theory there are three massive force carriers, the W+, W− and Z bosons, and one massless force carrier, the photon (γ).
In fact, a pure gauge theory of leptons and gauge bosons would lead to massless particles, so in order for the particles to "acquire" mass the spontaneous symmetry-breaking mechanism was proposed. This adds an extra spin-0 boson to the picture, the Higgs boson, by which all gauge bosons except one (γ) acquire mass, and leptons can acquire mass simply by coupling to the scalar Higgs field. Even though the massive bosons [2, 3, 4, 5] were discovered at CERN more than 20 years ago, the Higgs boson has not been discovered. It is also possible that the mass problem is solved by some other mechanism.

PAGE 20

1.2.2 Quarks

There are six types of quarks and their antiparticles, commonly referred to as the up (u), down (d), strange (s), charm (c), bottom (b) and top (t) quarks. They carry fractional electric charges and a new property called color, which is responsible for the strong interactions of quarks. Each quark can carry one of three colors: red, blue and green. The antiquarks carry anticolors: antired, antiblue and antigreen. Quark properties are summarized in Table 1-2. Quarks also take part in electroweak processes, and that led to some remarkable predictions. It was found that, in order to be able to renormalize the electroweak theory, an equal number of generations of quarks and leptons was needed; but when these ideas appeared, only three quarks were known: the u, d and s. A few years later, in 1974, the c quark was discovered, thus completing the second quark generation as expected.
Another three years later a third-generation charged lepton was discovered, and in the same year a third-generation quark was discovered, the b. The interesting part is that the massive bosons themselves were not discovered until 1983. The quest for the last missing pieces in the generation picture ended with the top quark discovery in 1994 at Fermilab and the tau-neutrino discovery in 2000, also at Fermilab.

Table 1-2: Properties of quarks. Additionally, each quark can also carry one of three color charges.

                 Particle   Spin   Charge   Mass
1st generation   u          1/2    +2/3     1.5-4 MeV/c^2
                 d          1/2    −1/3     4-8 MeV/c^2
2nd generation   c          1/2    +2/3     1.15-1.35 GeV/c^2
                 s          1/2    −1/3     80-130 MeV/c^2
3rd generation   t          1/2    +2/3     178.0 ± 4.3 GeV/c^2
                 b          1/2    −1/3     4.1-4.4 GeV/c^2

The strong interactions of quarks are mediated by eight massless gluons (g), which carry double color charge and are thus able to interact among themselves.

PAGE 21

The theory of strong interactions is known as Quantum Chromodynamics (QCD); it is a gauge theory based on the SU(3) Lie group. It has two characteristics not found in the electroweak theory, called color confinement and asymptotic freedom. The interaction between colored particles is found to increase in strength with the distance between them; therefore quarks do not appear as free particles. Instead, they form color-singlet states, either by combining three quarks with different colors (baryons) or by combining a quark and an antiquark (mesons). This is "color confinement". Conversely, at smaller and smaller distances the interaction strength decreases, and the coupling constant α_s becomes small enough for perturbative methods to work. This feature is known as "asymptotic freedom."
1.3 Beyond the Standard Model

The Standard Model has managed to explain very well a vast amount of experimental data; however, there are reasons to believe it is an incomplete theory:

- As mentioned earlier, gravity is left out altogether.
- Possibly connected to the previous point, the observed masses of particles are completely unexplained. The Higgs mechanism is just a way by which particles, both bosons and fermions, would "acquire" mass, but it does not predict their values.
- The gauge anomaly of the electroweak theory is canceled only if we have an equal number of quark and lepton generations, and the charges of the particles within one generation obey a certain constraint equation. This implies that there is some deeper connection between quarks and leptons, which might also explain why we have only three generations.
- Besides particle masses, there are still quite a few arbitrary parameters in the Standard Model, like the relative strengths of the interactions, the Weinberg angle sin θ_W, and the elements of the Cabibbo-Kobayashi-Maskawa matrix, which describe the strength of cross-generation direct coupling of quarks via charged currents.

PAGE 22

- There are significant indications that neutrinos oscillate.
- The amount of known matter in the Universe is less than what would be necessary to produce a flat geometry as observed, and it is believed that there must exist other types of matter (dark matter), besides a non-zero cosmological constant or dark energy, which would explain the discrepancy. But these conclusions rely on the validity of General Relativity in describing the Universe as a whole, which is not quite obvious.

Many theories beyond the Standard Model have been proposed: Supersymmetry, String theories, Grand Unified Theories (GUTs), extra-dimension theories, Technicolor, quark compositeness theories, and others.
Some are basically impossible to test at currently available energies, but most have a large parameter space and it is difficult to rule them out completely. In this work we decided to adopt a model-independent approach to our search for Physics beyond the Standard Model, at least as much as is possible.

CHAPTER 2
NEW PHYSICS AND THE TOP QUARK

The top quark is so much heavier than the other quarks, including its 3rd-generation sibling the b quark, that it is natural to ask whether this fact is related to its possible coupling to New Physics. This idea was explored in a theory called "topcolor-assisted technicolor" [6, 7], which introduces new strong dynamics coupling preferentially to the third generation, thus making the tt̄ and bb̄ final states of particular interest. This theory introduces a topcolor heavy Z′ and "topgluons", both decaying into tt̄ and bb̄ pairs. There are other theoretical avenues for producing heavy resonances, like Universal Extra Dimension (UED) models [8, 9, 10]. The simpler versions [8, 9] assume only one extra dimension of size R and lead to new particles via the Kaluza-Klein (KK) mechanism. In the minimal UED model [9] only one more parameter is needed in the theory, the cutoff scale Λ. An interesting feature is the conservation of the KK number at tree level, and in general the conservation of the KK parity, defined as (−1)^n, where n is the KK number. As a consequence, the lightest KK partner at level 1 has negative KK parity and is stable; therefore possible candidates for our search are level-2 KK partners. These can couple to Standard Model particles only through loop diagrams, given the need to conserve KK parity. Another UED model [10] assumes that all known particles propagate in two small extra dimensions, also leading to new states via the Kaluza-Klein mechanism.
Resonance states below 1 TeV are predicted in this model, and they have significant couplings to tt̄ pairs. From a purely experimental point of view, the tt̄ production mechanism is an interesting process in which to search for New Physics, since the full compatibility of tt̄ candidate events with the Standard Model is not known with great precision due to quite limited statistics. There is room to explore for possible non-Standard-Model sources within such an event sample. In this dissertation we focus on the search for a heavy resonance produced in pp̄ collisions at √s = 1.96 TeV which decays into tt̄ pairs. The basic idea is to compute the tt̄ invariant mass spectrum and search for indications of unexpected resonance peaks. We will implement the tools needed to set lower and upper limits on the resonance production cross-section times branching ratio at any given confidence level. A discovery would amount to a non-zero lower limit at a significant confidence level. A similar search was carried out at the Tevatron by the CDF [11] and D0 [12] collaborations on the data gathered in "Run 1", the period of operation between 1992-1995. The tt̄ invariant mass as reconstructed by the CDF analysis in the "lepton plus jets" channel is shown in Figure 2-1. There are only 63 events for the entire Run 1 dataset, which corresponds to an integrated luminosity of 110 pb⁻¹. About half of them were tt̄ events. Based on this distribution, the 95% confidence level upper limits on tt̄ resonant production cross-section times branching ratio were computed as a function of resonance mass (Figure 2-2). The main challenge of this analysis is the reconstruction of the tt̄ invariant mass spectrum. In this analysis we use an innovative approach which includes matrix element information to help with the reconstruction, as will be explained in later chapters.
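The invariant mass at the heart of the spectrum above is the standard Lorentz-invariant combination m² = E² − |p|² of the summed 4-momenta of the reconstructed top candidates. As a minimal sketch (the 4-vectors below are illustrative numbers, not analysis data):

```python
import math

def invariant_mass(p4_list):
    """Invariant mass of a system of particles, each given as (E, px, py, pz) in GeV."""
    E = sum(p[0] for p in p4_list)
    px = sum(p[1] for p in p4_list)
    py = sum(p[2] for p in p4_list)
    pz = sum(p[3] for p in p4_list)
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Two illustrative top-candidate 4-vectors (E, px, py, pz) in GeV:
t1 = (250.0, 120.0, 40.0, 90.0)
t2 = (230.0, -110.0, -35.0, 60.0)
m_tt = invariant_mass([t1, t2])
```

Filling a histogram of m_tt over all candidate events yields the spectrum in which a resonance would appear as a localized excess.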
Figure 2-1: The CDF Run 1 tt̄ invariant mass spectrum.

Figure 2-2: The CDF Run 1 upper limits on resonance production cross-section times branching ratio.

CHAPTER 3
EXPERIMENTAL APPARATUS

The Fermi National Accelerator Laboratory (FNAL, Fermilab) has been a leading facility in experimental particle physics for the last 30 years. Its hadron collider, called the Tevatron, is the world's most powerful accelerator, where proton-antiproton collisions are investigated. While many measurements and searches have been carried out, probably the most famous results of the Tevatron program are the discovery of the bottom quark in 1977 and the discovery of the top quark in 1995, during the 1992-1995 Tevatron operation period known as "Run 1". At the moment of this writing we are in the middle of Run 2, the second Tevatron operation period, which started in the spring of 2001. Record instantaneous luminosities (1.7 × 10³² cm⁻² s⁻¹) have been achieved recently, which makes the search for new particles, including the last missing block of the Standard Model, the Higgs boson, a lot more interesting. The Collider Detector at Fermilab (CDF) and D0 are two general-purpose detectors built at almost opposite collision points along the accelerator. In this analysis we use data collected by the CDF collaboration during the period 2002-2005. The center-of-mass energy in Run 2 is √s = 1.96 TeV, the highest collision energy ever achieved.

3.1 Tevatron Overview

The Fermilab accelerator complex is shown in a schematic drawing in Fig. 3-1. In order to produce such high-energy pp̄ collisions, a sequence of five individual accelerators is needed.

Figure 3-1: Overview of the Fermilab accelerator complex.
The pp̄ collisions at the center-of-mass energy of 1.96 TeV are produced by a sequence of five individual accelerators: the Cockcroft-Walton, Linac, Booster, Main Injector, and Tevatron. First, the Cockcroft-Walton accelerator boosts negative hydrogen ions to an energy of 750 keV. Then the ions are directed to the second stage of the process, provided by the 145 m long linear accelerator (Linac), which further increases the energy of the ions up to about 400 MeV. Before the next stage the ions are stripped of their electrons as they pass through a carbon foil, leaving a pure proton beam. These protons move to the next stage, the Booster, which is a synchrotron accelerator about 150 m in diameter. At the end of this stage the protons reach an energy of 8 GeV. Next, the protons are injected into another circular accelerator called the Main Injector. The Main Injector serves two functions: it provides a source of 120 GeV protons needed to produce antiprotons, and it boosts protons and antiprotons from 8 GeV up to 150 GeV before injecting them into the Tevatron.

In order to produce antiprotons, 120 GeV protons are transported from the Main Injector to a nickel target. From the interaction, sprays of secondary particles are produced, including antiprotons. Those antiprotons are selected and stored in the Debuncher ring, where they are stochastically cooled to reduce the momentum spread. At the end of this process the antiprotons are stored in the Accumulator until they are needed in the Tevatron. The Tevatron is a proton-antiproton synchrotron collider situated in a 1 km radius tunnel. It accelerates 150 GeV protons and antiprotons up to 980 GeV, leading to a pp̄ collision center-of-mass energy of 1.96 TeV. Inside the Tevatron the beams are split into 36 "bunches" which are organized in three groups of 12.
Within each group the bunches are separated in time by 396 ns. Collisions take place bunch by bunch, when a proton bunch meets an antiproton bunch at the interaction point. Just for clarity, we should add that the beams are injected bunch by bunch. The collisions do not take place at the exact same location each time but are spread in space, according to a Gaussian distribution with a sigma of about 28 cm along the beam direction, and also extending in the transverse plane with a circular cross-section defined by a radius of about 25 μm. The instantaneous luminosity of the Tevatron is given by

    L_inst = N_p N_p̄ f / A,    (3-1)

where N_p and N_p̄ are the numbers of protons and antiprotons per bunch, f is the frequency of bunch crossings, and A is the effective area of the crossing beams. A compact period of time during which collisions take place in the Tevatron is called a "store", and it can last from a few hours to over 24 hours. During a store the instantaneous luminosity decreases exponentially due to collisions and transverse spreading of the beams, which lead to losses of protons and antiprotons. The instantaneous luminosity can drop by one order of magnitude during one store. Run 2 initial instantaneous luminosities ranged from about 5 × 10³⁰ cm⁻² s⁻¹ in 2002 to the record 1.7 × 10³² cm⁻² s⁻¹ in 2006, and there are hopes for even higher values in the future.

3.2 CDF Overview and Design

The Collider Detector at Fermilab (CDF) is a general-purpose detector located at one of the two beam collision points along the Tevatron, known as "B0". The idea of a general-purpose detector is to allow the study of a wide range of processes occurring in pp̄ collisions. For that purpose, CDF is designed such that it can identify electrons, muons, photons, and jets. It is indirectly sensitive to particles which escape detection, like neutrinos.
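Equation (3-1) can be evaluated directly to see that the quoted luminosity scale is plausible. The numbers below are rough illustrative assumptions (bunch populations, a 1.7 MHz crossing rate, and a beam spot of radius ~25 μm), not official machine parameters:

```python
import math

def inst_luminosity(n_p, n_pbar, f_crossing_hz, area_cm2):
    """Instantaneous luminosity L = N_p * N_pbar * f / A, Eq. (3-1), in cm^-2 s^-1."""
    return n_p * n_pbar * f_crossing_hz / area_cm2

# Illustrative inputs: ~2.5e11 protons and ~1e10 antiprotons per bunch,
# a 1.7 MHz bunch-crossing rate, and A = pi * r^2 with r = 25 um = 25e-4 cm.
area = math.pi * (25e-4) ** 2
L = inst_luminosity(2.5e11, 1e10, 1.7e6, area)   # of order 10^32 cm^-2 s^-1
```

The result lands at the 10³² cm⁻² s⁻¹ scale quoted for the Run 2 record, as expected for a back-of-the-envelope estimate.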
A schematic drawing of the CDF detector is shown in Fig. 3-2. It is cylindrically symmetric about the beam direction, with a radius of about 5 m and a length of 27 m from end to end, and it weighs over 5000 metric tons. The CDF collaboration uses a right-handed Cartesian coordinate system with its origin at the center of the detector, the positive z-axis along the proton beam direction, the positive x-axis towards the center of the Tevatron ring, and the positive y-axis pointing upward. The azimuthal angle φ is defined counterclockwise around the beam axis, starting from the positive x-axis. The polar angle θ is defined with respect to the positive z-axis. However, another quantity is widely used instead of the polar angle. It is called pseudorapidity and is defined by the formula η = −ln(tan(θ/2)). The reason is that in the massless approximation, which is a very good one at these energies, relativistic boosts along the z-axis are additive in the pseudorapidity variable, and this property is important, for instance, in the consistent definition of jet cones. The pseudorapidity can also be defined with respect to the actual position of the interaction vertex, in which case it is called event pseudorapidity.

Figure 3-2: Drawing of the CDF detector. One quarter view.

The detector is composed of a series of subdetectors. Closest to the beam are the silicon vertex detectors, which are surrounded by charged-particle tracking chambers. The silicon vertex detectors are used to reconstruct the position of the collision vertex and particle momenta. Next are the electromagnetic and hadronic calorimeters, used for energy measurements, and finally the muon chambers. There is also a time-of-flight system used for charged-hadron identification, and the Cherenkov Luminosity Counters (CLC), which measure luminosity. For this analysis we use all major parts of the detector.
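The pseudorapidity definition above is a one-liner; a quick sketch makes its geometry concrete (a direction perpendicular to the beam has η = 0, and |η| grows as the direction approaches the beam axis):

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), with theta the polar angle in radians."""
    return -math.log(math.tan(theta / 2.0))

eta_perpendicular = pseudorapidity(math.pi / 2)  # perpendicular to the beam: eta = 0
eta_forward = pseudorapidity(0.1)                # close to the beam axis: eta ~ 3
```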
The calorimetry is necessary for jet reconstruction, for the energy measurement of electrons, for muon identification, and also for the calculation of the missing transverse energy. The tracking system plays a major role in electron and muon identification and in momentum measurement, and the muon chambers are important for muon identification.

In this section we will provide a general description of the major components of the detector, mainly emphasizing the parts used for this analysis. A more comprehensive description can be found in the published literature [13].

3.2.1 Calorimetry

The purpose of the calorimeters is to measure the energy depositions of particles passing through them. However, not all particles interact in the same way. Neutrinos escape without any interaction at all, and high-energy muons also escape the calorimeters without losing much energy. Apart from that, the rest of the particles leave their entire energy in the calorimeter, with some exceptions in the case of pions, for instance, which can, rarely, travel beyond the calorimeter. Even though neutrinos do not interact with the calorimeter, by applying the conservation of momentum in the transverse plane one can calculate the total transverse momentum of the neutrinos. Since the calorimeter measures energy, this inferred quantity is known as missing transverse energy. In case the event contained high-energy muons, it needs further corrections before it can be identified as the neutrino transverse momentum since, as mentioned before, muons also do not leave much energy in the calorimeter. The electromagnetic calorimeter is designed such that it can measure well the energy of photons and electrons (positrons). Electrons above 100 MeV lose their energy mostly through bremsstrahlung, or photon radiation.
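The missing transverse energy described above is the magnitude of the negative vector sum of the transverse energy deposits. A minimal sketch (tower inputs are illustrative; the muon corrections mentioned in the text are left out):

```python
import math

def missing_et(towers):
    """Missing transverse energy from calorimeter towers.

    Each tower is (E_T, phi): transverse energy in GeV and azimuthal angle.
    MET is the magnitude of the negative vector sum of the tower E_T's.
    """
    mex = -sum(et * math.cos(phi) for et, phi in towers)
    mey = -sum(et * math.sin(phi) for et, phi in towers)
    return math.hypot(mex, mey)

# Two back-to-back 30 GeV towers balance each other: MET = 0.
# A single unbalanced 30 GeV tower gives MET = 30 GeV.
balanced = missing_et([(30.0, 0.0), (30.0, math.pi)])
unbalanced = missing_et([(30.0, 0.0)])
```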
High-energy photons produce electron-positron pairs in the nuclear electromagnetic fields of the material, thus restarting the cycle and leading to the development of an electromagnetic shower of electrons, positrons, and photons. At the last stage, low-energy photons unable to create electron-positron pairs lose their energy by Compton scattering and photoelectric processes, while low-energy electrons lose their energy by ionization.

For simplicity, we will assume that the initial particle moves perpendicular to the detector. Then, as the shower develops in the calorimeter, more and more energy is deposited, but at different depths, or in different layers, of the detector. However, at some point the number of new shower particles starts to decrease, and later no new particles will be created. After this point the energy deposited per layer starts to decrease exponentially. The depth of the maximum energy deposition layer is called the shower maximum and can be used for particle identification. Other charged particles like muons behave differently, because the energy loss via radiation starts to dominate the energy loss via ionization only at much higher energies, higher by a factor of approximately (m/m_e)². Given the energy scale at the Tevatron, a typical muon leaves roughly 10% of its energy in the electromagnetic calorimeter, and thus it is not possible to identify muons and measure their momenta using the calorimeter.

Table 3-1: Summary of CDF calorimeters. X₀ and λ₀ refer to the radiation length for the electromagnetic calorimeters and the interaction length for the hadronic calorimeters, respectively. Energy resolutions correspond to a single incident particle.
    Calorimeter subsystem   η coverage          Depth     Energy resolution σ(E)/E
    CEM                     |η| < 1.1           18 X₀     13.5%/√E_T ⊕ 2%
    PEM                     1.1 < |η| < 3.6     21 X₀     16%/√E_T ⊕ 1%
    CHA                     |η| < 0.9           4.5 λ₀    75%/√E_T ⊕ 3%
    WHA                     0.7 < |η| < 1.3     4.5 λ₀    75%/√E_T ⊕ 3%
    PHA                     1.2 < |η| < 3.6     7 λ₀      80%/√E_T ⊕ 5%

The hadronic calorimeter functions on similar principles: it is designed to interact strongly with hadrons, thus making it possible to measure their energy by measuring the deposited energy. In this case the incoming particle interacts with the nuclei of the material in the detector, leading to a similar shower development. The CDF calorimeter system covers the full azimuthal range and extends up to 5.2 in |η|. Its components are the Central Electromagnetic Calorimeter (CEM) and the Central Hadronic Calorimeter (CHA), which cover the central region, as the name suggests; the Plug Electromagnetic Calorimeter (PEM) and the Plug Hadronic Calorimeter (PHA), which extend the |η| coverage further; the Endwall Hadronic Calorimeter (WHA), which is located between the central and plug regions; and finally the Miniplug (MNP), a forward electromagnetic calorimeter which is not used in this analysis. Some technical details are listed in Table 3-1. Each calorimeter subsystem is divided into smaller units called towers and has a projective geometry, which means that all towers point to the center of the detector.

Central Calorimeter

Each tower of the central calorimeters covers 15° in Δφ and 0.11 in Δη and is composed of alternating layers of absorber and active material. When a particle passes through the dense absorber material it produces a shower of secondary particles which interact with the active material and produce light. The light is collected and converted into a measurement of the energy deposition.
The CEM is made of 0.5 cm thick polystyrene scintillator active layers, separated by 0.32 cm thick lead absorber layers. The CEM extends from a radius of 173 cm up to 208 cm from the beam line, and the total thickness of the CEM material is about 18 radiation lengths. It is divided into two identical pieces at η = 0, and both have a one-inch-thick iron plate at η = 0. This kind of uninstrumented region is commonly referred to as a "crack". An important parameter is the energy resolution. The CEM resolution for electrons or photons between 10 and 100 GeV is given by

    σ(E)/E = 13.5%/√E_T ⊕ 2%    (CEM),    (3-2)

where E_T (in GeV) is the transverse energy of the electron or photon, and the symbol ⊕ indicates that the two independent terms are added in quadrature.

Inside the CEM, at a depth of about six radiation lengths, or 184 cm away from the beam line, there is the Central Electromagnetic Shower Maximum detector (CES). Its position corresponds to the location of the maximum development of the electromagnetic shower, which was described earlier. The CES determines the shower position and its transverse development using a set of orthogonal strips and wires. Cathode strips are aligned in the azimuthal direction, providing z-view information, and anode wires are arranged along the z direction, providing the r−φ view information. The position measurement using this detector has a resolution of 0.2 cm for 50 GeV electrons. The CHA is located right after the CEM, and its pseudorapidity coverage is |η| < 0.9, while the WHA calorimeter extends this coverage up to |η| < 1.3. It has a depth of about 4.5 interaction lengths and consists of 1 cm thick acrylic scintillator layers interleaved with 2.5 cm thick steel layers. The endwall calorimeter uses 5 cm thick absorber layers.
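The quadrature sum in Eq. (3-2) is easy to evaluate; a short sketch makes the behavior explicit — the stochastic term dominates at low E_T and the 2% constant term takes over at high E_T:

```python
import math

def cem_resolution(et_gev):
    """Fractional CEM energy resolution from Eq. (3-2):
    13.5%/sqrt(E_T) added in quadrature with a 2% constant term."""
    return math.hypot(0.135 / math.sqrt(et_gev), 0.02)

res_50 = cem_resolution(50.0)   # about 2.8% for a 50 GeV electron or photon
```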
The electromagnetic and hadronic calorimeters were calibrated using electron and pion test beams of 50 GeV, respectively. Their performance is described by the energy resolution. For charged pions between 10 and 150 GeV it is given by

    σ(E)/E = 75%/√E_T ⊕ 3%    (CHA, WHA).    (3-3)

Plug Calorimeter

The PEM and PHA calorimeters cover an |η| range between 1.1 and 3.6 and employ the same principles. The PEM is a lead/scintillator calorimeter with 0.4 cm thick active layers and 0.45 cm thick lead layers. It also includes a shower maximum detector at a depth of about 6 radiation lengths, the PES, but it is not used in this analysis. The PHA contains 0.6 cm thick scintillator layers and 5 cm thick iron layers. An r−z cross-section view of the CDF plug calorimeters is shown in Figure 3-3.

Figure 3-3: The r−z view of the new Run II end plug calorimeter.

In this analysis the calorimeters were used to determine the momentum and direction of electrons and jets.

3.2.2 Tracking System

The purpose of the tracking system is to reconstruct the trajectories and momenta of charged particles and to find the location of the primary and secondary vertices. A primary vertex is the location where a pp̄ interaction occurred. A secondary vertex is the location where a decay took place. For instance, charm and bottom hadrons have a longer lifetime than light-quark hadrons, long enough that they can travel and decay at a location experimentally discernible from the primary vertex location. Such distances are of the order of hundreds of microns, and this feature is exploited in heavy flavor tagging algorithms. The components of the tracking system are the following: a superconducting solenoid, the silicon detectors, and a large open-cell drift chamber known as the Central Outer Tracker (COT).
A diagram is shown in Figure 3-4.

Figure 3-4: Longitudinal view of the CDF II Tracking System.

As can be seen, the COT is not very useful for |η| > 1, so CDF can rely only on the silicon detectors in that region. But for the |η| < 1 range both silicon and COT information is used, and a full 3D track reconstruction is possible.

The Solenoid

This is a superconducting magnet which produces a 1.4 T uniform magnetic field oriented along the z-axis. It is 5 m long and 3 m in diameter, and it allows for the determination of the momentum and charge sign of charged particles.

Silicon Detectors

The silicon system is composed of three separate parts: Layer 00 (L00), the Silicon Vertex Detector (SVX), and the Intermediate Silicon Layers (ISL).

Layer 00

This is the innermost part of the silicon detectors and is made up of a single layer of radiation-hard silicon attached to the beam pipe [14]. Its purpose is to improve the impact parameter resolution for low-momentum particles, which suffer multiple scattering in the materials and readout electronics found prior to the other tracking system components. It can also help extend the lifetime of the tracking system in general, given that the inner layers of the SVX will degrade due to radiation damage.

Silicon Vertex Detector

The SVX is segmented into three barrels along the z-axis and has a total length of 96 cm. Each barrel is divided into 12 wedges which contain five layers of silicon microstrip detectors. All layers are double-sided (Figure 3-5).

Figure 3-5: Isometric view of the three-barrel structure of the CDF Silicon Vertex Detector.

It is located outside the L00, from 2.4 cm to 10.7 cm in radial coordinate.
Both r−z and r−φ coordinates are determined. This subsystem is used to trigger on displaced vertices, which are an indication of heavy flavor content, and it helps with the track reconstruction. It is a complex system involving a total of 405,504 channels, and unfortunately it is impossible to present it in any detail without going into too many technicalities.

Intermediate Silicon Layers

The ISL is composed of three layers of double-sided silicon with axial and small-angle stereo sides, and it is placed just outside the SVX. The geometry is less intuitive, but it can be seen in Figure 3-4: there is one layer in the central region (|η| < 1) at a radius of 22 cm, while in the plug region (1 < |η| < 2) two layers of silicon are placed at radii of 20 and 28 cm, respectively. The SVX and ISL form a single functional system which provides stand-alone silicon tracking and heavy flavor tagging over the full region |η| < 2.0.

Central Outer Tracker

The COT is a large open-cell drift chamber which provides tracking at relatively large radii, between 44 cm and 132 cm, and it covers the region |η| < 1.0. It consists of four axial and four small-angle (±3°) stereo super-layers. The superlayers are divided into small cells, and each cell contains 12 sense wires. The end-view of the COT detector is shown in Figure 3-6. The cells are filled with a gas mixture of Ar-Et-CF4 in proportions 50:35:15. The charged particles passing through the chamber ionize the gas, and the produced electrons are attracted to the sense wires. When they arrive in the vicinity of a wire, a process of avalanche ionization occurs, and more electrons are produced and then collected by the wire.
The location of the initial electron can be calculated based on the sense wire which was hit and the drift velocity. This only describes how one 'point' of the trajectory is determined, but the process repeats in other cells, and based on the locations of many such hits a track trajectory is reconstructed. The important parameter to be reconstructed is the track curvature, from which the particle momentum is obtained. The COT has a curvature resolution of about 0.7 × 10⁻⁴ cm⁻¹, which leads to a momentum resolution of σ(p_T)/p_T² ≈ 0.3% (GeV/c)⁻¹. The typical drift velocity is about 100 μm/ns.

Figure 3-6: One sixth of the COT in end-view; odd superlayers are small-angle stereo layers and even superlayers are axial.

The COT allows for the reconstruction of tracks of charged particles in the r−φ and r−z planes.

3.2.3 The Muon System

The Muon System is positioned farthest from the beam line and is composed of four systems of scintillators and proportional chambers. They cover the region up to |η| < 2. In this analysis we only use muons detected by the three central muon detectors, known as the Central Muon Detector (CMU), the Central Muon Upgrade (CMP), and the Central Muon Extension (CMX). Since these systems are placed behind the calorimeter and behind the return yoke of the magnet, most other particles are absorbed before reaching them. Additionally, an extra layer of 60 cm of steel is added in front of the CMP for the same purpose of absorbing other particles. These three systems cover the region |η| < 1.0. The 1.0 < |η| < 2.0 range is covered by the Intermediate Muon System (IMU), but we don't use it in this analysis.
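The curvature-to-momentum step above rests on the standard helix relation for a charged track in a solenoidal field, p_T ≈ 0.3 B R (with B in tesla, the radius of curvature R in meters, and p_T in GeV/c). A minimal sketch of that generic relation — not of CDF's internal curvature conventions — is:

```python
def pt_from_radius(radius_m, b_field_t=1.4):
    """Transverse momentum (GeV/c) of a unit-charge track with a given radius
    of curvature (m) in a solenoidal field (T), via p_T = 0.3 * B * R."""
    return 0.3 * b_field_t * radius_m

# A track bending with a 5 m radius in CDF's 1.4 T field:
pt = pt_from_radius(5.0)   # 2.1 GeV/c
```

This is why a fixed curvature uncertainty translates into a momentum uncertainty growing like p_T², as in the quoted σ(p_T)/p_T² figure of merit.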
3.2.4 The Trigger System

As mentioned earlier, in Run II bunches of protons and antiprotons collide every 396 ns. The average number of pp̄ collisions per bunch crossing depends on the instantaneous luminosity, but for typical luminosities in Run II we expect one pp̄ collision or more per bunch crossing; therefore, if we were to record all events, we would need to save 1.7 million events per second. The typical event size is about 250 kB, so at such a rate we would need to save about 425 GB of data per second. However, most pp̄ collisions are diffractive inelastic collisions, in which the proton or antiproton is broken into hadrons before the two are close enough that a "hard core" interaction between partons can occur. These types of collisions are not of much interest, and therefore there is no need to record them. The purpose of the trigger system is to filter out these less interesting events, and to categorize and save the remaining ones. This is achieved through the 3-tier architecture shown in Fig. 3-7. The Level-1 (L1) and Level-2 (L2) trigger systems use only part of the event information to make a decision regarding the event. They use dedicated hardware to perform a partial event reconstruction. At Level-1 all events are considered. They are stored in a pipeline, since the L1 logic needs 4 μs to reach a decision, much longer than the 396 ns between two consecutive events.
So while the decision-making algorithm is executed by the L1 hardware, the event is pushed down the pipeline, which serves the purpose of temporary memory. When the event reaches the end of the pipeline, the decision is made, and the event is either ignored or allowed to move on to Level-2.

Figure 3-7: Dataflow of the CDF II "deadtimeless" trigger and DAQ system.

It is important to bear in mind that the L1 trigger is a synchronous pipeline, with decision making pipelined such that many events are present in the L1 trigger logic simultaneously, yet at different stages. Even though it takes 4 μs to reach a decision, and even though events come every 396 ns, the trigger analyzes them all, just not one at a time. The L1 trigger reduces the initial rate of about 1.7 MHz to below 20 kHz. The Level-2 trigger is an asynchronous system with an average decision time of 20 μs. The events passing L1 are stored in one of the four L2 buffers while waiting for a L2 decision. If an event arrives from L1 and all the L2 buffers are full, the system incurs dead time, which is recorded during the run. The L2 trigger has an acceptance rate of about 300 Hz, another significant reduction. An event that passes L2 is transferred to the data acquisition (DAQ) buffers and then, via a network switch, to a Level-3 CPU node. L3 uses full event reconstruction to decide whether to write the event to tape or not. It consists of a "farm" of commercial CPUs, each processing one event at a time.
If the event passes this level as well, it is sent for writing to tape. The maximum output rate at L3 is 75 Hz, the main limitation being the data-logging rate, with a typical value of 18 MB/s. Events are classified according to their characteristics and separated into different trigger paths. Some of these classes of events are produced copiously, and in order to leave enough bandwidth for less abundant event types a prescale mechanism is put in place. For example, a prescale of 1:20 keeps only one event out of 20 that passed the trigger requirements.

CHAPTER 4
EVENT RECONSTRUCTION

The raw data out of the many subdetectors contain a wealth of information which is not always relevant from a physics analysis point of view. For instance, in this analysis we need to know the momenta of electrons, among other things. But what we do have in terms of raw data is a series of hits in the tracking system and energy depositions in the electromagnetic and hadronic calorimeters, and these readings could be caused by other particles, or may not be compatible with the trajectory of an electron in the magnetic field of the detector. Therefore, detailed studies are necessary in order to find an efficient way of identifying raw data patterns compatible with those produced by an electron passing through the detector, while at the same time rejecting as many fakes as possible. In short, the task of the event reconstruction is to identify the particles which were present in the event and to measure their 4-momenta as well as possible. We will investigate this process in more detail for each kind of particle involved.

4.1 Quarks and Gluons

Quarks and gluons produce a spray of particles via parton showering, hadronization, and decay. Therefore they do not interact with the detector directly, but appear as a more or less compact set of tracks and calorimeter towers in which energy has been deposited.
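The 1:20 prescale described above can be sketched as a simple counter over the stream of triggered events. The deterministic counter below is one illustrative implementation, not necessarily the hardware's actual bookkeeping:

```python
def prescale(events, n=20):
    """Apply a 1:n trigger prescale: keep one event out of every n accepted."""
    return [event for i, event in enumerate(events) if i % n == 0]

# 100 triggered events with a 1:20 prescale leave 5 for the tape:
kept = prescale(list(range(100)), n=20)
```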
By "compact" we mean compact in the η−φ plane. Such a detector pattern is called a jet, and in this case the purpose of the reconstruction is to identify jets consistent with quark or gluon origins and to estimate their overall energy and momentum.

4.1.1 Jet Clustering Algorithm

There are several algorithms to identify these jets and estimate their energy. In this analysis we used an iterative "fixed cone" algorithm (JETCLU) for jet identification [15]. The idea is to find something like the center of the jet and then assign all towers within a given radius R in the η−φ plane around this center to that jet. The algorithm begins by creating a list of all seed towers, i.e., the towers with transverse energy above some fixed threshold (1 GeV). Then, for each of the seed towers, starting with the highest-E_T tower, a precluster is formed from all seed towers within radius R of the seed tower. In this iterative process, the seed towers already assigned to a precluster are removed from the list of available seed towers. For each precluster a new center is found by taking an E_T-weighted average of the η−φ positions of the towers pertaining to the precluster. This is called the "centroid". Now, using the centroids as origins, we can recluster the towers, this time allowing for the inclusion of towers with energy above a lower threshold (100 MeV). Again we compute the centroid, and the process is repeated until it converges, that is, until the latest centroid is very close to the previous one. In the iterative procedure it is possible to have one tower belonging to two jets. But this would lead to inconsistencies, because the total energy of the jets would not be equal to the total energy of the towers. Therefore, after the iterative procedure is finished, we have to resolve this double-counting issue. One way is to merge the clusters that share towers.
This happens if the shared towers carry more than 75% of the energy of the smaller cluster. If this requirement is not satisfied, each shared tower is instead assigned to the closest cluster. In order to find the 4-momentum of the jet, we assign a massless 4-momentum to each electromagnetic and hadronic tower compartment based on the measured energy in the tower. The direction is given by the unit vector pointing from the event vertex to the center of the calorimeter tower at the depth that corresponds to the shower maximum. The total jet 4-momentum is defined by summing over all towers in the cluster in the following way:

    E = \sum_{i=1}^{N} \left( E^{em}_i + E^{had}_i \right)    (4-1)

    p_x = \sum_{i=1}^{N} \left( E^{em}_i \sin\theta^{em}_i \cos\phi^{em}_i + E^{had}_i \sin\theta^{had}_i \cos\phi^{had}_i \right)    (4-2)

    p_y = \sum_{i=1}^{N} \left( E^{em}_i \sin\theta^{em}_i \sin\phi^{em}_i + E^{had}_i \sin\theta^{had}_i \sin\phi^{had}_i \right)    (4-3)

    p_z = \sum_{i=1}^{N} \left( E^{em}_i \cos\theta^{em}_i + E^{had}_i \cos\theta^{had}_i \right)    (4-4)

where E^{em}_i, E^{had}_i, \phi^{em}_i, \phi^{had}_i, \theta^{em}_i, \theta^{had}_i are the electromagnetic and hadronic tower energies, azimuthal angles and polar angles for the i-th tower in the cluster. The jet 4-momentum depends on the choice of R. For small values, towers belonging to the original parton are not included in the cluster, while for large values we risk merging jets belonging to separate partons. A compromise used in many CDF analyses is R = 0.4, and this is the value used here as well.

4.1.2 Jet Energy Corrections

The algorithm just presented returns an energy value that needs further corrections in order to reflect, on average, the parton energy. The reasons for the discrepancy are many, some instrumental and some due to underlying physical processes.
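The tower sums of Eqs. 4-1 through 4-4 are a straightforward accumulation of massless 4-vectors. A minimal sketch, with hypothetical tower fields rather than the actual CDF data format:

```python
import math

def jet_four_momentum(towers):
    """Sum massless 4-vectors over the EM and hadronic compartments of each
    tower, as in Eqs. 4-1 to 4-4. Each tower is a dict carrying energies
    E_em, E_had and angles theta_em, phi_em, theta_had, phi_had (radians)."""
    E = px = py = pz = 0.0
    for t in towers:
        for comp in ("em", "had"):
            e = t[f"E_{comp}"]
            th = t[f"theta_{comp}"]
            ph = t[f"phi_{comp}"]
            E += e
            px += e * math.sin(th) * math.cos(ph)
            py += e * math.sin(th) * math.sin(ph)
            pz += e * math.cos(th)
    return E, px, py, pz

# A single central tower (theta = pi/2, phi = 0) gives a purely transverse jet.
towers = [{"E_em": 10.0, "E_had": 5.0,
           "theta_em": math.pi / 2, "phi_em": 0.0,
           "theta_had": math.pi / 2, "phi_had": 0.0}]
E, px, py, pz = jet_four_momentum(towers)
```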
A few important instrumental effects are listed below:

- Jets in less instrumented regions, such as between calorimeter wedges or in the η = 0 region, will naturally measure less energy.
- It is known that for low-energy charged pions (E_T < 10 GeV) the calorimeter response is non-linear, while in the energy measurement procedure it is assumed linear.
- Charged particles with transverse momenta below 0.5 GeV/c are bent by the magnetic field and never reach the calorimeter.
- Fluctuations intrinsic to the calorimeter response.

Important physical effects are the following:

- The jet can contain muons, which leave little energy in the calorimeter, and neutrinos, which escape undetected. Therefore the cluster energy underestimates the parton energy.
- Choosing a radius R = 0.4 in the clustering algorithm, we lose all towers rightfully belonging to the jet but lying outside that radius.
- Extra particles can hit the same towers, coming either from other interactions present in the event or from the underlying event (the interaction of the proton and antiproton remnants, i.e., the quarks that did not take part in the hard process).

CDF developed a standard procedure [16] to correct for such effects. The user can choose to correct only for certain effects using the standard corrections, and to correct other effects with more analysis-specific corrections. This is the case for this analysis: we use the standard corrections only for the instrumental effects. From there, we use Monte Carlo simulations to map the correlation between the parton energy and the (partially) corrected measured jet energy.

4.2 Electrons

In this analysis we use only electrons detected in the central calorimeter.
Most if not all of an electron's energy is deposited in the electromagnetic calorimeter, so the reconstruction algorithm starts by identifying the list of seed towers, which are towers with electromagnetic energy greater than 2 GeV. Then, towers adjacent to the seed towers are added to the cluster if they have non-zero electromagnetic or hadronic energy and are located in the same wedge and nearest in η direction. At the end, only clusters with electromagnetic E_T greater than 2 GeV and hadronic-to-electromagnetic energy ratio smaller than 0.125 are kept. However, this last requirement on the ratio is ignored for very energetic electrons with energy greater than 100 GeV.

What has been described above is just an "electromagnetic object" candidate. It serves as the basis for identifying both electrons and photons. Further selection criteria [17] are necessary to identify electrons and separate them from photons, isolated charged hadrons, π0 mesons and jets faking leptons. These criteria are listed below:

- A quality COT track with a direction matching the location of the calorimeter cluster must be present.
- The ratio of hadronic to electromagnetic calorimeter energy (HADEM) satisfies HADEM < 0.055 + 0.00045 · E, where E is the energy.
- Compatibility between the lateral shower profile of the candidate and that of test-beam electrons.
- Compatibility between the CES shower profile and that of test-beam electrons.
- The associated track's z position should be in the luminous region of the beam, which is within 60 cm of the nominal interaction point.
- The ratio of additional calorimeter transverse energy found in a cone of radius R = 0.4 to the transverse energy of the candidate electron is less than 0.1 (isolation requirement).
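The purely calorimetric cuts in the list above (the sliding HADEM cut and the isolation requirement) can be sketched as a simple predicate; the dict fields are hypothetical stand-ins for the reconstructed quantities, and the track and shower-profile cuts are omitted:

```python
def passes_electron_calo_cuts(e):
    """Sketch of the HADEM and isolation cuts described in the text.
    `e` carries total energy E (GeV), compartment energies E_em and E_had,
    transverse energy et, and iso_et (extra E_T in an R = 0.4 cone)."""
    hadem = e["E_had"] / e["E_em"]
    if hadem >= 0.055 + 0.00045 * e["E"]:   # sliding HADEM cut
        return False
    if e["iso_et"] / e["et"] >= 0.1:        # isolation requirement
        return False
    return True

# A clean 50 GeV candidate passes; one with large hadronic leakage fails.
good = {"E": 50.0, "E_em": 49.0, "E_had": 1.0, "et": 45.0, "iso_et": 2.0}
bad = dict(good, E_had=10.0)
```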
4.3 Muons

Muons leave little energy in the calorimeter, but they can be identified by extrapolating the COT tracks to the muon chambers and looking for matching stubs there [18]. A stub is a collection of hits in the muon chambers that form a track segment. The muon candidates are preselected by requiring rather loose matching criteria between the COT track and the stubs. As for electrons, we apply a set of identification cuts [17] to separate muons from cosmic rays and from hadrons penetrating the calorimeter:

- Energy deposition in the calorimeter consistent with a minimum ionizing particle, usually hadronic energy less than 6 GeV and electromagnetic energy less than 2 GeV. Small energy-dependent terms are added for very energetic muons with track momentum greater than 100 GeV.
- The distance between the extrapolated track and the stub is small, compatible with a muon trajectory. The actual value depends on the particular muon detector involved (CMP, CMU, CMX), but it is around 5 cm.
- The distance of closest approach of the reconstructed track to the beam line (d0) is less than 0.2 cm for tracks containing no silicon hits and less than 0.02 cm for tracks containing silicon hits (which provide better resolution).
- As for electrons, the associated track's z position should be in the luminous region of the beam, within 60 cm of the nominal interaction point.
- The ratio of additional transverse energy in a cone of radius R = 0.4 around the track direction is less than 0.1 (isolation requirement).

4.4 Neutrinos

Neutrinos escape detection entirely, but since the total transverse momentum of the event is zero, including the neutrinos, we can indirectly measure their total transverse momentum by summing all the transverse energy (momentum) measured in the detector and assigning any imbalance to neutrinos or other (undiscovered) long-lived neutral particles escaping detection.
This quantity is called "missing transverse energy" and is defined by

    \not\!E_x = -\sum_{i=1}^{N} \left( E^{em}_i \sin\theta^{em}_i + E^{had}_i \sin\theta^{had}_i \right) \cos\phi_i    (4-5)

    \not\!E_y = -\sum_{i=1}^{N} \left( E^{em}_i \sin\theta^{em}_i + E^{had}_i \sin\theta^{had}_i \right) \sin\phi_i    (4-6)

where E^{had}_i, E^{em}_i are the hadronic and, respectively, electromagnetic energies of the i-th calorimeter tower, \theta_i is the polar angle of the line connecting the event vertex to the center of the i-th tower, and \phi_i is an energy-weighted average azimuth defined through

    \cos\phi_i = \frac{E^{em}_i \sin\theta^{em}_i \cos\phi^{em}_i + E^{had}_i \sin\theta^{had}_i \cos\phi^{had}_i}{E^{em}_i \sin\theta^{em}_i + E^{had}_i \sin\theta^{had}_i}    (4-7)

with \phi^{em}_i, \phi^{had}_i being weighted averages themselves, computed within the tower. In the calculation of the missing E_T using the formulae above, only towers with energy above 0.1 GeV are used; this requirement is applied individually to the hadronic and electromagnetic components. The magnitude is given by

    \not\!E_T = \sqrt{\not\!E_x^2 + \not\!E_y^2}    (4-8)

Since muons do not leave much energy in the calorimeter, and raw jet energy measurements are systematically low, the above quantity is only a first-order approximation of the neutrinos' total transverse momentum and needs further corrections.

The first correction is directly related to the jet corrections. If we scale the energy of the jets by some factor because that is a better match to the parton energy, then in computing the total measured transverse energy we should replace the raw jet energy measured by the calorimeter with the corrected energy as given by the jet energy corrections. These corrections are applied only to jets with E_T above 8 GeV, and therefore calorimeter towers not included within such jets do not receive any correction. The second correction is related to muons being minimum ionizing particles, leaving little energy in the calorimeter.
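The raw tower sum of Eqs. 4-5, 4-6 and 4-8 (before the jet and muon corrections) can be sketched as follows; tower fields are hypothetical, and the effective azimuth of Eq. 4-7 is taken as an input per tower:

```python
import math

def raw_missing_et(towers, threshold=0.1):
    """Raw missing E_T from Eqs. 4-5, 4-6, 4-8. Each tower carries E_em,
    E_had, theta_em, theta_had and the energy-weighted azimuth phi; the
    0.1 GeV threshold is applied per compartment, as in the text."""
    mex = mey = 0.0
    for t in towers:
        et = 0.0
        if t["E_em"] > threshold:
            et += t["E_em"] * math.sin(t["theta_em"])
        if t["E_had"] > threshold:
            et += t["E_had"] * math.sin(t["theta_had"])
        mex -= et * math.cos(t["phi"])
        mey -= et * math.sin(t["phi"])
    return math.hypot(mex, mey)

# One central 20 GeV EM deposit (hadronic part below threshold) gives 20 GeV.
towers = [{"E_em": 20.0, "E_had": 0.05,
           "theta_em": math.pi / 2, "theta_had": math.pi / 2, "phi": 0.0}]
met = raw_missing_et(towers)
```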
Since muons are minimum ionizing, a better estimate of the total transverse energy of the event is obtained by removing the calorimeter towers associated with muons from the above calculations and replacing their contribution with the measured transverse momenta of the muons. In this analysis we use the missing E_T value only for event selection. It plays no role in the reconstruction of the invariant mass, and therefore more detailed studies of the missing E_T resolution are not included here.

CHAPTER 5
EVENT SELECTION AND SAMPLE COMPOSITION

The top quark decays so quickly that it does not have time to form any top hadrons, and therefore a tt̄ final state appears under different signatures depending on the decay chain of the top quark:

    t \to W^+ b    (5-1)

    W^+ \to l^+ \nu_l, \qquad W^+ \to q \bar{q}'    (5-2)

where l stands for one of the charged lepton types e, μ or τ, q stands for u or c, and q′ for one of the "down" quarks d, s or b. The top quark can also decay to a d or an s quark instead of a b, but the combined branching ratios for these two processes are below 1% and are generally ignored. Based on these decay modes, we can see that a tt̄ pair decay can appear under three different experimental signatures:

- Six jets, or sometimes more due to radiation, when both W bosons decay hadronically. This is the "hadronic" channel.
- Four jets or more, a charged lepton and missing E_T, when only one W boson decays hadronically. This is the "lepton+jets" channel.
- Two jets or more, two charged leptons of opposite sign and missing E_T, when both W bosons decay leptonically. This is the "dilepton" channel.
The scheme is complicated a bit because the τ lepton also decays before detection, and it can either "transform" into a jet, if it decays hadronically, or produce an electron or a muon and more neutrinos, if it decays leptonically. However, regardless of the decay mode, τ events are difficult to identify, and we decided to develop an algorithm intended to work well with non-τ events only. The branching ratios are determined essentially by the W branching ratios and lead to the following numbers:

Table 5-1: tt̄ decay modes

    Category                     Branching ratio
    Dilepton (excluding τ)       5%
    Dilepton (at least one τ)    6%
    Lepton+jets (excluding τ)    30%
    τ+jets                       15%
    Hadronic                     44%

5.1 Choice of Decay Channel

The choice of decay channel has to take into account two more factors: the intrinsic M_tt̄ reconstruction resolution and the signal-to-background ratio (S/B). The reconstruction resolution is worse when more information is missing. Let us take a look at each channel individually:

- In the dilepton channel we measure the lepton momenta well, we have some uncertainty on the two b-quark momenta due to the various effects described in the previous chapter, and we do not measure at all the momenta of the two neutrinos (6 variables).
- In the lepton+jets channel we measure the lepton momentum well, we have some uncertainty on the four quark momenta, and we do not measure at all the neutrino momentum (3 variables).
- In the hadronic channel we have some uncertainty on the six quark momenta.

In each case we can reduce the number of unknown variables by applying transverse momentum conservation, which yields two constraints; but since this is the same across the channels, we can compare them based on the facts stated above.
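The numbers in Table 5-1 follow from counting W decay modes: three lepton flavors and, at leading order, six quark color/flavor combinations, so BR(W → ℓν) ≈ 1/9 per flavor. A quick cross-check of the table with exact fractions:

```python
from fractions import Fraction

l = Fraction(1, 9)   # BR(W -> l nu) per lepton flavor (LO counting)
h = Fraction(6, 9)   # BR(W -> q qbar')

dilepton_no_tau = (2 * l) ** 2                    # both W -> e or mu
dilepton_tau = (3 * l) ** 2 - dilepton_no_tau     # dilepton, at least one tau
lepjets_no_tau = 2 * (2 * l) * h                  # e/mu + jets
tau_jets = 2 * l * h                              # tau + jets
hadronic = h ** 2                                 # both W -> quarks

# 4/81 ~ 5%, 5/81 ~ 6%, 24/81 ~ 30%, 12/81 ~ 15%, 36/81 ~ 44%
```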
If non-tt̄ backgrounds were absent, we would certainly pick the hadronic channel, since it has the highest branching ratio and the least loss of information, because no neutrinos escape detection. However, the S/B ratio for Standard Model tt̄ in the hadronic channel, without any tagging requirement, is about 1:20, while the S/B ratio for the lepton+jets channel is roughly 1:2, with a branching ratio comparable to that of the hadronic channel (about 2/3 of it). Even though the resolution analysis would also favor the hadronic channel, with such a large background it most probably has less potential than the lepton+jets channel. The dilepton channel has the most unknown variables, leading to the poorest reconstruction resolution, and a significantly lower branching ratio, even though it enjoys the best S/B, around 3:1. This qualitative analysis led us to pick the lepton+jets channel as the best candidate for this analysis at the beginning of Run 2, when we expected less than 1 fb⁻¹ of integrated luminosity to be available for this dissertation. The final dataset on which this analysis is performed corresponds to 680 pb⁻¹ of data.

5.2 Data Samples

The data used in this analysis were collected between February 2002 and September 2005. A preselection of the data is carried out by the collaboration, and bad runs in which various components of the detector malfunctioned are removed. The remaining good data correspond to a total integrated luminosity of 680 pb⁻¹. Two distinct datasets were used: the high-P_T central electron dataset and the high-P_T muon dataset. The electron dataset is selected by a trigger path that requires a Level-3 electron candidate with CEM E^em_T > 18 GeV, E_had/E_em < 0.125, and a COT track with p_T > 9 GeV/c. The muon dataset is selected by a trigger path that requires a Level-3 muon candidate with p_T > 18 GeV/c. We use only CMX muons or muons with stubs in both the CMU and CMP subdetectors.
Dilepton e–μ events can appear in both datasets, and one has to be careful not to double count them.

5.3 Event Selection

In order to select tt̄ events in the lepton+jets channel, we require that each event contains at least four jets, an electron or a muon, and missing E_T consistent with the presence of a neutrino, that is, a missing-E_T value well above the fluctuations around a null measurement. Certainly this leaves a lot of room for maneuver with respect to the η range and the minimum E_T threshold required for each object. An exhaustive study to optimize the cuts has not been done independently; however, we adopted the widely used cuts for Standard Model tt̄ selection in the lepton+jets channel, which can be found in most CDF top analyses. These cuts are the result of a great amount of work throughout Run 1 and Run 2 and do a fine job of separating signal (Standard Model tt̄ in this case) from backgrounds. There could be better cuts that improve the resonant-tt̄ S/B, but further studies would be necessary to understand the overall effect on sensitivity, and what is optimal for a 400 GeV/c² resonance may not be so for an 800 GeV/c² resonance. The task of studying in detail the impact of the selection criteria on sensitivity will have to be addressed in a later version of the analysis. However, we did compare the sensitivity among three versions of the jet selection and chose the best, as will be explained later.
Table 5-2: Event selection

    Object       Requirements
    Electron     CEM, fiducial, not from a conversion; E_T > 20 GeV + ID cuts
    Muon         CMX or (CMU and CMP) detectors, not cosmic; P_T > 20 GeV + ID cuts
    Missing E_T  Corrected missing E_T > 20 GeV
    Tight jets   Corrected E_T > 15 GeV, |η| < 2.0; at least four tight jets
    Loose jets   Corrected E_T > 8 GeV, |η| < 2.4; not used for selection per se, but counted as jets

In Table 5-2 we present in succinct form the requirements [19] for the selection of electrons, muons and jets, and the missing-E_T cut used. Positrons and antimuons follow the same selections, of course. By "fiduciality" of electrons it is meant that they are located in well instrumented areas of the towers, not near tower edges for instance. Conversion removal algorithms are used to remove electrons or positrons that come from photons hitting the various materials found before the calorimeter and producing e⁻e⁺ pairs; we are not interested in such electrons. The removal per se is done by a standard CDF algorithm [20]. There is also an algorithm for eliminating cosmic-ray muons [21], and it is used to veto such muons in our selection. We also require one and only one lepton, and that the distance between the lepton track's z0 coordinate and the jets' vertex position be less than 5 cm, since consistency with tt̄ production requires that all our objects come from the same interaction point. The identification criteria complete the event selection rules; they were discussed in the previous chapter, together with the corrections for the missing E_T and the jets.

A simple study was performed in which we compared the sensitivities of three jet selection criteria:

- exactly four tight jets
- four tight jets + extra jets (or none)
- three tight jets + extra jets (> 0)

The first option provided the best sensitivity, and we adopted it for our selection.
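The selection of Table 5-2 can be sketched as a single predicate over a reconstructed event; the dict layout is hypothetical, and the per-object ID cuts are assumed to have been applied already:

```python
def passes_selection(event):
    """Sketch of the Table 5-2 event selection: exactly one lepton above
    20 GeV, corrected missing E_T above 20 GeV, and at least four tight
    jets (corrected E_T > 15 GeV, |eta| < 2.0)."""
    leptons = [l for l in event["leptons"] if l["pt"] > 20.0]
    if len(leptons) != 1:
        return False
    if event["met"] <= 20.0:
        return False
    tight = [j for j in event["jets"]
             if j["et"] > 15.0 and abs(j["eta"]) < 2.0]
    return len(tight) >= 4

event = {"leptons": [{"pt": 35.0}],
         "met": 30.0,
         "jets": [{"et": 40.0, "eta": 0.5}, {"et": 30.0, "eta": -1.0},
                  {"et": 25.0, "eta": 1.5}, {"et": 16.0, "eta": 0.1}]}
```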
5.4 Sample Composition

The leading Standard Model processes that can produce events passing these selection criteria are the following:

- W production in association with jets (W+jets), where the W decays leptonically, producing a lepton and missing E_T.
- tt̄ events.
- Multijet events where one jet fakes an electron. We will refer to these generically as QCD.
- Diboson events such as WW, WZ and ZZ.

The relative contribution of these processes can be derived if we know the theoretical cross-section and the acceptance for each of them.

Table 5-3: Cross-sections and acceptances

    Process    Cross-section   Acceptance
    SM tt̄      6.7 pb          4.5%
    WW         12.4 pb         0.14%
    WZ         3.7 pb          0.08%
    ZZ         1.4 pb          0.02%
    W+jets     ?               0.7%
    QCD        ?               0.7%

However, the W+jets and QCD cross-sections are not known theoretically with good precision; in other CDF top analyses the number of events from these processes is extracted from the data. For this analysis we decided to use only the ratio of the expected numbers of events as derived by those analyses, and to fit for the absolute normalization, since in those analyses no room was left for any non-Standard-Model process, and that could bias our search. The constraint used is given below:

    \frac{N_{QCD}}{N_W} = 0.1    (5-3)

where N represents the expected number of events. Resonant tt̄ acceptances are listed for comparison in Table 5-4.

Table 5-4: Signal acceptances

    M_X0 (GeV/c²)   Acceptance
    450             0.047
    500             0.051
    550             0.055
    600             0.057
    650             0.059
    700             0.062
    750             0.062
    800             0.063
    850             0.063
    900             0.061

The search algorithm finds the most likely values of N_W and of the signal cross-section as a function of resonance mass, and it is also able to compute the statistical significance of the most likely signal cross-section value. We will explore it in detail in the next chapters.
CHAPTER 6
GENERAL OVERVIEW OF THE METHOD AND PRELIMINARY TESTS

This analysis contains two major pieces: one is the tt̄ invariant mass (M_tt̄) reconstruction, and the second is the search for a non-Standard-Model component in that spectrum, in particular a resonance contribution. The reconstruction is complicated because our parton-level final state, after the top decay chain, is composed of two b-quarks, two light quarks, a neutrino and a charged lepton. Experimentally we measure accurately only the lepton, which makes the task of reconstructing the tt̄ invariant mass spectrum with good precision non-trivial. There are a total of seven poorly measured or unmeasured variables: the four quark energies and the three components of the neutrino momentum. In fact, the jet direction is also smeared compared to the parton direction, but this is considered a second-order effect compared to the above-mentioned effects. Throughout the remainder of this dissertation we will always assume that the jet direction is a good approximation of the parton direction.

In the CDF Run 1 analysis [11] a somewhat straightforward approach was used to reconstruct the invariant mass spectrum. A χ² was constructed based on the jet resolutions and the knowledge of the W and t masses, and it was used to weight the unknown parton values. Minimizing the χ² with respect to the free parameters (the unknowns listed above) provided an estimate of their most probable values. Those values were then used to compute the invariant mass of the system, M_tt̄. In this dissertation we use an innovative approach, employing matrix element information to reconstruct the tt̄ invariant mass spectrum.
The maximum information about any given process is contained in its differential cross-section, and it is therefore natural to expect that by making use of more information in the analysis one can improve resolution and therefore sensitivity. Since we decided to pursue a model-independent search, we will not be able to use any resonance matrix elements. We will use the Standard Model tt̄ matrix element to help weight the various possible parton-level configurations and extract an average value for the invariant mass, event by event. The invariant mass distribution obtained in such a way follows closely the Standard Model tt̄ spectrum at parton level, and it is also a good estimator for resonant tt̄ events, as will be shown later.

In order to validate the matrix element machinery, we performed a series of tests by implementing a conceptually simpler matrix element analysis: the top mass measurement using matrix elements. Our tests include only Monte Carlo simulation studies, but they played a crucial role in pushing this analysis forward, since our results were very similar to those of groups actually working on the top mass measurement using matrix element information. The remainder of this chapter will present these studies, which will also familiarize the reader with the technical details common to both analyses. In the next chapter we will show how to extend the algorithm in order to reconstruct the M_tt̄ spectrum.

6.1 Top Mass Measurement Algorithm

The purpose of this algorithm is to build a top-mass-dependent likelihood for each event using the differential cross-section of the SM tt̄ process. We will use the leading order (LO) term in the Standard Model tt̄ cross-section formula. The final state is made up of the six decay products of the tt̄ system. Let \vec{p}_i be their 3-momenta.
We have the following equation representing the conservation of the transverse momentum of the system:

    \vec{P}_T^{(6)} = \sum_{i=1}^{6} \vec{p}_{T i} = 0    (6-1)

This is a constraint on the seven unknown variables mentioned in the previous chapter, and it will be used in all the top mass tests we will show in this chapter. In reality we have initial- and final-state radiation (ISR and FSR), which leads to a non-zero \vec{P}_T^{(6)} value. Still, the average \vec{P}_T^{(6)} is null, so constraining it to 0 should not bias the result for the top mass but may only increase the statistical error. For the resonance search analysis, though, we will use the \vec{P}_T^{(6)} distribution from Monte Carlo simulation and integrate over it, since doing so helps narrow the reconstructed resonance peak.

The probability of a given parton-level final-state configuration \vec{p}_i, relative to other configurations, is given by

    dP(\vec{p}_i \,|\, m_{top}) = \frac{1}{\sigma(m_{top})} \int dz_a \int dz_b \, f_k(z_a) \, f_l(z_b) \, d\sigma_{kl}(\vec{p}_i \,|\, m_{top}, z_a \vec{P}, z_b \vec{\bar{P}})    (6-2)

or, in short,

    dP(\vec{p}_i \,|\, m_{top}) = \rho_{part}(\vec{p}_i \,|\, m_{top}) \prod_i d^3\vec{p}_i    (6-3)

The indices k, l run over the parton types in the proton and antiproton, respectively; summation over both indices is implied. The parton distribution functions (PDFs) are given by f_k(z), and \vec{P}, \vec{\bar{P}} designate the proton and antiproton momenta. Plugging in the differential cross-section formula

    d\sigma_{kl}(\vec{p}_i \,|\, p_k, p_l) = \frac{|\mathcal{M}_{kl}|^2}{4 E_k E_l |v_k - v_l|} \, (2\pi)^4 \, \delta^4\!\Big(p_k + p_l - \sum_i p_i\Big) \prod_i \frac{d^3\vec{p}_i}{(2\pi)^3 \, 2E_i}    (6-4)

one can obtain an explicit form for \rho_{part}(\vec{p}_i \,|\, m_{top}). The top mass (m_{top}) enters as a parameter. We combine the probability densities (\rho) of all events in the sample into a joint likelihood, which is a function of m_{top}:

    L(m_{top}) = \rho_1 \rho_2 \cdots \rho_n    (6-5)

We expect that maximizing this likelihood with respect to the parameter m_{top} yields its correct (input) value, as it should.
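The joint-likelihood maximization of Eq. 6-5 reduces to maximizing a sum of per-event log-densities over a grid of trial masses. A toy sketch, with simple Gaussian event densities standing in for the matrix-element-based ρ(m_top):

```python
import math
import random

def best_mass(event_densities, masses):
    """Maximize L(m) = prod_i rho_i(m) by scanning log L over a mass grid."""
    def log_likelihood(m):
        return sum(math.log(rho(m)) for rho in event_densities)
    return max(masses, key=log_likelihood)

# Toy: each event's density is a Gaussian around a "measured" mass near 175.
random.seed(1)

def gaussian_density(center, sigma=5.0):
    return lambda m: math.exp(-0.5 * ((m - center) / sigma) ** 2) + 1e-300

events = [gaussian_density(random.gauss(175.0, 5.0)) for _ in range(50)]
grid = [170.0 + 0.1 * i for i in range(101)]   # 170..180 GeV in 0.1 GeV steps
m_hat = best_mass(events, grid)
```

With equal-width Gaussian densities the maximum sits at the grid point nearest the sample mean, so m_hat lands close to the 175 GeV input.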
The algorithm presented above is only a first step, since it assumes we know the parton-level momenta, which is not true experimentally. The treatment of more realistic situations, in which we do not measure the final state completely or accurately enough, follows the same line of thought: basically, we compute the probability density of observing a lepton+jets event,

    \rho_{obs}(\vec{j}_1, \vec{j}_2, \vec{j}_3, \vec{j}_4, \vec{p}_l \,|\, m_{top}) = \sum_{\sigma} \int \rho_{part}(\vec{p}_{\sigma(1)}, \vec{p}_{\sigma(2)}, \vec{p}_{\sigma(3)}, \vec{p}_{\sigma(4)}, \vec{p}_l, \vec{p}_\nu \,|\, m_{top}) \, d^3\vec{p}_\nu \prod_{i=1}^{4} T_i(\vec{j}_{\sigma(i)} \,|\, \vec{p}_{\sigma(i)}) \, d^3\vec{p}_{\sigma(i)}    (6-6)

In this formula we assume that the first two arguments of the parton density (\rho_{part}) represent the b-quark momenta; the jet 3-momenta are denoted by \vec{j}_i and the parton 3-momenta by \vec{p}_i. T_i(\vec{j} \,|\, \vec{p}) is the probability density that a parton with 3-momentum \vec{p} is measured as a jet with 3-momentum \vec{j}. These functions are called parton-to-jet transfer functions. We use different transfer functions for b quarks and lighter quarks, so we added an index to differentiate the two; with our conventions, T_1 = T_2 = T_b and T_3 = T_4 = T_{light}. In practice we approximate the parton direction with the jet direction, as mentioned earlier, which simplifies the calculations a bit. Even with b-tagging information available, there is no unique assignment of jets to partons. This indistinguishability is addressed by summing over all allowed permutations using the permutation variable \sigma \in S_4. A permutation is allowed if it does not contradict the available b-tagging information.
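The sum over allowed assignments σ ∈ S_4 can be enumerated directly. A sketch in which an assignment is rejected if it sends a b-tagged jet to a light-quark slot (the convention of which slots are "b slots" is illustrative):

```python
from itertools import permutations

def allowed_assignments(tagged):
    """Enumerate jet-to-parton assignments for 4 jets. Slots 0 and 1 are the
    b-quark slots, slots 2 and 3 the light-quark slots; `tagged` is the set
    of b-tagged jet indices. An assignment is disallowed if it places a
    tagged jet in a light-quark slot."""
    for perm in permutations(range(4)):
        if any(perm[slot] in tagged for slot in (2, 3)):
            continue
        yield perm

# No tags: all 4! = 24 permutations. One tag: 12. Two tags: 2! * 2! = 4.
n_untagged = len(list(allowed_assignments(set())))
n_one_tag = len(list(allowed_assignments({0})))
n_two_tags = len(list(allowed_assignments({0, 1})))
```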
The procedure to extract the top mass is the same as in the idealized case of a perfect measurement of the final state discussed before: combine all events into a joint likelihood and maximize it with respect to the parameter m_{top}.

[Figure 6-1: Main leading order contribution to tt̄ production in pp̄ collisions at √s = 1.96 TeV: q q̄ annihilation into t t̄, with t → W b and one W decaying leptonically, the other hadronically.]

6.1.1 The Matrix Elements (ME)

The leading order matrix element for the process q q̄ → t t̄ → W⁺ b W⁻ b̄ → q q̄′ b l ν b̄ (Fig. 6-1) is not easily calculable analytically without making approximations. We found it useful to compute the ME directly using explicit spinors and Dirac matrices, because this allows us to compute new, non-Standard-Model matrix elements very easily in case we want to incorporate them in the algorithm later on. Dedicated searches for specific models (spin-0 resonance, spin-1 resonance, color-octet resonance) would be interesting as well, but we will not address them in this dissertation.
Ignoring numerical factors, the quark annihilation diagram amplitude is given by

    \mathcal{M}_{q\bar{q}} \sim \bar{v}(p_{\bar{q}}) \gamma^\mu u(p_q)
        \cdot \bar{u}(p_u) \gamma^\alpha (1-\gamma_5) v(p_{\bar{d}})
        \cdot \bar{u}(p_l) \gamma^\beta (1-\gamma_5) v(p_\nu)
        \cdot \bar{u}(p_b) \gamma^{\alpha'} (1-\gamma_5) \frac{\not{p}_t + m_t}{p_t^2 - m_t^2 + i m_t \Gamma_t} \gamma^\nu \frac{\not{p}_{\bar{t}} + m_t}{p_{\bar{t}}^2 - m_t^2 + i m_t \Gamma_t} \gamma^{\beta'} (1-\gamma_5) v(p_{\bar{b}})
        \cdot \frac{g_{\mu\nu}}{(p_q + p_{\bar{q}})^2}
        \cdot \frac{g_{\alpha\alpha'} - P_{W^+\alpha} P_{W^+\alpha'}/m_W^2}{P_{W^+}^2 - m_W^2 + i m_W \Gamma_W}
        \cdot \frac{g_{\beta\beta'} - P_{W^-\beta} P_{W^-\beta'}/m_W^2}{P_{W^-}^2 - m_W^2 + i m_W \Gamma_W}    (6-7)

If we consider the masses of the light quarks and leptons negligible, we can simplify the expression of the W propagators, so the ME reads

    \mathcal{M}_{q\bar{q}} \sim \frac{\bar{v}(p_{\bar{q}}) \gamma^\mu u(p_q)}{(p_q + p_{\bar{q}})^2}
        \cdot \frac{\bar{u}(p_u) \gamma^\alpha (1-\gamma_5) v(p_{\bar{d}})}{P_{W^+}^2 - m_W^2 + i m_W \Gamma_W}
        \cdot \frac{\bar{u}(p_l) \gamma^\beta (1-\gamma_5) v(p_\nu)}{P_{W^-}^2 - m_W^2 + i m_W \Gamma_W}
        \cdot \bar{u}(p_b) \gamma_\alpha (1-\gamma_5) \frac{\not{p}_t + m_t}{p_t^2 - m_t^2 + i m_t \Gamma_t} \gamma_\mu \frac{\not{p}_{\bar{t}} + m_t}{p_{\bar{t}}^2 - m_t^2 + i m_t \Gamma_t} \gamma_\beta (1-\gamma_5) v(p_{\bar{b}})    (6-8)

We tested our numerical calculation, using explicit Dirac matrices and spinors, against the analytical calculation of the squared amplitude by Barger [22], and we found the two calculations in good agreement. That calculation uses the narrow width approximation (NWA) in treating the top quark propagators, and therefore the two methods are not equivalent when one or both of the top quarks are off-shell. We also tested our implementation on simpler QED matrix element calculations, and it produced results identical to their exact analytical expressions.

[Figure 6-2: The three gluon-gluon leading order contributions to tt̄ production in pp̄ collisions at √s = 1.96 TeV.]

The gluon-gluon production mechanism is described by the three diagrams of Fig. 6-2, in which the top decays have not been depicted explicitly. The matrix element needed in the cross-section formula for the gluon-gluon production mechanism has the structure

    |\mathcal{M}_{gg}|^2 = \frac{1}{64} \sum_{color} |A_1 + A_2 + A_3|^2    (6-9)

where the A_i are the amplitudes corresponding to the three diagrams.
The color sum covers all possible color configurations of the gluons and quarks. This expression is not optimal with regard to CPU time if we were to do these sums as they stand. We can rewrite it as

    |\mathcal{M}_{gg}|^2 = \frac{1}{64} \sum_{color} \Big( |A_1|^2 + |A_2|^2 + |A_3|^2 + 2\,\mathrm{Re}\{A_1 A_2^*\} + 2\,\mathrm{Re}\{A_1 A_3^*\} + 2\,\mathrm{Re}\{A_2 A_3^*\} \Big)    (6-10)

This form is very convenient: the color sums can be evaluated for each individual term regardless of the kinematics, because the amplitudes factorize as A = A^{kin} \cdot A^{color}. We can write again

    |\mathcal{M}_{gg}|^2 = f_1 |A_1^{kin}|^2 + f_2 |A_2^{kin}|^2 + f_3 |A_3^{kin}|^2 + \mathrm{Re}\big\{ f_{12}\, A_1^{kin} A_2^{kin*} + f_{13}\, A_1^{kin} A_3^{kin*} + f_{23}\, A_2^{kin} A_3^{kin*} \big\}    (6-11)

All the color summing is encoded in the six constants f_i, f_{ij}. We found these to be 3/16, 1/12, 1/12, −3i/16, 3i/16 and −1/48, respectively. We cross-checked against the analytical formula available for the 2 → 2 process described in the diagrams above (ignoring the top decays) and found perfect agreement. The procedure just presented works as well for the 2 → 6 process, and this is how we compute it.

6.1.2 Approximations: Change of Integration Variables

The method as presented involves seven integrals (three over the neutrino 3-momentum and four over the quark momenta) and a sum over combinatorics. If, for instance, we choose to set the tt̄ transverse momentum to zero, that amounts to two constraints, reducing the number of integrals by two. Or we could choose to set the W or the top on shell, depending on the level of precision and speed desired. Even from a purely numerical point of view, it would be easier to integrate only around the top and W mass poles rather than over the large ranges of the original variables mentioned before. For all these reasons a change of variables was performed.
The new variables are the t\bar t transverse momentum and the intermediate particle masses m_{W_1}, m_{W_2}, m_{t_1}, m_{t_2}. This is a set of only six new variables, which means we need to keep one of the initial variables unchanged (one of the light quarks' energies). The change of variables and the associated Jacobian calculation are detailed in the Appendix. Since the calculations are a bit lengthy, we wanted to make sure no mistake was made, so we used simulated events, where all variables are available and any change of variables can be readily checked. We found that the change-of-variables implementation works very well. The implementation of the algorithm always uses these variables, both for these preliminary top mass tests and for the M_{t\bar t} reconstruction.

6.2 Monte Carlo Generators

For some of the top mass tests we used CompHEP 4.4 [23], which is a matrix-element-based event generator. One can select explicitly which diagrams to use for event generation. CompHEP preserves all spin correlations and off-shell contributions, since it doesn't attempt to simplify the diagrams in any way. CompHEP generates events separately for each diagram: u\bar u → t\bar t, d\bar d → t\bar t and gg → t\bar t. We also used official CDF Pythia [24] and Herwig [25] samples ("Gen5"), but the first tests for the top mass were done with parton-level CompHEP events and then with Gaussian-smeared partons. The Gaussian smearing of parton energies is meant to simulate the relationship between the jet and parton energies.
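The Gaussian smearing used in these early tests can be sketched as follows; this is a toy stand-in for the detector response, with the 20% relative width that the later tests quote:

```python
import numpy as np

def smear_parton_energies(energies, frac=0.20, rng=None):
    """Smear each final-state quark energy by a Gaussian of relative width
    `frac`, a rudimentary stand-in for calorimeter response; negative
    fluctuations are clipped at zero."""
    rng = rng or np.random.default_rng()
    smeared = np.asarray(energies) * rng.normal(1.0, frac, size=len(energies))
    return np.clip(smeared, 0.0, None)
```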
6.3 Basic Checks at Parton Level

Figure 6-3: Reconstructed top mass from 250 pseudo-experiments of 20 events at parton level with m_t = 175 GeV/c^2. The left plot is derived using only the correct combination, while the right plot uses all combinations.

Finding the top mass when the final state is known or measured perfectly is straightforward, so we expect our method to produce the correct answer without any bias. Using u\bar u → t\bar t CompHEP events, we performed 250 pseudo-experiments of 20 events each; that is, each time we extracted the top mass from a joint likelihood of 20 events. We repeated this exercise for various generator-level top masses to make sure there is no mass-dependent bias. First, we used only the correct combination in the likelihood; that is, we not only assumed to have measured the parton 3-momenta ideally but also identified the quark flavors. For m_t = 175 GeV the reconstructed mass is shown in the left plot of Figure 6-3. As can be seen, we get back the exact input mass. Similarly good results were obtained for other masses.
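The pseudo-experiment machinery itself is simple. A toy version, with a Gaussian stand-in for the per-event likelihood in place of the matrix-element event probability, illustrates why 250 experiments of 20 events recover the input mass with no bias:

```python
import numpy as np

def most_likely_mass(event_masses, grid, res=10.0):
    """Scan the joint -log L (the sum of per-event -log L) over a mass grid
    and return the minimizing mass; each event's likelihood is a toy
    Gaussian of width `res` (the real analysis uses the matrix element)."""
    nll = [float(np.sum(0.5 * ((event_masses - m) / res) ** 2)) for m in grid]
    return float(grid[int(np.argmin(nll))])

rng = np.random.default_rng(11)
grid = np.arange(165.0, 185.0, 0.05)
fits = [most_likely_mass(rng.normal(175.0, 10.0, 20), grid) for _ in range(250)]
# Unbiased: the 250 fits scatter around 175 with spread res/sqrt(20) ~ 2.2 GeV.
```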
Next we let all 24 combinations contribute to the event likelihood by summing over all permutations and repeated the same exercise. The reconstructed top mass is barely modified by the inclusion of all combinations, as shown in the right plot of Figure 6-3. Again, tests on other samples with different top masses didn't produce any surprise. These results are summarized in Figure 6-4, showing the output (reconstructed) mass vs. input mass when using all combinations. The slope is consistent with 1.0 and the intercept is consistent with 0, which shows that there are no mass-dependent effects, at least not in the mass range of interest. Perhaps it would be useful to remind the reader that the purpose of these studies is to establish the validity of the matrix element calculations and the overall correctness of the implementation of a non-trivial algorithm; otherwise they are quite simple. We also looked at the rms of the pull distributions for each mass and found it to be 1.0 within errors, which is a more compelling indication that we are modeling these events very well with our likelihood.

Figure 6-4: Reconstructed top mass vs. true top mass from pseudo-experiments of 20 events using all 24 combinations, at parton level.

6.4 Tests on Smeared Partons

A more realistic test involves a rudimentary simulation of the calorimeter response, obtained by smearing the parton energies (the four final-state quarks' energies). Also, the neutrino 3-momentum information is ignored in the reconstruction.
We used 20% Gaussian smearing, which is quite realistic when compared to the rms of the parton-to-jet transfer functions. The t\bar t transverse momentum was taken to be zero and the top quarks were forced on shell, thus reducing the number of integrals to just three. We used the same u\bar u → t\bar t CompHEP events for these tests, but later we did check with Herwig events and the results were similar. The same pseudo-experiments of 20 events were performed, and in Figure 6-5 we show the reconstructed mass vs. the true mass for the correct combination and for all 24 combinations.

Figure 6-5: Reconstructed top mass vs. true top mass from pseudo-experiments of 20 events with smearing. The left plot is derived using only the correct combination, while the right plot uses all combinations.

We fit the pulls from the pseudo-experiments with a Gaussian, and the returned width was 1.09 ± 0.07 for the 175 GeV sample, again consistent with 1. We observed similar pulls for other masses as well. The purpose of this set of tests was to validate the new additions to the algorithm implementation: transfer functions, the transformation of variables, and integration over unmeasured quantities.
The success of these tests gives us confidence that the more realistic version of the algorithm is well designed and well implemented.

6.5 Tests on Simulated Events with Realistic Transfer Functions

6.5.1 Samples and Event Selection

We used official CDF t\bar t samples generated with the Pythia and Herwig event generators. We apply the event reconstruction and event selection described in the previous chapters, requiring each event to contain one and only one reconstructed charged lepton, at least four tight jets, and missing E_T > 20 GeV.

6.5.2 Transfer Functions

Transfer functions are necessary when we run over simulated events or data in order to describe the relationship between final-state quark momenta and jet momenta. In this case we are interested in the probability distribution of the jet energy given the parton energy. This distribution varies with the energy and pseudorapidity of the parton, so we bin it with respect to these variables. Since the detector is forward-backward symmetric, we only need to bin in absolute pseudorapidity; we have three bins in absolute pseudorapidity, with boundaries at 0, 0.7, 1.3 and 2.0. The parton energy bins are determined by the statistics available, requiring a minimum of 3000 parton-jet pairs per energy bin. This allows for a rather smooth function which can be fit well. For example, the central-region b-quark energy bin boundaries are chosen to be 10, 37, 47, 57, 67, 77, 87, 97, 107, 117, 128, 145 and 182 GeV; anything above 182 GeV is considered part of one more bin. We should perhaps emphasize that these are parton energy bins. In order to derive the transfer functions we need to match jets to partons first.
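This matching can be sketched as a greedy unique assignment in \Delta R = \sqrt{\Delta\eta^2 + \Delta\phi^2}; a simplified illustration, not the analysis code:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Delta-R separation of two directions, with the phi difference
    wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def unique_match(partons, jets, cone=0.4):
    """Greedily match each parton (eta, phi) to a distinct jet within the
    cone; return None if any parton has no unused jet inside the cone, in
    which case the event would not be used for transfer functions."""
    used, pairs = set(), []
    for p_eta, p_phi in partons:
        cands = [(delta_r(p_eta, p_phi, j_eta, j_phi), k)
                 for k, (j_eta, j_phi) in enumerate(jets) if k not in used]
        if not cands or min(cands)[0] >= cone:
            return None
        best = min(cands)[1]
        used.add(best)
        pairs.append(best)
    return pairs
```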
For matching purposes we require that all four final-state quarks be matched uniquely to jets in a cone of 0.4; that is, the \Delta R distance between the parton direction and the jet direction must be less than 0.4. If this requirement is not met, we do not use the event for deriving transfer functions. The direction smearing is considered a second-order effect and is ignored, which amounts to identifying the quark direction with the jet direction. This approximation can be corrected to some degree by using "effective widths" for the W and top instead of the theoretical values. In other words, the smearing in direction leads to a smearing of the mass peak even when there is no energy smearing. The effect can be quantified based on simulation, and a correspondingly larger width can be employed in the analysis. In fact we do use such a larger width (4 GeV) for the hadronic W mass in our resonance search analysis. Our studies showed that it narrows the resonance peak a bit, but no such tests were performed for the top mass.

Figure 6-6: Light-quark transfer functions (x = 1 - E_jet/E_parton), binned in three absolute pseudorapidity regions [0, 0.7], [0.7, 1.3] and [1.3, 2.0].

In Figures 6-6 and 6-7 we show examples of transfer functions for both light quarks and b-quarks, respectively. We fit the shape with a sum of three Gaussians, which works fine.
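Evaluating such a fitted transfer function inside the likelihood can be sketched as follows; the three (weight, mean, sigma) triples here are purely illustrative, not the fitted CDF parameters:

```python
import math

# Illustrative triple-Gaussian parameters in x = 1 - E_jet/E_parton;
# weights sum to one so the density in x is normalized.
PARAMS = [(0.6, 0.00, 0.10), (0.3, 0.05, 0.20), (0.1, -0.05, 0.35)]

def transfer_function(e_jet, e_parton, params=PARAMS):
    """Probability density of observing jet energy e_jet given parton
    energy e_parton, modeled as a sum of three Gaussians in
    x = 1 - E_jet/E_parton. The factor 1/e_parton is the Jacobian
    |dx/dE_jet| converting the density in x into a density in jet energy."""
    x = 1.0 - e_jet / e_parton
    dens_x = sum(w * math.exp(-0.5 * ((x - mu) / s) ** 2)
                 / (s * math.sqrt(2.0 * math.pi))
                 for w, mu, s in params)
    return dens_x / e_parton
```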
The variable plotted is 1 - E_jet/E_parton, since it varies less with the parton energy. It is related to the distribution we introduced as the "transfer function" via a simple change of variable. Our transfer functions are between the parton energy and the corrected jet energy, as explained in Chapter 4.

Figure 6-7: b-quark transfer functions (x = 1 - E_jet/E_parton), binned in three absolute pseudorapidity regions [0, 0.7], [0.7, 1.3] and [1.3, 2.0].

With these tools in place we ran similar pseudo-experiments on the Herwig sample. The returned m_top value was 178.1 ± 0.4 GeV/c^2 and the pull width was 1.05 ± 0.09; the correct (generated) mass for this sample is 178 GeV/c^2. We did not run any other tests because the only change we made in the algorithm at this stage was to plug in realistic transfer functions and run it over fully simulated events. As such, the only new thing that needed testing was the derivation of the realistic transfer functions based on Monte Carlo simulation. This is by far a simpler business than the implementation of the matrix element calculations and the change of variables, together with the rest of the machinery. Based on the results presented above, we concluded that our transfer functions' implementation is fine and that the algorithm as a whole works very well and is properly constructed and implemented. Also, our top mass results on Monte Carlo were very similar to those of the analyses doing the top mass measurement using matrix elements.
In the next chapter we will show how the top mass matrix element algorithm can be extended to compute the t\bar t invariant mass, M_{t\bar t}.

CHAPTER 7
M_{t\bar t} RECONSTRUCTION

7.1 Standard Model t\bar t Reconstruction

All the tools developed for the top mass can be turned around to reconstruct any kinematical variable of interest, in particular M_{t\bar t}. Let's assume for simplicity of presentation that we know which is the right combination, that is, we know how to match jets to partons. In that case

P(\{p\},\{j\}) = \rho_{\rm part}(\{p\}) \cdot T(\{j\}|\{p\})   (7-1)

defines the probability that an event has the parton momenta \{p\} and is observed with the jet momenta \{j\}. In our notation \{p\} and \{j\} refer to the sets of all parton and jet 3-momenta. Integrating over the parton variables, given the observed jets, we obtain the probability used for the top mass measurement. However, the expression provides a weight for any parton configuration once the jets are measured. Any quantity that is a function of the parton momenta can be assigned a probability distribution based on the "master" distribution above, M_{t\bar t} included, and this is our approach. Technically this amounts to the following integration:

\rho(x|\{j\}) = \int \rho_{\rm part}(\{p\}) \cdot T(\{j\}|\{p\}) \cdot \delta(x - M_{t\bar t}(\{p\}))\,\{dp\}   (7-2)

with \rho(x|\{j\}) being the M_{t\bar t} probability distribution given the observed jet momenta. It should be noted that if we remove the delta function we retrieve the event probability formula used for the top mass measurement method presented before, and therefore all the validation tests presented before are just as relevant for the M_{t\bar t} reconstruction. The modifications to the algorithm are also minimal; there is nothing much to be added except histogramming M_{t\bar t} during the integration.
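Numerically, Eq. 7-2 amounts to histogramming M_{t\bar t} with the integrand as weight while sampling parton configurations. A schematic version, where the weight and mass functions are arbitrary toys standing in for \rho_{\rm part}\cdot T and M_{t\bar t}(\{p\}):

```python
import numpy as np

def event_mtt(weight_fn, mtt_fn, n_samples=20000, seed=7):
    """Sketch of Eq. 7-2: sample parton configurations, weight each by
    rho_part * T (here: an arbitrary weight_fn), histogram M_tt of each
    sample with that weight, and return the mean of the resulting per-event
    distribution, which serves as the per-event M_tt value."""
    rng = np.random.default_rng(seed)
    partons = rng.normal(0.0, 1.0, size=(n_samples, 4))
    w = weight_fn(partons)
    m = mtt_fn(partons)
    hist, edges = np.histogram(m, bins=70, range=(300.0, 1000.0), weights=w)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return float(np.sum(centers * hist) / np.sum(hist))
```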
In other words, we obtain an invariant mass distribution per event. We will use the mean of this M_{t\bar t} distribution as our event M_{t\bar t} value. Before running on all events in our various samples and producing templates, we want to make sure the M_{t\bar t} reconstruction algorithm works well. We selected events in which we could match partons uniquely to jets and which contained exactly four tight jets. These are the circumstances that allow full consistency between the reconstruction algorithm and the events reconstructed, providing the self-consistency test of the method that we intend to show here.

Figure 7-1: M_{t\bar t} reconstruction for the correct combination and for events with exactly four matched tight jets. Left: reconstructed vs. generator-level spectrum; right: difference between the reconstructed and true values (mean 0.23 GeV, rms 24.9 GeV).

We ran the algorithm on these selected events and were able to reconstruct M_{t\bar t} back to the parton level, as can be seen in the left plot of Figure 7-1. Both plots are produced after running on events selected from the official CDF Pythia sample. Since we use the Standard Model t\bar t matrix element, we do expect to reconstruct these events very well, and that indeed seems to be the case, as shown also in the right plot of Figure 7-1, where the difference between the reconstructed value and the true value is histogrammed in order to see the intrinsic resolution and check for any bias. The results are very good, and we consider the testing and validation part of the analysis ended.
Figure 7-2: M_{t\bar t} reconstruction including all events.

Since in reality we don't know which is the correct combination, we adopt the top mass method's approach and sum over all allowed combinations in formula 7-2. We expect the right combination to contribute more than the others, as happens in the top mass analysis. The M_{t\bar t} reconstructed for all events, without any of the requirements mentioned above, is shown in Figure 7-2. This is what we expect to be the Standard Model contribution to the M_{t\bar t} spectrum in the data. Some examples of event-by-event reconstruction are shown in Figure 7-3. The 4th event is a dilepton event and the 8th is a tau+jets event.

Figure 7-3: Examples of M_{t\bar t} reconstruction, event by event.

Interestingly, these two have larger widths than the others, which are all lepton+jets events. Adding combinations together can lead to double or multiple peaks. The top mass used on data is m_top = 175 GeV; therefore this is the value used in our algorithm when producing the M_{t\bar t} templates corresponding to the various processes. Figure 7-4 shows the actual template used for fitting the data, derived by fitting 5000 reconstructed events.

Figure 7-4: M_{t\bar t} template for Standard Model t\bar t events.

Certain approximations were made, since we cannot perform all the integrals which appear in the formal presentation: the CPU time involved would be astronomical, even using the computing farms commonly available to CDF users. This is so because we need to model the M_{t\bar t} spectrum for 10 signal samples and a couple of backgrounds, and then perform the systematics studies, which require recomputing the templates each time.
As mentioned in the previous chapter, the implementation uses a different set of variables for integration, namely the masses of the two W bosons, the masses of the two top quarks, the total transverse momentum of the t\bar t system, and one W-quark energy. Studies showed that the best approach, given the CPU time limitations, is to set the two top quark masses on shell and also to set on shell the mass of the W which decays leptonically, leaving us with four integrals to perform. Even so, for the systematics studies we needed about 100,000 CPU hours, and we used the CDF computing farms extensively.

7.2 Signal and Other SM Backgrounds

The Monte Carlo samples for the signal and all other Standard Model backgrounds (besides t\bar t) are run through the same algorithm, thus producing new distributions corresponding to signal and backgrounds, respectively. Even though the signal is not 100% correctly modeled by the Standard Model t\bar t matrix element, we expect the reconstruction to work quite well, since a significant part of the matrix element is concerned with the top and W decays, and that doesn't depend on the specific t\bar t production mechanism. Especially in the case of a spin-1 resonance, the differences between the correct resonance matrix element and the Standard Model matrix element are minimal, since the gluon is a spin-1 particle after all. Even though the methods presented in this dissertation can be applied to more general cases, the actual limits we derive at the end are valid for vector resonances, because the Monte Carlo signal samples were generated with a vector resonance model. We want to remind the reader that it was our initial decision to do a model-independent search anyway; the results are not completely model-independent only because of the Monte Carlo generators used to produce the signal samples.
Applying the reconstruction to non-t\bar t events doesn't produce any particularly meaningful distributions, but these backgrounds are needed to model the data. In what follows we briefly describe the results obtained when running this reconstruction method on the various backgrounds needed in our analysis and presented in a previous chapter.

Signal samples. We generated signal samples with resonance masses from 450 GeV/c^2 up to 900 GeV/c^2, every 50 GeV/c^2, using Pythia [24]. The reconstructed M_{t\bar t} for all of them is shown in Figure 7-16; the peaks match the true value of the resonance mass very well.

Figure 7-5: Reconstructed invariant mass for a resonance with M_{X^0} = 650 GeV. The left plot shows all events passing event selection, while the right plot shows only matched events.

In order to better understand the low-mass shoulder we split these events into three orthogonal subsamples: events with all four jets matched to partons, mismatched events, and fake lepton+jets events (dilepton or hadronic events passing the lepton+jets event selection). The method is expected to work well on matched events, and indeed this is what we see in Figures 7-5 and 7-6. The shoulder is given by the superposition of mismatched events and fake lepton+jets events on top of the clean peak from matched events. The generated width of the resonance was 1.2% of the resonance mass. As can be seen, the reconstructed resonance is much wider, due to the relatively large uncertainties in the jet measurements and to not measuring the neutrino z-component at all. However, the peak remains prominent enough to be easily distinguished from the exponentially falling Standard Model processes.
Figure 7-6: Reconstructed invariant mass for a resonance with M_{X^0} = 650 GeV. The left plot shows mismatched lepton+jets events and the right plot shows non-lepton+jets events.

W+jets samples. We use the official CDF W + 4 partons ALPGEN [26] samples, which are then run through Herwig for parton showering. We also looked at W + 2b + 2 partons, but decided not to include it explicitly, since the shape is very similar and the expected contribution is at the level of 1-2%, compared to 60% or more for the W + 4 partons. These can be seen in Figures 7-7, 7-11 and 7-12, and a direct comparison of fitted templates is shown in Figure 7-15. So all W+jets events are modeled by the W + 4 partons sample.

QCD. For QCD we used the data to extract the shape. Multijet data is scanned for jets with a high electromagnetic fraction, which are reinterpreted as electrons, based on the assumption that jets which do fake an electron are very similar to the ones just mentioned. With that done, the usual event selection is applied and the events are reconstructed just like the others. This process produces the template shown in Figure 7-9. The shape is not much different from W + 4 partons; in fact they are quite close, as was assumed in the CDF Run 1 analysis, where the QCD template was ignored altogether.

Dibosons (WW, WZ and ZZ). The cross-sections for the WW, WZ and ZZ processes are 12.4 pb, 3.7 pb and 1.4 pb. The acceptances follow the same trend, at 0.14%, 0.08% and 0.02%, respectively. Moreover, the official WZ and ZZ samples have fewer events left after event selection, and the fits have larger errors.
Given that WW dominates anyway, we decided to use only that template, but to increase the acceptance such that the expected number of events covers the small WZ and ZZ contributions. Since overall the whole diboson part is almost negligible, this procedure isn't expected to have any impact other than simplifying the analysis. It can be added that the WW template, shown in Figure 7-10, is also very similar to the Standard Model t\bar t, W+jets and QCD templates. We put all of them on top of each other for easy comparison in Figure 7-14. All these templates are used to fit the data and extract limits; the procedure is explained in the next chapter.

Figure 7-7: W+4p template (electron sample).

Figure 7-8: W+4p template (muon sample).

Figure 7-9: QCD template.

Figure 7-10: WW template.

Figure 7-11: W+2b+2p template (electron sample).

Figure 7-12: W+2b+2p template (muon sample).

Figure 7-13: W+4p template with alternative Q^2 scale (electron sample).

Figure 7-14: All Standard Model background templates used in the analysis.

Figure 7-15: W+2b+2p template vs. W+4p template. W+2b+2p was ignored since the expected contribution is at the level of 1-2% and the template is very similar to the W+4p template.

Figure 7-16: Signal templates.

CHAPTER 8
SENSITIVITY STUDIES

In this chapter we will present the algorithm used for establishing lower and upper limits on the signal cross-section times branching ratio at any desired confidence level (CL). We used a Bayesian approach which was shared with other CDF analyses. The main ideas and suggestions for the implementation can be found in [27, 28].
8.1 General Presentation of the Limit-Setting Methodology

For generality, we assume that the observed data are contained in a vector n = (n_1, n_2, ..., n_nbins), which in our case corresponds to the bin contents of the M_tt histogram. The model of the data contains one unknown parameter, and we want to be able to make a probabilistic statement about that parameter once we look at the data; in other words, we would like to obtain a posterior probability distribution for it. We call this parameter σ, because in our particular case it corresponds to the signal cross-section times branching ratio. Other parameters are often involved as well, with values known only to some uncertainty; we assume their values are normally distributed, with the uncertainty as the standard deviation. We denote these parameters ν = (ν_1, ν_2, ...) and call them nuisance parameters. We formalize our prior knowledge of the nuisance parameters and σ by introducing the prior probability density π(σ, ν).
In our case this can be factorized as a product of Gaussians for the nuisance parameters and a flat distribution for σ.

Bayes' theorem connects the likelihood of the measurement and the prior probability to the posterior density of σ and ν after the measurement:

p(σ, ν | n) = L(n | σ, ν) π(σ, ν) / p(n)    (8-1)

where p(n) is the marginal probability density of the data,

p(n) = ∫dσ ∫dν L(n | σ, ν) π(σ, ν)    (8-2)

In these equations p(σ, ν | n) stands for the posterior density and L(n | σ, ν) for the likelihood. Since we are not interested in the nuisance parameters, we integrate over them,

p(σ | n) = ∫dν p(σ, ν | n)    (8-3)

to obtain the sought posterior probability density for the parameter of interest. From this posterior p(σ | n) we can extract the information we need: the most probable value, upper and lower limits at any confidence level, and so on.

8.2 Application to This Analysis

In our analysis the observed data n are the binned M_tt spectrum, the parameter of interest is the resonant tt production cross section times branching ratio, σ_X0 · BR(X0 → tt), and the nuisance parameters are the integrated luminosity, the acceptances, and the cross-sections. In order to build the likelihood we need normalized M_tt templates for each process. We use the notation T^j, with j ∈ {s, b}, for the binned signal and background templates, and T^j_i for the i-th bin of the j-th template.

Given the above definitions, we can write the expected number of events in the i-th bin of the spectrum as

μ_i = ∫L dt · Σ_{j∈{s,b}} σ_j ε_j T^j_i = σ_s A_s T^s_i + Σ_{j∈{b}} N_j T^j_i    (8-4)

where we separated the signal contribution from the backgrounds and defined the auxiliary variables A_s = ∫L dt · ε_s (also called the effective acceptance) and N_j = ∫L dt · σ_j ε_j, with j ∈ {b}, the total expected number of events for each background after event selection.
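As an illustration of Eq. 8-4, the expected bin contents can be assembled from normalized templates as in the short sketch below; the template values, effective acceptance and background normalizations are hypothetical numbers chosen for the example, not those of the analysis.

```python
import numpy as np

def expected_bins(sigma_s, A_s, T_s, N_bkg, T_bkg):
    """Expected events per bin (Eq. 8-4):
    mu_i = sigma_s * A_s * T^s_i + sum_{j in bkg} N_j * T^j_i,
    where every template T is normalized to unit sum."""
    mu = sigma_s * A_s * np.asarray(T_s, dtype=float)
    for N_j, T_j in zip(N_bkg, T_bkg):
        mu = mu + N_j * np.asarray(T_j, dtype=float)
    return mu

# Hypothetical 4-bin example: a 2 pb signal with effective acceptance
# A_s = 10 events/pb, plus two backgrounds of 50 and 20 expected events.
mu = expected_bins(2.0, 10.0,
                   [0.1, 0.4, 0.4, 0.1],
                   [50.0, 20.0],
                   [[0.7, 0.2, 0.05, 0.05], [0.4, 0.3, 0.2, 0.1]])
```

Because the templates are normalized, the total expected count is simply σ_s·A_s plus the background normalizations (here 2·10 + 50 + 20 = 90 events).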
The likelihood can then be written as a product of Poisson terms:

L(n | σ, ν) = Π_{i∈{nbins}} P(n_i | μ_i) = Π_i (μ_i^{n_i} / n_i!) e^{-μ_i},  with μ_i = σ_s A_s T^s_i + Σ_{j∈{b}} N_j T^j_i    (8-5)

As we already pointed out, we may not know A_s and the expected number of events from each background exactly. It is customary to take as priors for these parameters Gaussians truncated to positive values.¹ For the signal cross section σ_s we use a flat prior.

8.2.1 Templates

As Eq. 8-5 shows, in order to build the likelihood function we need the template distributions for the signal and the backgrounds. Given the limited statistics available in the samples, we decided to fit them and use the smoothed fit distributions as templates; this procedure removes unphysical empty bins and bumps. As already mentioned in Chapter 5, we consider the following processes as possible background contributions:

¹ Given that the total efficiency is often the product of several efficiencies, the log-normal prior is often used too.
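The Poisson product of Eq. 8-5 is most conveniently evaluated as a log-likelihood for numerical stability; a minimal sketch, with hypothetical bin counts (μ_i here is the expected spectrum of Eq. 8-4, passed in directly):

```python
import numpy as np
from math import lgamma

def log_poisson_likelihood(n, mu):
    """log L(n | mu) = sum_i [ n_i*log(mu_i) - mu_i - log(n_i!) ],
    the logarithm of the Poisson product in Eq. 8-5."""
    n = np.asarray(n, dtype=float)
    mu = np.asarray(mu, dtype=float)
    log_nfact = np.array([lgamma(ni + 1.0) for ni in n])
    return float(np.sum(n * np.log(mu) - mu - log_nfact))

# Hand-checkable single-bin case: n=2, mu=2 gives 2*ln 2 - 2 - ln 2.
val = log_poisson_likelihood([2], [2])

# The likelihood is largest when the expectation matches the data bin
# by bin (hypothetical 5-bin counts).
n_obs = [41, 29, 25, 16, 9]
best = log_poisson_likelihood(n_obs, n_obs)
worse = log_poisson_likelihood(n_obs, [30, 30, 30, 16, 9])
```

On the toy counts, the saturated choice μ_i = n_i necessarily gives the largest log-likelihood.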
- Standard Model tt
- W → eν + 4 partons
- W → μν + 4 partons
- W → eν + 2 partons + 2b
- W → μν + 2 partons + 2b
- Dibosons WW, WZ, ZZ
- QCD (from data)

Figure 8-1: Signal and background examples. The signal spectrum on the left (M_X0 = 600 GeV/c²) has been fit with a triple Gaussian (χ²/ndf = 45.17/43, probability 0.38). The background spectrum from Standard Model tt has been fit with the exponential-like function (χ²/ndf = 48.09/56, probability 0.76). The fit range starts at 400 GeV/c².

The background M_tt histograms are fit with an exponential-like function f(x) = exp(p0 + p1·x^p2), with parameters labeled constant, slope and expo, in the region above 400 GeV/c². The signal histograms are fit with a double or triple Gaussian, or with a truncated double Gaussian plus a truncated exponential distribution.² An example is shown in Fig. 8-1; all templates can be found at the end of the previous chapter.

We discussed the backgrounds in Chapter 5; we remind the reader that we decided it is safe to absorb the small W + 2 partons + 2b contributions into the W + 4 partons templates. Similarly, the WZ and ZZ contributions are absorbed into the WW template by increasing the nominal WW cross section by 20%.

² This set of fitting functions guarantees a fit with good χ² probability.
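The template-smoothing fit can be sketched as follows. The functional form f(x) = exp(p0 + p1·x^p2) matches the fit parameters labeled constant, slope and expo in Fig. 8-1; the grid-scan least-squares fitter below is only a self-contained stand-in for the fitter actually used in the analysis, and the test points are synthetic, generated with values close to those quoted for the SM tt fit.

```python
import numpy as np

def fit_expo_like(x, y):
    """Fit f(x) = exp(p0 + p1 * x**p2): scan the exponent p2 on a grid
    and, for each trial p2, solve the linear least-squares problem for
    (p0, p1) in log space; keep the combination with the smallest
    residual sum of squares."""
    x, logy = np.asarray(x, float), np.log(np.asarray(y, float))
    best = None
    for p2 in np.linspace(0.3, 1.0, 71):
        X = np.column_stack([np.ones_like(x), x ** p2])
        coef = np.linalg.lstsq(X, logy, rcond=None)[0]
        rss = float(np.sum((X @ coef - logy) ** 2))
        if best is None or rss < best[0]:
            best = (rss, float(coef[0]), float(coef[1]), float(p2))
    return best[1], best[2], best[3]

# Synthetic check: recover known parameters from noise-free points.
x = np.linspace(400.0, 1200.0, 50)
y = np.exp(14.65 - 0.2152 * x ** 0.6)
p0, p1, p2 = fit_expo_like(x, y)
```

Because the exponent enters nonlinearly, scanning it on a grid keeps each inner fit linear and therefore trivially stable; a general-purpose minimizer would do the same job in one step.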
8.2.2 Template Weighting

Equation 8-4 shows that in order to build the likelihood we need to know the number of background events N_j for each background type.

Table 8-1: Acceptances for background samples.
Sample     Event selection   Reconstruction and 400 GeV/c² cut   Total acceptance
SM tt      0.045             0.72                                0.032
WW         0.0014            0.60                                0.0008
W(→eν)     0.0076            0.66                                0.0050
W(→μν)     0.0072            0.65                                0.0047
QCD        0.0070            0.71                                0.0050

In general we would estimate the cross-section, acceptance and integrated luminosity in order to obtain this number. However, since the cross sections for the processes pp̄ → W + n jets and multijets (QCD) are not known with good precision, we decided to estimate the number of events from these backgrounds based on the total number of events seen in the data:

N^TOT_CDF = ∫L dt · (σ_s A_s + σ_tt A_tt + σ_WW A_WW) + N_{Weν4p} + N_{Wμν4p} + N_QCD    (8-6)

with the constraints

N_{Weν4p} / A_{Weν4p} = N_{Wμν4p} / A_{Wμν4p},    N_{Wℓν4p} = 10 · N_QCD    (8-7)

The relative weights of the W → eν + 4p and W → μν + 4p backgrounds have been set such that they have the same number of events before event selection and reconstruction, because their (unknown) cross sections are considered to be the same. The relative weight between QCD and W + 4p has been set to 10%, as discussed in Chapter 5 and established in this analysis [29]. The acceptances used in the calculations are listed in Tables 8-1 and 8-2; the cross-sections are listed in Section 5.4, Table 5-3.

Table 8-2: Acceptances for resonance samples.
M_X0 (GeV/c²)   Event selection   Reconstruction and 400 GeV/c² cut   Total
450             0.047             0.86                                0.040
500             0.051             0.93                                0.048
550             0.055             0.94                                0.051
600             0.057             0.97                                0.055
650             0.059             0.97                                0.057
700             0.062             0.97                                0.060
750             0.062             0.98                                0.060
800             0.063             0.98                                0.061
850             0.063             0.97                                0.061
900             0.061             0.98                                0.059

8.2.3 Implementation

After building the likelihood for a given observation n according to Eq.
8-5, we need to calculate the posterior density for σ_s according to Equations 8-1, 8-2 and 8-3. In practice we do not divide by p(n) in Eq. 8-1, since that is only a global normalization factor which can be applied at the end. In this way we no longer need Eq. 8-2, and we can rewrite Eq. 8-1 in a simplified and more explicit form:

p(σ_s, A_s, N_b | n) = L(n | σ_s, A_s, N_b) π(σ_s, A_s, N_b)    (8-8)

To obtain the posterior probability density for σ_s alone, we carry out the integration over the nuisance parameters A_s and N_b using a Monte Carlo method. Following the suggestions in [28], page 20, we implement the "Sample & Scan" method. We repeatedly (1000 times) sample the priors π(A_s) and π_j(N_j), which are truncated Gaussians with respective widths σ_{A_s} and σ_{N_j}; then we scan σ_s (400 bins) up to some value where the posterior is negligible. At each scan point we add to the corresponding bin of a histogram of σ_s a weight equal to L(n | σ_s, A_s, N_b) · π(σ_s, A_s, N_b). This yields the posterior density for σ_s.

8.2.4 Cross Section Measurement and Limits Calculation

Having calculated the signal cross section posterior density, we can extract limits and "measure" the cross section. We define the most probable value of the distribution as our estimator for the cross section, and therefore as our measurement. This choice is supported by many linearity tests run both with fake signal templates (simple Gaussians) and with real X0 templates.
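The "Sample & Scan" marginalization can be sketched as follows; the templates, prior means and widths are hypothetical, while the 1000 prior samples and 400 scan bins mirror the numbers quoted above. The saturated-model reference constant is an implementation detail of this sketch, added only to keep the exponentials well-behaved.

```python
import numpy as np

def truncated_gauss(mean, width, size, rng):
    """Sample a Gaussian truncated to positive values (redraw negatives)."""
    out = rng.normal(mean, width, size)
    while np.any(out <= 0.0):
        bad = out <= 0.0
        out[bad] = rng.normal(mean, width, int(bad.sum()))
    return out

def sample_and_scan(n, T_s, T_b, A_s0, dA_s, N_b0, dN_b,
                    sigma_max=8.0, nsamples=1000, nscan=400, seed=1):
    """Posterior density for sigma_s (Eq. 8-8), marginalized over the
    nuisance parameters A_s and N_b: sample the truncated-Gaussian
    priors, scan sigma_s on a grid, and accumulate the likelihood
    weight (the prior in sigma_s is flat) in a histogram of sigma_s."""
    rng = np.random.default_rng(seed)
    n, T_s, T_b = (np.asarray(a, float) for a in (n, T_s, T_b))
    sig = np.linspace(0.0, sigma_max, nscan)
    post = np.zeros(nscan)
    # Constant reference (the saturated model) keeps exp() well-behaved;
    # it cancels in the final normalization.
    logL0 = np.sum(np.where(n > 0, n * np.log(n) - n, 0.0))
    for A_s, N_b in zip(truncated_gauss(A_s0, dA_s, nsamples, rng),
                        truncated_gauss(N_b0, dN_b, nsamples, rng)):
        mu = np.outer(sig * A_s, T_s) + N_b * T_b   # expected bins, Eq. 8-4
        logL = np.sum(n * np.log(mu) - mu, axis=1)  # Poisson; n_i! drops out
        post += np.exp(logL - logL0)
    return sig, post / (post.sum() * (sig[1] - sig[0]))

# Hypothetical 5-bin toy: a 2 pb signal (A_s = 10 events/pb) on top of
# ~100 background events; n equals 2*10*T_s + 100*T_b bin by bin.
T_s = [0.05, 0.20, 0.50, 0.20, 0.05]
T_b = [0.40, 0.25, 0.15, 0.12, 0.08]
n = [41, 29, 25, 16, 9]
sig, post = sample_and_scan(n, T_s, T_b, A_s0=10.0, dA_s=1.0,
                            N_b0=100.0, dN_b=10.0)
```

With data generated exactly at σ_s = 2, the posterior peaks near the input value; the fixed seed makes the sketch reproducible.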
Figure 8-2: Linearity tests on fake (left) and real (right) templates. As fake signal templates we used Gaussians with 60 GeV/c² widths and means of 800 and 900 GeV/c²; we also used real templates with masses from 450 to 900 GeV/c². The top plots show the input versus the reconstructed cross section after 1000 pseudo-experiments at an integrated luminosity of ∫L = 1000 pb⁻¹; the bottom plots show the deviation from linearity on an expanded scale, with red dotted lines marking a 2% deviation.

Figure 8-2 shows the results of the tests with fake Gaussian signal templates of 800 and 900 GeV/c² mean and 60 GeV/c² width, and with real M_tt templates for X0 masses from 450 to 900 GeV/c², at an integrated luminosity of ∫L = 1000 pb⁻¹. The reconstructed cross section agrees very well with the input value, showing only a small relative shift of about 2%.

However, our measurement is meaningless as long as it is consistent with the null hypothesis, being possibly only a statistical fluctuation. Therefore the key quantities to extract are the upper and lower limits (UL, LL) on the cross-section at a given confidence level. This is done by finding an interval defined by the limits LL and UL which satisfies

∫_{LL}^{UL} p(σ|n) dσ / ∫_0^∞ p(σ|n) dσ = α    (8-9)

and

p(LL|n) = p(UL|n)    (8-10)

with α the desired confidence level, for example 0.95 for 95%.
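Taken together, conditions 8-9 and 8-10 define a highest-posterior-density interval. For a binned, unimodal posterior it can be found by accumulating bins in decreasing order of density, as in this sketch on a Gaussian-shaped toy posterior:

```python
import numpy as np

def hpd_interval(grid, density, cl=0.95):
    """Interval [LL, UL] containing a fraction `cl` of the posterior,
    built by including bins in decreasing order of density, so the two
    endpoints sit at (approximately) equal density (Eqs. 8-9 and 8-10).
    Assumes a unimodal posterior binned on a uniform grid."""
    dx = grid[1] - grid[0]
    p = density / (density.sum() * dx)        # normalize to unit integral
    order = np.argsort(p)[::-1]               # highest-density bins first
    csum = np.cumsum(p[order]) * dx
    sel = order[: np.searchsorted(csum, cl) + 1]
    return float(grid[sel.min()]), float(grid[sel.max()])

# Toy posterior: Gaussian of mean 2 pb and width 0.5 pb on [0, 8] pb.
grid = np.linspace(0.0, 8.0, 801)
dens = np.exp(-0.5 * ((grid - 2.0) / 0.5) ** 2)
LL, UL = hpd_interval(grid, dens, cl=0.95)
```

For this toy the 95% interval comes out close to 2 ± 1.96·0.5 pb, as it should for a Gaussian far from the physical boundary at zero.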
Figure 8-3: Example posterior probability density for the signal cross section, for a pseudo-experiment with an input signal of 2 pb and a resonance mass of 900 GeV/c². The most probable value estimates the cross section, and 95% confidence level (CL) upper and lower limits are extracted; the red arrow and the quoted value (σ < 3.225 pb) correspond to the 95% CL upper limit.

In this way we can extract LL and UL for each pseudo-experiment or for the data. Figure 8-3 shows an example posterior for a pseudo-experiment with an input signal of 2 pb, M_X0 = 900 GeV/c², and total integrated luminosity ∫L = 1000 pb⁻¹.

Before looking at the data we need to know the expected limits in the absence of any signal, and their fluctuations, for given integrated luminosities. For this purpose we ran many (1000) pseudo-experiments for each M_X0 and integrated luminosity, and filled histograms with the most likely value, LL and UL from each pseudo-experiment. The median of the UL histogram is taken as the expected upper limit in the absence of any signal; we also define 68% and 95% CL intervals around the central value, in order to get a feeling for the expected fluctuations in the upper limits. We also ran similar series of pseudo-experiments with signal, to see what our chances are of observing a non-zero LL in a given scenario. More specifically, we computed the probability of observing a non-zero LL for a given resonance mass, integrated luminosity and signal cross-section.
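The expected limit and its bands described above are just quantiles of the distribution of per-pseudo-experiment upper limits; a sketch, with a hypothetical (lognormal) set of ULs standing in for the real pseudo-experiment output:

```python
import numpy as np

def expected_limit_bands(upper_limits):
    """Median expected upper limit plus the central 68% and 95%
    coverage bands, read off as quantiles of the distribution of
    upper limits from the pseudo-experiments."""
    ul = np.asarray(upper_limits, dtype=float)
    q = np.quantile(ul, [0.025, 0.16, 0.5, 0.84, 0.975])
    return {"median": float(q[2]),
            "band68": (float(q[1]), float(q[3])),
            "band95": (float(q[0]), float(q[4]))}

# 1000 hypothetical pseudo-experiment upper limits (median near 1 pb).
rng = np.random.default_rng(0)
bands = expected_limit_bands(rng.lognormal(mean=0.0, sigma=0.3, size=1000))
```

The 68% band here integrates 34% of the distribution on each side of the median, matching the definition used for the coverage bands in the results plots.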
This quantity is very useful in assessing the power of the algorithm and which signal cross-sections are realistically observable at a given integrated luminosity.

8.2.5 Expected Sensitivity and Discovery Potential

Figure 8-4 shows the expected upper limit (UL) at 95% CL for various masses and two integrated luminosity scenarios, ∫L = 319 and 1000 pb⁻¹. Figure 8-5 shows the power of the algorithm in distinguishing signal from background: the x axis is the input signal cross-section, and the y axis is the probability of observing a non-zero LL at 95% CL for ∫L = 1000 pb⁻¹. These plots do not include shape systematics, that is, systematic effects that change the shape of the templates; we explore the treatment of shape systematics in the next chapter.

Figure 8-4: Upper limits at 95% CL. Only acceptance systematics are considered in this plot.

Figure 8-5: Probability of observing a non-zero lower limit versus input signal cross section at ∫L = 1000 pb⁻¹. Only acceptance systematics are included in this plot.

CHAPTER 9
SYSTEMATICS

We distinguish between two kinds of systematic uncertainties: acceptance and cross-section systematics, and shape systematics. The first kind does not affect the shape of the templates and is implicitly accounted for by the uncertainties in the nuisance parameters.
Shape systematic uncertainties affect not only the acceptances but also the template shapes; they must therefore be handled in a different way.

9.1 Shape Systematics

A change in the jet energy scale, initial- and final-state radiation, the parton distribution functions, etc., modifies the signal and background acceptances as well as their templates. To incorporate these systematic uncertainties we adopt the approach described in [30].

9.1.1 Jet Energy Scale

After applying the energy correction algorithm to jets we are left with some residual uncertainty on the jet energy scale (JES). The effect on the measured X0 cross section is evaluated by applying a ±1σ shift to the JES and then running the full reconstruction on the signal and background samples; the resulting change in the reconstructed (measured) cross section, as a function of the cross section itself, is then interpreted as the uncertainty on the X0 cross section.

The procedure consists of generating pseudo-experiments with "shifted" templates and acceptances and analyzing them with the correct templates and acceptances.¹ The procedure is applied for two integrated luminosity scenarios, ∫L = 319 and 1000 pb⁻¹, for 17 signal cross sections σ_X0 = 0.125, 0.25, 0.375, 0.50, 0.75, ..., 3.75 pb, and for input signal masses M_X0 = 450, 500, ..., 900 GeV/c². The functional dependence of the shift versus cross section is fit with a linear function δσ_X0 = p0 + p1 · σ_X0 for each mass, and for both positive and negative JES shifts.
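The shift-versus-cross-section parametrization is a straight-line least-squares fit. A sketch using the 17 scan points quoted above; the shifts are noise-free toy values generated from a hypothetical parametrization with p0 = -0.090 and p1 = -0.067, so the fit should simply return those numbers.

```python
import numpy as np

def fit_shift(sigma_in, delta_sigma):
    """Least-squares straight-line fit of the cross-section shift:
    delta_sigma = p0 + p1 * sigma_in."""
    p1, p0 = np.polyfit(np.asarray(sigma_in, float),
                        np.asarray(delta_sigma, float), deg=1)
    return float(p0), float(p1)

# The 17 scan points quoted in the text.
sigma_in = np.concatenate([[0.125, 0.25, 0.375, 0.5],
                           np.arange(0.75, 4.0, 0.25)])
# Noise-free toy shifts following delta = -0.090 - 0.067*sigma.
shifts = -0.090 - 0.067 * sigma_in
p0, p1 = fit_shift(sigma_in, shifts)
```

Note that `np.polyfit` returns the highest-degree coefficient first, hence the `p1, p0` unpacking order.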
Results of the fits for ∫L = 1000 pb⁻¹ are reported in Table 9-1.

Figure 9-1: Cross section shift due to the JES uncertainty for ∫L = 1000 pb⁻¹. The shift represents the uncertainty on the cross section due to the JES, as a function of the cross-section itself.

9.1.2 Initial and Final State Radiation

To investigate the systematic effect of the initial- and final-state radiation (ISR and FSR) uncertainties on the template shape, we followed a method similar to the one described in the previous section. We applied the M_tt reconstruction algorithm to official CDF samples with less or more radiation, corresponding to a -σ or +σ change. Then we generated pseudo-experiments with the shifted (new) templates and acceptances and, just like before, analyzed them using the unshifted (original) templates and acceptances.

¹ This mimics the approach used in the analysis of the real data.

Table 9-1: Linear fit parameters describing the uncertainty due to the JES systematic; the JES- and JES+ labels designate a -σ or +σ variation in the energy scale. The uncertainty on the cross-section is parametrized as δσ_X0 = p0 + p1 · σ_X0.
M_X0   p0(JES-)   p1(JES-)   p0(JES+)   p1(JES+)
450    0.044      0.048      -0.024     -0.057
500    0.009      0.065      -0.187     -0.076
600    0.024      0.057      -0.090     -0.067
700    0.030      0.047      -0.036     -0.048
800    0.018      0.049      0.002      -0.058
900    0.016      0.038      0.002      -0.050
The parametrizations of these uncertainties are presented in Tables 9-2 and 9-3.

Figure 9-2: Cross section shift due to the ISR (left) and FSR (right) uncertainties for ∫L = 1000 pb⁻¹.

Table 9-2: Linear fit parameters describing the uncertainty due to the ISR modeling. The uncertainty in cross section is parametrized as δσ_X0 = p0 + p1 · σ_X0.
M_X0   p0(ISR-)   p1(ISR-)   p0(ISR+)   p1(ISR+)
450    0.05       0.00       -0.18      0.03
500    0.18       -0.00      -0.11      -0.06
600    0.08       -0.02      -0.09      -0.05
700    0.02       0.02       -0.05      -0.04
800    0.01       0.00       -0.01      -0.01
900    0.02       0.01       -0.01      -0.00

Table 9-3: Linear fit parameters describing the uncertainty due to the FSR modeling. The uncertainty in cross section is parametrized as δσ_X0 = p0 + p1 · σ_X0.
M_X0   p0(FSR-)   p1(FSR-)   p0(FSR+)   p1(FSR+)
450    0.06       0.01       -0.15      -0.03
500    0.08       0.01       -0.14      0.03
600    0.04       -0.01      -0.02      0.00
700    0.00       0.02       -0.01      -0.01
800    0.01       0.00       -0.03      -0.02
900    -0.00      0.01       -0.01      -0.01

9.1.3 W Q² Scale

To account for the uncertainty on the correct Q² scale for W+jets production, we calculate the shift in the reconstructed cross section for a different choice of Q² scale, using another official CDF systematic sample. The same technique is used.
The shifts are shown in Figure 9-3, and the corresponding parametrizations of these uncertainties are presented in Table 9-4.

Table 9-4: Linear fit parameters describing the uncertainty due to the W Q² scale. The uncertainty in cross section is parametrized as δσ_X0 = p0 + p1 · σ_X0.
M_X0   p0(WQ²)   p1(WQ²)
450    -0.20     0.02
500    -0.15     0.03
600    0.01      -0.00
700    0.03      -0.00
800    0.04      -0.01
900    0.03      -0.01

Figure 9-3: Cross section shift due to the W Q² scale uncertainty for ∫L = 1000 pb⁻¹.

9.1.4 Parton Distribution Functions Uncertainty

One way to estimate the effect of uncertainties in the parton distribution functions (PDFs) is to reweight the events according to a new set of PDFs and investigate the effect. In this case we changed each of the 20 PDF eigenvalues up and down by their errors, thus obtaining 40 shifted templates for each unshifted template. The overall acceptance variation is of the order of 1%, which is clearly covered by the prior uncertainty on the acceptance. The remaining effect, if any, is due to template shape changes; however, we were not able to see any difference, and a Kolmogorov-Smirnov test applied between the central template and the shifted templates returned 1.0 in all cases. We therefore consider the PDF uncertainties negligible for our search.

9.1.5 Overall Shape Systematic Uncertainties

Since we consider each shape systematic uncertainty to be independent and Gaussian-like, we can calculate the total shift due to all these effects by adding in quadrature the various shifts δσ(σ_X0) for any given value of the assumed signal cross-section (on the x axis).
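The quadrature combination can be sketched directly from the per-effect linear parametrizations; the particular (p0, p1) pairs below, taken for illustration from the negative-side JES, ISR and FSR rows and the W Q² row for M_X0 = 600 GeV/c², are one possible choice, not the prescription of the analysis.

```python
import numpy as np

def total_shift(sigma, params):
    """Total shape-systematic shift at signal cross-section `sigma`:
    each independent effect contributes delta = p0 + p1*sigma, and the
    independent contributions are added in quadrature."""
    deltas = np.array([p0 + p1 * sigma for p0, p1 in params])
    return float(np.sqrt(np.sum(deltas ** 2)))

# (p0, p1) pairs for M_X0 = 600 GeV/c^2: JES-, ISR-, FSR-, W Q^2
# (values from Tables 9-1 through 9-4).
params_600 = [(0.024, 0.057), (0.08, -0.02), (0.04, -0.01), (0.01, -0.00)]
shift_at_2pb = total_shift(2.0, params_600)
```

Because the individual shifts are linear in σ_X0, the combined shift grows smoothly with the assumed signal cross-section, which is what the total-shift curves display.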
Figure 9-4 shows the total shifts for all the signal masses M_X0 = 450 ... 900 GeV/c² at an integrated luminosity of ∫L = 1000 pb⁻¹.

Figure 9-4: Total shape systematic uncertainty versus signal cross section.

9.2 Effect of Shape Systematics

To incorporate the shape systematics into the Bayesian machinery, we treat the uncertainty on the cross section, δσ_X0(σ_X0), as a Gaussian uncertainty on each point of the posterior probability density function. More explicitly, we convolute the posterior obtained in the previous chapter with this parametrization of the cross-section shifts due to shape systematics, as a function of the signal cross-section itself:

PROB_SYS(σ_X0) = (PROB ⊗ G)(σ_X0) = ∫_0^∞ G(σ_X0 - σ′; δσ_X0(σ′)) PROB(σ′) dσ′    (9-1)

In Eq. 9-1, G(x - x0; σ) stands for a truncated Gaussian distribution of mean x0 and standard deviation σ, because in performing the calculation we have to pay
Figure 9{5 sho ws the eect of smearing (con v olution) on one p osterior distribution function obtained from a pseudo exp erimen t. The most probable v alue mo v es a bit a w a y from zero and the 95% CL on the cross section shifts to a higher v alue, as exp ected (the sensitivit y should depreciate due to systematics, so w e should see higher upp er limits). pb Xo s 0 0.5 1 1.5 2 2.5 3 likelihood 0 0.1 0.2 0.3 0.4 0.5 -29 x10 Cross-section posterior p.d.f. CDF preliminary < 1.215 at 95% CL s < 1.283 at 95% CL s Cross-section posterior p.d.f. Figure 9{5: P osterior probabilit y function for the signal cross section. The smeared (con v oluted) probabilit y in green, including shap e systematics, sho ws a longer tail than the original (blac k) distribution. As a consequence the U L quoted on the plot is shifted to higher v alues with resp ect to the one calculated based on the original p osterior PAGE 109 94 9.3 Exp ected Sensitivit y with Shap e Systematics After applying the smearing pro cedure due to shap e systematics w e calculated the exp ected sensitivit y (upp er limits) for v arious resonance masses and t w o luminosit y scenarios. These can b e seen in Fig. 9{6 whic h sho ws the exp ected sensitivit y for the t w o in tegrated luminosit y scenarios R L = 319 ; 1000 pb Â¡ 1 Figure 9{7 sho ws the p o w er of the algorithm, as dened in the previous c hapter, after applying the shap e systematics. ] 2 [GeV/c Xo M 450 500 550 600 650 700 750 800 850 900 ] 2 [GeV/c Xo M 450 500 550 600 650 700 750 800 850 900 ) t t 0 BR(X [pb] 0Xs 0 0.5 1 1.5 2 2.5 3 3.5 4 Expected Upper Limits CDF Run 2 preliminary -1 Lum = 319 pb SYS -1 Lum = 319 pb -1 Lum = 1000 pb SYS -1 Lum = 1000 pb Figure 9{6: Upp er limits at 95% CL. 
The plots show the results for the two luminosity scenarios, including and excluding the contribution from shape systematic uncertainties.

Figure 9-7: Probability of observing a non-zero lower limit (LL) versus input signal cross section for ∫L = 1000 pb⁻¹.

CHAPTER 10
RESULTS

We first looked at the data in the summer of 2005, when CDF had 320 pb⁻¹ of data, gathered since 2002, available for analysis. Just six months later another 360 pb⁻¹ of data became available and was added to the analysis, providing better limits.

10.1 First Results

Figure 10-1: Reconstructed M_tt in 320 pb⁻¹ of CDF Run 2 data (212 events). The plot on the right shows the 73 events with at least one SECVTX tag.

In the first chunk of data we found 215 events passing our event selection. We ran the M_tt reconstruction algorithm; the resulting spectrum is shown in the left plot of Figure 10-1. Three events were not reconstructed, meaning there were no available solutions satisfying the W and top mass constraints (the algorithm forces the two top quarks on shell, together with the leptonically decaying W).

Table 10-1: Expected number of events assuming no signal.
The W+4p and QCD numbers are derived based on the total number of events observed in the search region above 400 GeV/c².
Sample     Expected # of events for 320 pb⁻¹
SM tt      65.9
WW         3.8
W(→eν)     36.9
W(→μν)     34.1
QCD        7.3

The right plot in Figure 10-1 shows the events with at least one b-tagged jet; however, we do not present results for this subsample. A more interesting plot (Figure 10-2) shows the 148 events found in the search region above the 400 GeV/c² cut, together with the Standard Model expectation.

Figure 10-2: Reconstructed M_tt in 320 pb⁻¹ of CDF Run 2 data, after the 400 GeV/c² cut, compared with the W + ≥4 jets, QCD, SM tt (6.7 pb) and diboson (NLO) expectations.

Even though the agreement between the data and the Standard Model is quite good, there seem to be a few extra events in the 500 GeV/c² region. Before addressing that issue in more detail, we would like to present the "result" of our analysis, which, together with one possible theoretical interpretation, is shown in Figure 10-3. The bands define 68% and 95% coverage intervals on the expected upper limit: due to limited statistics, the upper limits derived from 1000 pseudo-experiments have non-negligible fluctuations. The central value is the median of the histogram of upper limits from the 1000 pseudo-experiments, as mentioned before, and the bands are defined by integrating half the coverage on each side of the median (i.e. 34% of the area on each side for the 68% band). In the absence of any signal we expect the actual upper limits to be consistent with the expected upper limits.
For a resonance mass of 500 GeV/c² the data does not fit very well, but the deviation is equivalent to a 2σ fluctuation, which is not that unlikely. This is consistent with the qualitative statement made before regarding the 500 GeV/c² region, based on the shape of the M_tt spectrum.

Figure 10-3: Resonant production upper limits from 320 pb⁻¹ of CDF Run 2 data: observed 95% CL upper limits, median simulated limits with central 68% and 95% coverage bands, and the theory curve for a leptophobic Z′ with Γ_Z′ = 1.2% M_Z′.

The black line on the same plot represents the predicted signal cross-section according to the leptophobic topcolor-assisted technicolor model used in the Run 1 analysis. According to this model, we could exclude resonances with masses below 700 GeV/c² at 95% confidence level.

Following the hypothesis that a small resonance contribution is present in the data, we performed an additional Kolmogorov-Smirnov test on the M_tt distribution, assuming first that there is no signal and then adding a 2 pb signal contribution from a 500 GeV/c² resonance. This particular signal cross-section was chosen as the most likely cross-section returned by our sensitivity machinery. The results of the tests are shown in Figures 10-4 and 10-5: the data is consistent with the Standard Model-only hypothesis at the 15% level, and with the Standard Model plus a 500 GeV/c² resonance at the 70% level.
The expected M_tt shape with such a signal present is shown in Figure 10-6.

10.2 Final Results

After observing quite an interesting M_tt spectrum when the data was looked at for the first time, we eagerly waited to add more data and see whether the peak around 500 GeV/c² would remain, be enhanced, or be diminished. In January 2006 we added another 360 pb⁻¹ of data and produced similar plots: the M_tt spectrum versus the Standard Model expectation, shown in Figure 10-7, and the upper limits plot, shown in Figure 10-8.

Table 10-2: Expected number of events assuming no signal. The W+4p and QCD numbers are derived based on the total number of events observed in the search region above 400 GeV/c².
Sample     Expected # of events for 680 pb⁻¹
SM tt      147.7
WW         8.1
W(→eν)     69.0
W(→μν)     63.7
QCD        13.7

Figure 10-4: Kolmogorov-Smirnov (KS) test assuming only the Standard Model (KS distance = 0.082, KS probability = 15.1%).
The KS distance distribution from pseudo-experiments is shown in the right plot; the arrow indicates the KS distance between data and the Standard Model template.

Figure 10-5: Kolmogorov-Smirnov (KS) test assuming a signal with a mass of 500 GeV/c^2 and a cross-section equal to the most likely value from the posterior probability (319 pb^-1, 148 events; KS distance = 0.047, KS probability = 70.7%). The KS distance distribution from pseudo-experiments is shown in the right plot; the arrow indicates the KS distance between data and the Standard Model + signal template.

As can be seen in these plots, the agreement between the Standard Model and the data is again quite good, and the peak around 500 GeV/c^2 is diminished significantly. A new Kolmogorov-Smirnov test performed between the data and the expected Standard Model shape returned a less interesting probability of 56% (Figure 10-9).

Figure 10-6: M_tt spectrum in data vs. the Standard Model + a 2 pb signal contribution from a resonance with a mass of 500 GeV/c^2 (320 pb^-1, 212 events).
The upper limits based on the full dataset available are listed in Table 10-3. For the same theoretical model mentioned before, and according to Figure 10-8, we can exclude resonance masses below 725 GeV/c^2, thus considerably extending the Run 1 CDF and D0 limits of 480 GeV/c^2 and 560 GeV/c^2, respectively.

10.3 Conclusions

We have searched for resonant production of tt pairs using a matrix element based method to reconstruct the invariant mass distribution of tt candidates. The search was performed in a blind fashion: the data was looked at only when the reconstruction and search algorithms were established, the treatment of systematics was understood, and the expected limits for the pure Standard Model were computed.

Figure 10-7: Reconstructed M_tt in CDF Run 2 data, 680 pb^-1 (447 events).

Figure 10-8: Resonant production upper limits in CDF Run 2 data, 680 pb^-1.

Figure 10-9: Kolmogorov-Smirnov test results, shown together with the reconstructed M_tt using 680 pb^-1 and the corresponding Standard Model expectation template (302 events; KS distance = 0.038, KS probability = 56%).
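The quoted mass limit comes from where the observed upper-limit curve crosses the falling theory prediction. A hedged sketch of that crossing-point interpolation follows; the observed limits are the ones tabulated for the 680 pb^-1 analysis, while the theory σ·BR values below are illustrative placeholders, not the actual leptophobic Z' curve.

```python
import numpy as np

masses = np.array([450, 500, 550, 600, 650, 700, 750, 800], dtype=float)  # GeV/c^2
# Observed 95% CL upper limits (pb) from the 680 pb^-1 analysis.
obs_ul = np.array([1.6652, 1.8236, 1.2640, 0.6913, 0.5801, 0.5851, 0.6099, 0.5602])
# Hypothetical theory sigma*BR values (pb) standing in for the Z' curve.
theory = np.array([4.0, 2.8, 2.0, 1.4, 1.0, 0.7, 0.5, 0.35])

# Masses are excluded while the theory prediction exceeds the observed limit;
# the mass limit is the first zero crossing of (theory - obs_ul).
diff = theory - obs_ul
idx = np.where(diff <= 0)[0][0]          # first mass point no longer excluded
m_lo, m_hi = masses[idx - 1], masses[idx]
d_lo, d_hi = diff[idx - 1], diff[idx]
m_excl = m_lo + (m_hi - m_lo) * d_lo / (d_lo - d_hi)   # linear interpolation
print(f"masses below {m_excl:.0f} GeV/c^2 excluded (toy theory curve)")
```

With the real theory curve, this crossing is what yields the 725 GeV/c^2 limit quoted in the text.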
No indication of resonant production was found, and we set new, better limits on the signal cross-section times branching ratio. Assuming resonance production according to a leptophobic topcolor-assisted technicolor model, we exclude resonance masses below 725 GeV/c^2. This is the best current limit in such searches.

Table 10-3: Expected and observed upper limits on the signal cross-section, derived from a dataset with an integrated luminosity of 680 pb^-1.

  Mass (GeV/c^2)   Expected UL (pb)   Observed UL (pb)
  450              2.7324             1.6652
  500              1.8203             1.8236
  550              1.1440             1.2640
  600              0.7741             0.6913
  650              0.5827             0.5801
  700              0.4553             0.5851
  750              0.3804             0.6099
  800              0.3167             0.5602
  850              0.2933             0.5357
  900              0.2685             0.5171

Figure 10-10: Posterior probability distributions for CDF data and masses between 450 and 700 GeV (682 pb^-1); each panel quotes the corresponding 95% CL upper limits on the signal cross-section.
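The 95% CL upper limits quoted on the posterior plots are the cross-section values below which 95% of the posterior probability lies. A minimal sketch of that integration on a toy posterior; the gamma-like density and the grid below are illustrative stand-ins for the actual posterior curves.

```python
import numpy as np

# Toy posterior density for the signal cross-section (arbitrary normalization);
# a gamma-like shape standing in for the actual posterior curves.
sigma_grid = np.linspace(0.0, 8.0, 801)          # pb
posterior = sigma_grid * np.exp(-sigma_grid / 0.5)

# Build the normalized CDF, then find the smallest sigma with 95% of the
# posterior probability below it.
dx = sigma_grid[1] - sigma_grid[0]
cdf = np.cumsum(posterior) * dx
cdf /= cdf[-1]
upper_limit = sigma_grid[np.searchsorted(cdf, 0.95)]
print(f"sigma < {upper_limit:.3f} pb at 95% CL (toy posterior)")
```

For the real posteriors this procedure yields the per-mass limits shown on each panel of Figure 10-10.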
Figure 10-11: Posterior probability distributions for CDF data and masses between 750 and 900 GeV (682 pb^-1); each panel quotes the corresponding 95% CL upper limits on the signal cross-section.

APPENDIX
CHANGE OF VARIABLES AND JACOBIAN CALCULATION SKETCH

We will work in the massless limit approximation for the 6 final-state particles. Let us denote by p_1 and p_2 the momenta of the two W daughter quarks, and by p_3 and p_4 the momenta of the two b quarks, such that p_1, p_2 and p_3 are the decay products of one top quark. Then let p_l be the momentum of the (charged) lepton and p_\nu the momentum of the neutrino. Similarly, \hat{n}_1, \hat{n}_2, \hat{n}_3, \hat{n}_4, \hat{n}_l are the corresponding unit vectors, and we will also use p_{\nu x}, p_{\nu y} and p_{\nu z} for the components of the neutrino momentum. The integration required is of the form \int dp_1 dp_2 dp_3 dp_4 d^3 p_\nu, but we would rather integrate over the new variables M_{W1}^2, M_{W2}^2, M_{T1}^2, M_{T2}^2 and \vec{P}_{T6}, which are the squares of the W and top masses and the 6-body transverse momentum.
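The massless two-body relation M^2 = 2 p_1 p_2 (1 - \hat{n}_1 \cdot \hat{n}_2) used throughout the appendix can be checked numerically against the full four-vector invariant mass; a quick sketch with arbitrary toy momenta:

```python
import numpy as np

def inv_mass_sq_4vec(p1_vec, p2_vec):
    """Invariant mass squared of two massless particles from their 3-momenta."""
    e1, e2 = np.linalg.norm(p1_vec), np.linalg.norm(p2_vec)
    p_tot = p1_vec + p2_vec
    return (e1 + e2) ** 2 - np.dot(p_tot, p_tot)

def inv_mass_sq_angles(p1_vec, p2_vec):
    """Same quantity via M^2 = 2 p1 p2 (1 - n1.n2)."""
    p1, p2 = np.linalg.norm(p1_vec), np.linalg.norm(p2_vec)
    n1, n2 = p1_vec / p1, p2_vec / p2
    return 2.0 * p1 * p2 * (1.0 - np.dot(n1, n2))

a = np.array([40.0, 10.0, -25.0])   # GeV, arbitrary quark momenta
b = np.array([-5.0, 30.0, 60.0])
print(inv_mass_sq_4vec(a, b), inv_mass_sq_angles(a, b))  # identical up to rounding
```

The two expressions are algebraically identical for massless particles, since E = |p| and p_1·p_2 (four-vector) = p_1 p_2 (1 - \hat{n}_1 \cdot \hat{n}_2).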
The initial set contains 7 real variables while the new set contains only 6 variables, so in fact we have to keep one of the initial variables, and that will be p_1. The relation between the old and new variables is given below:

M_{W1}^2 = 2 p_1 p_2 (1 - \hat{n}_1 \cdot \hat{n}_2)   (1)

M_{W2}^2 = 2 (p_l p_\nu - \vec{p}_l \cdot \vec{p}_\nu)   (2)

M_{T1}^2 = M_{W1}^2 + 2 p_3 p_1 (1 - \hat{n}_3 \cdot \hat{n}_1) + 2 p_3 p_2 (1 - \hat{n}_3 \cdot \hat{n}_2)   (3)

M_{T2}^2 = M_{W2}^2 + 2 p_4 p_l (1 - \hat{n}_4 \cdot \hat{n}_l) + 2 p_4 (p_\nu - \hat{n}_4 \cdot \vec{p}_\nu)   (4)

\vec{P}_{T6} = p_1 \hat{n}_{T1} + p_2 \hat{n}_{T2} + p_3 \hat{n}_{T3} + p_4 \hat{n}_{T4} + p_l \hat{n}_{Tl} + \vec{p}_{T\nu}   (5)

We will compute the Jacobian of the transformation using the identity

\int dp_1 dp_2 dp_3 dp_4 d^3 p_\nu = \int \Big[ \int \delta(M_{W1}^2 - 2 p_1 p_2 (1 - \hat{n}_1 \cdot \hat{n}_2)) \, \delta(M_{W2}^2 - 2 (p_l p_\nu - \vec{p}_l \cdot \vec{p}_\nu)) \, \delta(M_{T1}^2 - M_{W1}^2 - 2 p_3 p_1 (1 - \hat{n}_3 \cdot \hat{n}_1) - 2 p_3 p_2 (1 - \hat{n}_3 \cdot \hat{n}_2)) \, \delta(M_{T2}^2 - M_{W2}^2 - 2 p_4 p_l (1 - \hat{n}_4 \cdot \hat{n}_l) - 2 p_4 (p_\nu - \hat{n}_4 \cdot \vec{p}_\nu)) \, \delta^2(\vec{P}_{T6} - p_1 \hat{n}_{T1} - p_2 \hat{n}_{T2} - p_3 \hat{n}_{T3} - p_4 \hat{n}_{T4} - p_l \hat{n}_{Tl} - \vec{p}_{T\nu}) \, dM_{W1}^2 \, dM_{W2}^2 \, dM_{T1}^2 \, dM_{T2}^2 \, d^2 \vec{P}_{T6} \Big] dp_1 dp_2 dp_3 dp_4 d^3 p_\nu   (6)

and switching the order of the integrals, that is, integrating over the old variables first and using the property \int \delta(f(x)) dx = \sum_i 1/|f'(x_{i0})|, where x_{i0} are all the solutions of the equation f(x) = 0.

First we do the p_2 integral via the first delta function, which yields a factor of

1 / (2 p_1 (1 - \hat{n}_1 \cdot \hat{n}_2))   (7)

and the solution

p_2 = M_{W1}^2 / (2 p_1 (1 - \hat{n}_1 \cdot \hat{n}_2))   (8)

which is to be used in all subsequent calculations, even though we won't do it explicitly here.
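The delta-function identity invoked above, \int \delta(f(x)) dx = \sum_i 1/|f'(x_{i0})|, can be sanity-checked numerically by representing the delta with a narrow Gaussian. A toy check with f(x) = x^2 - 4, whose roots at x = ±2 give \sum 1/|f'| = 1/4 + 1/4 = 0.5:

```python
import numpy as np

def delta_approx(y, eps=1e-3):
    """Narrow-Gaussian approximation to the Dirac delta function."""
    return np.exp(-y**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))

# f(x) = x^2 - 4 has roots at x = +-2, with |f'(+-2)| = 4, so the identity
# predicts:  integral of delta(f(x)) dx  =  1/4 + 1/4  =  0.5.
x = np.linspace(-5.0, 5.0, 2_000_001)
integral = np.sum(delta_approx(x**2 - 4.0)) * (x[1] - x[0])
print(integral)   # approximately 0.5
```

The same identity is what turns each delta function in equation (6) into an explicit Jacobian factor evaluated at the solution.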
Next we do the p_3 integral via the third delta function, which yields another factor of

1 / (2 p_1 (1 - \hat{n}_3 \cdot \hat{n}_1) + 2 p_2 (1 - \hat{n}_3 \cdot \hat{n}_2))   (9)

and the solution

p_3 = (M_{T1}^2 - M_{W1}^2) / (2 p_1 (1 - \hat{n}_3 \cdot \hat{n}_1) + 2 p_2 (1 - \hat{n}_3 \cdot \hat{n}_2))   (10)

which again must be substituted in all subsequent calculations.

Next we do the d^2 \vec{p}_{T\nu} integrals using the fifth delta function. The factor is 1 and the solution is

\vec{p}_{T\nu} = \vec{P}_{T6} - p_1 \hat{n}_{T1} - p_2 \hat{n}_{T2} - p_3 \hat{n}_{T3} - p_4 \hat{n}_{T4} - p_l \hat{n}_{Tl}   (11)

which is less trivial than it looks, since \vec{p}_{T\nu} depends on the yet-to-be-integrated variable p_4, so it cannot be treated as a constant when we do the integration over p_4.

Now we do the p_{\nu z} integral using the second delta function, in which \vec{p}_{T\nu} is replaced with the expression above. The resulting factor is

p_\nu / (2 |p_l p_{\nu z} - p_\nu p_{zl}|)   (12)

We have two solutions for p_{\nu z}, and these can be written in a compact form as

p_{\nu z} = (a n_{zl} \pm \sqrt{a^2 - (\hat{n}_{Tl})^2 (\vec{p}_{T\nu})^2}) / (\hat{n}_{Tl})^2   (13)

with

a = M_{W2}^2 / (2 p_l) + \hat{n}_{Tl} \cdot \vec{p}_{T\nu}   (14)

Like \vec{p}_{T\nu}, p_{\nu z} also depends on p_4, and now we turn to this last integral, which is evaluated using the fourth delta function. Here we have to substitute the explicit expression for \vec{p}_\nu as a function of p_4. We can simplify the expressions if we notice that from the leptonic W mass constraint we can express p_\nu as

p_\nu = a + n_{zl} p_{\nu z}   (15)

so that the expression inside the delta function can be rewritten as

M_{T2}^2 - M_{W2}^2 - 2 p_4 (p_l + a - \hat{n}_4 \cdot \vec{p}_l - \hat{n}_{T4} \cdot \vec{p}_{T\nu} + (n_{zl} - n_{z4}) p_{\nu z})   (16)

Then the derivative with respect to p_4 reads

-2 (p_l + a - \hat{n}_4 \cdot \vec{p}_l - \hat{n}_{T4} \cdot \vec{p}_{T\nu} + (n_{zl} - n_{z4}) p_{\nu z}) - 2 p_4 (-\hat{n}_{Tl} \cdot \hat{n}_{T4} + (n_{zl} - n_{z4}) \partial p_{\nu z} / \partial p_4 + (\hat{n}_{T4})^2)   (17)

where we used \partial \vec{p}_{T\nu} / \partial p_4 = -\hat{n}_{T4}, which is used to evaluate \partial a / \partial p_4 as well.
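Equations (13)-(15) are just the quadratic solution of the leptonic W mass constraint for the neutrino longitudinal momentum. A minimal numerical sketch of that formula follows; the lepton momentum and missing-E_T values are invented for illustration, and keeping the real part when the discriminant is negative is a common practical convention, not something prescribed by the text.

```python
import numpy as np

M_W = 80.4  # GeV/c^2, W boson mass

def neutrino_pz_solutions(lep, met_xy):
    """Solve M_W^2 = 2 (p_l p_nu - vec(p_l).vec(p_nu)) for the neutrino p_z.

    lep: charged-lepton 3-momentum (massless limit); met_xy: neutrino
    transverse momentum components. Returns the two quadratic solutions.
    """
    p_l = np.linalg.norm(lep)
    n_l = lep / p_l                          # unit vector; n_l[2] is n_zl
    n_Tl_sq = n_l[0]**2 + n_l[1]**2          # (n_Tl)^2 = 1 - n_zl^2
    pT_nu_sq = met_xy[0]**2 + met_xy[1]**2

    a = M_W**2 / (2.0 * p_l) + n_l[0]*met_xy[0] + n_l[1]*met_xy[1]   # eq. (14)
    disc = a**2 - n_Tl_sq * pT_nu_sq         # quantity under the sqrt in eq. (13)
    root = np.sqrt(max(disc, 0.0))           # keep real part if disc < 0
    return ((a*n_l[2] + root) / n_Tl_sq, (a*n_l[2] - root) / n_Tl_sq)

lep = np.array([30.0, 20.0, 15.0])   # toy lepton momentum, GeV
met = np.array([25.0, -10.0])        # toy missing-E_T components, GeV
pz_plus, pz_minus = neutrino_pz_solutions(lep, met)
print(pz_plus, pz_minus)
```

Each returned solution reproduces M_W when the neutrino four-vector is rebuilt from (met, p_z), which is how the formula can be checked.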
The last step is finding \partial p_{\nu z} / \partial p_4. This follows from basic calculus, since p_{\nu z} = p_{\nu z}(a, \vec{p}_{T\nu}), but the expressions become lengthy without really adding anything new, so we will not list them here. The explicit numerical calculation of the factor requires finding the solutions for p_4 for which the expression inside the delta function vanishes. This leads to a fourth-order equation, and fourth-order equations can be solved analytically. Once the solutions are found, all the factors are known and their product is equal to the Jacobian.

In summary, we found the Jacobian for the change of variables defined above without explicitly computing it, that is, without computing the determinant of the matrix of the first-order derivatives of the old variables with respect to the new ones. A sum over all solutions is implied; that is, for a given set of new variables, two or four sets of old variables exist, each with its own numerical value for the Jacobian.

REFERENCES

[1] J.F. Donoghue, E. Golowich, and B.R. Holstein, "Dynamics of the Standard Model", Cambridge Monographs on Particle Physics, Nuclear Physics and Cosmology, Cambridge University Press, 1996 (reprinted).
[2] G. Arnison et al. [UA1 Collaboration], Phys. Lett. B122, 103 (1983).
[3] P. Bagnaia et al. [UA2 Collaboration], Phys. Lett. B122, 476 (1983).
[4] G. Arnison et al. [UA1 Collaboration], Phys. Lett. B126, 398 (1983).
[5] P. Bagnaia et al. [UA2 Collaboration], Phys. Lett. B129, 130 (1983).
[6] C.T. Hill, Phys. Lett. B345, 483 (1995); C.T. Hill and S.J. Parke, Phys. Rev. D49, 4454 (1994).
[7] R.M. Harris, C.T. Hill, and S.J. Parke, Fermilab Report No. Fermilab-FN-687; hep-ph/9911288, 1999.
[8] T. Appelquist, H.C. Cheng, and B.A. Dobrescu, Phys. Rev. D64, 035002 (2001).
[9] H.C. Cheng, K.T. Matchev, and M. Schmaltz, Phys. Rev. D66, 056006 (2002).
[10] G. Burdman, B.A. Dobrescu, and E. Ponton, "Resonances from Two Universal Extra Dimensions", hep-ph/0601186.
[11] T. Affolder et al. [CDF Collaboration], Phys. Rev. Lett. 85, 2062 (2000).
[12] V.M. Abazov et al. [D0 Collaboration], Phys. Rev. Lett. 92, 221801 (2004).
[13] F. Abe et al. [CDF Collaboration], Nucl. Instrum. Methods Phys. Res. 271, 387 (1998).
[14] T.K. Nelson for the CDF Collaboration, "The CDF Layer 00 Detector", Fermilab preprint FERMILAB-CONF-01-357-E (2001).
[15] F. Abe et al. [CDF Collaboration], Phys. Rev. D45, 1448 (1992).
[16] A. Bhatti and F. Canelli, "Jet Energy Corrections at CDF", CDF Note 7543.
[17] D. Acosta et al. [CDF Collaboration], Phys. Rev. D72, 052003 (2005).
[18] J. Bellinger, K. Bloom, W.D. Dagenhart, A. Korn, S. Krutelyov, V. Martin, and M. Schmitt, "A Guide to Muon Reconstruction for Run 2", CDF Note 5870.
[19] R. Erbacher, Y. Ishizawa, B. Kilminster, K. Lannon, P. Lujan, T. Maki, B. Mohr, J. Nielsen, E. Palencia, S. Rappoccio et al., "Event Selection and tt Signal Acceptance of the Winter 2005 Top Lepton + Jets Sample", CDF Note 7372.
[20] E. Halkiadakis, C. Hays, M. Tecchio, and W. Yao, "A Conversion Removal Algorithm for the 2003 Winter Conferences", CDF Note 6250.
[21] A. Taffard, "Run II Cosmic Ray Tagger", CDF Note 6100.
[22] V. Barger, J. Ohnemus, and R.J.N. Phillips, "Spin Correlation Effects in the Hadroproduction and Decay of Very Heavy Top Quark Pairs", Univ. of Wisconsin at Madison, MAD/PH/413.
[23] E. Boos, V. Bunichev, M. Dubinin, L. Dudko, V. Ilyin, A. Kryukov, V. Edneral, V. Savrin, A. Semenov, and A. Sherstnev, Nucl. Instrum. Meth. A534, 250 (2004).
[24] T. Sjostrand, P. Eden, C. Friberg, L. Lonnblad, G. Miu, S. Mrenna, and E. Norrbin, Comput. Phys. Commun. 135, 238 (2001).
[25] G. Corcella, I.G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson, M.H. Seymour, and B.R. Webber, JHEP 0101, 010 (2001).
[26] M.L. Mangano, F. Piccinini, A.D. Polosa, M. Moretti, and R. Pittau, JHEP 07, 001 (2003).
[27] L. Demortier, "A Fully Bayesian Computation of Upper Limits for Poisson Processes", CDF Note 5928.
[28] L. Demortier, "A Fully Bayesian Computation of Upper Limits for the CDF Higgs Search", talk given at the CDF Statistics Committee Meeting, July 23, 2004.
[29] K. Lannon, R. Hughes, B. Winer, E. Thomson, R. Erbacher, R. Roser, J. Conway, and B. Kilminster, "Measurement of the Cross-Section for Top Pair Production Using Event Kinematics", CDF Note 7753.
[30] J. Conway, R. Erbacher, R. Hughes, A. Lath, R. Marginean, R. Roser, E. Thomson, and B. Winer, "Search for t' → Wq Using Lepton Plus Jets Events", CDF Note 6888.

BIOGRAPHICAL SKETCH

Valentin Necula was born in Campulung, Arges County, Romania, on December 4th, 1973. After graduating from high school in 1992, he was accepted into the Computer Science Department of the Polytechnic University of Bucharest. In 1995 he also enrolled in the Physics Department of the University of Bucharest. He graduated with a B.Sc. in Computer Science in 1997 and a B.Sc. in Physics in 1999, entered the Physics Graduate Department at the University of Florida in 1999, and moved to Fermilab in 2001 for research within the CDF collaboration under the supervision of Prof. Guenakh Mitselmakher and Prof. Jacobo Konigsberg.