### UNIVERSITY OF PISA

The School of Graduate Studies in Basic Sciences "GALILEO GALILEI"

### PhD Thesis:

### Measurement of the associated production of direct photons and jets with the Atlas experiment at LHC

### Candidate: Dr. Michele Cascella

### Supervisor: Prof. Vincenzo Cavasinni

## Contents

1 Theory
1.1 The Standard Model
1.1.1 Weak interactions
1.1.2 The Higgs mechanism
1.1.3 Beyond the Standard Model
1.2 Quantum Chromo Dynamics (QCD)
1.2.1 Strong force
1.2.2 The QCD improved parton model
1.2.3 The QCD evolution equations
1.2.4 Hadron-hadron collisions and the factorization theorem

2 The direct photon channel
2.1 Introduction
2.2 The theory
2.2.1 Cross section comparison
2.3 The gluon density in the proton
2.3.1 Motivation
2.4 Kinematics
2.4.1 Physics

3 The Atlas Detector
3.1 LHC
3.2 The Atlas experiment
3.2.1 The magnetic system
3.2.2 Inner detector
3.2.3 Calorimeters
3.2.4 The muon spectrometer
3.2.5 Trigger and data acquisition
3.2.6 Luminosity measurement

4 The Atlas Calorimeters system
4.1 Calorimetry
4.1.1 Principles of electromagnetic calorimetry
4.1.2 Calorimeter response to hadrons (non compensation)
4.2 The liquid Argon central electromagnetic calorimeter
4.3 The Atlas hadronic calorimeter
4.3.1 The barrel and extended barrel hadronic calorimeter
4.3.2 The hadronic End-Cap calorimeter
4.4 The forward calorimeter
4.5 The calorimeters at the test beam
4.5.1 The LAr test beam
4.5.2 The TileCal test beam
4.6 Commissioning of the Atlas calorimetric system
4.6.1 Commissioning of the LAr calorimeter
4.6.2 Commissioning of TileCal

5 Validation of the Monte Carlo simulation
5.1 The TileCal test beam
5.1.1 TileCal, the Atlas hadronic calorimeter
5.1.2 The test beam setup
5.2 Energy reconstruction
5.2.1 Calibration of the experimental data to the electromagnetic scale
5.3 Data selection
5.3.1 Calorimetric separation of electrons and hadrons
5.3.2 Cerenkov counter
5.3.2.1 Cerenkov performance
5.3.3 Muon identification
5.3.4 Characteristics of the particle beam at different energies
5.4 The Monte Carlo simulations
5.4.1 The Geant4 simulation
5.4.1.1 The Geant4 physics lists
5.4.1.2 Geant4 versions 7.0 and 8.1
5.4.1.3 Geant4 in Atlas
5.4.2 The Fluka Monte Carlo
5.4.2.1 Atlas and Fluka
5.4.3 The output of the simulation program
5.5 Preparation of the Monte Carlo samples
5.5.1 Calibration of the simulations
5.5.2 Electronic noise and photostatistics
5.5.3 Correction for the TileCal periodic structure
5.6 Systematics and uncertainties
5.6.1 Uncertainties on beam composition
5.6.2 Effect of the calorimetric selection
5.7 Results of the comparison of the Monte Carlo simulations with the experimental data
5.7.1 Response and resolution
5.7.2 Longitudinal and lateral shower shape

6 Jets
6.1 Jet algorithms
6.1.1 Iterative cones
6.1.2 Sequential recombination (kT algorithms)
6.1.3 Jet algorithms in Atlas
6.2 Jet Calibration
6.2.1 The reference scale
6.2.2 Calibration methods at Atlas
6.2.3 The global calibration
6.2.4 The local calibration approach
6.2.5 Jet calibration and energy scale uncertainty in the first data

7 Photons
7.1 Photon and electron identification at trigger level
7.2 Photon identification
7.2.1 Photon trigger selection
7.2.2 Electron and photon reconstruction
7.2.3 Background rejection
7.2.4 Selection performance

8 Photon+jet cross section
8.1 Collision data and Monte Carlo simulation
8.1.1 Experimental data
8.1.2 Simulated data
8.2 Event selection
8.3 Selection efficiency
8.4 Background
8.4.1 The QCD background
8.4.2 Electron background
8.5 Cross section measurements
8.5.1 Uncorrected cross section
8.5.2 Systematic uncertainties
8.5.3 Total systematic uncertainty
8.6 Results

9 Jet calibration and the photon+jet channel
9.1 Collision data and Monte Carlo simulation
9.2 Event selection
9.3 Validation of the jet energy scale
9.3.1 Measurements of the average response
9.3.2 Systematic uncertainties
9.3.3 Results
9.4 Measurement of the calorimeter response
9.4.1 E0

## Introduction

The Large Hadron Collider (LHC) is a 27 km long circular particle accelerator located at CERN near Geneva, Switzerland. After a troubled start in September 2008, the LHC began its first period of data taking in November 2009 by colliding protons at a center-of-mass energy of √s = 900 GeV and later, in December, at a collision energy of 2.36 TeV.

After the winter pause, the LHC restarted operations in February 2010, accelerating the two proton beams to an energy of 3.5 TeV to achieve a center-of-mass energy of 7 TeV with a peak luminosity of 2.1 × 10³² cm⁻²s⁻¹.

The aim of this run is to deliver at least 1 fb⁻¹ of integrated luminosity by the end of 2011 and several fb⁻¹ by the end of the year 2012. A year-long upgrade shut-down will then be necessary to let the LHC reach the design energy of √s = 14 TeV and the design luminosity of 10³⁴ cm⁻²s⁻¹.

The current theoretical framework, which describes physics at subatomic length scales, is called the Standard Model. The SM has shown remarkable agreement with the huge amount of experimental results gathered in the last 50 years. Nonetheless, important questions remain unanswered: why are there three generations of quarks and three flavors of leptons, and what is the explanation of their weak charged current mixings and CP violation? What generates particle masses? Can all particle interactions be unified in a simple gauge group? How can we fit gravity into this picture?

Several Standard Model extensions, like the Higgs boson, supersymmetry or extra dimensions, have been proposed to answer these questions. Most of these theories, including the Standard Model, predict some new phenomena at the TeV mass scale.

The LHC will explore an energy range never accessed before in experimentally controlled conditions (the Tevatron proton-antiproton collider, the LHC predecessor, has a collision energy of 1.96 TeV). The high energy physics community is hopeful that it will play a fundamental role in exploring possible extensions of the Standard Model, but also in investigating its precise predictions. The results obtained at the LHC could also shed light on some fundamental cosmological questions such as, for example, the origin of dark matter.

Four detectors sit on the LHC interaction points. Atlas and CMS are two general purpose experiments, Alice is designed to observe lead-ion collisions, and LHCb is an asymmetric detector especially designed to study b meson production in proton-proton collisions.

The Atlas experiment was designed to accommodate the wide spectrum of possible physics signatures at the TeV mass scale. The measurements planned by the Atlas collaboration range from the already explored areas of the SM (jet production, b and t quark physics, the τ lepton, the W and Z bosons and direct photon production) to its possible extensions (the Higgs boson), and from there to even more exciting possibilities (for instance: new vector bosons, new leptons or quark generations, quark internal structure and supersymmetric particles).

Regardless of the complexity of the underlying physics scenario, the basic objects that compose the observable final state are just a few: photons, electrons, muons, light hadrons generally grouped in jets, secondary vertices that provide heavy flavor tagging, and missing transverse energy that hints at an invisible particle that did not interact in the detector (or at a malfunctioning calorimeter).

Many of these studies, from the top mass and the jet cross section to the Higgs and supersymmetry searches, require a precise measurement of the jet characteristics. The experimental requirements, especially for a precise jet energy scale determination, are often very demanding. Typically, an absolute systematic uncertainty of better than 1% is desirable for precision physics like the measurement of the top quark mass and the reconstruction of some supersymmetric final states.

The Atlas calorimeters are intrinsically non-compensating (they have a higher response for electrons and photons than for hadrons) and non-uniform in η due to the use of different technologies and the presence of dead material and non-instrumented regions.

The Atlas calorimeters have been calibrated using electrons during the test beam periods. These periods also served as a test bench to tune the Monte Carlo simulation to closely reproduce the detector response to known particles: electrons, pions, protons and muons. These simulations were then used to compute the corrections necessary to calibrate the response to jets from the electromagnetic scale to the jet energy scale, in order to correct the response linearity and improve the energy resolution.

After the start of the data taking, several physics channels will be exploited to assess the quality of the calibrations and, if necessary, to obtain correction factors. Different physics channels allow us to study different jet properties in various kinematic and geometrical regions.

Di-jet and multi-jet events will be used to cross-check the relative response across different pseudo-rapidity and pT regions. Z+jet, photon+jet and the hadronic decay of the W will provide a reference for the absolute scale. The photon+jet channel (being the one with the largest cross section) is the first candidate to check the absolute scale of the jet calibration. In events with a photon and a recoiling jet, the transverse momentum balance can be exploited to estimate the jet energy using the measurement of the photon energy, whose scale is much better under control. The main background to this channel is given by QCD events where one jet is misidentified as a photon.

In this work we show the analysis of the first 38 pb⁻¹ of integrated luminosity collected in 2010. An appropriate sample of events is selected, with the photon in the central region, |η_γ| < 1.4, to cross-check the Monte Carlo based calibration up to pT ≈ 250 GeV with a combined statistical and systematic uncertainty of 2%. The jet-jet background contamination will be studied by selecting a background-enriched sample of events to estimate and subtract its effect on the photon+jet sample.

The reliability of the jet calibration is an essential ingredient for most physics, both known and unknown, in the Atlas experiment. A method to validate the absolute scale of the Monte Carlo derived jet calibration is very important to provide the Atlas community with one of the fundamental tools to obtain outstanding physics results.

The work on the determination of the signal efficiencies and the background contribution to the photon+jet channel is also the basis of a measurement of the cross section for the associated production.

Following a similar measurement by the D0 collaboration, the photon+jet cross section will be measured in four different conditions depending on the rapidity of the jet (central vs. forward jets) and the relative position of the jet and the photon (same side vs. opposite side of the detector). Besides testing the theoretical predictions, this measurement can prove important in probing the gluon content of the proton.

The first chapter of this work introduces the theoretical framework of the Standard Model and some of its possible extensions. In the second chapter the details of the prompt photon theory are reviewed, together with a short summary of the previous experimental measurements.

Chapters 3 and 4 review the Atlas experimental apparatus with a particular focus on the calorimeter system. Following the introduction of the Atlas calorimeters, chapter 5 presents a contribution of the candidate to the validation of two Monte Carlo simulations (Geant4 and Fluka) using the experimental data acquired during the test beam periods of the TileCal hadronic calorimeter.

Chapters 6 and 7 describe the reconstruction of photons and jets in the Atlas detector, highlighting the performance and the issues related to energy resolution and linearity. The chapter on jet reconstruction builds upon the notions introduced in chapter 1 to introduce the concept of jet algorithms, together with a discussion of their characteristics.

The last two chapters report the candidate's work on the associated production of a photon and a jet. The measurement of the differential cross section d³σ/(dp_T^γ dy^γ dy^jet) (split up in four different regions) is shown in detail in chapter 8, while chapter 9 presents the use of the photon-jet balancing in the transverse plane to measure the uncertainty on the jet energy calibration.

## Notation

The Atlas coordinate system is defined by the triplet [R, φ, z], where z is the coordinate along the beam line, R is the transverse distance from the interaction point, and φ is defined such that the x axis points from the interaction point to the center of the LHC ring. We also define the angle θ = arctan(R/z).

In this reference frame we can define some physical quantities that will appear often in the rest of the work: pT = |~p| sin θ, the transverse momentum of the particle, and the pseudo-rapidity η = − ln tan(θ/2).

The distance ∆R between two objects (i.e. jets or particles) is defined as ∆R = √((∆η)² + (∆φ)²).
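The quantities just defined can be sketched in a few lines of code. This is an illustrative helper, not part of the thesis analysis; the wrapping of ∆φ into [−π, π) is the usual convention and is assumed here:

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta/2), theta being the polar angle w.r.t. the beam."""
    return -math.log(math.tan(theta / 2.0))

def delta_r(eta1, phi1, eta2, phi2):
    """Delta R = sqrt(d_eta^2 + d_phi^2), with d_phi wrapped into [-pi, pi)."""
    d_eta = eta1 - eta2
    d_phi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(d_eta, d_phi)

print(pseudorapidity(math.pi / 2))      # ~0: a particle at 90 degrees has eta = 0
print(delta_r(0.0, 0.0, 0.0, math.pi))  # ~3.1416: back to back in phi
```

The wrap keeps ∆φ meaningful for pairs of objects on opposite sides of the φ = ±π boundary.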

The Mandelstam variables for a 2 → 2 scattering are defined as follows

s = (p₁ + p₂)² = (p₃ + p₄)² (1)

t = (p₁ − p₃)² = (p₂ − p₄)² (2)

u = (p₁ − p₄)² = (p₂ − p₃)² (3)

where p₁ and p₂ are the four-momenta of the incoming particles and p₃ and p₄ are the four-momenta of the outgoing particles; ŝ, t̂ and û are used to refer to the Mandelstam variables of a pair of partons inside a hadron.
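As a quick numerical illustration of these definitions, the following sketch evaluates s, t and u for an invented massless 2 → 2 configuration and checks that s + t + u = 0, which holds when all four masses vanish:

```python
def minkowski_sq(p):
    """Invariant square of a four-vector (E, px, py, pz), metric (+, -, -, -)."""
    E, px, py, pz = p
    return E**2 - px**2 - py**2 - pz**2

def combine(p, q, sign=1):
    """Component-wise p + sign * q."""
    return tuple(a + sign * b for a, b in zip(p, q))

# Invented massless 2 -> 2 example: two 10 GeV beams along z,
# outgoing pair back to back along x.
p1 = (10.0, 0.0, 0.0, 10.0)
p2 = (10.0, 0.0, 0.0, -10.0)
p3 = (10.0, 10.0, 0.0, 0.0)
p4 = (10.0, -10.0, 0.0, 0.0)

s = minkowski_sq(combine(p1, p2))        # (p1 + p2)^2
t = minkowski_sq(combine(p1, p3, -1))    # (p1 - p3)^2
u = minkowski_sq(combine(p1, p4, -1))    # (p1 - p4)^2
print(s, t, u)   # 400.0 -200.0 -200.0; for massless particles s + t + u = 0
```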

### Chapter 1

## Theory

### 1.1 The Standard Model

The modern description of the elementary particles and their interactions is based on a gauge invariant theory known as the Standard Model (SM) of particle physics [1, 2, 3].

In the last 30 years the Standard Model has enjoyed an impressive success: its experimental tests have become more and more precise over the last two decades and no significant deviation from the predictions of this theory has been found [4].

In the Standard Model, particles are divided into two families: fermions, with half-integer spin, and bosons, with integer spin. Bosons are the carriers that propagate the three interactions described by the SM.

Electromagnetism is responsible for the interaction between charged particles via a massless boson, the photon (γ); the theory of the electromagnetic interactions, Quantum Electro Dynamics (QED), is associated with the U(1) symmetry group.

The weak force is responsible for the interactions between particles with weak isospin and is mediated by three massive vector bosons: the W⁺, W⁻ and the Z⁰. The theory of the weak interactions is based on the SU(2)_L symmetry group. The carriers of the weak interaction are very massive, making it short-ranged.

Quantum Chromo Dynamics, or QCD, the theory that describes the strong force, is associated with the SU(3) symmetry group, and the carriers of this force are 8 massless bosons called gluons. The strong charge is called color; gluons are colored particles themselves and thus self-interacting, which leads to phenomena such as confinement and asymptotic freedom.

Gravity is not described in the SM. This is not a problem for the description of the elementary particles, since gravity is completely negligible at the subatomic level and would become important only at the Planck energy scale. However, this is a strong hint that the Standard Model cannot be the final theory of fundamental physics.

| | Left-handed doublets | Right-handed singlets |
| --- | --- | --- |
| Leptons | (ν_e, e)_L, (ν_µ, µ)_L, (ν_τ, τ)_L | e_R, µ_R, τ_R |
| | \|~T\| = 1/2, Y = −1/2 | \|~T\| = 0, Y = −1 |
| Quarks | (u, d)_L, (c, s)_L, (t, b)_L | u_R, d_R, c_R, s_R, b_R, t_R |
| | \|~T\| = 1/2, Y = 1/6 | \|~T\| = 0, Y(u_R, c_R, t_R) = 2/3, Y(d_R, s_R, b_R) = −1/3 |

Table 1.1: Classification of the fermions predicted by the SM, divided into leptons and quarks. The weak isospin ~T and hyper-charge Y are given for each group.

Fermions are classified by the charges they carry and, consequently, by the interactions they undergo. Color-carrying fermions are called quarks, while those with zero color charge are called leptons.

The quarks have been given the names up, down, charm, strange, top and bottom (sometimes referred to by their initials: u, d, c, s, t, b), and the charged leptons are called electron (e), muon (µ) and tau lepton (τ). Each charged lepton has an uncharged companion, a neutrino (ν).

Since the weak interaction couples with left-handed particles only, all fermions are divided into isospin-1/2 left-handed doublets and isospin-0 right-handed singlets. Three quark doublets, as well as three lepton doublets, have been observed, while there are six quark singlets and three lepton singlets. Right-handed neutrinos may not exist at all since, in the framework of the SM, there would be no way for them to interact with other particles.

The electromagnetic and weak interactions have been unified in a single Electroweak (EW) theory which describes the interactions of quarks and leptons. The EW theory is built using a Lagrangian invariant under transformations belonging to the SU(2) × U(1) symmetry group. Each particle is identified using the quantum numbers (|~T|, T₃) for the SU(2)_L group and Y for the U(1) group; ~T is called weak isospin and Y hyper-charge. The Standard Model gauge symmetry is:

G_SM = SU(3)_C ⊗ SU(2)_I ⊗ U(1)_Y (1.1)

where SU(3)_C describes the color symmetry of strong interactions, SU(2)_I describes the weak isospin I of the unified electroweak interactions, and U(1)_Y the invariance under hypercharge transformations.

### 1.1.1 Weak interactions

The weak interaction is mediated by massive vector particles, the charged W± and the neutral Z⁰ bosons, which couple to all the quarks and leptons. The former couple to each pair of quarks and leptons with a universal coupling strength α_w, since they all belong to the doublet representation of SU(2). The weak propagator is

α_w / (Q² − M_W²) (1.2)

and the Fourier transform of this gives the weak potential

V_W = (α_w / r) e^{−r M_W} (1.3)

which proves that the weak interaction is restricted to a short range ∼ 1/M_W.
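To put a number on that range, one can convert 1/M_W into a length using ħc ≈ 0.1973 GeV·fm; the constant and the W mass below are standard values, not quoted in the text:

```python
# Convert the range ~1/M_W into femtometres using hbar*c ~ 0.1973 GeV*fm.
hbar_c = 0.1973   # GeV * fm (standard value, assumed here)
M_W = 80.4        # GeV (measured W mass, assumed here)
range_fm = hbar_c / M_W
print(range_fm)   # ~2.5e-3 fm, far smaller than a proton radius (~0.84 fm)
```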

The weak and the electromagnetic interactions have been successfully unified into a SU(2) ⊗ U(1) gauge theory by Glashow, Salam and Weinberg. According to this theory the weak and the electromagnetic interactions have a similar coupling strength

α_w = α / sin²θ_w (1.4)

where sin²θ_w represents the mixing between the two gauge groups. This parameter can be determined from the relative rates of charged and neutral current (W± and Z⁰ exchange) neutrino scattering, giving

sin²θ_w ≈ 1/4 → α_w ≈ 1/32 (1.5)

which shows that the weak interaction is in fact stronger than the electromagnetic interaction; its relative weakness at low energy is due to the additional M_W² term in the propagator. This suppression is a transient phenomenon, which goes away at a high energy scale (Q² ∼ M_W²), where the rates of weak and EM interactions become similar.

The first direct predictions of this theory were the masses of the W and Z bosons, which can be obtained from the µ decay amplitude α_w/M_W². The observed µ decay rate gives

G_F = (π/√2) (α_w/M_W²) = 1.17 × 10⁻⁵ GeV⁻² (1.6)

from which we find

M_W ≈ 80 GeV → M_Z = M_W / cos θ_w ≈ 91 GeV (1.7)
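A quick back-of-the-envelope check of (1.6) and (1.7), using the rounded inputs quoted above (sin²θ_w ≈ 1/4, G_F = 1.17 × 10⁻⁵ GeV⁻²). These inputs land near 74 and 86 GeV, in the right ballpark of the quoted values; the gap closes once the measured sin²θ_w ≈ 0.23 and the running of α are used:

```python
import math

alpha = 1 / 137.0        # fine structure constant (low-energy value)
sin2_tw = 0.25           # sin^2(theta_w) ~ 1/4 as quoted in the text
G_F = 1.17e-5            # Fermi constant, GeV^-2

alpha_w = alpha / sin2_tw                      # eq. (1.4), ~1/32
# Invert eq. (1.6): G_F = (pi / sqrt(2)) * alpha_w / M_W^2
M_W = math.sqrt(math.pi * alpha_w / (math.sqrt(2) * G_F))
M_Z = M_W / math.sqrt(1.0 - sin2_tw)           # M_Z = M_W / cos(theta_w)
print(round(M_W, 1), round(M_Z, 1))            # roughly 74 and 86 GeV
```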

### 1.1.2 The Higgs mechanism

One of the hardest problems in the Standard Model is how to generate the masses of the weak gauge bosons (as well as those of quarks and leptons) without breaking the gauge symmetry of the Lagrangian, which is essential for a renormalizable field theory.

The Higgs mechanism does it by adding a scalar (spin 0) field φ to the Lagrangian density; for simplicity's sake we will show an example of this for an EM-like Lagrangian

L = (∂_µ − ıeA_µ)φ* (∂^µ + ıeA^µ)φ − µ²φ*φ − λ(φ*φ)² − (1/4) F_µν F^µν (1.8)

where A_µ is a photon-like field and F_µν = ∂_µA_ν − ∂_νA_µ the corresponding field tensor. The middle terms represent the scalar mass and self-interaction, while the first one represents the scalar kinetic energy and gauge interaction.

Figure 1.1: The Mexican hat potential.

Both the scalar mass and self-interaction terms are invariant under the transformation

φ → e^{ıα(x)} φ (1.9)

that generates U(1). The kinetic term contains ∂_µφ and is not invariant under this phase transformation, but can be made so by redefining

A_µ → A_µ + (1/e) ∂_µα(x) (1.10)

We cannot give mass to A_µ by adding an explicit mass term M²A_µA^µ because it would not be phase-invariant. The mass term of the scalar field, µ²φ*φ, is invariant; we can make A_µ acquire a mass by making it interact with φ.

If µ² < 0 the scalar potential

V = µ²φ*φ + λ(φ*φ)² (1.11)

has a minimum for |φ| = √(−µ²/2λ) = v. V is the so-called Mexican hat potential: it has a rotational symmetry, but the ground state breaks this symmetry. Since perturbative expansions must be carried out around a point of minimum energy (stable equilibrium), we move the origin to this point and do perturbation theory with the redefined field

h(x) = φ(x) − v (1.12)

which represents the physical scalar (the Higgs boson). If we substitute this in the Lagrangian we find that A_µ has gained a mass term

e²v² A_µA^µ (1.13)

with mass M = ev. The scalar field h also has a mass:

M_h = √(−µ²) = (√(2λ)/e) M (1.14)

In the Higgs model an analogous mechanism is used to give mass to the W and Z bosons.
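The location of the symmetry-breaking minimum can be checked numerically for µ² < 0 and a positive quartic coupling; the values of µ² and λ below are illustrative, not physical:

```python
import math

mu2, lam = -1.0, 0.5                 # illustrative values with mu^2 < 0
v = math.sqrt(-mu2 / (2.0 * lam))    # predicted minimum |phi| = sqrt(-mu^2 / 2 lambda)

def V(phi):
    """Scalar potential V = mu^2 phi^2 + lambda phi^4 along the real direction."""
    return mu2 * phi**2 + lam * phi**4

eps = 1e-3
assert V(v) < V(v + eps) and V(v) < V(v - eps)  # v is a local minimum
assert V(v) < V(0.0)                            # lower than the symmetric point phi = 0
print(v, V(v))   # 1.0 -0.5
```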

### 1.1.3 Beyond the Standard Model

To date no significant discrepancy between the theory and the experimental observations has been found. Nonetheless, there are numerous reasons why physicists believe the SM cannot be the final theory of elementary particle interactions. Some of the most pressing unresolved questions are:

Massive Neutrinos The combined data from the SNO and Super-Kamiokande neutrino observatories provide strong evidence that neutrinos of one flavor oscillate into another. This implies that neutrinos are massive and that the cross-coupling is related to a mixing matrix. This is not described by the SM, although it could be incorporated through an ad-hoc extension.

Origin of Three Generations There is no theoretical constraint on the number of generations; ordinary matter requires just the first generation, and the extra generations may point to a higher group structure.

Dark Matter Astronomical observations have shown that the universe contains a significant amount of matter which does not emit observable radiation. This is expected to be massive, non-baryonic, weakly interacting matter, but no such particle type exists within the SM.

Origin of CP Violation This phenomenon has been observed and incorporated into the SM by the introduction of the CKM matrix with a non-zero phase; however, the values of the couplings and the phase are all free parameters in the SM.

Possibly Absent Higgs All the elementary particles of the SM have been discovered except the Higgs boson. Global fits to measured electroweak data suggest a Higgs mass smaller than 100 GeV, a region already excluded by direct searches. If the Higgs is not discovered, some other mechanism for EW symmetry breaking must be found.

The Hierarchy Problem As it stands, the SM Higgs boson, which is a scalar field, is subject to loop corrections to its self-energy which are quadratically divergent. This means that without extreme fine tuning, the Higgs mass would be expected to run to a scale 30 orders of magnitude greater than the O(TeV) scale needed for compatibility with existing electroweak measurements. A theoretical mechanism to keep the Higgs mass at the same hierarchy as the rest of EW physics would improve the model.

Unification Precision data from LEP and SLC show that the coupling constants of the weak, strong and electromagnetic interactions do not unify at the same point at high energy scales unless some yet undiscovered phenomenon modifies their behavior.

Gravity A theory which aims at accounting for all interactions should also describe gravity.

To address some of these questions, new theories have been proposed which would include SM physics but also explain phenomena outside its reach. These theories are collectively referred to as Physics Beyond the Standard Model.

Supersymmetry The SUSY model assumes a symmetry exists between fermions and bosons, predicting the existence of spin-0 partners to the known leptons and quarks (referred to as sfermions) as well as spin-1/2 partners to the bosons (called gauginos). It is a maximal extension of the Lorentz group and provides a solution to the hierarchy problem, because contributions from super-particles automatically cancel the diverging loop corrections. SUSY can also change the coupling constant evolution so that all the interactions unify at the same scale.

Technicolor This theory provides a dynamical mechanism for breaking electroweak symmetry and so is a replacement for the Higgs mechanism. It assumes the existence of technifermions, which carry technicolor charge and interact strongly.

Large Extra Dimensions Recent theoretical work has explored the possible existence of large extra dimensions of O(mm). These theories postulate that while the SM is confined to a four-dimensional plane, gravity is allowed to propagate in all dimensions. This reduces the Planck scale to the electroweak scale, which allows quantum gravity to be observed at O(TeV), and solves the hierarchy problem.

### 1.2 Quantum Chromo Dynamics (QCD)

The concept of quark was first introduced by Gell-Mann in his Eightfold Way, which organized the mesons and spin-1/2 baryons into octets and the spin-3/2 baryons into a decuplet. This model treated the strong force as invariant under flavor transformations: octets and decuplets were representations of the underlying symmetry group.

Shortly after, Greenberg introduced the notion of color charge to explain how quarks could coexist inside some hadrons in otherwise identical quantum states without violating the Pauli exclusion principle.

The parton model was born in the late 60's to explain some features of deep inelastic scattering experiments. Leptons were scattered off a nucleon and it was observed that the cross section of such events was only weakly dependent on the magnitude of the squared four-momentum transfer (Q²).

Figure 1.2: Graphical representation of Gell-Mann's Eightfold Way [5].

The interpretation of this dependence was that, rather than scattering coherently off the proton as a whole, the electron scatters incoherently off point-like particles inside the proton. These were generically labeled partons and were subsequently identified by Feynman as the quarks familiar to us today.

In the initial version of the model the proton is assumed to consist of a set of partons moving parallel to the direction of the proton. Each parton has a fractional electric charge.

From this simple description and using parallels with QED, some remarkable predictions can be made. The structure of the proton is described by two independent structure functions F₁ and F₂; these functions, initially dependent on two scalars, Q² and x, were postulated by Bjorken to be independent of Q² at large Q (Bjorken scaling). Here x is the fraction of the proton momentum carried by the struck parton

x = Q² / (2 P · q) (1.15)

where P is the four-momentum of the initial proton and q is the four-momentum of the photon mediating the scatter.
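For a feel of these variables, here is a toy fixed-target event with invented numbers (not from the text), using the standard definitions Q² = 2 E E' (1 − cos θ) and x = Q²/(2P·q):

```python
import math

# Invented fixed-target kinematics: electron E -> E_out at angle theta,
# proton (mass M) at rest. Standard relations:
#   Q^2 = 2 E E_out (1 - cos theta),   x = Q^2 / (2 M (E - E_out))
E, E_out, theta = 20.0, 15.0, math.radians(6.0)   # GeV, GeV, rad (toy values)
M = 0.938                                          # proton mass, GeV

Q2 = 2.0 * E * E_out * (1.0 - math.cos(theta))
nu = E - E_out                     # energy transferred to the proton
x = Q2 / (2.0 * M * nu)
print(round(Q2, 2), round(x, 2))   # 3.29 0.35
```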

The hypothesis that quarks can be thought of as free particles is of course a simplification. Additionally, it was observed that only about half of the proton's momentum was accounted for by the quarks. A large fraction of the proton's momentum was carried by something not seen directly in the DIS experiments. The carrier of this missing momentum was identified to be the gluon.

### 1.2.1 Strong force

There are three color charges in QCD: red, blue and green¹. These color charges are analogous to the (single) electric charge in QED, with one exception: in QED the electromagnetic force is mediated by the photon, which itself carries no electromagnetic charge; in QCD the force is mediated by gluons, which do carry a color charge.

The fact that the gluon can self-couple and the photon cannot gives rise to the different behaviors of the two forces as a function of Q: whilst the coupling constant of the electromagnetic force α(Q) increases with Q, that of the strong force αs(Q) decreases.

It is because of this behavior of the strong force that for sufficiently high Q, quarks and gluons inside a proton can be treated as effectively free particles; this is known as asymptotic freedom.
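The running that produces asymptotic freedom can be illustrated with the standard one-loop formula α_s(Q) = 12π / ((33 − 2n_f) ln(Q²/Λ²)), a textbook expression not derived in the text; the values of Λ and n_f below are illustrative choices:

```python
import math

def alpha_s(Q, n_f=5, lam=0.2):
    """One-loop running strong coupling; Q and lam (Lambda_QCD) in GeV."""
    return 12.0 * math.pi / ((33.0 - 2.0 * n_f) * math.log(Q**2 / lam**2))

for Q in (2.0, 10.0, 100.0):
    print(Q, alpha_s(Q))   # the coupling falls as Q grows: asymptotic freedom
```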

The more immediate consequence, however, is that quarks and gluons are conned to exist only inside colorless objects known collectively as hadrons.

When particles with color charge are separated, the force between the two particles increases and the energy in the field between them grows to the point that a color/anti-color pair can be produced from the vacuum.

¹The term color was chosen because the abstract property to which it refers has three charges that can be compared to the primary colors, in that when an object contains all of them in the same proportions it becomes colorless (white).

Each hadron contains such numbers of quarks as to possess no overall color charge; as a consequence we can classify hadrons in two families: baryons, made of three quarks or three anti-quarks (either q_R q_G q_B or q̄_R q̄_G q̄_B), and mesons, made of q−q̄ pairs (q_R q̄_R, q_G q̄_G or q_B q̄_B). So far no substantial evidence has ever been gathered for other colorless combinations of quarks such as tetraquarks (q̄qq̄q) or penta-quarks (qqqq̄q).

### 1.2.2 The QCD improved parton model

In the Bjorken model the probing particle sees the quarks inside the nucleon target as quasi-free, because their QCD interaction time is much longer than τ_γ ∼ 1/Q, the duration of the virtual photon interaction. As a result, the structure functions are proportional to the density of partons with fraction x of the nucleon momentum, weighted with the squared charge. The quark charges were derived from the data on the electron and neutrino structure functions F ∼ 2F₁ ∼ F₂/x:

F_ep = 4/9 u(x) + 1/9 d(x) + ...; F_en = 4/9 d(x) + 1/9 u(x) + ...

F_νp = F_ν̄n = 2d(x) + ...; F_νn = F_ν̄p = 2u(x) + ... (1.16)

where u(x), d(x) are the parton number densities in the proton (with fraction x of the proton longitudinal momentum), which, in the scaling limit, do not depend on Q². The normalization of the structure functions and the parton densities are such that the charge relations hold:

∫₀¹ [u(x) − ū(x)] dx = 2, ∫₀¹ [d(x) − d̄(x)] dx = 1, ∫₀¹ [s(x) − s̄(x)] dx = 0 (1.17)
It was also proved by experiment that at values of Q² of a few GeV², in the scaling region, about half of the nucleon momentum, given by the momentum sum rule

∫₀¹ [Σ_i (q_i(x) + q̄_i(x)) + g(x)] x dx = 1 (1.18)

is carried by neutral partons (gluons). These results are maintained in QCD at leading order and allow one to confirm the quantum numbers of the quarks. Also, the observation that R = σ_L/σ_T → 0 implies that the charged partons have spin 1/2.
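The counting sum rules (1.17) can be verified numerically for a toy set of valence densities; the functional forms and exponents below are illustrative, not fitted parton distributions:

```python
import math

def beta(a, b):
    """Euler beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# Toy valence densities u_v(x) = N_u x^0.5 (1-x)^3 and d_v(x) = N_d x^0.5 (1-x)^4,
# normalized analytically so that int u_v dx = 2 and int d_v dx = 1 (quark counting).
N_u = 2.0 / beta(1.5, 4.0)
N_d = 1.0 / beta(1.5, 5.0)

# Independent cross-check by midpoint-rule integration.
n = 100_000
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]
u_sum = h * sum(N_u * x**0.5 * (1 - x) ** 3 for x in xs)
d_sum = h * sum(N_d * x**0.5 * (1 - x) ** 4 for x in xs)
print(round(u_sum, 3), round(d_sum, 3))   # 2.0 1.0
```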

In QCD there are log scaling violations induced by αs(Q). The parton rules

just introduced can be summarized in the formula F (x, t) = ˆ 1 x dyq0(y) y σpoint( x y, αs(Q)) + o( 1 Q2) (1.19)

Before QCD corrections, σ_point = e²δ(x/y − 1) and F = e²q₀(x) (here we consider a single flavor of quark with charge e). QCD modifies σ_point at order αs via diagrams such as gluon loops and soft gluon emission; some of these diagrams are shown in figure 1.4.

Figure 1.4: Diagrams that contribute to σ_point.

Note that the integral runs from x to 1, because energy can only be lost by radiation before the interaction with the photon (which eventually has to find a fraction x). From a direct computation of the diagrams one obtains a result of the following form:

$$ \sigma_{point}(z, \alpha_s(Q)) \simeq e^2\left[\delta(z-1) + \frac{\alpha_s}{2\pi}\bigl(t \cdot P(z) + f(z)\bigr)\right] \tag{1.20} $$
For y > x the correction arises from diagrams with real gluon emission. The log arises from the virtual quark propagator. Actually the log should be read as log(Q²/m²), because in the massless limit a genuine mass singularity appears. But in correspondence with the initial quark we have the (bare) quark density q₀(y) that appears in the convolution integral. This is a non-perturbative quantity determined by the nucleon wave function. So we can factorize the mass singularity into a redefinition of the quark density: we replace q₀(y) → q(y, t) = q₀(y) + Δq(y, t), with

$$ \Delta q(x,t) = \frac{\alpha_s}{2\pi}\, t \int_x^1 \frac{dy}{y}\, q_0(y)\, P\!\left(\frac{x}{y}\right) \tag{1.21} $$

where the factor of t stands for log(Q²/km²), with k a constant.

### 1.2.3 The QCD evolution equations

The effective parton density q(y, t) that we have defined is now scale dependent. In terms of this scale-dependent density we have the following relations, where we have also replaced the fixed coupling with the running coupling, as can be proved to be appropriate:

$$ F(x,t) = \int_x^1 \frac{dy}{y}\, q(y,t)\, e^2\left[\delta\!\left(\frac{x}{y}-1\right) + \frac{\alpha_s(Q)}{2\pi}\, f\!\left(\frac{x}{y}\right)\right] = e^2 q(x,t) + o(\alpha_s(Q)) \tag{1.22} $$

$$ \frac{d}{dt}\, q(x,t) = \frac{\alpha_s(Q)}{2\pi} \int_x^1 \frac{dy}{y}\, q(y,t)\, P\!\left(\frac{x}{y}\right) + o(\alpha_s(Q)^2) \tag{1.23} $$
We see that at lowest order we reproduce the naive parton-model formulae for the structure functions, in terms of effective parton densities that are scale dependent. The evolution equations for the parton densities are written in terms of kernels (the "splitting functions") that can be expanded in powers of the running coupling. At leading order, we can interpret the evolution equation by saying that the variation of the quark density at x is given by the convolution of the quark density at y with the probability of emitting a gluon, leaving a fraction x/y of the quark momentum.

Up to this point we have implicitly restricted our attention to non-singlet (under the flavor group) structure functions. The Q² evolution equations become non-diagonal as soon as we take into account the presence of gluons in the target. In fact, the quark which is seen by the photon can be generated by a gluon in the target:

$$ \frac{d}{dt}\, q_i(x,t) = \frac{\alpha_s(Q)}{2\pi}\,[q_i \otimes P_{qq}] + \frac{\alpha_s(Q)}{2\pi}\,[g \otimes P_{qg}] \tag{1.24} $$

where we introduced the shorthand notation

$$ [q \otimes P] = [P \otimes q] = \int_x^1 \frac{dy}{y}\, q(y,t)\, P\!\left(\frac{x}{y}\right) \tag{1.25} $$

The convolution, like the ordinary product, is commutative. At leading order, the interpretation of the equation is simply that the variation of the quark density is due to the convolution of the quark density at a higher energy with the probability of finding a quark in a quark (with the right energy fraction), plus the gluon density at a higher energy times the probability of finding a quark (of the given flavor i) in a gluon.

The evolution equation for the gluon density, needed to close the system, can be obtained by suitably extending the same line of reasoning to a gedanken probe sensitive to color charges, for example a virtual gluon. The resulting equation is of the form:

$$ \frac{d}{dt}\, g(x,t) = \frac{\alpha_s(Q)}{2\pi}\left[\sum_i (q_i + \bar q_i) \otimes P_{gq}\right] + \frac{\alpha_s(Q)}{2\pi}\,[g \otimes P_{gg}] \tag{1.26} $$

These are the QCD evolution equations for parton densities, also called the DGLAP (Dokshitzer, Gribov, Lipatov, Altarelli and Parisi), GLAP or AP equations. The explicit form of the splitting functions at lowest order can be derived directly from the QCD vertices, so that they are a characteristic of the theory and do not depend on the particular process yielding the parton density:

$$ P_{qq} = \frac{4}{3}\left[\frac{1+x^2}{(1-x)_+} + \frac{3}{2}\,\delta(1-x)\right] + o(\alpha_s) \tag{1.27} $$

$$ P_{gq} = \frac{4}{3}\,\frac{1+(1-x)^2}{x} + o(\alpha_s) \tag{1.28} $$

$$ P_{qg} = \frac{1}{2}\left[x^2 + (1-x)^2\right] + o(\alpha_s) \tag{1.29} $$

$$ P_{gg} = 6\left[\frac{x}{(1-x)_+} + \frac{1-x}{x} + x(1-x)\right] + \frac{33-2n_f}{6}\,\delta(1-x) + o(\alpha_s) \tag{1.30} $$

where the "+" distribution is defined as:

$$ \int_0^1 \frac{f(x)}{(1-x)_+}\,dx = \int_0^1 \frac{f(x)-f(1)}{1-x}\,dx \tag{1.31} $$

The δ(1 − x) terms arise from the virtual corrections to the lowest-order tree diagrams. Their coefficients can be obtained simply by imposing the validity of the charge and momentum sum rules. In fact, from the requirement that the charge sum rules are not affected by the Q² dependence, one derives

$$ \int_0^1 P_{qq}(x)\,dx = 0 \tag{1.32} $$

which can be used to fix the coefficient of the δ(1−x) term of P_qq. Similarly, by taking the t-derivative of the momentum sum rule and imposing its vanishing for generic q_i and g, one obtains:

$$ \int_0^1 \left[P_{qq}(x) + P_{gq}(x)\right] x\,dx = 0, \qquad \int_0^1 \left[2 n_f P_{qg}(x) + P_{gg}(x)\right] x\,dx = 0 \tag{1.33} $$
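As a concrete cross-check, the sum rules 1.32 and 1.33 can be verified numerically from the splitting functions 1.27-1.30, treating the "+" distribution through the subtraction of eq. 1.31. A self-contained sketch; the choice n_f = 5 is arbitrary, since the rules hold for any number of flavors:

```python
# Numerical check of the charge sum rule (eq. 1.32) and momentum sum
# rules (eq. 1.33) for the LO splitting functions (eqs. 1.27-1.30).

NF = 5          # number of active flavors (arbitrary choice for the check)
CF = 4.0 / 3.0  # color factor appearing in Pqq and Pgq

def midpoint(f, n=20000):
    """Midpoint-rule integral of f over (0, 1)."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

def plus_int(f, n=20000):
    """Integral of f(x)/(1-x)_+ over (0, 1), via the subtraction of eq. 1.31."""
    f1 = f(1.0)
    return midpoint(lambda x: (f(x) - f1) / (1.0 - x), n)

# charge sum rule: regular "+" part plus the (3/2) delta(1-x) coefficient
qq_charge = CF * (plus_int(lambda x: 1 + x * x) + 1.5)

# momentum sum rules: first moments of the four splitting functions
x_Pqq = CF * (plus_int(lambda x: x * (1 + x * x)) + 1.5)
x_Pgq = midpoint(lambda x: x * CF * (1 + (1 - x) ** 2) / x)
x_Pqg = midpoint(lambda x: x * 0.5 * (x * x + (1 - x) ** 2))
x_Pgg = (6.0 * (plus_int(lambda x: x * x)
                + midpoint(lambda x: (1 - x) + x * x * (1 - x)))
         + (33 - 2 * NF) / 6.0)

print(qq_charge)               # ~ 0 (eq. 1.32)
print(x_Pqq + x_Pgq)           # ~ 0 (first rule of eq. 1.33)
print(2 * NF * x_Pqg + x_Pgg)  # ~ 0 (second rule of eq. 1.33)
```

Note that after the subtraction of eq. 1.31 every integrand is smooth, so a plain midpoint rule suffices.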

Beyond leading order, a precise definition of the parton densities should be specified. Once the definition of the parton densities is fixed, the coefficients that relate the different structure functions to the parton densities at each fixed order can be computed [6, 7, 8].

### 1.2.4 Hadron-hadron collisions and the factorization theorem

At present DIS still remains very important for quantitative studies and tests of QCD. The scaling violations are clearly observed by experiment and their pattern is very well reproduced by QCD fits at NLO. These fits provide an impressive confirmation of a quantitative QCD prediction. The measurement of quark and gluon densities in the nucleon, as functions of x at some reference value of Q², is performed in DIS processes. At the same time one measures αs(Q²), and the DIS values of the running coupling can be compared with those obtained from other processes.

The parton densities defined and measured in DIS are instrumental to compute hard processes initiated by two colliding hadrons via the factorization theorem, which is based on diagrammatic techniques. Suppose we have a hadronic process of the form h₁ + h₂ → C + X, where the hᵢ are hadrons and C is some triggering particle, or pair of particles, which specifies the large scale Q² relevant for the process, in general somewhat, but not much, smaller than s. For example, in pp or pp̄ collisions, C can be anything from a W or a Z to a pair of quarks. By X we mean a totally inclusive collection of final-state hadrons. The factorization theorem states that for the total cross section, or some other sufficiently inclusive distribution, we can write, apart from power-suppressed corrections, the expression:

$$ \sigma(s,\tau) = \sum_{AB} \int dx_1\, dx_2\; p_{1A}(x_1, Q^2)\, p_{2B}(x_2, Q^2)\, \sigma_{AB}(\alpha_s(Q), x_1 x_2 s, \tau) \tag{1.34} $$

Here τ = Q²/s is a scaling variable, p_{iY} are the densities for a parton of type Y inside the hadron i, and σ_{AB} is the partonic cross section for parton A + parton B → C + X′, computable in perturbation theory. By X′ we denote a totally inclusive collection of quarks and gluons. This result is based on the fact that the mass singularities associated with the initial legs are of universal nature, so that one can reproduce the same modified parton densities by absorbing these singularities into the bare parton densities.
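The structure of eq. 1.34, a double convolution of two parton densities with a partonic cross section, can be illustrated with a toy numerical evaluation. The density p(x) = 6x(1−x) and the partonic cross section σ̂ = 1/ŝ below are invented purely for illustration and carry no physics:

```python
# Toy numerical evaluation of the factorization formula (eq. 1.34),
# with invented inputs (not real PDFs or matrix elements).

S = 1.0e6  # toy hadronic s, arbitrary units

def pdf(x):
    """Invented parton density, the same for both hadrons."""
    return 6.0 * x * (1.0 - x)

def sigma_hat(s_hat):
    """Invented partonic cross section as a function of s_hat = x1*x2*S."""
    return 1.0 / s_hat

def convolve(n=400):
    """Midpoint-rule double integral over x1, x2 of p(x1) p(x2) sigma_hat."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x1 = (i + 0.5) * h
        for j in range(n):
            x2 = (j + 0.5) * h
            total += pdf(x1) * pdf(x2) * sigma_hat(x1 * x2 * S) * h * h
    return total

sigma = convolve()
# for these toy inputs the integral is analytic: 36 * (1/2)^2 / S = 9 / S
```

The point is only the shape of the computation: sum over parton types becomes a single term here, and the hard cross section is evaluated at the partonic invariant x₁x₂s.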

A cutoff µ_F is introduced, below which all soft gluon effects are absorbed into the parton distributions, while processes above it are considered part of the hard scattering. The singularities are removed, but the parton distributions become dependent upon the factorization scale.

If the final product is hadronic in nature, we would have to take into account the possibility that the final product C arises from fragmentation, and introduce another scale M, the fragmentation scale, to absorb gluon emission into the fragmentation function.

Once the parton densities and αs are known from other measurements, the prediction of the rate for a given hard process is obtained with little ambiguity (e.g. from scale dependence or hadronization effects). The QCD evolution equations are essential in order to evolve the measured parton densities from one scale Q to a different one. The next-to-leading-order calculation of the partonic cross section is needed in order to correctly specify the scale and, in general, the definition of the parton densities and of the running coupling in the leading term.

What the factorization theorem does is separate the problem into a perturbative hard scatter and two non-perturbative parts (the parton distributions and the fragmentation functions) that must be measured experimentally.

Another cutoff comes from the fact that we do not perform the calculation for the emission of real quarks and gluons to all orders, and infinities that were present in the integration over parton momenta, specifically those in quark and gluon loops, no longer cancel. The calculation must be regulated through a procedure called renormalization. The effect of this is to introduce yet another artificial scale µ_R into which these infinities are absorbed.

The residual scale and renormalization-scheme dependence is often the most important source of theoretical error; the expression for the cross section becomes:

$$ \sigma(s,\tau) = \sum_{AB} \int dx_1\, dx_2\; p_{1A}(x_1, Q^2, \mu_F)\, p_{2B}(x_2, Q^2, \mu_F)\, \sigma_{AB}(\alpha_s(Q), x_1 x_2 s, \tau, \mu_R) \tag{1.35} $$

These scales are often chosen to be the same (to avoid introducing an unphysical hierarchy) and are commonly set to the pT of one of the final-state particles.

### Chapter 2

## The direct photon channel

### 2.1 Introduction

There are two first-order (ααs) diagrams that contribute to prompt photon production: Compton scattering (qg → qγ) and qq̄ annihilation (see fig. 2.1a and 2.1b). In these events the photon and the parton are produced approximately back-to-back in the transverse plane.

Some higher-order sub-processes contributing to direct photon production are shown in fig. 2.1c. These are processes with extra gluons radiated from initial- or final-state partons, or processes with gluon loops as corrections to the original leading-order processes.

Because of the abundance of gluons with a low fraction of the proton momentum (x < 0.1), Compton scattering is the dominant QCD mechanism contributing to prompt photon production over most of the kinematic region in pp collisions.

Another contribution to the direct photon cross section comes from the gg → γg process; it is third order in αs and its magnitude is negligible with respect to the others.

Figure 2.1: Direct photon production sub-processes: Compton scattering (a), annihilation (b), some NLO diagrams (c), bremsstrahlung (d).

Figure 2.2: Some results on direct photons by D0: the measurement of the differential cross section dσ/dη_γ dη_jet (left) and its comparison to the NLO theoretical prediction (right).

High-momentum photons can also be produced by bremsstrahlung, the radiation of fairly high-pT photons from final-state quarks in a dijet event (fig. 2.1d). Bremsstrahlung events are not considered prompt photons, as they do not come directly from the hard interaction vertex, but the existence of this production mechanism affects the way in which direct photons are measured and modeled theoretically.

Bremsstrahlung photons tend to be collinear with the quark from which they are radiated. The isolation criterion, routinely used in collider physics to reduce hadronic background, also has the effect of suppressing bremsstrahlung events. This is based on the expectation that direct photons are fairly isolated in the detector, while photons radiated off final-state partons usually have quite a few hadrons in their vicinity, coming from fragmentation products of the outgoing parton [9].

Early studies of prompt photon production were carried out at the ISR collider [10, 11, 12]. Subsequent studies, for example [13, 14, 15], further established prompt photons as a useful probe of parton interactions. More recent measurements at hadron colliders were performed at the Tevatron, in pp̄ collisions at a center-of-mass energy √s = 1.96 TeV. The measurement by the D0 Collaboration [16] is based on 326 pb⁻¹ and covers a pseudorapidity range |ηγ| < 0.9 and a transverse energy range 23 < ET < 300 GeV (see figure 2.2), while the measurement by the CDF Collaboration [17] is based on 2.5 fb⁻¹ and covers a pseudorapidity range |η| < 1.0 and a transverse energy range 30 < ET < 400 GeV. Both D0 and CDF measure an isolated prompt photon cross section in agreement with next-to-leading-order (NLO) perturbative QCD calculations, with a slight excess seen by CDF in their data between 30 and 50 GeV (see figure 2.3). Recent measurements of the inclusive prompt photon

Figure 2.3: Measurement of the inclusive cross section dσ/dpT dηγ at CDF and comparison with the theoretical prediction.

production cross section have also been performed in ep collisions, both in photoproduction and deep inelastic scattering, by the H1 [18, 19] and ZEUS [20, 21] collaborations.

### 2.2 The theory

The lowest-order form of the direct photon cross section d²σ_direct/dpT dη depends on the parton density functions F(x, pT):

$$ \frac{d^2\sigma_{direct}}{dp_T\,d\eta} = \sum_{i,j=q,g} \int dx_1\, dx_2\; F_i(x_1, p_T)\, F_j(x_2, p_T)\, \frac{d^2\sigma_{i,j\to\gamma}}{dp_T\,d\eta} \tag{2.1} $$

The parton density functions F_i, F_j are convoluted with the partonic cross section d²σ_{i,j}/dpT dη [22].

The bremsstrahlung contribution d²σ_brem/dpT dη can be expressed as follows,

$$ \frac{d^2\sigma_{brem}}{dp_T\,d\eta} = \sum_{i,j,k=q,g} \int dx_1\, dx_2\; F_i(x_1, p_T)\, F_j(x_2, p_T)\, D_{\gamma/k}(z, p_T)\, \frac{d^2\sigma_{i,j}}{dp_T\,d\eta} \tag{2.2} $$

using the general factorized cross sections d²σ_{i,j} for all the possible two-body QCD sub-processes. The parton-parton cross sections d²σ_{i,j} are convoluted with the parton density functions F_i, F_j and with the fragmentation functions D(z, pT) that give the probability to radiate a photon from the final state [23].

D_{γ/k} depends on z = p_γ/p_k, the longitudinal momentum fraction of the parent parton carried by the bremsstrahlung photon:

$$ D_{\gamma/q} = e_q^2\, \frac{\alpha}{2\pi z}\left[1 + (1-z)^2\right] \ln\frac{p_T^2}{\Lambda^2} \tag{2.3} $$

where e_q is the quark charge and Λ the QCD scale parameter. Of course D_{γ/g}, the probability for a gluon to radiate a photon, is zero at this order.
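Equation 2.3 is easy to evaluate directly. A minimal sketch, where the quark charge (up-type, e_q = 2/3), the scale pT = 50 GeV and Λ = 0.2 GeV are illustrative assumptions:

```python
import math

ALPHA = 1.0 / 137.0  # fine-structure constant
LAMBDA_QCD = 0.2     # QCD scale parameter in GeV (assumed value)

def d_gamma_q(z, pt, e_q):
    """Quark-to-photon fragmentation function of eq. 2.3."""
    return (e_q ** 2 * ALPHA / (2.0 * math.pi * z)
            * (1.0 + (1.0 - z) ** 2)
            * math.log(pt ** 2 / LAMBDA_QCD ** 2))

# the 1/z behavior means soft photons (low z) dominate over hard ones
print(d_gamma_q(0.2, 50.0, 2.0 / 3.0), d_gamma_q(0.8, 50.0, 2.0 / 3.0))
```

The logarithmic growth with pT is what makes the bremsstrahlung contribution non-negligible at collider scales despite the extra power of α.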

### 2.2.1 Cross section comparison

We now compare the leading-order Compton and annihilation contributions with the dominant processes in the 2 → 2 jet cross section:

$$ gq \to \gamma q: \qquad -\frac{\pi\alpha\alpha_s e_q^2}{3\hat s^2}\left(\frac{\hat t}{\hat s} + \frac{\hat s}{\hat t}\right) \tag{2.4} $$

$$ q\bar q \to \gamma g: \qquad \frac{8\pi\alpha\alpha_s e_q^2}{9\hat s^2}\left(\frac{\hat u}{\hat t} + \frac{\hat t}{\hat u}\right) \tag{2.5} $$

$$ gg \to gg: \qquad \frac{9\pi\alpha_s^2}{2\hat s^2}\left(3 - \frac{\hat t\hat u}{\hat s^2} - \frac{\hat s\hat u}{\hat t^2} - \frac{\hat s\hat t}{\hat u^2}\right) \tag{2.6} $$
In the central region (η ∼ 0) the Mandelstam variables are such that ŝ/2 = −t̂ = −û. The ratio of the Compton to the annihilation process is given by the ratio of the gluon and antiquark distributions, together with the factors in equations 2.4 and 2.5.

As for the QCD background we will neglect all but the main contribution: the gg → gg process. At the LHC αs ∼ 17α; quark charges contribute another factor of 10 to the cross section, and the relative parton densities a factor of 5. This leads us to expect a jet rate at least three orders of magnitude larger than for direct photons.
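Multiplying the three enhancement factors just quoted gives the stated order of magnitude (a back-of-the-envelope sketch, not a cross-section calculation):

```python
# Rough jet-to-photon rate estimate from the three factors quoted above.
coupling_ratio = 17   # alpha_s / alpha at LHC scales
charge_factor = 10    # quark-charge factors in the matrix elements
density_factor = 5    # relative parton densities (gluon vs. antiquark)

jet_to_photon = coupling_ratio * charge_factor * density_factor
print(jet_to_photon)  # 850, i.e. close to three orders of magnitude
```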

The bremsstrahlung contribution is dominant for transverse momenta of the order of 10 GeV, but falls off more rapidly than the others.

Since fragmentation is essentially a collinear process, photons resulting from parton fragmentation are usually accompanied by hadrons. This helps keeping the contribution of this channel to the prompt photon cross section under control. The isolation requirement that is commonly used to suppress the QCD background also rejects most of the photons coming from fragmentation.


Figure 2.4: Generic two-body scattering, with incoming momentum fractions x_a, x_b and outgoing kinematics (pT, η_c) and (pT, η_d).

### 2.3 The gluon density in the proton

### 2.3.1 Motivation

Many physics signatures at the LHC involve a gluon in the initial state and will require knowledge of its PDF. It is therefore vital, both for the understanding and interpretation of these Standard Model processes and for new physics searches, that the contribution from this parton be accurately described. An example of interest is the production of a light Higgs in the low-mass region (100-200 GeV) via the process gg → H → γγ. Uncertainties arising from the gluon PDF dominate the error on the gluon-gluon luminosity, leading to a production uncertainty for this light Higgs of around ±5% at the LHC [24]. Z, W and γ all have similar uncertainties in their quark-gluon production channels, arising from the behavior of the gluon PDF. In addition, backgrounds to many of these and other processes involve gluons in the initial state, and their accurate estimation will therefore require that the gluon PDF be understood.

### 2.4 Kinematics

The kinematics of direct photons can be illustrated by considering a two-body scatter (see figure 2.4):

$$ p_a = \frac{\sqrt s}{2}\,(x_a, 0, 0, x_a) \tag{2.7} $$

$$ p_b = \frac{\sqrt s}{2}\,(x_b, 0, 0, -x_b) \tag{2.8} $$

$$ p_c = p_T\,(\cosh\eta_c, \cos\phi, \sin\phi, \sinh\eta_c) \tag{2.9} $$

where x_a and x_b are the fractions of the initial-state hadron momenta carried by each initial parton, and pT, φ, η_c and η_d are the transverse momentum, azimuthal angle and pseudorapidities of the scattered particles. Conservation of energy and longitudinal momentum gives

$$ x_a = \frac{x_T}{2}\left(e^{\eta_c} + e^{\eta_d}\right) \tag{2.11} $$

$$ x_b = \frac{x_T}{2}\left(e^{-\eta_c} + e^{-\eta_d}\right) \tag{2.12} $$

where x_T = 2p_T/√s is the transverse momentum fraction probed by the process. Given these quantities, it is then possible to calculate the momentum fraction of both interacting partons.
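Equations 2.11 and 2.12 can be packaged into a small helper; the beam energy √s = 7 TeV used below is an illustrative assumption:

```python
import math

SQRT_S = 7000.0  # GeV, assumed center-of-mass energy for illustration

def parton_fractions(pt, eta_c, eta_d, sqrt_s=SQRT_S):
    """Momentum fractions x_a, x_b of eqs. 2.11-2.12 for a 2 -> 2 scatter."""
    xt = 2.0 * pt / sqrt_s
    xa = 0.5 * xt * (math.exp(eta_c) + math.exp(eta_d))
    xb = 0.5 * xt * (math.exp(-eta_c) + math.exp(-eta_d))
    return xa, xb

# central, back-to-back photon + jet at pT = 50 GeV: x_a = x_b = x_T
xa, xb = parton_fractions(50.0, 0.0, 0.0)
print(xa, xb)
```

For an asymmetric event (for example η_c = 2, η_d = 0) the fractions become unequal, reflecting the boost of the partonic system along the beam axis.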

In the rest of this discussion we will assume the incoming parton momenta to be collinear with those of the colliding hadrons. In fact, a simple argument based on the uncertainty principle would lead us to believe that the average transverse momentum kT is ∼ 0.3 GeV, even if there is some controversial evidence, from inconsistencies in the direct photon cross sections measured at the Tevatron, that this number may be larger by as much as an order of magnitude [25].

If the momenta of the two incoming partons are not the same the system will be boosted in the direction of the highest momentum parton. Due to the relative densities of the quark and gluon distributions, the majority of events will involve a high x quark and a low x gluon.

If only the photon is measured, the minimum x involved in the scatter can still be calculated:

$$ x_{min} = \frac{x_T\, e^{-\eta_\gamma}}{2 - x_T\, e^{\eta_\gamma}} \tag{2.13} $$

x_min is the minimum momentum fraction probed in the interaction, and it is most likely the x value of the gluon. It reaches its smallest value at low pT and high η, and in this region the LHC will be sensitive to gluon momentum fractions in the range 10⁻⁴ to 10⁻³.
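A quick numerical check of eq. 2.13 illustrates this range; the photon kinematics and the 14 TeV design energy below are illustrative assumptions:

```python
import math

def x_min(pt, eta_gamma, sqrt_s):
    """Minimum momentum fraction probed when only the photon is measured
    (eq. 2.13), with x_T = 2 pT / sqrt(s)."""
    xt = 2.0 * pt / sqrt_s
    return xt * math.exp(-eta_gamma) / (2.0 - xt * math.exp(eta_gamma))

# a low-pT, forward photon probes x of order 1e-4 at sqrt(s) = 14 TeV
print(x_min(20.0, 2.5, 14000.0))
```

At central rapidity the same photon probes an x roughly an order of magnitude larger, which is why the forward, low-pT region drives the low-x sensitivity.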

When both partons have large x values, the interaction will probe the distributions in the high-x range. Since in this range (∼ 10⁻¹) the gluon is relatively unconstrained, direct photons can offer a valuable probe of this high-x behavior.

### 2.4.1 Physics

Studying the gluon x distribution is important for several key reasons. Firstly, the error on the LO parton density function (PDF) for the gluon, figure 2.6a, is large, with a minimum of 5% rising to 10% at low values of x, according to CTEQ [26]. These errors differ from those of MRST [27], which are much larger at values below 10⁻⁴. The large uncertainty arises because there have been no measurements of the gluon PDF in this region. The difference between the results of the two groups arises because the CTEQ errors shown are extrapolated from the higher-x region in a different way from MRST. The high-x region is


Figure 2.5: Phase space acceptance, log₁₀(Q²) versus log₁₀(x), for both the LHC and HERA with fixed-target experiments. The dashed line represents the phase space available at the LHC √s, with the solid lines showing the x obtainable for a given Q² and η.

more striking: above x = 0.1 the error increases very fast, reaching 100% around x = 0.7.

The vast amount of DIS data available places considerable constraints on any PDF fit and is of particular importance in the low- to mid-x range. DIS data are used to determine the quark content of the proton and hence, via the momentum sum rules, the gluon content.

The second significant constraint in PDF fits affects the mid- to high-x region and is provided by the inclusive jet data published by D0 and CDF [28, 29, 30]. These experiments measure jets in the range 50 < Q < 500 GeV, corresponding to 0.01 < x < 0.5. Their inclusion in PDF fits has the effect of hardening the gluon content in this high-x region.

Despite recent advances in the precision of both types of constraints, the gluon is still by far the least well constrained parton.

Figure 2.6 shows that there are large differences at higher orders when comparing NLO and NNLO distributions. Neither distribution shows signs of the gluon distribution saturating at low x. If saturation does not happen, then unitarity must eventually be violated, with effects that could be probed at the LHC. The conclusion is that the gluon distribution has plenty to be studied over the full x range.

The calculation of the parton distributions at scales relevant to the LHC


Figure 2.6: Gluon distribution: a) LO errors from the CTEQ65E PDF and b) the differences between NLO (CTEQ6M), NLO (MRST2002NLO) and NNLO (MRST2002NNLO) [31].

relies on the QCD evolution equations, as the parton distributions have only been experimentally measured at lower scales (see the difference in Q² going from HERA to the LHC in figure 2.5). The most commonly used evolution equations are the DGLAP ones; others are BFKL [32, 33, 34, 35] and CCFM [36, 37, 38, 39]. The DGLAP evolution equations express the change of the parton densities with log(Q²) at fixed x, the evolution being driven by the splitting functions. One difference between DGLAP and the other evolution equations is the ordering of the partons arising from these splittings: DGLAP orders the partons in transverse momentum, whereas BFKL orders by x and CCFM by θ. The LHC operates in a new area of phase space, so comparing its results to predictions evolved from other experiments will test which of these evolution schemes is the most appropriate.

The kinematics of the two-body scatter representing the direct photon process are described by equations 2.11 and 2.12, which relate the struck partons' momentum fractions (x_a and x_b) to the kinematic observables pT, η_c and η_d. These equations have been used to predict the shape of several inclusive distributions, for instance the distributions of η_γ and η_jet, for different sets of PDFs. This can be used to study the sensitivity of direct photon production to different PDF sets.

### Chapter 3

## The Atlas Detector

### 3.1 LHC

The Large Hadron Collider [40] at CERN, shown in figure 3.1, is the largest and most powerful particle collider in operation, with a design center-of-mass energy a factor of seven larger than that of its closest competitor (the Tevatron [41]).

An important parameter used to characterize an accelerator is the instantaneous luminosity L; together with the cross section σ, it determines the event rate produced in a given process:

$$ R_{ev} = \sigma L \tag{3.1} $$

Increasing the luminosity raises the number of interesting physics events produced; the price to pay is a higher bunch density, and thus higher pile-up, or a larger number of bunches, resulting in a higher crossing frequency that is a challenge for the detectors.
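As a numerical illustration of eq. 3.1 and of the pile-up it implies, assuming an inelastic pp cross section of order 70 mb at LHC energies and the machine parameters quoted below:

```python
# Event rate R = sigma * L (eq. 3.1) and implied pile-up, for assumed,
# illustrative numbers.
MB_TO_CM2 = 1.0e-27    # 1 millibarn in cm^2

lumi = 1.0e34          # design luminosity, cm^-2 s^-1
sigma_inel = 70.0      # assumed inelastic pp cross section, mb
crossing_rate = 2835 * 11245.0  # bunches times ~11.245 kHz revolution freq.

rate = sigma_inel * MB_TO_CM2 * lumi  # interactions per second
pileup = rate / crossing_rate         # average interactions per crossing
print(rate, pileup)
```

The result, roughly 7×10⁸ interactions per second and about 20 per crossing, matches the average pile-up figure quoted for design luminosity.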

On the other hand, the cross sections for interesting physics usually rise with energy. In fact the increase of the parton density functions towards low x, especially in the gluon contribution, means that as the proton energy increases, so do the inclusive cross sections for processes involving a significant mass scale.

The maximum center-of-mass energy of a synchrotron is limited by the size of the accelerator and by the magnetic field used to bend the protons' path. The LHC has been built in the 27 km tunnel that hosted the LEP accelerator for more than 10 years; it uses superconducting magnets, cooled with liquid helium at 1.9 K, to produce a bending field of about 8 T, resulting in a maximum design energy of 14 TeV. To further increase the chances of observing interesting physics phenomena, a very challenging bunch crossing frequency of 40 MHz was chosen. The LHC can accelerate bunches of up to 10¹¹ protons from 450 GeV to 7 TeV and collide them at four interaction points, eventually at the design luminosity of 10³⁴ cm⁻²s⁻¹, corresponding to an average of up to 20 interactions per bunch crossing.

Figure 3.1: The LHC acceleration chain

| Parameter | pp | Pb⁸²⁺Pb⁸²⁺ |
| --- | --- | --- |
| Beam energy (TeV) | 7.0 | 7.0 |
| Center-of-mass energy (TeV) | 14 | 1262 |
| Injection energy (GeV) | 450 | 190.6 |
| Bunch spacing (ns) | 25 | 124.75 |
| Particles per bunch | 1 × 10¹¹ | 6.2 × 10⁷ |
| R.M.S. bunch length (m) | 0.075 | 0.075 |
| Number of bunches | 2835 | 608 |
| Design luminosity (cm⁻²s⁻¹) | 10³⁴ | 1.8 × 10²⁷ |
| Luminosity lifetime (h) | 10 | 10 |
| Dipole field (T) | 8.3 | 8.3 |

Figure 3.2: Total integrated luminosity with stable beams versus day.

The existing accelerating machines at CERN perform the first stages of acceleration: first the protons are accelerated up to 50 MeV in the proton linac, then to 1.8 GeV by the Proton Synchrotron Booster (PSB); after that the Proton Synchrotron (PS) brings them to 26 GeV; finally the SPS is used to reach the LHC injection energy of 450 GeV.

Sitting at the LHC collision points are four detectors: Atlas, CMS, LHCb and ALICE; two additional experiments, LHCf and TOTEM, are located in the forward regions of the Atlas and CMS interaction areas. Atlas and CMS have been built as general-purpose detectors designed to search for new physics. The first high-energy test collisions, at 900 GeV in the center-of-mass frame, were recorded in November 2009; a few weeks later the first collisions at an energy of 2.36 TeV were observed by the four experiments.

After the winter pause the LHC restarted operations in February, accelerating the two proton beams to an energy of 3.5 TeV to achieve a center-of-mass energy of 7 TeV. Except for an initial period dedicated to beam studies, the accelerator has been operated most of the time in stable conditions with a steadily increasing instantaneous luminosity, with the goal of achieving a peak of 10³³ cm⁻²s⁻¹ by 2012. Figure 3.2 shows how the daily integrated luminosity evolved during the 2010 run [42].

At the end of 2012, after a data-taking period of about two and a half years (corresponding to a few fb⁻¹ of integrated luminosity), the LHC will undergo a year-long upgrade shutdown, followed by a commissioning phase to reach the design energy of 14 TeV and eventually the instantaneous luminosity of 10³⁴ cm⁻²s⁻¹.

### 3.2 The Atlas experiment

Atlas (A Toroidal LHC ApparatuS) is the larger of the two general purpose detectors built on the LHC ring. The main features of the physics program of the Atlas experiment [43, 44] are the following:

SM physics: Atlas will study the properties of the electroweak W and Z bosons, as well as provide precision measurements of the top quark mass and cross section. A great effort will be dedicated to the study of QCD events, aiming at improving our understanding of the strong interactions.

SM Higgs boson: The Higgs mechanism [45] provides an explanation of electroweak symmetry breaking and, in addition, predicts the existence of a neutral massive particle, the Higgs boson. The search for the Higgs boson is one of the primary research objectives of Atlas at the LHC.

Beyond the SM: Another major aim of Atlas is to search for physics phenomena beyond the Standard Model. SUperSYmmetry (SUSY) [46] is perhaps the most accredited model extending the SM, but many others have been proposed.

In order to accomplish these goals the 1999 Technical Design Report stated the following design criteria for the Atlas detector [46]:

- Very good electromagnetic calorimetry for electron and photon identification and measurement, complemented by full-coverage hadronic calorimetry for accurate jet and missing transverse energy measurements.
- High-precision muon momentum measurements, with the capability to guarantee accurate measurements at the highest luminosity using the outer muon spectrometer alone.
- Efficient tracking at high luminosity for high-pT lepton momentum measurement, electron and photon identification, τ-lepton and heavy-flavor identification, and full event reconstruction capability at lower luminosity.
- Large acceptance in pseudorapidity (η) with almost full azimuthal angle (φ) coverage everywhere.
- Triggering and measurement of particles at low-pT thresholds, providing high efficiencies for most physics processes of interest at the LHC.

The overall detector layout is shown in figure 3.3. A thin superconducting solenoid generates a 2 T bending field in the Inner Detector cavity, and three superconducting air-core toroids outside the calorimeters generate the magnetic field for the muon spectrometer.

The Inner Detector (ID), wholly contained in the solenoidal magnetic field, provides pattern recognition, momentum and vertex measurements, and electron identification. It employs a combination of discrete high-resolution semiconductor pixel and strip detectors in the inner part of the tracking volume, and continuous straw-tube tracking detectors with transition-radiation capability in its outer part.

Figure 3.3: The Atlas detector layout

Atlas employs a highly segmented liquid-argon (LAr) calorimeter with lead as absorber to measure the energy of electromagnetic particles in both the central (|η| ≲ 1.7) and end-cap (|η| ≲ 3.2) regions. The hadronic compartment is an iron-scintillator tile calorimeter in the central region plus a LAr-copper end-cap. These are complemented by a LAr-tungsten forward calorimeter.

The muon spectrometer surrounds the calorimeters. The air-core toroid system, with a long barrel and two inserted end-cap magnets, generates a large magnetic-field volume with strong bending power within a light and open structure that minimizes multiple scattering. The muon instrumentation also includes, as a key component, trigger chambers with very fast time response.

### 3.2.1 The magnetic system

Several superconducting magnets provide the bending field for the Atlas tracking apparatus and for the muon spectrometer.

The central solenoid generates a field of 2 T in the tracker volume. The magnet has been designed to minimize the amount of dead material in front of the calorimeters, to avoid degrading the photon and electron energy resolution. The magnetic field for the muon spectrometer in the barrel region is provided by a system of eight coils (25 m long and 4.5 m tall) assembled radially; each coil is enclosed in its own cryostat. The peak magnetic field produced in the barrel region is about 4 T.

The magnetic field in the forward region of the muon system is produced by the end-cap coils (5 m long); all eight magnets are housed inside the same cryogenic unit. The end-cap system is rotated by π/8 with respect to the barrel, to optimize the bending power in the transition regions between the two systems.

Figure 3.4: Drawing showing the sensors and structural elements of the Atlas inner detector.

### 3.2.2 Inner detector

The Atlas tracker is designed to provide a few high-precision measurements close to the interaction point and a large number of lower-precision measurements in the outer volume. A schematic of the Atlas inner detector is shown in figure 3.4.

Pixels and silicon micro-strips provide high-precision tracking within a radius of 56 cm from the interaction point. A pixel layer measures both the φ and z coordinates of a hit, while it takes two SCT layers, mounted at a 40 mrad stereo angle, to measure both φ and (with lower precision) z. An average particle hits 3 pixel layers and 8 strip layers in the SemiConductor Tracker (SCT), for a total of 7 tracking points.

The SCT barrel is 160 cm long, covering up to |η| = 1; the pixel and SCT barrel layers are arranged in concentric cylinders around the beam axis, while in the end-caps (extending the coverage up to |η| = 2.5) they are arranged in disks perpendicular to the beam axis.

The Transition Radiation Tracker (TRT) is designed to measure an average of 36 tracking points and at the same time identify the transition radiation emitted by relativistic electrons to provide e/π separation.

The TRT structure consists of straw tubes arranged parallel to the beam axis in the barrel region and in wheels around the beam axis in the end-caps. The lower resolution of this technology is compensated by the larger radius and the higher number of measured points.

The outer radius of the inner detector cavity is 115 cm, while the total length is 7 m. The layout provides full tracking coverage within |η| < 2.5, including impact parameter measurement and vertexing for heavy flavors and τ tagging. The expected precision of the detector is

σ_{R−φ} = 13 ⊕ 62/(p_T √sin θ) µm   (3.2)

σ_z = 39 ⊕ 90/(p_T √sin θ) µm   (3.3)

where p_T is in GeV/c and ⊕ denotes the sum in quadrature.

Figure 3.5: The Atlas electromagnetic and hadronic calorimeters.

The pixel detector, being closest to the interaction point, is the most exposed to radiation damage. An upgrade of the inner tracker after a few years of operation (depending on the luminosity profile) is already being planned.
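As an illustration, the quadrature sum in equations (3.2) and (3.3) can be evaluated numerically. The sketch below is not from the thesis; the function name and the chosen track parameters (a central 5 GeV/c track) are illustrative only:

```python
import math

def track_resolution(a, b, pt, theta):
    """Resolution in µm of the form a ⊕ b/(pt·√sin θ), where ⊕ is the
    sum in quadrature; pt in GeV/c, theta in radians."""
    stochastic = b / (pt * math.sqrt(math.sin(theta)))
    return math.hypot(a, stochastic)

# R-φ impact parameter resolution (a = 13 µm, b = 62 µm·GeV/c) for a
# central (θ = π/2) track with pT = 5 GeV/c:
sigma_rphi = track_resolution(13.0, 62.0, 5.0, math.pi / 2)  # ≈ 18 µm
# z resolution for the same track (a = 39 µm, b = 90 µm·GeV/c):
sigma_z = track_resolution(39.0, 90.0, 5.0, math.pi / 2)     # ≈ 43 µm
```

At high momentum the stochastic term vanishes and the resolution approaches the constant terms of 13 µm and 39 µm, which reflect the intrinsic detector precision and residual alignment.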

### 3.2.3 Calorimeters

The Atlas calorimetric system is designed to enable high-resolution energy measurement and good hermeticity (essential for E_T^miss measurements) over a large pseudorapidity region, |η| < 5.

The electromagnetic calorimeter uses liquid Argon as active medium and lead as absorber material in the barrel (up to |η| < 1.475) and in the end-caps (1.5 < |η| < 3.2). A homogeneous presampler detector is placed just behind the cryostat wall in the region up to |η| = 1.8.

The central hadronic calorimeter is an iron–scintillating-tile calorimeter (TileCal) subdivided into a barrel and two extended barrel (EB) sections; the central barrel covers up to |η| < 0.8 and the EB extends the coverage up to |η| < 1.7.

The hadronic end-cap (HEC) covers the same range as the EM end-cap and uses LAr as the active material and copper as absorber; the HEC and electromagnetic calorimeter are placed in the same cryostat.

Figure 3.6: The Atlas muon detector.

In the forward region, up to |η| ≃ 5, the system is completed by a very dense LAr–tungsten calorimeter with rod-shaped electrodes.
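Since all coverage limits above are quoted in pseudorapidity, it may help to translate them back into polar angles using the definition η = −ln tan(θ/2). A minimal sketch (the function name is illustrative):

```python
import math

def theta_from_eta(eta):
    """Polar angle in degrees corresponding to pseudorapidity
    eta = -ln tan(theta/2), inverted as theta = 2·atan(e^{-eta})."""
    return math.degrees(2.0 * math.atan(math.exp(-eta)))

# Coverage limits quoted in the text:
theta_tile = theta_from_eta(1.7)  # TileCal EB edge, ≈ 21°
theta_emec = theta_from_eta(3.2)  # EM end-cap edge, ≈ 4.7°
theta_fcal = theta_from_eta(5.0)  # forward calorimeter edge, < 1° from the beam
```

The numbers make the hermeticity requirement concrete: |η| < 5 corresponds to instrumenting down to less than one degree from the beam line.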

A more detailed description of the systems, and results on the performance of the central calorimeters, which are essential detectors for this work, will be presented in the next chapter.

### 3.2.4 The muon spectrometer

The muon detector, shown in figure 3.6, is built to provide stand-alone measurements of the muon momentum up to 1 TeV.

The magnetic field provided by the superconducting air-core toroid magnets deflects the muon trajectories, which are measured by high-precision tracking chambers. The magnetic field in the |η| < 1.0 range is provided by the barrel toroids, while the region 1.4 < |η| < 2.7 is covered by the end-cap magnets. In the transition region between the two (1.0 < |η| < 1.4) the combined contributions of both the barrel and end-cap magnets provide the necessary field. The toroidal configuration with an air core produces a bending field mostly orthogonal to the muon trajectory over the covered pseudorapidity range, while minimizing the effect of multiple scattering.

The muon chambers in the barrel region are arranged in three cylindrical layers (stations), while in the end-cap they form three vertical walls. In the transition region an extra station is added to maximize the acceptance.

The azimuthal layout of the central region follows the magnet structure: there are 16 sectors. The so-called Large sectors lie between the coils and overlap with the Small sectors, placed inside the coils themselves.

| Region | station I | station E | station M | station O |
|---|---|---|---|---|
| Barrel, \|η\| < 1 | MDT | | MDT RPC | MDT RPC |
| End-caps, 1 < \|η\| < 1.4 | MDT | TGC | MDT | |
| End-caps, 1.4 < \|η\| < 2 | MDT | TGC | MDT TGC | |
| End-caps, 2 < \|η\| < 2.4 | CSC | | MDT | |
| End-caps, 2.4 < \|η\| < 2.7 | CSC | | MDT TGC | |

| | RPC | TGC | MDT | CSC |
|---|---|---|---|---|
| Role | trigger | trigger | precision | precision |
| # of channels | 354K | 440K | 372K | 67K |
| Area (m²) | 3650 | 2900 | 5500 | 27 |
| Time resolution | < 5 ns | < 7 ns | 500 ns | < 7 ns |
| Spatial resolution | 5–10 mm | | 80 µm | 60 µm |

Table 3.2: Characteristics of the Muon spectrometer.

Figure 3.7: Layout of the Atlas muon chambers.

Different technologies have been selected to instrument the various regions of the muon detector, depending on the granularity, aging properties and radiation hardness required. A summary of these choices can be found in table 3.2 and figure 3.7.

To achieve the required momentum resolution (∆p_T/p_T ≃ 10% at 1 TeV/c), the relative positioning of the chambers must match the intrinsic resolution of the precision chambers. The required accuracy for the chambers in the same tower is 30 µm, while for the relative positioning of different towers (essential for mass resolution) it is in the millimeter range.
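The 30 µm alignment requirement can be put in perspective with the standard sagitta formula s = 0.3·B·L²/(8·p_T) (s in m for B in T, L in m, p_T in GeV/c). The field and lever-arm values below are round illustrative numbers, not taken from the thesis:

```python
import math

def sagitta_m(pt_gev, b_tesla, chord_m):
    """Sagitta (m) of a track of transverse momentum pt (GeV/c) bent by a
    field b (T) over a chord length L (m): s = 0.3*B*L^2 / (8*pT)."""
    return 0.3 * b_tesla * chord_m**2 / (8.0 * pt_gev)

# Illustrative values: an average bending field of 0.5 T over a 5 m
# lever arm in the barrel toroid, for a 1 TeV/c muon.
s = sagitta_m(1000.0, 0.5, 5.0)
s_um = s * 1e6          # sagitta in µm, ≈ 470 µm
rel_err = 30e-6 / s     # a 30 µm alignment error is a ~6% sagitta effect
```

A 1 TeV/c muon thus bends by only a few hundred µm over the whole spectrometer, so a 30 µm chamber misalignment is already a several-percent effect, comparable in size to the 10% resolution goal; this is why the alignment tolerance is so tight.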

The relative positioning of the muon spectrometer and the inner tracker was surveyed at installation time and will be monitored with high-momentum muon tracks. Tube displacement in the MDTs is monitored with a precision better than 10 µm using a laser system.