Graduate course in physics

University of Pisa

PhD Thesis:

Measurement of the Standard Model WW/WZ production in the semileptonic final state with the ATLAS detector at √s = 8 TeV.

Candidate: Margherita Spalla
Supervisor: Tarcisio Del Prete


Contents

Introduction

1 Semileptonic WW/WZ production at LHC
1.1 Theoretical background: the Standard Model of particle physics
1.1.1 The Electroweak interactions
1.1.2 Particle masses
1.1.3 The strong interaction
1.1.4 Hints of perturbative calculation and uncertainties on theory prediction
1.2 Phenomenology of diboson production in proton-proton collisions
1.2.1 WW/WZ cross section in proton-proton collisions: calculation strategy and sources of uncertainty
1.2.2 Theoretical WW/WZ cross section
1.3 Calculation of physics observables via Monte Carlo generators

2 The ATLAS experiment at LHC
2.1 Proton-proton collisions at LHC
2.2 The ATLAS detector
2.2.1 The Inner Detector
2.2.2 The calorimeter system
2.2.3 The Muon Spectrometer
2.3 Trigger and Data Acquisition
2.4 Validation of the data quality
2.5 Measurement of the luminosity

3 Reconstruction of physical objects
3.1 The event signature of interest
3.2 Physical objects reconstruction in ATLAS
3.3 Tracks and vertices
3.4 Electrons
3.4.2 Electron identification
3.5 Muons
3.5.1 Muon reconstruction
3.5.2 Muon Energy scale and resolution
3.6 Jets
3.6.1 Jet reconstruction
3.6.2 Jet calibration
3.7 Missing transverse energy (ETmiss)

4 Event Selection
4.1 Sources of background
4.2 Physical objects selection
4.2.1 Electron selection
4.2.2 Muon selection
4.2.3 Jet selection
4.2.4 Overlap Removal
4.3 Event selection
4.3.1 Pre-selection
4.3.2 WV → ℓνjj selection

5 Background modelling
5.1 WW/WZ signal
5.2 W/Z + jets

5.2.1 W/Z + jets: Monte Carlo simulation
5.2.2 Correction to W/Z + jets simulation
5.2.3 W + jets control region: charge subtraction
5.3 tt̄ and single-top production
5.3.1 Monte Carlo simulation of top production
5.3.2 Top control region
5.4 QCD multijet
5.4.1 Modified event selection
5.4.2 Fit to the ETmiss spectrum
5.4.3 Systematic uncertainties on multijet estimation
5.4.4 QCD multijet control region
5.5 Minor background: ZZ
5.6 Results: pre-fit data/Monte Carlo comparison

6 Fiducial Cross Section definition and Fit method
6.1 Fiducial phase space definition
6.2 Expected fiducial cross section

6.3.1 Dfid factor computation
6.3.2 Contribution from other WW/WZ decay channels
6.4 Fit method
6.4.1 Systematic uncertainties parametrisation

7 Systematic uncertainties
7.1 Systematic uncertainties on physical objects
7.1.1 Jet uncertainties
7.1.2 Other systematic sources: leptons, ETmiss and PileUp correction
7.2 Systematic uncertainties on background modelling
7.2.1 W/Z + jets background
7.2.2 tt̄ and single top background
7.2.3 Multijet background
7.2.4 ZZ background
7.3 Systematic uncertainties on signal modelling
7.3.1 Systematic uncertainties on signal shape and Dfid
7.3.2 Systematic uncertainties on acceptance and σ_fid^theo

8 Maximum Likelihood fit to data
8.1 The Asimov dataset
8.2 Fit results
8.3 Nuisance parameters
8.3.1 Contributions of each systematic to the total uncertainty
8.4 Significance of the result
8.5 Measured cross section

9 An alternative method: data driven fit
9.1 Data driven fit
9.1.1 Fit method
9.1.2 Fit results and systematic uncertainties
9.2 Cross section computation
9.2.1 D′_fid calculation
9.3 Top subtraction
9.3.1 Top contribution from Monte Carlo
9.3.2 Correction using top control region
9.4 Measured cross section

10 The boosted channel
10.1 Large-R jets reconstruction method
10.2 Event Selection
10.2.2 Selection of the boosted event
10.2.3 Overlap between boosted and resolved channels
10.2.4 Boosted fiducial phase space definition
10.3 Background modelling
10.3.1 Top modelling
10.3.2 W + jets modelling
10.3.3 QCD modelling
10.3.4 Pre-fit control plots

11 Maximum Likelihood fit and results in the boosted channel
11.1 Large-R jets systematics
11.1.1 Scale uncertainties
11.1.2 Resolution uncertainties
11.2 Systematic uncertainties effect in the boosted channel
11.2.1 Physics objects systematics
11.2.2 Background modelling
11.2.3 Signal modelling
11.3 Fit results
11.3.1 Compatibility between electron and muon channel

12 Test of Anomalous Triple Gauge Coupling
12.1 Theoretical framework
12.1.1 Effective Field Theory
12.2 aTGC measurement method
12.2.1 Maximum Likelihood fit to data
12.2.2 Confidence intervals

Conclusions

A Smoothing of systematic templates
B The Stewart-Tackmann method
B.1 The Stewart-Tackmann method in the boosted channel
C Anomalous Triple Gauge Coupling: additional information
C.1 aTGC effective Lagrangian
C.2 Results in the effective Lagrangian framework
C.3 Fit Method

(7)

Introduction

Measurements of the production of two massive vector gauge bosons (WW or WZ, hereafter 'diboson' production) represent an important test of the Standard Model (SM) of particle physics. Diboson cross section measurements provide both a powerful probe of the Electroweak sector of the SM and an estimation of irreducible backgrounds in important Higgs boson analysis channels, for instance the associated production of a Higgs boson and a massive gauge boson, with H → bb̄ decay. In addition, precision measurements of WW/WZ kinematics can be used to test the interactions between gauge bosons (W/Z/γ). The couplings between three gauge bosons are sensitive to beyond-the-SM physics contributing to the process through loop corrections (see subsection 1.1.1 and subsection 12.1.1), as well as to modified Higgs to vector boson couplings. These new physics effects would show up as anomalies in the interactions between gauge bosons, and can be searched for through precision WW/WZ studies.

These measurements are part of the effort of the ATLAS experiment in testing Standard Model predictions at high energy.

Multiboson production, namely the associated production of two or three gauge bosons, covers a wide range of cross sections. Many processes have been measured by the ATLAS and CMS experiments at center of mass energies of 7, 8 and 13 TeV, as summarised in Figure 1¹ and Figure 2.

The best precision in W or Z pair production cross section measurements has been reached in leptonic decay channels (W → ℓν, Z → ℓℓ). Despite the small branching fractions (BR(W → ℓν) ≈ 10%, BR(Z → ℓ⁺ℓ⁻) ≈ 3% [3]), the leptonic channels offer the clearest signature and the largest signal to background ratio. The most recent leptonic WW and WZ measurements have been performed at √s = 13 TeV by the ATLAS [4, 5] and CMS [6] experiments.

Constraints on physics beyond the SM (aTGC) have also been provided by the precise measurement of the couplings between three gauge bosons in √s = 8 TeV leptonic analyses [7–10].

¹ Deviations are visible in leptonic WW and WZ channels with respect to Monte Carlo modelling at NLO (Next-to-Leading Order) perturbative QCD calculation. Such effects are reduced when considering NNLO (Next-to-Next-to-Leading Order) Monte Carlos.


Figure 1: The ratio of several diboson total and fiducial production cross section measurements to the best available theory prediction, corrected for leptonic branching fractions, for LHC pp data at √s = 7, 8 and 13 TeV (ATLAS Preliminary, status March 2017). All theoretical expectations are shown using grey bars, hatched for NLO calculations and full for NNLO predictions. The dark-colour error bar represents the statistical uncertainty. The lighter-colour error bar represents the full uncertainty, including statistical, systematic and luminosity uncertainties. Not all measurements are statistically significant yet. [1]


Figure 2: Diboson production cross section ratios σ_exp/σ_theo compared to theory (CMS Preliminary, March 2017): theory predictions updated to the latest NNLO calculations where available, compared to the predictions in the CMS papers and preliminary physics analysis summaries. [2]


The semileptonic channel, consisting of a W → ℓν decay (ℓ = e, µ) and a W → qq′ or Z → qq decay, offers complementary features with respect to the leptonic channel. It has a much larger branching ratio (BR(W → qq′) ≈ 67%, BR(Z → qq) ≈ 70% [3]) and provides a better reconstruction of the event kinematics in the case of WW production, as the ℓνℓν final state has one more neutrino with respect to ℓνqq′. On the other hand, it is affected by a much larger background contribution, mainly due to the production of a W boson in association with jets and to tt̄ production. In addition, the presence of jets in the final state limits the expected precision.

Finally, in the semileptonic channel the separation between WW and WZ production is not feasible, due to the poor jet resolution (see for example 6.2(a)) and the smaller WZ contribution (about 20% of the total diboson signal after applying the event selection described in chapter 4).

Measurements of the semileptonic WW/WZ cross section were performed at √s = 7 TeV by the ATLAS and CMS experiments [11, 12]. The final state was defined by a high-pT lepton, a neutrino and two jets.

An additional event topology has been introduced in the √s = 8 TeV analysis. In this case, a single jet of large radius is used to collect all the particles produced by the hadronising quarks from the W/Z decay, instead of requiring two separate smaller jets.

This method is meant to detect W/Z bosons with large Lorentz boost, increasing the analysis efficiency at high transverse momentum (pT). As will be shown in chapter 12, this topology results in a gain of sensitivity for aTGC searches. Recently, the CMS experiment published an aTGC measurement in the semileptonic WW/WZ channel [13], based solely on boosted hadronic bosons reconstructed as single large-radius jets and providing competitive limits on physics beyond the SM.

In the context of the analysis presented in this thesis, the WW/WZ production cross section in both the standard and boosted topologies has been measured, and limits on aTGC have been set in both channels.

In addition, it should be recalled that boosted topologies have been developed for the search of high-mass resonances decaying to boosted gauge or Higgs bosons. This effort has been complemented with the preparation of techniques based on the study of jet substructure, meant to cope with the large PileUp (section 2.1) contribution to jets of large radius. This analysis also constitutes a test of the use of boosted topologies in the measurement of a known resonance.

In this thesis, the measurement of WW/WZ associated production in √s = 8 TeV proton-proton collisions with the ATLAS detector is presented. A brief overview of the necessary theoretical background and a description of the ATLAS experiment


at LHC are given in chapter 1 and chapter 2 respectively. The measurement is done in two separate channels:

• the resolved channel, in which the standard event signature (ℓν and two jets²) is adopted;

• the boosted channel, in which the boosted topology (a charged lepton-neutrino pair and a single large jet) is exploited.

The analysis in the standard resolved channel is presented first: the methods adopted to reconstruct the physical objects of interest and the resolved event selection are discussed in chapter 3 and chapter 4, while chapter 5 discusses the modelling of the different background contributions to the resolved final state.

The number of signal events is extracted through a Maximum Likelihood fit, making large use of Monte Carlo simulation. The cross section determination methods are discussed in chapter 6, the systematics entering the cross section measurement are presented in chapter 7 and the results in the resolved channel are shown in chapter 8. As an additional cross check, the WW/WZ cross section in the resolved channel has been measured also with a data-driven method, alternative to the Monte Carlo-based fit already mentioned; this study is shown in chapter 9. The analysis of the boosted channel follows the same strategy as the resolved one, except for the data-driven alternative fit, which has not been performed in this case. The event selection has been optimised for the different topology and kept identical to the resolved one where possible. In addition, the jet of large radius requires non-trivial reconstruction techniques, introducing additional systematic uncertainties. The elements of physical object reconstruction and event selection specific to the boosted channel, as well as the background modelling in the boosted analysis, are discussed in chapter 10. The fit results and the additional systematics in the boosted channel are shown in chapter 11.

In addition to the cross section measurement, aTGC have been searched for in both the resolved and boosted channels. This part of the analysis is presented in chapter 12.

The presented work is the result of a joint effort of an analysis team with contributions from different institutions. In the following I would like to detail the analysis aspects to which I gave the largest contribution.

• The preparation of the analysis framework for running the analysis on the Grid [14]. Initially designed for the resolved channel, it was successfully extended to the boosted channel.

² The name resolved comes from the fact that the two quarks from the W/Z decay are reconstructed as two separate jets.


• The definition and checks of the event selection (cut-flow) of the resolved analysis (chapter 4).

• The determination of the truth-level selection defining the fiducial phase space (section 6.1). The selection has been designed following the event selection applied to reconstructed objects.

• The computation of the expected fiducial cross section (section 6.2) and the Dfid factor (section 6.3), and the determination of the systematic uncertainties affecting them (section 7.3), both for the resolved and boosted channels.

• The study of the application of the Stewart-Tackmann method for the scale uncertainty computation in the resolved and boosted channels (subsection 7.3.2 and Appendix B).

• Running the Maximum Likelihood fit as a cross check of other analysers' work in the resolved channel (chapter 8).

• The data-driven alternative fit, the correction for the top contribution and the systematic evaluation (chapter 9).

• The evaluation of systematics associated to the large-R jet mass and pT resolution (subsection 11.1.2), tested in the top control region.

• The evaluation of minor systematics in the boosted channel.


Chapter 1

Semileptonic WW/WZ production at LHC

The Standard Model (SM) of particle physics is a relativistic quantum field theory describing all known fundamental particles and interactions (except gravity). It is confirmed by experimental results with very high precision and up to an energy scale of the order of TeV.

In this thesis, the WW/WZ associated production cross section is measured and anomalies in the coupling between three vector bosons (W, Z or γ) are searched for. A detailed discussion of the SM theoretical construction is beyond the purposes of this work. This chapter focuses on how the physics of W/Z bosons is described within the SM framework. In addition, it provides a brief summary of the methods adopted to compute the expected values of the observables of interest, as well as a description of the systematic uncertainties affecting the theoretical prediction.

1.1 Theoretical background: the Standard Model of particle physics

For a wide overview of the Standard Model and its experimental confirmation, see [3]. Its particle content is summarised in Figure 1.1. It consists of six quarks and six leptons (with their antiparticles), four gauge bosons (including W and Z) and one Higgs boson.

Figure 1.1: Particle content of the Standard Model.

Quarks and leptons are fermions and constitute the matter content of the SM. Gauge bosons are the force carriers for the three fundamental interactions (electromagnetic, weak and strong) included in the model. They are introduced following the principle of local gauge invariance: requiring the Lagrangian of a free particle to be invariant under a specific local gauge symmetry group implies the introduction of a new boson field in the model, or equivalently, an interaction to which the considered particle is sensitive.

As an example of this picture, we consider a simplified model: the Lagrangian for a free charged fermion ψ (first term of (1.1)) is invariant under the U(1) global gauge symmetry (associated to charge conservation). If we impose the invariance to be local, this results in the introduction of an interaction term (second term of (1.1)) with a new boson field A_µ, or gauge boson (here, the photon, carrying the electromagnetic interaction). Y is the generator of the U(1) symmetry group and q the coupling constant quantifying the strength of the interaction; it is a free parameter of the model and cannot be computed by the theory. The last term of (1.1) is the kinetic term for the A_µ field; its form depends on the characteristics of the symmetry group.

\mathcal{L} = i\bar{\psi}\gamma^\mu\partial_\mu\psi + qY\bar{\psi}\gamma^\mu\psi A_\mu - \frac{1}{4}F^{\mu\nu}F_{\mu\nu}    (1.1)
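As a standard textbook illustration (not spelled out in the thesis), the local transformation under which (1.1), as reconstructed above, stays invariant is

\psi(x) \to e^{\,iqY\alpha(x)}\,\psi(x), \qquad A_\mu(x) \to A_\mu(x) + \partial_\mu\alpha(x),

where the extra term generated by the derivative acting on α(x) is compensated by the corresponding shift of A_µ in the interaction term.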

The gauge symmetry group adopted for the SM is SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y. It is the product of three sub-groups: SU(3)_C accounts for the strong interactions between quarks, while SU(2)_L ⊗ U(1)_Y is responsible for electroweak physics. As the SU(3) fields do not affect particles other than quarks, the SM Lagrangian can be treated as composed of separate terms, as in (1.2).

\mathcal{L}_{SM} = \mathcal{L}_{QCD} + \mathcal{L}_{EW} + \mathcal{L}_{H}    (1.2)

in which L_QCD describes the QCD interaction, L_EW the Electroweak interaction and L_H includes the Higgs boson Lagrangian terms and the interactions between the Higgs field and the rest of the SM particles, responsible for the particle masses.

Given the topic of this thesis, we will concentrate on the electroweak sector, only providing a brief summary of strong and Higgs physics.

1.1.1 The Electroweak interactions

The subgroup SU(2)_L ⊗ U(1)_Y accounts for the electroweak interactions. It is the product of the weak isospin group SU(2)_L and the hypercharge group U(1)_Y. SU(2)_L distinguishes between particles of different chirality: the left-handed fermions, \Psi_{L,i} = \begin{pmatrix}\nu_{L,i}\\ \ell^-_{L,i}\end{pmatrix} for leptons and \begin{pmatrix}u_{L,i}\\ d_{L,i}\end{pmatrix} for quarks¹, are doublets under SU(2)_L, while the right-handed fermions ψ_{R,i} are SU(2)_L singlets [3]. U(1)_Y, on the other hand, acts equally on right-handed and left-handed particles. Imposing SU(2)_L ⊗ U(1)_Y local gauge invariance, L_EW takes the form:

\mathcal{L}_{EW} = \sum_j \left[ i\bar{\Psi}_{L,j}\gamma^\mu\left(\partial_\mu - \tfrac{i}{2}g\tau^a W^a_\mu - ig'Y_L B_\mu\right)\Psi_{L,j} \right]    (1.3)
 + \sum_j \left[ i\bar{\psi}_{R,j}\gamma^\mu\left(\partial_\mu - ig'Y_R B_\mu\right)\psi_{R,j} \right] - \frac{1}{4}F^{\mu\nu}F_{\mu\nu} - \frac{1}{4}G^{a,\mu\nu}G_{a,\mu\nu}    (1.4)

The term gτ^a W^a_µ, with a = 1, 2, 3, is the interaction term induced by the SU(2)_L symmetry, and g is the weak coupling constant. The fields W^1_µ and W^2_µ are responsible for flavour-changing weak interactions and can be rewritten as W^±_µ = (W^1_µ ∓ iW^2_µ)/√2, where W^± are the experimentally measured charged W bosons.

On the other hand, the W^3_µ field, as well as the B_µ field introduced by U(1)_Y, only couples to quarks or leptons of the same flavour. It is convenient to 'rotate' W^3_µ and B_µ and rewrite the electroweak Lagrangian as a function of the fields A_µ and Z_µ, as in:

B_\mu = \cos\theta_W A_\mu - \sin\theta_W Z_\mu
W^3_\mu = \sin\theta_W A_\mu + \cos\theta_W Z_\mu    (1.5)

where A_µ is the photon field and Z_µ mediates the neutral current weak interaction. If we also rewrite the left-handed and right-handed fermion fields (Ψ_{L,i} and ψ_{R,i}) in terms of the chirality projectors, ψ_{L/R} = \frac{(1 \mp \gamma^5)}{2}\psi, we finally obtain the Lagrangian

¹ The term d_i should be replaced by d′_i = \sum_j V_{ij} d_j, in which V is the Cabibbo-Kobayashi-Maskawa matrix [3]. Since we are not going to discuss flavour mixing in quarks, we preferred not to explicitly write down this term for simplicity.

term (1.6).

\mathcal{L}_{EW} = i\sum_j \bar{\psi}_j\gamma^\mu\partial_\mu\psi_j
 - e\sum_j Q_j\,\bar{\psi}_j\gamma^\mu A_\mu\psi_j
 - \frac{g}{2\cos\theta_W}\sum_j \bar{\psi}_j\gamma^\mu\left(g^j_V - g^j_A\gamma^5\right)\psi_j Z_\mu
 - \frac{g}{2\sqrt{2}}\sum_j \bar{\Psi}_j\gamma^\mu\left(1-\gamma^5\right)\left(T^+ W^+_\mu + T^- W^-_\mu\right)\Psi_j
 - \frac{1}{4}F^{\mu\nu}F_{\mu\nu} - \frac{1}{4}G^{a,\mu\nu}G_{a,\mu\nu}    (1.6)

Here it is now possible to clearly distinguish the free fermion term (first line of (1.6)), the electromagnetic interaction term (second line) and the weak neutral (Z) and charged (W^±) currents (third and fourth lines of (1.6), respectively).

Interaction between W and Z bosons

The purely gauge Lagrangian terms in the last line of (1.6) include the kinetic terms for the W, Z and γ bosons and the interactions between gauge bosons. They have the form −(1/4)G^{a,µν}G_{a,µν}, in which G^{a,µν} is expressed as in (1.7), with W^{i,µ} = W^{1,µ}, W^{2,µ}, W^{3,µ}. The kinetic term for the photon field is −(1/4)F^{µν}F_{µν}. The last term of (1.7) is a consequence of the non-Abelian nature of SU(2)_L and vanishes in the case of F^{µν}, as the U(1)_Y group is Abelian.

G^{a,\mu\nu} = \partial^\mu W^{a,\nu} - \partial^\nu W^{a,\mu} - g\,\epsilon^{abc}W^{b,\mu}W^{c,\nu}    (1.7)

The gε^{abc}W^b_µW^c_ν term gives rise to Lagrangian interaction terms with three and four W^{i,µ} fields, i.e. it implies the existence of interactions between three or four gauge bosons. They are commonly referred to as Triple or Quartic Gauge Couplings, respectively. The Triple Gauge Coupling contributes to the WW/WZ production process; the present analysis therefore allows a precise measurement of this coupling. Such a precision measurement may detect anomalies due to new physics effects, usually called anomalous Triple (Quartic) Gauge Couplings or aTGC (aQGC). A variety of SM extensions predict deviations from the SM in the electroweak sector². In general, any new particle with a typical mass scale Λ much

² A complete review of models contributing to the aTGCs could not easily be found in the literature. For instance, SM extensions such as composite Higgs models, technicolour, warped extra dimensions, Two-Higgs-doublet models (2HDM) and Grand Unified Theories predict high-mass resonances decaying into W or Z boson pairs (see for instance [15] and references therein). It is however less straightforward to state to what extent these models can contribute to aTGC corrections.


heavier than the known particle masses might affect the SM interactions through loop corrections. Commonly, the effects of heavy new physics are parametrized as an expansion in powers of 1/Λ, using an effective field theory (EFT) approach. Details of this formalism are provided in a dedicated chapter (chapter 12).

As discussed in [16], beyond-the-SM EFT components affect electroweak physics through the TGCs, the couplings of W/Z bosons to fermions and the electroweak gauge boson propagators. However, new physics contributions to the latter two terms are already constrained by precision measurements of the W and Z masses and on-shell decays at LEP [17, 18]. TGCs, on the other hand, are sensitive to a subset of EFT components not yet constrained by the LEP results.

Since the EFT also includes terms dependent on the Higgs field (subsection 1.1.2), additional constraints on such contributions can be obtained from Higgs boson cross section measurements. For a discussion on the theoretical combination of these data, see [19].

A search for aTGC has been performed within this analysis; it is presented in chapter 12. The EFT parametrization adopted for the measurement will be discussed therein.

1.1.2 Particle masses

The framework of electroweak interactions discussed in the previous section does not account for particle masses. Experiments clearly show that both the W and Z bosons, as well as the fermions, have to be massive, but simply adding a Lagrangian term to account for particle masses (e.g. of the form m_W^2 W^+_\mu W^\mu) would cause a variety of theoretical problems.

A solution is provided by the Brout-Englert-Higgs-Kibble mechanism, based on the theory of spontaneous symmetry breaking. A detailed discussion of the derivation of the Higgs mechanism is beyond the purpose of this thesis; for a complete description see [3] and references therein.

In the Higgs mechanism, a complex scalar doublet \Phi(x) = \begin{pmatrix}\phi_1(x) + i\phi_2(x)\\ \phi_3(x) + i\phi_4(x)\end{pmatrix} is included in the model. In addition, a potential term V(Φ) is added to the Lagrangian: the form of V(Φ) implies that Φ has a nonzero value at the energy minimum, i.e. a nonzero vacuum expectation value, conventionally expressed as \begin{pmatrix}0\\ v/\sqrt{2}\end{pmatrix}, with v = \sqrt{-\mu^2/\lambda}. It can be shown that the introduction of a field with these properties, combined with the requirement of local SU(2)_L ⊗ U(1)_Y symmetry, can account for the gauge boson masses. The Higgs mechanism gives rise to mass terms in the Lagrangian and determines the boson masses in terms of the free parameters of the Higgs model: m_W = \frac{1}{2}vg and m_Z = \frac{m_W}{\cos\theta_W}, where θ_W is the weak mixing angle. The measured values of the boson masses are reported in [3] and amount to:

m_W = 80.385 ± 0.015 GeV    (1.8)
m_Z = 91.1876 ± 0.0021 GeV    (1.9)
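As a rough numerical check (the input values v ≈ 246 GeV and g ≈ 0.65 are standard figures, not quoted in the thesis), the tree-level relations above reproduce the right scale:

m_W = \tfrac{1}{2}\,g\,v \approx \tfrac{1}{2}\times 0.65 \times 246\ \mathrm{GeV} \approx 80\ \mathrm{GeV}, \qquad m_Z = \frac{m_W}{\cos\theta_W} \approx \frac{80\ \mathrm{GeV}}{0.88} \approx 91\ \mathrm{GeV}.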

Fermion masses are accounted for by an additional gauge-invariant interaction term between Φ and fermion fields.

This model also predicts the existence of one scalar boson H (the Higgs boson), confirmed by ATLAS and CMS measurements. Its mass is defined as m_H = \sqrt{-2\mu^2} and has been measured to be m_H = 125.09 ± 0.21 ± 0.11 GeV.

Interactions between the Higgs boson and the W/Z bosons, as well as between H and the fermions, are also predicted.

It should be pointed out that one of the Higgs boson decay channels is H → WW, which can possibly contribute to the final state accessed by this analysis.

1.1.3 The strong interaction

The SU(3)_C symmetry group is associated to 'colour' charge conservation and is responsible for the strong interactions. It introduces eight gauge bosons, called gluons and generically indicated with g.

The theory of strong interactions is called Quantum Chromodynamics or QCD; for a detailed review see [3] and references therein. The QCD Lagrangian L_QCD has a form similar in principle to the electroweak one; it is shown in (1.10) [3], where repeated indices are summed over.

\mathcal{L}_{QCD} = \sum_q \bar{\psi}_{q,a}\left(i\gamma^\mu\partial_\mu\delta_{ab} - g_s\gamma^\mu t^C_{ab}A^C_\mu - m_q\delta_{ab}\right)\psi_{q,b} - \frac{1}{4}F^A_{\mu\nu}F^{A,\mu\nu}    (1.10)

ψ_{q,a} is the quark field, of flavour q and mass m_q; the mass term is induced by the interaction with the Higgs field and is added to L_QCD for completeness. The index a represents the colour index and runs from 1 to 3, i.e. quarks come in three 'colours'. The second term is the interaction term: A^C_µ is the gluon field, with C running from 1 to 8, and g_s is the QCD coupling. g_s is more often expressed through the strong coupling α_s = g_s^2/(4π).

Finally, F^A_{µν}F^{A,µν} is the pure gauge term, of a form similar to the electroweak one (1.7). Since SU(3)_C is non-Abelian, interactions between gluons are foreseen by the model.

One basic element of QCD, known as confinement, is that quarks and gluons are not observable as free particles, but only as hadron constituents. For the purposes of this thesis, it should be recalled that the proton structure, in terms of interacting quarks and gluons, is described with empirical functions called Parton


Distribution Functions or PDF, which can be interpreted as the probability of having a quark (or gluon) of a given momentum within the proton.

Such an empirical description is needed because of the so-called running coupling. This effect is relevant for the description of the physics processes of interest in this thesis; a brief explanation will be given in the following section.

1.1.4 Hints of perturbative calculation and uncertainties on theory prediction

The observable of interest for this thesis is the cross section for the associated production of a WW/WZ boson pair in proton-proton collisions at the Large Hadron Collider (LHC). In this environment, WW/WZ production occurs through hard scattering between quarks or gluons. SM predictions for such scattering processes (typically, the cross section) are calculated using perturbative quantum field theory, based on a perturbative expansion in orders of the coupling constant. Perturbation theory allows the calculation of the expected observables up to a fixed-order precision.

This approach involves the introduction of an arbitrary energy scale (the renormalisation scale or µ_R), causing the effect referred to as running coupling.

In quantum field theory, the perturbative series in powers of the coupling, used to compute physical observables, is affected by divergences. The renormalisation method is introduced to solve such divergences in a general way. A detailed discussion of this topic can be found in [20].

In the renormalised theory, the coupling, in powers of which the perturbative expansion is done, is a function of an energy scale parameter µ_R (the renormalisation scale). µ_R is not a physical quantity: when summing all orders, the dependence on µ_R is lost and physical observables remain independent of the renormalisation scale.

However, the coupling itself depends on µ_R. Since the calculation is done up to a fixed perturbative order, the cross section has a residual dependence on µ_R. Commonly, µ_R is taken equal to the energy scale of the considered process, and a systematic uncertainty associated to the arbitrary choice of this value has to be taken into account.

Taking µ_R close to the energy scale of a given process implies that the strong coupling (to be used in the perturbative expansion) depends on the typical energy of the process of interest. The running coupling consists of this dependence of the coupling on the energy.

The functional form of the α_s(µ_R) dependence varies with the specific field theory: for QCD, α_s diverges when the energy scale tends to zero. This implies that high-energy processes can be computed with perturbation theory, while this is not possible for low-energy ones.
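For reference, the standard one-loop expression for the QCD running coupling (quoted here for illustration; the thesis does not spell it out) is

\alpha_s(Q^2) = \frac{1}{b_0 \ln\left(Q^2/\Lambda_{QCD}^2\right)}, \qquad b_0 = \frac{33 - 2n_f}{12\pi},

which grows without bound as Q² approaches Λ²_QCD, reflecting the statement above that low-energy QCD cannot be treated perturbatively.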


In particular, the description of the proton as a composite state of interacting quarks and gluons cannot be provided using perturbative QCD, because the involved momentum range is too low. For this reason, the description is provided by empirical Parton Distribution Functions (PDF).

PDF

As previously mentioned, PDF can be roughly interpreted as the probability of having a proton constituent (quark or gluon, generically called parton) of given momentum within a proton. They cannot be computed theoretically and have to be measured from data.

The Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equations [21] provide a method to extrapolate PDFs, measured at a given energy, to a different energy range. Data from electron-proton colliders (HERA [22]) and LHC are used to fit PDF sets, which are then extrapolated to the desired energy scale (here, 8 TeV). An example of a set of PDFs obtained in this way is shown in Figure 1.2. PDFs are shown as functions of the fraction x of the proton momentum carried by the considered parton.

Figure 1.2: An example of parton distribution functions. The bands are x times the unpolarised parton distribution functions f_i(x) obtained in the NNLO NNPDF3.0 global analysis.

It is interesting to note that for low values of x, the PDF associated to gluons has much larger values than those for quarks. Since WW/WZ production at LHC (√s = 8 TeV) is in general characterised by relatively low x, gluon-initiated processes are more likely to happen than quark-initiated ones of the same cross section.

Various groups work on PDF fits and extrapolation, providing different PDF sets. In this analysis PDF sets are used both for the theoretical cross section calculation and for the Monte Carlo modelling. The sets CT10 [24], CTEQ61L [25] and MSTW2008 [26] are adopted in the Monte Carlo simulations used for the analysis. Details on how each set has been derived are provided in the associated references.

1.2 Phenomenology of diboson production in proton-proton collisions

At LHC (see chapter 2), proton-proton collisions take place at a center of mass energy of √s = 8 TeV (increased to 13 TeV in 2015/2016). Under these conditions, the major contribution to WW/WZ production is due to quark-antiquark scattering (qq′ → WW/WZ). Leading Order (LO) Feynman diagrams are shown in Figure 1.3: the s-channel (1.3(a)), t-channel (1.3(b)) and u-channel (1.3(c)) contributions are shown. The s-channel has a three-gauge-boson vertex, causing the sensitivity of this process to anomalous Triple Gauge Couplings, discussed in subsection 1.1.1.

Figure 1.3: Tree-level Feynman diagrams for associated WW/WZ production.

The theoretical WW/WZ cross section adopted as reference in this analysis has been computed up to Next-to-Leading Order (NLO) in perturbative QCD. NLO contributions are due to diagrams in which an additional gluon is emitted by initial-state quarks (e.g. of the form shown in 1.4(a)), as well as virtual gluon loops


(e.g. 1.4(b)). All diagrams up to order α_s g² are included, g and α_s being respectively the weak and strong couplings (for a detailed discussion on the calculation of this process, see [27] and references therein). The uncertainty due to missing higher orders contributes to the uncertainty on the theoretically predicted cross section.

In addition to the dominant qq′ → WW/WZ, the most relevant contribution to the WW/WZ cross section is due to gluon-gluon initiated WW production, including both non-resonant gg → WW production and resonant, Higgs-mediated gg → H → WW/WZ production.

Figure 1.4: Examples of NLO Feynman diagrams contributing to quark-antiquark initiated WW/WZ production (a), (b), and first-order contributions to gluon-gluon initiated WW/WZ production (c), (d), (e).

First-order diagrams for gluon-gluon initiated WW production are shown in Figure 1.4 (c) and (d), while 1.4(e) shows the lowest-order diagram for the resonant gg → H → WW production.

Technically, gg → WW is a Next-to-Next-to-Leading Order (NNLO) process, of order α_s²g². Nevertheless, its contribution to the total cross section is enhanced by the large gluon PDF at low x, which increases the probability of gluon-initiated interactions.

Theoretical predictions for both the resonant and non-resonant gg → WW cross sections are available [28]. The gg → WW contribution is about 5% of the total WW production cross section. However, it should be taken into account that, in this analysis, we measure the WW/WZ cross section exclusively in the phase space region defined by the analysis acceptance (known as the fiducial phase space, see chapter 6). The contribution of gg → H → WW/WZ in this phase space region was evaluated from Monte Carlo simulation and found to be negligible.

The gg → WW process, on the other hand, has been included in the cross section prediction using an approximate approach: the WW cross section σ_WW has been defined as the sum of the quark-antiquark and gluon-gluon initiated cross sections, σ_WW = σ_{qq′→WW} + σ_{gg→WW}, while the Monte Carlo simulation (section 1.3) used to evaluate the expected signal only includes qq′ → WW/WZ production.

1.2.1 WW/WZ cross section in proton-proton collisions: calculation strategy and sources of uncertainty

The actual computation of the cross section for WW/WZ production at hadron colliders is based on equation (1.11) [3].

\sigma(p_1 p_2 \to WW/WZ + X) = \sum_{n=0}^{\infty}\alpha_s^n(\mu_R^2)\sum_{i,j}\int dx_1\,dx_2\, f_{i/p_1}(x_1,\mu_F^2)\, f_{j/p_2}(x_2,\mu_F^2)\;\hat{\sigma}^{(n)}_{ij\to WW/WZ+X}(x_1 x_2 s,\mu_R^2,\mu_F^2) + \mathcal{O}\!\left(\frac{\Lambda^2}{M_W^2}\right)    (1.11)

The sum on the index n = 0, ..., ∞ runs over all perturbative orders of the strong coupling α_s (in practice, the sum is stopped at a specific order, NLO in this case), while the indices i, j represent the kind of proton constituent that can contribute to the interaction (quark or gluon). \hat{\sigma}^{(n)}_{ij\to WW/WZ+X} is the cross section for the scattering of a quark or gluon pair at the total squared energy x_1 x_2 s, producing a final state WW/WZ + X, computed at the n-th order. Here, s is the squared center of mass energy of the pp collision and x_1, x_2 represent the fractions of momentum carried by each constituent. The functions f_{i/p_1} and f_{j/p_2} are the PDFs (subsection 1.1.4) for the interacting partons of type i and j within the colliding protons 1 and 2.

Formula (1.11) is the result of an approximation in which the high-energy and low-energy components of the interaction are factorised: the low-energy part, namely the interactions of initial-state partons within the protons, is described by the PDFs, while the hard scattering is computed with a perturbative calculation at NLO.

The advantage of this approach is that the high-energy part of the process can be treated by perturbative calculation. On the other hand, this approximation introduces an additional, non-physical energy scale, called the factorisation scale or µ_F. Roughly speaking, µ_F sets the limit between which processes are considered to be 'high-energy' and which are considered to be 'low-energy'. As for the renormalisation scale discussed in subsection 1.1.4, µ_F is an arbitrary parameter emerging only when using perturbative calculation. The dependence on µ_F should be taken into account as a source of systematic uncertainty.
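A minimal numerical sketch of the factorised structure of (1.11) is given below, written in Python; the parton densities, the partonic cross section and all numbers are toy stand-ins (illustrative assumptions, not the inputs used in the thesis), and only the convolution of the PDFs with a partonic cross section via Monte Carlo integration is meant to be illustrated.

import numpy as np

rng = np.random.default_rng(42)
S = 8000.0 ** 2  # squared pp centre-of-mass energy in GeV^2 (sqrt(s) = 8 TeV)
X_MIN = 1e-4     # lower cut-off on the momentum fractions (toy choice)

def toy_pdf(x):
    # Toy parton density, steeply falling with x (illustrative only).
    return (1.0 - x) ** 3 / x

def toy_partonic_xsec(s_hat):
    # Toy partonic cross section in arbitrary units, non-zero above a ~2*m_W threshold.
    threshold = (2.0 * 80.4) ** 2
    return np.where(s_hat > threshold, 1.0 / s_hat, 0.0)

# Monte Carlo integration over the momentum fractions x1, x2 of the two protons.
n = 1_000_000
x1 = rng.uniform(X_MIN, 1.0, n)
x2 = rng.uniform(X_MIN, 1.0, n)
integrand = toy_pdf(x1) * toy_pdf(x2) * toy_partonic_xsec(x1 * x2 * S)
sigma = (1.0 - X_MIN) ** 2 * integrand.mean()
print(f"toy convoluted cross section (arbitrary units): {sigma:.3g}")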

Additional sources of systematic uncertainty are due to the choice of the renormalisation scale (µ_R in (1.11)) and to the uncertainties on the PDFs.

• PDFs are affected by uncertainties due to both the experimental measurements and the extrapolation to √s = 8 TeV. These uncertainties are provided by the working groups determining the PDF sets.

• Concerning the renormalisation scale, for the theoretical computation of the WW/WZ cross section the common choice of setting the renormalisation scale equal to the Z mass (µ_R = M_Z) has been adopted. A systematic uncertainty has been introduced to take into account the arbitrariness of this choice. It is evaluated by varying µ_R between (1/2)M_Z and 2M_Z.

• To avoid adding a further arbitrary parameter to the model, the typical choice when setting the factorisation scale is to set µ_F = µ_R = M_Z. As for µ_R, the systematic uncertainty associated to µ_F is evaluated by scaling µ_F by factors of 1/2 and 2 (a minimal numerical sketch of such a scale-variation envelope is given after this list).
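The sketch below shows how a scale-variation envelope of the kind described above can be formed; the function sigma is a toy stand-in with an invented scale dependence (an assumption for illustration), whereas in practice it would be the fixed-order calculation evaluated at the varied scales.

import itertools
import math

M_Z = 91.1876  # GeV

def sigma(mu_r, mu_f):
    # Toy cross section (pb) with a small, purely illustrative scale dependence.
    return 28.2 * (1.0 + 0.03 * math.log(mu_r / M_Z) - 0.02 * math.log(mu_f / M_Z))

nominal = sigma(M_Z, M_Z)
variations = [sigma(f_r * M_Z, f_f * M_Z)
              for f_r, f_f in itertools.product((0.5, 1.0, 2.0), repeat=2)
              if (f_r, f_f) != (1.0, 1.0)]
up = max(variations) - nominal
down = nominal - min(variations)
print(f"toy scale envelope: {nominal:.1f} +{up:.2f} -{down:.2f} pb")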

1.2.2 Theoretical WW/WZ cross section

Figure 1.5 shows the expected cross sections of some gauge boson production processes at LHC, as a function of the center of mass energy. For this analysis, we adopt as theoretical reference the cross section computed with the Monte Carlo generator mc@nlo [29] for the quark-antiquark initiated processes (qq′ → WW/WZ). Cross sections for qq′-initiated processes are computed at NLO; for details on the adopted Monte Carlo generators, see section 1.3.

To correct for the gg → WW contribution, the gg → WW cross section presented in [30] has been used.

The semileptonic decay channel is defined by one W boson decaying to an electron-neutrino or muon-neutrino pair, and the other W or Z boson decaying into a pair of quarks. Events in which the W decays to τ leptons are not considered in the definition of the decay channel of interest.


Figure 1.5: NLO electroweak boson production in pp collisions. The decay branching ratios of the W's and Z's into one species of leptons are included. For γγ and Vγ, pT cuts of 25 and 10 GeV are applied to photons, respectively. [27]

Table 1.1 summarises the cross sections adopted for the processes of interest decaying semileptonically³. Branching fractions are taken from [3]. We assume the branching fractions for a W decaying into an electron or a muon to be identical, as the effect of their difference is negligible with respect to the other systematic uncertainties affecting the cross section measurement.

It should be pointed out that the WZ cross section is roughly 20% of the WW one, while gg → WW is only about 5% of the total WW cross section. The total value of the WW + WZ → ℓνqq cross section before any kinematic selection is then given by (1.12). For an actual comparison with the measured result, the expected cross section should be computed in the fiducial phase space, by including an acceptance term. Since this computation requires the discussion of the adopted event selection, the actual expected cross section in the fiducial phase space will be presented later in this thesis (chapter 6).

\sigma_{WW+WZ\to\ell\nu qq} = 28.2\ ^{+3.4}_{-2.7}\%\,(\mathrm{scale}) \pm 1.8\%\,(\mathrm{PDF})\ \mathrm{pb}    (1.12)

³ [30] provides the cross section computation for gg → WW → ℓνℓν. The semileptonic cross section has been obtained by rescaling this result by the ratio of branching fractions for semileptonic and leptonic decays, B_{ℓνqq}/B_{ℓνℓν}, as taken from [3].


Process               Cross section (pb)   Scale uncertainty**   PDF+α_s uncertainty
qq′ → WW → ℓνqq       23.3                 +3.2/−2.6%            1.8%
qq′ → WZ → ℓνqq       4.87                 +4.5/−3.4%            1.8%
gg → WW → ℓνqq        0.805                +7.1/−9.7%            −

Branching fractions
W → ℓν                (10.86 ± 0.09)%
W → qq′               (67.41 ± 0.27)%
Z → qq                (69.91 ± 0.06)%

** The scale uncertainty is inclusive of the uncertainties due to the renormalisation and factorisation scales.

Table 1.1: WW/WZ production cross sections at √s = 8 TeV.

1.3 Calculation of physics observables via Monte Carlo generators

As shown in the previous sections, the computation of predictions for physical observables in hadron collisions involves the correction of divergences introduced by the perturbative expansion, as well as the modelling of non-perturbative low-energy effects. In addition, in proton-proton collisions a very large number of particles is produced, making this kind of calculation a very challenging theoretical and computational problem.

A widely adopted solution is to use techniques for numerical integration based on pseudo-random number generators. They are generally known as Monte Carlo methods. A review of the main features of general-purpose event generators used at the LHC can be found in [31]. In the following we will recall some very basic points.

The Monte Carlo approach separates the overall collision event into different steps: the high-energy collision (hard scattering), the Initial State Radiation, the Final State Radiation and the underlying event. Initial and Final State Radiation indicate the low-energy interactions taking place respectively before and after the hard scattering, while the underlying event includes all the softer interactions taking place in parallel to the hard collision.

The main steps of the simulation procedure will be briefly described in the following.

• Monte Carlo simulation begins with the computation of the matrix element describing the hard scattering. The calculation is done up to a given order precision in perturbation theory.


• The momenta of the colliding constituents depend on the non-perturbative PDFs (subsection 1.2.1) at the energy scale at which the hard scattering occurs: the results of the perturbative calculation are then convoluted with the PDFs as in (1.11) and integrated over the phase space.

• In addition, accelerated quarks and gluons undergo several gluon emissions. These are not included in the fixed-order calculation, as they represent contributions at higher order than the one considered in the computation of the hard scattering process.
Such effects are modelled using an approximate method, in which successive gluon emissions are represented as a parton shower: a parton (i.e. a quark or a gluon) of high energy splits into two particles of lower energies with a given probability, which depends on the parton energy (a description of the parton shower approach can be found e.g. in [31]; a toy sketch of the splitting idea is given after this list).
The parton shower scheme is used for both partons entering the hard scattering and those produced in the final state. Normally, in Monte Carlo simulation, the hard scattering calculation is performed first, then interfaced with the parton shower simulation.

• With this description, the final state is composed of a number of quarks and gluons, losing energy via perturbative radiation. When their energy becomes too low, QCD cannot be treated perturbatively and parton shower models are no longer valid. This last step is called hadronisation, as it describes the creation of observable hadrons from quarks and gluons. Hadronisation is only described via empirical models, not directly derived from QCD.
As a very last step, the decays of unstable hadrons are simulated, given their decay times and branching fractions.

• The combination of matrix element calculation, parton showers and hadronisation models describes the full scattering between quarks or gluons. However, in the LHC environment, additional soft interactions take place at the same time as the hard scattering. Two types are considered: the underlying event includes all the soft interactions of partons that have not been involved in the hard scattering process, while the PileUp is due to additional proton-proton interactions that overlap with the hard interaction.
As for hadronisation, the underlying event is composed of low-energy interactions and can only be modelled empirically. Simulated PileUp interactions are added to the simulated collision event; a more detailed definition of PileUp will be given when discussing the experimental aspects (chapter 2).
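The following toy sketch (purely illustrative; it is not the shower algorithm of any generator used in the thesis) shows the iterative splitting idea referred to above: a high-energy parton is split into two lower-energy ones until an energy cut-off is reached, below which perturbative QCD no longer applies.

import random

def toy_shower(energy, cutoff=1.0):
    # Recursively split a parton's energy until all fragments fall below 'cutoff'.
    if energy < cutoff:
        return [energy]  # below the cut-off: hand over to (empirical) hadronisation
    z = random.uniform(0.1, 0.9)  # energy fraction taken by one daughter (toy choice)
    return toy_shower(z * energy, cutoff) + toy_shower((1.0 - z) * energy, cutoff)

random.seed(1)
partons = toy_shower(100.0)  # shower a 100 GeV parton
print(len(partons), "final partons; total energy", round(sum(partons), 2), "GeV")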

Adopted generators

A variety of general-purpose Monte Carlo generators exist. In this analysis, different generators have been used to model the different processes contributing to the final state of interest.

Some of the most common Monte Carlos provide the simulation of hard scattering, parton showering, underlying event, hadronisation and the subsequent unstable hadron decays. As an example, the sherpa generator [32] provides modelling of the full scattering process at LO. It is used in this analysis for modelling the main background processes (W or Z produced in association with jets, see section 5.2) and for systematic evaluations on the signal.

Other adopted Monte Carlos, on the other hand, only model the hard scattering and have to be interfaced with other generators for the parton showering. An example is the mc@nlo generator [29], used here to model the WW/WZ signal. It is an NLO matrix element generator, and is interfaced with herwig [33] and jimmy [34] for parton showering, hadronisation and underlying event. herwig includes the simulation of hard scattering, soft collisions and parton showering, while jimmy is specific for the modelling of multi-parton interactions.

Another NLO generator, powheg, is used here for the modelling of background from top quark production (section 5.3). It is interfaced with pythia [35, 36] or herwig. pythia could also be used to model the full interaction.

The generator used for each signal or background contribution in this analysis will be recalled when discussing the modelling of each contribution (chapter 5). For full details and references on the Monte Carlo generators adopted for LHC physics, see [31].

In this analysis, both NLO and LO generators have been used. It should be pointed out that, to provide a sufficiently good modelling of LHC physics, at least NLO precision is needed. However, due to the increasing complexity of NLO calculations, some processes are only modelled at LO.

LO simulation is only reliable for the shape of kinematic distributions, while the overall normalization is often badly reproduced due to large higher-order corrections. Usually, to provide a sufficiently good modelling, the cross section is computed separately at higher-order precision, and the LO simulation is then scaled to the obtained NLO cross section, correcting for the mismodelling in normalization. In this analysis, the NLO generators mcfm [37] and top++2.0 [38] (specific for top production) are used for this purpose.

Different Monte Carlo generators are based on similar principles, but they are not identical as different physical and modelling choices are implemented. Differences


between predictions from different generators should be taken into account as systematic uncertainties on any measurement that relies on Monte Carlo for the signal or background description.

In addition, as pointed out in the previous section, the QCD calculation of observables in proton-proton collisions relies on the arbitrary choice of the renormalisation and factorisation scales, as well as on the adopted PDF set. This is true for Monte Carlo results as well, and both the renormalisation/factorisation scale and the PDF uncertainties should be considered as further sources of systematic uncertainty on the measurement.

Interaction with the detector

Once the physical process has been generated, the Monte Carlo event must be processed to simulate the interaction with the ATLAS detector and reproduce the final observable as it is measured experimentally.

The final output is designed to be as close as possible to the real detector output, in order to be processed and reconstructed by the same software used for real data. The dedicated software GEANT4 is used for simulating the interaction with the detector. It provides a full, accurate simulation of the detector response, implementing the detector geometry as measured on the real detector. However, event processing with GEANT4 is very time-consuming. Since very large Monte Carlo statistics is needed for ATLAS physics analyses, a fast simulation software (Atlfast-II [39]) has been introduced. It provides a less accurate description of the detector, which is anyway sufficient in many cases.

In this analysis, both GEANT4 full simulation and Atlfast-II Monte Carlo samples have been used. Checks have been performed to ensure that the use of Atlfast-II samples does not cause relevant mismodelling effects.


Chapter 2

The ATLAS experiment at LHC

This thesis is based on 20 fb⁻¹ of proton-proton collision data, collected by the ATLAS experiment in 2012. Proton collisions take place at a center of mass energy of √s = 8 TeV within the Large Hadron Collider at CERN.

In this chapter the main features of the accelerator and of the detector apparatus are summarised. Details of how the physical objects of interest for this analysis, such as electrons, muons or jets, are reconstructed in ATLAS will be discussed in the next chapter.

2.1 Proton-proton collisions at LHC

The Large Hadron Collider (LHC) is a 27 km circular accelerator designed to provide proton-proton collisions at a center of mass energy of √s = 14 TeV. Collisions between lead ions can also be provided at lower energy.

Proton beams are divided into consecutive bunches and bent by 1232 superconducting dipole magnets, along two separate beam pipes [40]. Collisions occur at four sites, where the four main LHC experiments are located: ATLAS, CMS, ALICE and LHCb. The ALICE [41] and LHCb [42] detectors are designed for specific physics studies (heavy-ion physics and B-physics respectively), while ATLAS [43] and CMS [44] are general-purpose detectors with the aim of measuring the largest possible variety of energetic particles.

The main design goals were:

• the discovery of the theoretically predicted Higgs boson, completing the Standard Model particle content;

• providing hints of the existence of physics beyond the SM, both through the search for new particles at previously unexplored high energies and through precision measurements of very rare SM processes.


A key parameter for the machine's potential to reach these objectives is the luminosity. The instantaneous luminosity L quantifies the rate of colliding particles provided by the machine and depends on the beam parameters according to relation (2.1): n_b is the number of colliding bunches, f is the beam frequency, N is the average number of particles per bunch and Σ describes the beam transverse effective size [45].

\mathcal{L} = n_b\, f \cdot \frac{N^2}{\Sigma^2}    (2.1)

High instantaneous luminosity is necessary to detect very rare processes. Given a generic process X, with cross section σ_X at the LHC center of mass energy, its event rate R is proportional to the instantaneous luminosity with the relation:

R_X = \sigma_X \cdot \mathcal{L}    (2.2)

Equivalently, the total number of events for the process X occurring in a given time interval T can be written in terms of the integrated luminosity L_int, as shown in (2.3). A summary of the technique used to measure the integrated luminosity in ATLAS is given in section 2.5.

N_X = \sigma_X \cdot \mathcal{L}_{int} = \sigma_X \cdot \int_T \mathcal{L}\, dt    (2.3)
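As an order-of-magnitude illustration of relation (2.3) (not a result quoted in the thesis), the theoretical WW/WZ → ℓνqq cross section of (1.12) combined with the integrated luminosity of the 2012 dataset used here gives the raw expected yield before any selection:

sigma_pb = 28.2          # sigma(WW+WZ -> lnu qq) in pb, from (1.12)
lumi_fb = 20.0           # integrated luminosity in fb^-1 (2012 dataset)
lumi_pb = lumi_fb * 1e3  # convert fb^-1 to pb^-1

n_expected = sigma_pb * lumi_pb
print(f"expected WW/WZ -> lnu qq events before any selection: {n_expected:.0f}")
# roughly 5.6e5 events; acceptance, efficiencies and backgrounds determine what is actually usable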

Figure 2.1(a) shows the integrated luminosity collected by the ATLAS detector in all phases of data-taking up to November 2016. During LHC Run 1, ATLAS collected 5 fb⁻¹ of data at √s = 7 TeV in 2011 and 22.7 fb⁻¹ at √s = 8 TeV in 2012.

After a long shutdown, during which the detector was upgraded, the LHC Run 2 started in 2015 at √s = 13 TeV. Run 2 data taking is currently ongoing. The very high luminosity needed for the physics of interest causes a very large flux of particles through the detector. During the 2012 LHC run, on average 20.7 collisions occurred during each bunch crossing, a bunch crossing taking place every 50 ns. The distribution of the number of collisions per bunch crossing during the 2012 data taking is shown in 2.1(b) [1].

In these conditions, many non-interesting collisions occur in addition to the hard scattering of interest. The resulting additional radiation is called PileUp. A distinction is made between 'In Time PileUp', consisting of particles from collisions in the same bunch crossing, and 'Out of Time PileUp', generated by particles from collisions in previous bunch crossings.

The detector should have specific characteristics to cope with such a large particle flux. Fast sensors and electronics are needed, as well as high granularity to distinguish overlapping events. The detector's components have to be radiation-hard and the largest possible coverage should be provided, to be able to detect the largest fraction of particles from the collision.

Figure 2.1: (a) Cumulative luminosity delivered to ATLAS during stable beams for high-energy pp collisions, as a function of time. (b) Luminosity-weighted distribution of the mean number of interactions per crossing for the full 2012 pp collision dataset; details can be found in [46]. Plots from [1].

In addition, the high rate of events requires an efficient trigger system. The background rejection should be large, in order to produce a sufficiently low output rate for data to be recorded.

Finally, the algorithms adopted to reconstruct physical objects from the detector outputs also have to be robust against the effects of a large PileUp.

2.2 The ATLAS detector

In this section, an overview of the main ATLAS detector features is presented; the improvements added during the upgrade after Run 1 will not be discussed, as they do not affect the data used for this analysis.

ATLAS (A Toroidal LHC ApparatuS) is a cylindrically shaped general-purpose particle detector, composed of different detectors located at increasing radius. A schematic view of the ATLAS detector is shown in Figure 2.2; it is 44 m long and has a diameter of 25 m. Five regions are defined within the detector: the barrel region (i.e. the central cylindrical section along the z-axis), two lateral end-caps and the two forward regions, corresponding to the part of the end-cap regions closer to the beam axis.


Figure 2.2: Cut-away view of the ATLAS detector. [43]

A right-handed coordinate system is used, with origin at the nominal interaction point, with the ẑ direction tangent to the beam pipe, ŷ pointing upwards and x̂ pointing to the center of the LHC. Commonly, spherical coordinates are used: φ is the azimuthal angle in the xy-plane, while the pseudorapidity η = −ln(tan(θ/2)) is used instead of the polar angle θ. A four-vector representing a physical object in the detector is commonly defined in terms of its energy E, its coordinates η and φ, and the projection of its momentum onto the transverse plane (or transverse momentum) pT.

Distances between physical objects are conventionally defined in the η–φ space: ΔR = √(Δη² + Δφ²).
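As a concrete illustration of these conventions, a minimal sketch (using only the definitions quoted above; the function names are chosen here purely for illustration) could be:

    import math

    def pseudorapidity(theta):
        """eta = -ln(tan(theta/2)), with theta the polar angle in radians."""
        return -math.log(math.tan(theta / 2.0))

    def delta_r(eta1, phi1, eta2, phi2):
        """Distance in eta-phi space, wrapping the phi difference into [-pi, pi)."""
        dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
        return math.hypot(eta1 - eta2, dphi)

    print(f"eta at theta = 90 deg: {pseudorapidity(math.pi / 2):.2f}")        # 0.00 (central)
    print(f"eta at theta = 10 deg: {pseudorapidity(math.radians(10)):.2f}")   # ~2.44 (forward)
    print(f"example dR: {delta_r(0.5, 0.1, -0.3, 3.0):.2f}")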

Three main sub-detector systems constitute the ATLAS detector: a tracking system at the innermost radius (Inner Detector), a calorimeter system in the middle and a muon-dedicated detector (Muon Spectrometer) at the largest radius. The whole system is within a magnetic field, providing the necessary track bending for momentum measurements. Two main systems of magnets are present: a solenoid, placed between the Inner Detector and the calorimeters, and three systems of 8 coils each generating a toroidal magnetic field, located one in the barrel and one in each of the two end-caps.

A detailed review of the ATLAS experiment can be found in [43]. In the following, we will describe the most relevant features of each sub-detector system.

2.2.1 The Inner Detector


Figure 2.3: Cut-away view of the ATLAS Inner Detector. [43]

The Inner Detector is the ATLAS tracking system, composed of three devices at increasing radius, and provides high-resolution trajectory and momentum measurements. It covers the central region, up to |η| < 2.5, and is immersed in a 2 T solenoidal magnetic field.

Very high granularity is needed to resolve the very large track density close to the collision point, as well as low thickness to reduce particle energy loss and multiple scattering.

The innermost detector is the Pixel Detector, made up of three layers of silicon pixel detectors, each one providing a measurement of the R–φ and z position with resolution σ_{R–φ} × σ_z = (10 × 115) µm².

At larger radius, the Silicon Microstrip Tracker (SCT) is composed of four layers of 6.4 cm long silicon strips with a pitch of 80 µm. Each layer is divided into two sub-layers: the strips of the second sub-layer form an angle of 40 mrad with respect to those of the first sub-layer, in order to measure both the z and φ coordinates. This system provides four more space points (in addition to the three points from the pixels) with a resolution of σ_{R–φ} × σ_z = (17 × 580) µm².
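These resolutions can be roughly cross-checked from the strip pitch and stereo angle alone. The sketch below is a back-of-the-envelope estimate assuming a uniform, binary-readout response, not an official ATLAS calculation:

    import math

    pitch = 80e-6        # strip pitch [m]
    stereo = 40e-3       # stereo angle between the two sub-layers [rad]

    sigma_strip = pitch / math.sqrt(12)        # single-strip resolution, ~23 um
    sigma_rphi = sigma_strip / math.sqrt(2)    # two crossed sub-layers, ~16 um (quoted: 17 um)
    sigma_z = sigma_strip / math.sin(stereo)   # small stereo angle dilutes z, ~580 um (quoted: 580 um)

    print(f"sigma_R-phi ~ {sigma_rphi * 1e6:.0f} um, sigma_z ~ {sigma_z * 1e6:.0f} um")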

Both the precision tracking detectors (pixels and SCT) are arranged on concentric cylinders around the beam axis in the barrel region, while in the end-cap regions they are located on disks perpendicular to the beam axis.

The outermost device is the Transition Radiation Tracker (TRT), formed by straw tubes filled with a gas mixture of 70% Xe, 27% CO2 and 3% O2; the straws are placed

in 73 planes parallel to the beam axis in the barrel region while they are arranged radially in 160 wheels in the end-caps. The TRT provides a large number of hits


(on average 36 per track), but it is not sensitive to the z coordinate. Its resolution is σ_{R–φ} = 130 µm.

The TRT is also designed to identify transition radiation photons emitted by relativistic electrons, contributing to electron identification.

2.2.2 The calorimeter system

The ATLAS calorimeter system is designed to provide particle energy measurements. High resolution is needed for high-energy jets and Missing Transverse Energy (E_T^miss) measurements. These tasks require a large η coverage; in addition, the detector thickness should be sufficient for a good containment of electromagnetic and hadronic showers. Hermeticity is also essential for the E_T^miss measurement and to provide good acceptance on rare physics events.

Figure 2.4 shows the ATLAS calorimeter system. It covers the region up to |η| < 4.9 and is divided into two main sections: a central part, including the barrel and end-cap sections, and a forward region, located at very large |η|.

All sections are provided with electromagnetic and hadronic calorimeters. The main features of the central electromagnetic, central hadronic and forward calorimeters will be summarised in the following.


The Electromagnetic calorimeter

The electromagnetic calorimeter is located at lower radius and measures the energy of electrons and photons. It is divided into a barrel section at central η and two end-caps. Its total thickness is > 22 radiation lengths in the barrel and > 24 radiation lengths in the end-caps.

The ATLAS electromagnetic calorimeter is a liquid argon (LAr) sampling detector with accordion-shaped kapton electrodes and lead absorber plates, providing full coverage in φ.

Both the barrel and end-cap calorimeters are segmented in depth. In the region covered by the Inner Detector (|η| < 2.5), the calorimeter layer at smaller radius is finely segmented in η, allowing accurate precision measurements. The central region (|η| < 1.8) is also equipped with a presampler, an instrumented argon layer providing a measurement of the energy lost due to the passive material in front of the electromagnetic calorimeter.

The energy resolution for electrons is σ_E/E = 10%/√E ⊕ 0.7%.
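The ⊕ symbol denotes a sum in quadrature of the stochastic and constant terms. A minimal numerical illustration (assuming, as is conventional for such parametrisations, that E is expressed in GeV) is:

    import math

    def em_resolution(E):
        """Fractional resolution sigma_E/E for 10%/sqrt(E) (+) 0.7%, with E in GeV."""
        stochastic = 0.10 / math.sqrt(E)
        constant = 0.007
        return math.hypot(stochastic, constant)   # quadrature sum

    for E in (10, 50, 100):
        print(f"E = {E:3d} GeV: sigma_E/E ~ {100 * em_resolution(E):.1f}%")
    # ~3.2% at 10 GeV, ~1.6% at 50 GeV, ~1.2% at 100 GeV

The same quadrature structure applies to the hadronic and forward calorimeter parametrisations quoted below, with larger stochastic and constant terms.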

The Hadronic calorimeter

The hadronic calorimeter system is composed of different detectors in different η regions.

The central region (|η| < 1.7) is covered by the Tile Calorimeter, a sampling calorimeter using steel as absorber and scintillating tiles as active material. Groups of scintillating tiles define a cell. The calorimeter is divided into 64 azimuthal modules; each module is segmented into cells both in η and in the radial direction. In this way energy clusters (used as input to the reconstruction of jets, subsection 3.6.1) are measured both in direction and in energy. Both sides of each scintillating tile are read out by wavelength shifting fibers into two separate photomultiplier tubes. The Tile Calorimeter is divided into a barrel region (up to |η| ≈ 1) and two extended barrels (from |η| ≈ 1 up to |η| ≈ 1.7).

The end-cap region (1.5 < |η| < 3.2) is covered by the Hadronic End-Cap calorimeter, a LAr–copper sampling calorimeter located directly behind the end-cap electromagnetic calorimeter. It is designed in the same way as the electromagnetic LAr calorimeter.

Globally, the hadronic calorimeter is 9.7 interaction lengths thick in the barrel and 10 interaction lengths in the end-caps. Its overall design energy resolution is σ_E/E = 50%/√E ⊕ 3%.

The Forward calorimeter

The forward calorimeter covers the region 3.1 < |η| < 4.9, which is characterised by intense radiation. The forward calorimeter is a LAr sampling calorimeter: it is composed


of a first copper layer, optimised for electromagnetic measurements, and two further layers, made of tungsten, which mainly measure the energy of hadronic interactions. The forward calorimeter resolution on the jet energy is σ_E/E = 100%/√E ⊕ 10%.

2.2.3 The Muon Spectrometer

The Muon Spectrometer is located outside the calorimeter system and is designed to provide a high-resolution measurement of the muon momentum. A dedicated muon detector is required because muons behave as minimum-ionising particles in the calorimeters, so their energy cannot be satisfactorily measured there.

In the Muon Spectrometer, muon trajectories are measured by three layers of tracking chambers; the trajectory bending needed for the momentum measurement is provided by the magnetic field generated by the toroid magnets. Information from the Muon Spectrometer and the Inner Detector is combined in determining the muon momentum.

Different technologies have been selected to instrument various regions of the muon detector, depending on the granularity, ageing properties and radiation hardness needed. A schematic view of the ATLAS muon system is shown in Figure 2.5.

Figure 2.5: Cut-away view of the ATLAS muon system. [43]

(38)

The precision tracking chambers (Monitored Drift Tubes and Cathode Strip Chambers), providing the best momentum resolution, are arranged in three layers and cover the region |η| < 2.7.

Cathode Strip Chambers have a higher granularity and are used in the first layer over 2 < |η| < 2.7: they are multiwire proportional chambers with cathode planes segmented into strips in orthogonal directions. Cathode Strip Chambers have a 40 µm resolution in the bending plane (R–η) and 5 mm in the transverse plane. Monitored Drift Tubes consist of three to eight layers of drift tubes, measuring the coordinates in the bending plane with an average resolution of 80 µm per tube. In addition, the region |η| < 2.4 is covered by Resistive Plate Chambers and Thin Gap Chambers: they are designed to provide a very fast response and are used as the trigger system. They also provide a measurement of the muon coordinate in the φ direction, orthogonal to that determined by the Monitored Drift Tube chambers. To allow a good momentum resolution, a high-precision optical alignment system continuously monitors the positions and internal deformations of the chambers.

2.3 Trigger and Data Acquisition

The Trigger and Data Acquisition (TDAQ) systems are designed to select and store the physically interesting events. The proton-proton collision rate amounts to about 1 GHz and has to be reduced to the maximum affordable event recording rate, 200 Hz.

The ATLAS TDAQ is divided into three levels: Level-1 (L1), Level-2 (L2) and the Event Filter (EF). The latter two are commonly referred to as the High Level Trigger (HLT).

The trigger system is configured via trigger menus: each menu defines a sequence of reconstruction and selection steps for specific objects and is often referred to simply as ‘a trigger’. Triggers used in this analysis will be presented in chapter 4. The L1 trigger is hardware-based and reduces the rate to about 75 kHz, with a maximum latency of 2.5 µs. It searches for high transverse-momentum muons, electrons, photons, jets and τ-leptons decaying into hadrons, as well as large missing and total transverse energy. In the case of the electron/photon and hadron/τ triggers, calorimetric isolation can be required.

L1 employs only a limited amount of the total detector information: trigger chambers in the Muon Spectrometer are used to detect high transverse momentum muons, while calorimeter selections are based on reduced-granularity information from all the calorimeters.

In each event, the L1 trigger also defines one or more Regions-of-Interest (RoI’s), i.e. the η and φ coordinates of those detector regions where L1 has identified interesting features.


Data from events accepted by L1 are read out from the detector front-end electronics and subsequently sent to the data acquisition system. Part of this information is used by L2, which is a software-based trigger. The L2 selection is seeded by the RoI information provided by the L1 trigger; all the available detector data within the RoI’s are used at full granularity and precision, applying stricter selection cuts. L2 reduces the trigger rate to approximately 3.5 kHz, with an event processing time of about 40 ms.

Finally, the EF reduces the event rate to roughly 200 Hz. Its selections are implemented using information from the whole detector, applying offline analysis procedures within an average event processing time of the order of four seconds. Events passing the EF are stored and available for offline analysis.
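The rate reduction of the full chain can be summarised with a simple calculation using the rounded rates quoted above (an illustrative estimate, not official figures):

    # Approximate rejection factors along the ATLAS Run-1 trigger chain (rounded rates from the text)
    rates_hz = {"collisions": 1e9, "L1": 75e3, "L2": 3.5e3, "EF": 200.0}
    stages = list(rates_hz)
    for prev, cur in zip(stages, stages[1:]):
        factor = rates_hz[prev] / rates_hz[cur]
        print(f"{prev:>10s} -> {cur:<2s}: rejection factor ~ {factor:,.0f}")
    print(f"overall rejection ~ {rates_hz['collisions'] / rates_hz['EF']:,.0f}")   # ~ 5,000,000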

2.4 Validation of the data quality

After being recorded, data have to be filtered to remove events affected by non-optimal data taking conditions, e.g. hardware problems affecting a component of the ATLAS detector at the time the event was recorded.

Data are grouped into temporally contiguous blocks, the basic unit being the luminosity block, whose typical duration is approximately one minute. Events belonging to the same luminosity block are assumed to be taken under constant and controlled conditions of the accelerator, the detector and the trigger. A partial event reconstruction (about 10% of the event size) is performed and quality criteria are applied to exclude those luminosity blocks affected by serious data quality problems. Tighter data quality constraints are applied once the events are fully reconstructed, to account for any detector problems.

Events passing this first step are available for full reconstruction and subsequent analysis, while the second step is applied as part of the analysis event selection.
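In practice, the second step amounts to keeping only events whose (run number, luminosity block) pair belongs to a list of validated blocks. The sketch below illustrates the idea; the dictionary format, the run numbers and the function name are hypothetical and chosen only for illustration:

    # Hypothetical good-luminosity-block filter (illustrative; not the actual ATLAS implementation)
    good_lumi_blocks = {
        204158: [(1, 120), (130, 450)],   # run number -> list of (first, last) validated blocks
        204442: [(5, 300)],
    }

    def passes_data_quality(run, lumi_block):
        """Return True if the event's luminosity block lies in a validated range."""
        return any(lo <= lumi_block <= hi for lo, hi in good_lumi_blocks.get(run, []))

    print(passes_data_quality(204158, 125))   # False: block 125 falls in the excluded 121-129 gap
    print(passes_data_quality(204442, 42))    # True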

2.5 Measurement of the luminosity

As already described in section 2.1, the luminosity is a key input for any cross-section measurement. The integrated luminosity is needed to convert the number of events measured by a given analysis into a cross section, as shown in (2.3). The integrated luminosity of a given data sample is determined by summing the instantaneous luminosity (defined in section 2.1) over all the luminosity blocks (see previous section) constituting the sample. Luminosity blocks are assumed to contain data taken under uniform conditions, including the luminosity itself. The instantaneous luminosity is measured by dedicated algorithms and averaged over each luminosity block. The average instantaneous luminosity of each luminosity block passing the data quality criteria (section 2.4) is multiplied by the luminosity block duration to provide the integrated luminosity of the sample.
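Schematically, this bookkeeping implements L_int = Σ_blocks ⟨L⟩ · Δt over the blocks passing data quality; a sketch with hypothetical numbers and variable names is:

    # Illustrative integrated-luminosity sum over luminosity blocks (dummy numbers)
    # Each entry: (mean instantaneous luminosity [cm^-2 s^-1], block duration [s], passed data quality?)
    lumi_blocks = [
        (6.5e33, 61.0, True),
        (6.4e33, 60.0, True),
        (6.3e33, 59.0, False),   # rejected by data quality: not counted
    ]

    L_int_cm2 = sum(L * dt for L, dt, good in lumi_blocks if good)
    L_int_pb = L_int_cm2 * 1e-36      # 1 pb^-1 = 1e36 cm^-2
    print(f"L_int = {L_int_pb:.3f} pb^-1")   # ~ 0.78 pb^-1 for these dummy blocks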
