Università di Pisa

Corso di Laurea Magistrale in Fisica

Tesi di Laurea

Search for exotic resonances in the ℓνqq Final State of the Diboson channel in pp collisions at √s = 13 TeV at the ATLAS experiment

Relatore:

Candidato:

Prof.ssa Chiara Roda

Marco Montella


Contents

1 Research Context
  1.1 Physics at the ATLAS experiment at LHC
  1.2 Diboson Searches in LHC Run I
  1.3 The Heavy Vector Triplet Model

2 The ATLAS experiment at the LHC
  2.1 The Large Hadron Collider
    2.1.1 The Inner Detector
    2.1.2 The ATLAS Calorimeter
    2.1.3 The Muon Spectrometer
    2.1.4 The Trigger and Data Acquisition System

3 Object Reconstruction
  3.1 Electron Definition
  3.2 Muon Definition
  3.3 Jet Definition
    3.3.1 Small-R Jets
    3.3.2 Large-R Jets
    3.3.3 Track Jets
  3.4 Missing Transverse Energy Definition - $E_T^{\rm miss}$

4 Analysis Overview
  4.1 Analysis Purpose and Strategy
  4.2 Event Selection
  4.3 Signal and Control Region Definition
  4.4 Performance of Reconstructed Physics Objects
    4.4.1 Leptons
    4.4.2 Missing Transverse Energy
    4.4.3 FatJets

5 Study On The Event Selection
  5.1 The ℓνqq Event Selection
    5.1.1 Variables Of Interest
    5.1.2 Efficiency And Background Rejection
    5.1.3 Choice Of Working Points
  5.2 Tests for further background rejection
  5.3 Boson Tagging Performance
    5.3.1 Substructure Cut - $D_2^{(\beta)}$
    5.3.2 Mass Window Optimization

6 Studies for an Alternative Signal Region Definition
  6.1 Issues With The Standard Signal Region
  6.2 Track Jets Driven Selection
    6.2.1 Track Assisted Mass
    6.2.2 Track Jet Topology
    6.2.3 Concluding Remarks

7 Data/Monte Carlo Comparison and Fit Results
  7.1 Data/Monte Carlo Comparison
  7.2 Fit Strategy
  7.3 Results
    7.3.1 Limits on the HVT Mass

8 Conclusions

Appendices

A Simulated Samples Information


Introduction

Searches for resonances decaying to pairs of massive gauge bosons are one of the major tools to unravel hints of physics beyond the Standard Model. Such resonances feature in a large number of Standard Model extensions, such as extended gauge models, warped extra dimensions and technicolour, and are generally predicted to occur at the TeV scale. The aim of the analysis covered in this thesis is to search for a possible resonance decaying to a pair of W bosons or into a W and a Z boson, by investigating the invariant mass spectrum of the semileptonic (ℓνqq) final state using proton-proton collisions at √s = 13 TeV collected by ATLAS in 2015.

A previous search in the fully hadronic final state of the Diboson channel, carried out by ATLAS on √s = 8 TeV data, highlighted a mild (3.4σ significance) excess at values of the invariant mass close to 2 TeV, piquing the interest of the physics community. The ℓνqq search on 2015 data is expected to slightly improve the sensitivity of the fully hadronic Run I analysis and, covering the diboson final state with the second-highest branching fraction, is of primary importance, as it can serve as a prompt and independent validation (or refutation) of the aforementioned excess.

While the bulk of the analysis was carried out by the members of the analysis group in the months leading up to the 2015 End Of Year Event, this thesis reports on a number of original optimization and performance studies, conceived as a support to the main analysis and carried out personally, in parallel to the work performed by the main analysis group. The structure and contents of this thesis are briefly described below.

In the first Chapter, I briefly introduce the reader to the experimental and historical context in which the analysis covered by this thesis is carried out. A summary of the previous results published by the ATLAS experiment is given, with a focus on past searches in the Diboson channel. I also provide a short summary of the main phenomenological features of the theoretical framework used as a benchmark model for my studies.

Chapter 2 is dedicated to the description of the experimental setup at ATLAS during the 2015 data taking, with a brief account of the purpose, experimental philosophy and critical design performance parameters being included for each of the main ATLAS sub-detectors.

Chapter 3 describes the experimental procedures employed in the reconstruction of the physical objects and variables used in the analysis. Among these procedures, particular prominence is given to those involving the hadronic event, with a special focus on the relatively innovative techniques used to reconstruct and handle large-R jets, the defining feature of the ℓνqq final state in the kinematic range targeted by the analysis.

The first of the original studies I carried out is reported in Chapter 4, and aims at determining the reconstruction performance of the variables contributing to the calculation of the fitted observable, as well as of those featured prominently in the event selection. For each variable, the performance delivered by ATLAS is examined by means of a comparison between the truth and the reconstructed levels of the Monte Carlo simulation. This study allowed me to gauge the closeness of the reconstructed event to the original simulation of the physical process ("truth" level).

A second original study, described in Chapter 5, was carried out with the purpose of achieving a better understanding of the event selection employed in the analysis, as well as of optimizing the chosen working points for the kinematic and topological cuts applied to candidate events fulfilling the pre-selection requirements. A particular focus was once again given to the boson tagging procedure, which was unfolded into its two defining stages and analysed in detail.

The results of the two aforementioned studies prompted a preliminary investigation into the possibility of including in the event selection the information on the hadronic part of the event reconstructed with the ATLAS Inner Detector. Within this survey, reported in Chapter 6, a set of track-based variables describing the signal large-R jet was tested to assess whether a further enhancement of a possible excess in the invariant mass spectrum could be achieved by applying requirements on them. As the track-driven tagging of the substructure of large-R jets is a novel and poorly optimized field, a number of guidelines for future developments on the subject are tentatively suggested.

The final chapter of the thesis is dedicated to the two conclusive stages of the ℓνqq analysis: the comparison between the distributions of a select number of variables as reconstructed in data and in simulation, and the final fit procedure. In the first section of the chapter, both the variables of primary interest to the analysis and the variables examined in Chapter 6 were checked for any possible disagreement across the various selections used in the analysis. The data/Monte Carlo comparison was performed by personally applying the standard analysis event selection on the provided datasets. In the second part of Chapter 7, I give an account of the procedure employed by the analysis team to perform the fit on the invariant mass distribution of the final state and of the statistical tools used to extract the eventual results, which are then reported in the last section of the chapter and thesis.


Chapter 1

Research Context

Since the seminal studies on the nature of cosmic radiation carried out in the 1940s and the discovery of the first new particles produced by artificial means a decade later, Particle Physics has progressively established itself as a prime tool of investigation into the ultimate order and structure of the physical world. Further experimental and theoretical advances in the following decades, such as the proposal of a unified Electroweak theory in 1967 [1] and the experimental confirmation of the existence of the nucleon constituents, eventually led to the theories of three of the four known fundamental interactions being joined into an overarching theoretical framework, known as the Standard Model (Fig. 1.1) of Particle Physics [2].

Figure 1.1: Schematic representation of the Standard Model of particle physics [3].

Since the Standard Model was finalised in the mid-1970s, important landmarks in contemporary Particle Physics, such as the discovery of the electroweak gauge bosons W/Z at CERN in 1983 and the observation of the top quark at the Tevatron in 1995, have strengthened our confidence in the predictive power of this framework.

While extremely successful, the Standard Model is not free from theoretical problems and open questions. A major example of such unresolved issues is the observed mass of the Higgs boson, O(100 GeV). A relatively low value of $M_H$ is required in order to produce a consistent model for the observed masses of the weak gauge bosons, and yet this appears to be in contrast with the calculated contributions of higher-order Feynman diagrams, which would push the Higgs mass to values comparable to the energy scale where the description of physical phenomena provided by the Standard Model is expected to break down, generally identified with the Planck scale, $M_{\rm Planck} \sim 10^{18}$ GeV.

This apparent discrepancy is however not a real inconsistency of the Standard Model, but rather a problem of naturalness, as it would require an exceptional fine tuning for all the higher-order contributions to the Higgs mass to cancel out reciprocally. The Higgs mass problem is known as the Hierarchy Problem [4], and through the years many possible solutions have been suggested by the physics community. Among them, Supersymmetry [5] is of particular prominence, postulating the existence of a new generation of super-partners to the Standard Model particles, whose contributions cancel the loop contributions of each SM particle to the Higgs mass, preventing it from reaching the Planck scale. Composite Higgs models [6], which assume the Higgs boson to be a bound state of two undiscovered fermions, in analogy to the QCD representation of mesons, are another popular solution to the Hierarchy Problem. While neither these nor the other unresolved theoretical and experimental points invalidate the SM as a valid tool for predicting phenomena of the microscopic world, it is widely speculated that, as higher energy scales are investigated, at some point a new wave of phenomena will arise that will require extensions to the Standard Model to be fully understood and fitted within a cohesive theoretical body.

Today, the ATLAS and CMS experiments at the Large Hadron Collider at CERN spearhead the effort to progress the understanding of the physics of fundamental interactions. Research carried out at these experiments covers an extensive programme of searches for Beyond Standard Model physics, in parallel with a continued endeavour to further and extend our knowledge of already established processes at energy scales and in experimental conditions yet unexplored.

1.1 Physics at the ATLAS experiment at LHC

Since its inception, the ATLAS experiment has been designed to provide competitive measurements of the largest possible array of Standard Model phenomena accessible at proton-proton colliders, as well as to be a suitable experimental environment for potential breakthroughs into new physics.

Primary beam setup operations at the LHC began on 10 September 2008 and were halted 9 days later, when a major quench accident significantly damaged the magnet lattice across LHC Sectors 3 and 4, requiring the entire collider to undergo an extended period of forced inactivity to replace the damaged units and perform general maintenance. Activities resumed on 20 November 2009, and stable beam collisions at a center of mass energy √s = 7 TeV were finally achieved at the end of March 2010, marking the beginning of LHC's first data-taking run. Data taking was briefly halted in late 2011 to perform the operations needed to increase the collision energy to √s = 8 TeV, and resumed shortly thereafter, coming to a close in late 2012. The data-taking period ranging from the first physics collisions at the LHC in 2010 to the end of 2012 is informally referred to as LHC Run I. A total integrated luminosity of 20.3 fb⁻¹ was gathered in this time span.

LHC Run I was highlighted by the discovery of a Higgs-compatible particle, announced in a joint event by ATLAS and CMS on July 4th 2012. Both experiments had previously carried out searches in the cleanest final states the then-hypothetical Higgs boson would decay into according to the Standard Model, H → ZZ(*) → 4ℓ and H → γγ, and by July 2012 claimed to have observed excesses with a combined local significance of 5σ (Fig. 1.2) [7, 8].


Figure 1.2: Invariant mass spectrum of the H → γγ channel as measured by ATLAS (left) and expected/observed significance across the Higgs decay channels studied by CMS (right), as presented in the joint 4 July 2012 conference [7, 8].

During the 2010-2012 data-taking period, other important analyses also produced significant results, improving our knowledge of a wide-ranging spectrum of Standard Model processes: from low-$p_T$ studies of the "Minimum Bias" and soft-QCD underlying event, to the determination of the inclusive multijet cross section, to tests of Electroweak predictions such as the W/Z boson production cross sections and decay widths.

The following section provides an overview of the state of the art, at the end of LHC Run I¹, of the searches in the diboson (i.e. WW/WZ/ZZ) channel, the background context of the analysis covered in this thesis.

1.2 Diboson Searches in LHC Run I

The joint production of two massive gauge bosons W/Z (henceforth termed "diboson production") is a major point of interest for both Standard Model analyses and New Physics searches. A measurement of the pp → WW/WZ/ZZ cross section sheds light on the poorly known self-coupling of SM gauge bosons (known as the Triple Gauge Coupling, TGC), at the same time providing an experimental test of the theoretical calculations and generator models used to estimate the non-resonant VV production. Another major point of interest in studying diboson production is its sensitivity to New Physics, as undisclosed phenomena could occur in the form of deviations from the gauge structure of the Standard Model in the triple-gauge-boson couplings ZWW or γWW, an effect known as Anomalous Triple Gauge Coupling (aTGC). Measurements of the aTGCs were performed at ATLAS and CMS during Run I, yielding results compatible with the Standard Model predictions but still dominated by the statistical uncertainty [9, 10]. Finally, a study of the non-resonant WW/WZ production provides invaluable information for Higgs searches, where diboson production stands as an irreducible background to H → WW decay searches.

The diboson channel is also a major search ground for the direct detection of New Physics in the form of undiscovered resonances decaying to a pair of gauge bosons (or a gauge boson and a Higgs boson). Such resonances feature in a large number of Standard Model extensions, such as extended gauge models, warped extra dimensions and technicolour, and, considering the relatively low mass of the Higgs boson, are predicted to occur at the TeV scale [11].

¹The denomination "LHC Run I" usually refers to the data-taking runs held between 2009 and February 2013, with center of mass energy √s = 7 TeV in 2010-2011 and 8 TeV in 2012 and 2013. Over the entire duration of Run I, ATLAS gathered a total of 20.3 fb⁻¹.

During LHC Run I the ATLAS collaboration carried out four statistically independent searches for exotic resonances decaying to a pair of gauge bosons, each focusing on a different final state and thus sensitive to different combinations of W and Z bosons in the resonance decay:

• ℓνℓ′ℓ′ (ℓ′ = e, µ), sensitive to charged resonances decaying to WZ [12];

• ℓℓqq, sensitive to charged resonances decaying to WZ and neutral resonances decaying to ZZ [13];

• ℓνqq, sensitive to charged resonances decaying to WZ and to neutral resonances decaying to WW [14];

• qqqq, sensitive to all charged and neutral resonances decaying into a pair of gauge bosons [15].

The unprecedented energy scale, as well as the experimental challenge posed by the conspicuous multi-jet background in the partially or fully hadronic final states, motivated the development of new techniques in the event reconstruction and selection stages, such as the large-R jets and the grooming algorithms employed to prime such jets for the eventual analysis. These techniques have also been used in the analysis covered in this thesis, and are described in detail in Section 3.3.2.

The results of the Run I searches were interpreted using two benchmark models: the Extended Gauge Model (EGM) for a heavy W′ boson [16], and the bulk Randall-Sundrum (RS) model [17] with warped extra dimensions for a spin-2 graviton G*. The results of the four independent channels were then combined in order to calculate the local p0-value of the null hypothesis given the collected data, and to set combined limits on the masses of the predicted resonances [18].

Figure 1.3: Observed local p0-value for the combined Run I diboson searches for (a) an EGM W′ boson and (b) a bulk RS G*. The observed local p0 values for the individual channels are also displayed.


Figure 1.4: Invariant mass distribution of the two large-R jets in the WZ channel of the fully hadronic final state search, showing a discrepancy with the background shape at around 2 TeV [15].

As shown in Figure 1.3a, the largest observed deviation from the background-only hypothesis was found at around 2 TeV in the WZ channel with respect to the EGM W′ model, with a statistical significance of 2.5σ. This excess was however detected solely in the fully hadronic JJ channel, with a 3.4σ significance, whilst results in the semileptonic and fully leptonic final state searches were found to be in agreement with the background-only hypothesis, decreasing the combined excess significance (as can be seen comparing the red line and the blue line with black dots in Fig. 1.3a). The invariant mass distribution in the fully hadronic channel, showing the largest deviation from the background-only hypothesis, is shown in Figure 1.4.

From the aforementioned combined results, lower limits on the mass of an EGM W′ and of a bulk G* were extracted, using the expected production cross sections and branching ratios of the two predicted resonances as calculated by the CalcHEP and Pythia8 generators, respectively [14]. Masses below 1.81 TeV were excluded at 95% CL for the W′ search, while the RS graviton was excluded for masses smaller than 810 GeV. In both cases the observed limits are close to those expected for a 20.3 fb⁻¹ dataset.

While the statistical significance of the 2 TeV excess is insufficient to claim a discovery, the excess in the hadronic channel stirred the interest of the community and called for further investigation of the diboson channel with increased statistics, to either exclude the background-only hypothesis or dismiss the excess as a statistical fluctuation.

1.3 The Heavy Vector Triplet Model

As stated earlier, a large number of New Physics models predict heavy resonances decaying into a pair of gauge bosons. It is obviously not possible to determine a priori which of those to favour as a benchmark model for comparison with the data, nor is it practical to present sets of limits for every proposed model. For this reason it was decided that Run II diboson analyses would employ a generic model based on a simplified Lagrangian approach, aiming at providing a phenomenological parametrization of the broadest possible set of BSM models. This method, applied to W′- and Z′-like resonances, is known as the Heavy Vector Triplet (HVT) Model [11].

Figure 1.5: Conceptual representation of the HVT model

The major advantage of this choice lies in its versatility: experimental results presented as limits in the phenomenological parameter space of the simplified Lagrangian can then be translated into limits on the free parameters of any explicit model to be tested against the data. Fig. 1.5 shows a conceptual representation of this line of thought, with the central pillar being the phenomenological Lagrangian model (HVT) acting as a bridge between data and the explicit models, defined by a set of parameters $\vec p$. In the figure, $\vec c$ stands for the HVT model simplified parameters, $L(\vec c)$ for the experimental likelihood function of the parameters $\vec c$, and $\vec c(\vec p)$ for the functional relation used to extract the explicit model parameters.

Basic Phenomenology in the HVT model

The simplified Lagrangian approach of the HVT model is especially effective as searches for resonances are typically sensitive only to those parameters influencing the production, mass, and decay of the resonance. All the other free parameters of any explicit model are not accounted for in the simplified Lagrangian, which is therefore entirely phenomenological.

The HVT model describes the dynamics of three spin-one fields $V^a_\mu$, two charged and one neutral, through the following Lagrangian:

$$\mathcal{L}_S = -\tfrac{1}{4} D_{[\mu}V^a_{\nu]}D^{[\mu}V^{\nu]a} + \tfrac{m_V^2}{2} V^a_\mu V^{\mu a} + i g_V c_H V^a_\mu H^\dagger \tau^a \overleftrightarrow{D}^\mu H + \tfrac{g^2}{g_V} c_F V^a_\mu J^{\mu a}_F + \tfrac{g_V}{2} c_{VVV}\,\epsilon_{abc} V^a_\mu V^b_\nu D^{[\mu}V^{\nu]c} + g_V^2 c_{VVHH} V^a_\mu V^{\mu a} H^\dagger H - \tfrac{g}{2} c_{VVW}\,\epsilon_{abc} W^{\mu\nu}_a V^b_\mu V^c_\nu \quad (1.1)$$

In the above equation, the terms in the first line account for the V field "mass" and kinetic energy and for the tri-linear and quadri-linear interactions with the SM gauge bosons, arising from the covariant derivative $D_\mu V^a_\nu = \partial_\mu V^a_\nu + g\,\epsilon^{abc} W^b_\mu V^c_\nu$ (where $g$ is the SM weak coupling).

The terms in the second line represent the direct coupling to the fermionic current $J^\mu_F$ (with constant $c_F$) and to the Higgs field. The Higgs term and its coupling $c_H$ account for the coupling of the V field to both the physical Higgs boson and the three unphysical Goldstone bosons. As stated by the Equivalence Theorem [19], in the high-energy regime the Goldstone bosons represent the longitudinal polarizations of the W and Z bosons, implying that $c_H$ plays an important role in both the production cross-section of the V resonance via Vector Boson Fusion (VBF) and its decay in the diboson channel. The terms in the third line are of minimal interest to the present analysis, as they have only a marginal influence on the phenomenology accessible to a collider experiment.

(13)

The "mass" term $m_V$ in the first line of Eq. 1.1 is not the actual mass of the resonance, since the $V^\pm$ and $V^0$ fields are not mass eigenstates: their masses arise from mixing with the SM gauge bosons and from the subsequent ElectroWeak Symmetry Breaking (EWSB). From this procedure arises a remarkable custodial relation between the masses of the charged ($M_+$) and the neutral ($M_0$) resonance states:

$$m_W^2\, M_+^2 = \cos^2\theta_W\; m_Z^2\, M_0^2 \quad (1.2)$$

In the limit where $M_{+,0} \gg m_{W,Z}$, and considering that at tree level $m_W^2/m_Z^2 \simeq \cos^2\theta_W$, Eq. 1.2 implies that the neutral and charged states are degenerate in mass at the per-cent level.
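As a quick numerical check of this statement (the numerical inputs below, the measured boson masses and the effective weak mixing angle, are not quoted in the text above), the custodial relation gives

$$\frac{M_+}{M_0} = \cos\theta_W\,\frac{m_Z}{m_W} \approx \sqrt{1-0.2315}\times\frac{91.19}{80.38} \approx 0.995,$$

i.e. a charged-neutral mass splitting of about 0.5%, consistent with the per-cent-level degeneracy quoted above.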

Decay Widths

A major consequence of the hierarchy assumption (i.e. $M_{+,0} \gg m_{W,Z}$) is a small mixing angle between the SM gauge bosons and the V states, arising from the diagonalization of the matrices mixing the neutral and charged V fields with the $W^\pm$ and Z bosons. Thus:

$$\theta_{N,C} \simeq c_H\,\frac{m_{W,Z}}{M_V} \sim 10^{-1} \quad (1.3)$$

As all the direct couplings of the V to the W and Z bosons originate from mixing in the Lagrangian, a small mixing angle suppresses the decay widths into vector bosons. This effect is however counteracted by the field's coupling to the Goldstone bosons, which prevents the partial width into a pair of gauge bosons from becoming marginal. Accounting for these effects, the aforementioned width can be written as:

$$\Gamma_{V^0 \to WW} \simeq \Gamma_{V^\pm \to WZ} \simeq \frac{g_V^2\, c_H^2\, M_V}{192\pi} \quad (1.4)$$

The partial width into fermions is:

$$\Gamma_{V^\pm \to f\bar f'} \simeq 2\,\Gamma_{V^0 \to f\bar f} \simeq N_c[f]\left(\frac{g^2 c_F}{g_V}\right)^2 \frac{M_V}{48\pi} \quad (1.5)$$

where $N_c[f]$ is the number of colour states accessible to the specific fermion $f$.

The relative branching fractions into fermions and bosons, for fixed resonance mass values, are therefore entirely determined by the values of $g_V c_H$ and $g^2 c_F/g_V$. In the HVT picture, the constants $c_F$ and $c_H$ are simply place-holders for specific fixed values set by a particular explicit model, with the coupling $g_V$ remaining the only true free parameter. As the branching ratios can vary dramatically across the different models, the HVT framework provides two radically different benchmark pictures, inspired respectively by weakly (Model A) and strongly (Model B) coupled extensions of the SM.

In Model A the Higgs coupling is suppressed, $c_H \simeq g^2 c_F/g_V^2$, while the coupling to fermions is $c_F \sim O(1)$. As a consequence of this scheme, $\Gamma_{V\to WW}/M_V \simeq (g^2/g_V)^2/(192\pi)$: the partial widths into bosons and into fermions are of the same order of magnitude, with differences originating only from numerical and colour factors in the amplitude (see Fig. 1.6, left). The total width of the resonance is therefore suppressed by a factor $g^4/g_V^2$, growing smaller with increasing values of the free parameter $g_V$ (see Fig. 1.6).

On the other hand, the strongly coupled Model B features an unsuppressed Higgs coupling, $c_H \simeq -1$, rendering the diboson decay mode dominant by a factor greater than $10^2$. Furthermore, since $\Gamma_{WW} \propto g_V^2 M_V$ while $\Gamma_{f\bar f} \propto g^4 M_V/g_V^2$, the total width of the V resonance increases with the free parameter $g_V$, eventually leading to extremely broad resonances with $\Gamma/M \gtrsim 0.1$ (Fig. 1.6, right).

The analysis covered in this thesis employs Monte Carlo samples simulating charged and neutral HVT-like resonances generated in the weakly coupled picture (Model A), with $g_V = 1$.
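For concreteness, the partial widths of Eqs. 1.4-1.5 are simple enough to evaluate numerically. The sketch below (the function names and the choice $M_V$ = 2 TeV are illustrative, not taken from the analysis) computes the diboson width and the width into a single fermion doublet for the Model A benchmark with $g_V$ = 1:

```python
import math

G_SM = 0.65  # SM weak coupling g (approximate value, an assumption)

def gamma_diboson(m_v, g_v, c_h):
    """Eq. 1.4: width of V0 -> WW (or V+- -> WZ)."""
    return g_v**2 * c_h**2 * m_v / (192 * math.pi)

def gamma_fermion_pair(m_v, g_v, c_f, n_c):
    """Eq. 1.5: width of V+- into one fermion species with N_c colour states."""
    return n_c * (G_SM**2 * c_f / g_v)**2 * m_v / (48 * math.pi)

# Model A benchmark: c_F ~ 1, c_H ~ g^2 c_F / g_V^2, here with g_V = 1
g_v, c_f = 1.0, 1.0
c_h = G_SM**2 * c_f / g_v**2
m_v = 2000.0  # GeV, illustrative resonance mass

print(gamma_diboson(m_v, g_v, c_h))          # ~0.6 GeV
print(gamma_fermion_pair(m_v, g_v, c_f, 3))  # ~7 GeV per quark species
```

Both widths come out far below $M_V$, illustrating why Model A yields narrow resonances.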


Figure 1.6: Partial branching fractions (upper row) and total decay width (lower row) of a neutral V resonance, as predicted in a weakly (left column) and a strongly (right column) coupled extension of the Electroweak model.

Production Cross-Section

Since the HVT model predicts the V resonance to decay into a pair of fermions or a pair of gauge (or Higgs) bosons, it is natural to assume that such a resonance would be produced in a collider chiefly through Drell-Yan ($q\bar q'$) and Vector Boson Fusion processes. In both cases, the production cross section can be written as:

$$\sigma(pp \to V + X) = \sum_{i,j\,\in\, p} \frac{\Gamma_{V\to ij}}{M_V}\;\frac{16\pi^2\,(2J+1)}{(2S_i+1)(2S_j+1)}\;\frac{C}{C_i C_j}\;\left.\frac{dL_{ij}}{d\hat s}\right|_{\hat s = M_V^2} \quad (1.6)$$

with $i$, $j$ being the partons interacting to produce the V resonance, and $\Gamma_{V\to ij}$, $S_{i,j}$ and $C_{i,j}$ the partial decay width into said particles and their spin and colour states. $J$ and $C$ account for the spin and colour states of the final resonance, and $dL_{ij}/d\hat s$ is the parton luminosity function of the two initial-state particles. In the case of the VBF channel, the parton luminosity is taken as the convolution of the parton PDFs with the splitting functions accounting for the W/Z emission probability, in line with the Effective W Approximation [20].

Figure 1.7 shows the decreasing trend of $dL_{ij}/d\hat s$ with increasing V mass. Another key feature is the dominance of the Drell-Yan production over the VBF one, a consequence of the double electroweak suppression arising from the underlying $q \to W + q'$ and/or $q \to Z + q$ emissions.
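As a minimal sketch of how the narrow-width master formula of Eq. 1.6 is used in practice (the parton-luminosity value below is a hypothetical placeholder; a real calculation would take $dL/d\hat s$ from a PDF library):

```python
import math

def sigma_narrow_width(gamma_ij, m_v, J, S_i, S_j, C, C_i, C_j, dL_dshat):
    """Eq. 1.6: production cross section of a narrow resonance V from
    partons i, j, given the partial width Gamma(V -> ij) and the parton
    luminosity dL_ij/dshat evaluated at shat = M_V^2."""
    spin_colour = 16 * math.pi**2 * (2 * J + 1) \
        / ((2 * S_i + 1) * (2 * S_j + 1)) * C / (C_i * C_j)
    return gamma_ij / m_v * spin_colour * dL_dshat

# Hypothetical Drell-Yan example: a spin-1, colour-singlet V from a quark pair.
sigma = sigma_narrow_width(
    gamma_ij=7.0,       # GeV, partial width into the initiating quark pair
    m_v=2000.0,         # GeV
    J=1,                # spin of the resonance
    S_i=0.5, S_j=0.5,   # quark spins
    C=1, C_i=3, C_j=3,  # colour states
    dL_dshat=1e-9,      # GeV^-2, placeholder parton luminosity
)
print(sigma)  # result in GeV^-2; multiply by 0.3894e9 to convert to pb
```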


Figure 1.7: Parton luminosity as a function of the resonance mass for DY (left) and VBF (right) production at √s = 14 TeV.


Chapter 2

The ATLAS experiment at the LHC

2.1 The Large Hadron Collider

The Large Hadron Collider (LHC) [21] is a multi-purpose circular accelerator designed to produce proton-proton, proton-ion and ion-ion collisions at center of mass energies unprecedented for earth-based experiments. It is installed at the CERN laboratories, in a 27 km long underground tunnel that previously housed the Large Electron-Positron (LEP) collider. The proton-proton collisions are designed to take place at a nominal energy of 14 TeV and at a nominal luminosity of $10^{34}\ \mathrm{cm^{-2}s^{-1}}$.

Many of the processes targeted by the LHC physics programme are predicted to have exceedingly small cross-sections, requiring the LHC to operate at a formidable collision rate. This rate is quantified through the instantaneous luminosity $\mathcal{L}$, which accounts for the spatial density of interaction centers per unit time. In a collider such as the LHC, $\mathcal{L}$ can be expressed as a function of well-known and monitored quantities [22]:

$$\mathcal{L}_0 = n_b \cdot \frac{N^2 f}{4\pi\,\sigma_X \sigma_Y}, \qquad [\mathcal{L}] = [L]^{-2}[T]^{-1} \quad (2.1)$$

where $n_b$ is the number of bunches in the ring, $N$ the number of particles in each bunch, $f$ the revolution frequency of the bunches, and $\sigma_{X,Y}$ the bunch dimensions in the plane transverse to the beam direction. Since the rate of bunch depletion is proportional to the collision rate, and thus to $\mathcal{L}(t)$ itself, and since the LHC does not support beam refurbishment during a run, the instantaneous luminosity degrades exponentially over the run time, requiring each run to be halted once $\mathcal{L}(t)$ drops below a certain fraction of $\mathcal{L}_0$.

The need for extremely high luminosity poses serious experimental challenges in the detector planning and building phases, as well as during data taking. As an example, in the 2015 running conditions, $\mathcal{L} = 10^{34}\ \mathrm{cm^{-2}s^{-1}}$ yielded an average of 25 proton-proton interactions in a single bunch crossing, resulting in about 1000 particles originating from the crossing point. The number of interactions per bunch crossing is commonly referred to as in-time pile-up, and is itself a major factor affecting the accuracy of the physical measurements carried out in the detector. These negative effects can be partially averted by maximising the sampling frequency of the detector, in order to minimise the number of integrated collisions. Furthermore, high spatial and angular granularity in the detector is instrumental in associating each particle to its original collision point.

Another major challenge posed by the high interaction rate is the extremely small ratio between the production cross sections of the target processes of many analyses and the total inelastic cross section, which is dominated by jet production in QCD processes [23, 24]. On one hand, this makes it essential for the detector to have an appropriately fast and efficient online selection system (known as trigger); on the other, it limits the search for interesting processes to channels with experimental signatures granting excellent background rejection, such as the presence of a lepton, a photon or a large value of missing transverse energy ($E_T^{\rm miss}$).

Figure 2.1: Schematic representation of the Large Hadron Collider and its injection chain at the CERN laboratories. The approximate locations of the four major LHC experiments are specified.

Therefore, general purpose detectors at the LHC, such as the ATLAS and CMS detectors, must provide excellent object identification, energy measurement capabilities, and the greatest possible coverage in the polar-azimuthal plane (η × φ).

ATLAS (A Toroidal LHC apparatuS) [25] is one of the two general purpose experiments located along the LHC ring. Its detector is equipped with high-resolution instrumentation enabling precision identification and reconstruction of leptons, photons, jets and missing energy. The detector itself is built in the shape of a barrel, 44 m in length and 25 m in diameter. An in-detector Cartesian frame of reference is defined from the direction of the beam (and barrel) axis, conventionally associated with the Z axis of the frame. The positive X axis is taken as the direction from the detector center pointing to the center of the LHC ring, while the Y axis is defined as the direction pointing directly upwards.

Additionally, a concentric cylindrical frame of reference is often used, with θ and φ being the polar angle (measured from the beam axis) and the azimuthal angle (measured from the positive X direction in the transverse plane). As the center of mass frame of the interaction is generally not at rest in a hadron collider, the need for Lorentz-invariant angular distributions is met using the rapidity

$$y = \frac{1}{2}\ln\frac{E+p_z}{E-p_z},$$

a physical variable additive with respect to Lorentz boosts. The pseudorapidity $\eta = -\ln\tan(\theta/2)$ is often used in place of y as it is experimentally simpler; for a massless particle η is equivalent to y, so that differences in η inherit the invariance of differences in y under boosts along the Z axis.


Figure 2.2: Side representation of the ATLAS apparatus and its main detectors [25].

The thus-defined cylindrical frame of reference (η, φ, z) is generally preferred in the quantitative description of the physical processes at ATLAS. The distance between two points in the η × φ space is defined as $\Delta R = \sqrt{\Delta\eta^2 + \Delta\phi^2}$ (with |∆φ| < π).
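These angular variables translate directly into code; a minimal sketch (the function names are mine, not ATLAS software):

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), with theta the polar angle from the beam axis."""
    return -math.log(math.tan(theta / 2))

def rapidity(E, pz):
    """y = 0.5 * ln((E + pz) / (E - pz)); equals eta for a massless particle."""
    return 0.5 * math.log((E + pz) / (E - pz))

def delta_R(eta1, phi1, eta2, phi2):
    """Distance in eta-phi space, wrapping delta-phi into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)
```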

The ATLAS detector (Fig. 2.2) is divided along the Z axis into five zones: the central region is called the barrel region, while two end-cap and two forward regions are situated at the sides of the barrel, covering the large-η regions. All these regions share a roughly cylindrical symmetry, with differently purposed instrumentation at different distances from the beam axis. In the following sections, the various radial sections of the detector, their positions, their experimental purposes and their design parameters are described.

2.1.1 The Inner Detector

The Inner Detector (ID), positioned just outside the beam pipe, is the innermost section of the ATLAS detector. Its purpose is to provide the best possible space-point resolution, allowing an accurate and precise reconstruction of the trajectories of charged particles. An optimal track reconstruction results in an optimal measurement of both the transverse momentum ($p_T$) and the initial flight direction of the particles, as well as in a prompt association of each physical object to its original interaction point. Extending from just 33.25 mm from the beam line (R = 0) to R = 1080 mm (Fig. 2.3), the ID withstands the highest radiation of any active element of the entire detector, demanding especially radiation-hard building materials. On the other hand, the ID components need to be as transparent as possible, to minimise energy losses that would affect all the subsequent measurements. Therefore, the ID subdetectors need to feature:

• High resistance to radiation and stability after prolonged radiation exposure;

• Low thickness to minimize energy losses and multiple scattering (MCS);

• Highest possible granularity;

• Precise alignment at assembly and constant alignment monitoring.

Figure 2.3: Perspective schematic view of the Inner Detector and its subdetectors, with their positions along the radial and beam-axis directions specified [26].

The Inner Detector accomplishes this goal through three concentric tracking systems: the Pixel Detector (PIXEL), the Silicon Microstrip Tracker (SCT) and the Transition Radiation Tracker (TRT), here ordered by increasing distance from the beam pipe. A super-conductive solenoid placed outside the ID produces a 2 T magnetic field along the Z axis, allowing the measurement of the transverse momentum of charged particles. The combined information gathered from these three tracking systems yields a measurement of the $p_T$ of a particle with a relative uncertainty

$$\frac{\sigma_{p_T}}{p_T} = 0.05\% \cdot p_T\,[\mathrm{GeV}] \oplus 1\%.$$

Pixel Detector

The innermost element of the ID, the PIXEL, consists of 1744 pixel sensors assembled into four cylindrical layers in the barrel region and three disks in each end-cap region. All sensors are identical and each houses 47232 pixels, for an overall total of 80 million pixels, each 50 µm long in the R × φ direction and 400 µm in the Z direction. A charged particle inside the η acceptance of the PIXEL detector (|η| < 2.5) is expected to produce on average three hits in as many detector layers. The PIXEL setup yields a very high spatial granularity ($\sigma_{R,\phi} \times \sigma_Z = 10\,\mu m \times 115\,\mu m$), allowing excellent track reconstruction and, thanks to the proximity to the beam pipe, optimal vertex identification.

During the long shutdown (2013-2015) between LHC Run I and Run II, the original beryllium beam pipe was replaced by a new, smaller and thinner one, allowing a fourth layer of pixels, known as the Insertable B-Layer (IBL) [27], to be inserted in the space (31.0 mm < R < 40.0 mm) between the new pipe and the formerly innermost layer of the original PIXEL (known as the b-layer). A key goal of the IBL is to provide additional tracking robustness, compensating for present and future irreparable failures and inefficiencies of the b-layer, thus preventing the b-tagging efficiency from deteriorating over time. At the same time, the increased layer redundancy decreases the rate of spurious tracks arising from random combinations of clusters in the high pile-up environment expected after the Phase-I luminosity upgrade.

SCT

The Silicon Microstrip Tracker (SCT) extends the tracking coverage of the ID to a distance of 51.4 cm from the beam axis, providing four additional hits thanks to four cylindrical layers (barrel) and 18 disks (9 per end-cap), totalling 6.3 million strips, each 6.5 cm long and with a pitch of 80 µm. Each layer and disk consists of a grid of silicon microstrips arising from two superimposed sub-layers of strips forming a relative angle of 40 mrad, with one set aligned along the Z direction. The spatial position of a particle producing a hit in the SCT is known with a nominal resolution of 17 × 580 µm² (R−φ × Z).

TRT

The Transition Radiation Tracker is designed to provide a suitable basis for e/π discrimination, as well as to increase the tracking power and the accuracy of the $p_T$ measurement in the ID. The TRT employs 351,000 gas-filled straw tubes forming 73 planes in the barrel and 160 wheels in the end-cap regions. Each straw has a diameter of 4 mm, contains an axial gold-plated tungsten wire acting as the anode, and is filled with a mixture of 70% Xe, 27% CO2 and 3% O2. Charged particles passing through a tube ionize the gas and leave a trail of ions and electrons that is collected by the electrodes, producing a hit. Considering the whole TRT volume, ionizing particles are expected to produce an average of 36 hits. Due to the very nature of the straw tubes, the hits in the TRT give little information on the Z coordinate of the particles, while the position of a hit in the R−φ space is known with a resolution of 130 µm. This relatively low resolution with respect to the other ID detectors is compensated by the much higher average hit multiplicity in the TRT.

The e/π discrimination is carried out on the grounds of the transition radiation produced by a relativistic electron at the boundary between the straw and the gas. These photons in turn interact with the gas, resulting in additional charge being collected and producing a larger signal than the average signal produced by heavier particles.

2.1.2 The ATLAS Calorimeter

At radii immediately larger than that of the solenoid magnet surrounding the Inner Detector lies the ATLAS calorimetric system, the sub-detector tasked with measuring the energy of the outgoing particles. This is achieved by stopping the particles by means of their interactions with a thick layer of matter. Through a calibration of the detector, it is possible to convert the signal produced by collecting the energy (or a known fraction of it) released in such interactions into a measurement of the energy of the original particle.

Another major task fulfilled by the calorimeter is a precise measurement of the missing transverse energy in the event, which can be the experimental signature of a high-energy neutrino or of a BSM weakly interacting neutral particle, the former being an instrumental feature in most analyses involving weak interactions. The calorimeter therefore needs to be hermetic, covering the largest possible fraction of the polar and azimuthal angles, to prevent a large value of $E_T^{\rm miss}$ from being artificially generated by particles escaping through uninstrumented regions.

Figure 2.4: Schematic view of the ATLAS calorimeter and its sub-sections [25].

Due to the intrinsically different nature of the interactions that electrons/photons and hadrons undergo as they pass through matter, the ATLAS calorimeter is divided into two coaxial sections (Fig. 2.4): the electromagnetic and the hadronic calorimeter. The inner sub-section of the calorimetric system measures the energy of those particles, such as electrons, photons and neutral pions (via π0 → γγ), that interact with matter primarily electromagnetically. Information about the energy and position of hadronic particles, on the other hand, is obtained by combining the measurements from the inner and outer sections of the calorimetric system.

The Electromagnetic Calorimeter

The electromagnetic (henceforth EM) calorimeter [28] is designed as a sampling calorimeter to contain the radial extension of the apparatus. The active material of the EM calorimeter was chosen to be liquid Argon, a monoatomic noble element offering intrinsically high resistance and stability under radiation. A high-energy electron or photon passing through the EM calorimeter loses its energy by means of a so-called electromagnetic shower, a chain reaction of bremsstrahlung (e → eγ) and pair production (γ → e+e−) processes.

The EM calorimetric system is hosted in three cryostats, one in the barrel and two in the end-caps. The barrel calorimeter covers the central section of the polar angle (|η| < 1.475) and extends radially between 1.50 m and 2 m from the beam axis. It is segmented into three regions along the R direction (Fig. 2.5a). The innermost section of the EM calorimeter has a thickness of 4.3 X0 and the finest granularity along η of the whole EM calorimeter (∆η × ∆φ = 0.003125 × 0.1), in order to differentiate individual photons from those arising from π0 decays by examining the η profile of the shower at its earliest stage. The second layer of the calorimeter has a slightly coarser granularity (0.025 × 0.025) and, being 16 X0 thick, collects most of the energy of the EM shower.


Figure 2.5: (a) Schematics of a module of the LAr calorimeter in the barrel region; radial distances are shown both in mm and in units of the radiation length X0. (b) Close-up view of a LAr module.

The third and outermost layer, with a thickness of 2 X0, measures the energy released in the tail of an EM shower, and therefore requires only the coarsest granularity of the calorimeter.

In the end-caps, the EM calorimeter takes the shape of two coaxial wheels covering 1.475 < |η| < 3.2, which are themselves segmented into layers along the Z direction. The interface between the two wheels leaves a small insensitive region at around |η| = 2.5.

A particle entering the EM calorimeter will have already passed through the ID, the solenoid and the cryostat wall, for an equivalent thickness varying from 2 to 6 X0. The related energy loss is accounted for and corrected by inserting a 1.1 cm thick absorber-free layer of liquid Argon in front of the calorimeter for |η| < 1.8, referred to as the pre-sampler. The overall resolution for a measurement of the energy of an electron or a photon with the ATLAS Electromagnetic Calorimeter can be written as the quadrature sum of a stochastic term and a calibration term:

$$\frac{\sigma_E}{E} \simeq \frac{10\%}{\sqrt{E\,[\mathrm{GeV}]}} \oplus 1\% \quad (2.2)$$

In the estimate of the relative energy resolution provided by Eq. 2.2, a constant contribution to the absolute resolution (which would appear in the formula as a b/E term) arising from the detector noise is omitted, as the residual influence of this source of uncertainty is found to be negligible once specific noise removal techniques are applied [29]. The LAr Calorimeter is therefore expected to yield an increasingly accurate measurement of the energy of an electron or a photon, with a relative uncertainty of ∼2% for a 100 GeV incoming particle.

The Hadronic Calorimeter

Hadronic particles are too massive to radiate a significant amount of energy via bremsstrahlung, and are therefore unable to initiate electromagnetic showers. Such particles instead lose energy as they pass through matter as the result of nuclear interactions. Phenomena such as the production of new hadrons or the emission of pions through nuclear de-excitation result in the development of a shower of hadrons governed by the interaction length λ, a physical quantity related to the mean free path of a strongly interacting particle through matter. Such processes take place in the EM calorimeter as well as in the hadronic calorimeter, so that an optimal hadron energy measurement must draw information from both systems.

Figure 2.6: Sketch of a TileCal module [30].

The ATLAS hadronic calorimeter has a depth of 9.7 λ, allowing the full containment of hadronic showers up to the TeV scale. Similarly to its EM counterpart, the hadronic calorimeter has a central section in the shape of an azimuthal toroid (TileCal, |η| < 1.7 [30]) and two coaxial wheels covering the high-η region (Hadronic End-Cap Calorimeter, 1.5 < |η| < 3.2 [31]).

TileCal is a sampling calorimeter which employs plastic scintillators (tiles) as the active material and steel as the absorber, while for the same purposes the Hadronic End-Cap (HEC) uses liquid Argon and copper. The overall hadronic system satisfies the hermeticity requirements needed to produce satisfactory $E_T^{\rm miss}$ measurements, and yields an energy resolution for single hadrons of

$$\frac{\sigma_E}{E} = \frac{50\%}{\sqrt{E}} \oplus 3\%.$$

The Forward Calorimeter

The Forward Calorimeter covers the regions of the polar angle closest to the beam direction (3.1 < |η| < 4.9). The comparatively higher dose of radiation affecting such regions requires the forward calorimeter to meet tighter standards of heat dissipation and radiation hardness. The FCal was therefore designed as a sampling calorimeter, once again employing liquid Argon as the active medium, with layers of copper and tungsten as passive absorbers.


The FCal contributes to the determination of the missing transverse energy $E_T^{\rm miss}$, but can only provide a very coarse resolution for the energy of hadronic systems:

$$\frac{\sigma_E}{E} = \frac{100\%}{\sqrt{E}} \oplus 10\%.$$

2.1.3 The Muon Spectrometer

Figure 2.7: Section view of the ATLAS spectrometer [25].

The ATLAS Muon Spectrometer [32] is composed of a series of tracking chambers located outside the hadronic calorimeter and immersed in a magnetic field. The magnetic field is generated in the central barrel region (|η| < 1.0) by 8 superconducting toroidal magnets, and by as many smaller toroids arranged in two wheels in the end-caps (1.4 < |η| < 2.7). Although a small section of the polar angle is not directly covered by a magnet, the interface of the two fields provides sufficient bending power in the transition region as well.

The main tracking operations are carried out by an array of detectors employing two types of technology. In the pseudorapidity range |η| < 2.7, tracking hits are provided by three layers of Monitored Drift Tubes (MDT), each of which consists of two multilayers of drift tubes, with each multilayer being three tubes thick. The tubes are 30 mm in diameter and vary in length between 1 m and 6 m, according to their position in the detector. The MDTs can only provide the Z and R coordinates of a hit, thus requiring an additional subset of detectors to measure the azimuthal coordinate (φ) of the hit. In the end-caps, the MDTs are arranged into wheels.

In the end-caps, ATLAS employs four layers of multiwire proportional chambers with segmented cathodes (Cathode Strip Chambers, CSC), arranged into two disks, to provide a complete set of hit coordinates at high η. As the particle density is expected to be critically high in the end-cap and forward regions, the CSCs were designed to meet tighter standards of rate capability and time resolution than those required of the MDTs.

For |η| < 2.4 the tracking information is completed and strengthened by sets of Resistive Plate Chambers (RPC) in the barrel and Thin Gap Chambers (TGC) in the forward regions. The coarse spatial resolution offered by such detectors is compensated by their fast response, making them an apt triggering system.

2.1.4 The Trigger and Data Acquisition System

Figure 2.8: Flow chart of the ATLAS TDAQ system during LHC Run II [33].

At the ATLAS interaction point, collisions between bunches take place at a rate of 40 MHz, orders of magnitude higher than what the fastest data-writing systems available today can sustain. The ATLAS trigger and data acquisition system (TDAQ) [33] reduces this rate to about 300 Hz by performing a multi-level online event selection, targeting the objects and features most closely associated with interesting events to be saved for further offline data analysis.

During the long shutdown between Run I and Run II, the trigger system was updated from the original three-level structure to a two-step one (Fig. 2.8). The first step, known as the Level 1 trigger (L1), takes care of the most basic event selection and aims at reducing the event rate by a factor of ∼1000. Working at a data input rate of 40 MHz with a latency of 2.5 µs, speed of data processing and decision making is paramount, requiring some compromises on the completeness and accuracy of the event reconstruction. The L1 trigger therefore draws information from the ATLAS detector using a much coarser granularity than what is nominally provided by each subdetector, and looks for specific patterns, such as a high-energy muon or a large, localized energy deposit in the EM calorimeter. The objects meeting the L1 trigger requirements are then passed to the next trigger stage as Regions of Interest (RoI) for further analysis.

The expected output rate of the L1 trigger is 75 kHz, requiring a further steep selection step, which is carried out by the High Level Trigger (HLT). This trigger step analyses the Regions of Interest with access to the full detector granularity, and performs a full-event reconstruction searching for structures matching any of the numerous HLT trigger strings. An event passing the HLT stage is recorded permanently and stored for further offline analysis.
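The rejection factors of the two trigger levels combine multiplicatively; a back-of-the-envelope check using the rates quoted in the text:

```python
collision_rate = 40e6   # Hz, bunch-crossing rate
l1_output = 75e3        # Hz, expected L1 output rate
hlt_output = 300.0      # Hz, approximate rate written to storage

l1_rejection = collision_rate / l1_output     # ~533, close to the ~1000 goal
hlt_rejection = l1_output / hlt_output        # ~250
total_rejection = collision_rate / hlt_output  # ~1.3e5 overall
print(l1_rejection, hlt_rejection, total_rejection)
```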


Chapter 3

Object Reconstruction

This chapter summarises the procedures and algorithms employed for the reconstruction and identification of the physical objects and quantities of interest for the analysis. For each object, efficiency plots are provided as a function of the object's energy/momentum and of selected event topology variables, along with the working points used.

The most general requirement on an event is for it to have a primary vertex, defined as the vertex with two or more associated tracks carrying the largest $\sum_{\rm tr} (p_T^2)_{\rm tr}$, where the sum runs over all the tracks associated to the vertex. All the physical variables and objects relevant to the events of interest are then calculated (or reconstructed) with respect to the thus-defined primary vertex.
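In code, the primary-vertex choice amounts to a simple arg-max; a sketch assuming each vertex is given as the list of the transverse momenta of its associated tracks (a hypothetical data layout, not the ATLAS event model):

```python
def select_primary_vertex(vertices):
    """Pick the vertex with >= 2 tracks maximising the sum of track pT^2.
    `vertices` maps a vertex id to the list of its tracks' pT values (GeV)."""
    candidates = {vid: sum(pt**2 for pt in tracks)
                  for vid, tracks in vertices.items() if len(tracks) >= 2}
    return max(candidates, key=candidates.get) if candidates else None

# Toy event: vertex 0 hosts the hard scatter, 1 and 2 are pile-up vertices.
event = {0: [120.0, 45.0, 30.0], 1: [5.0, 3.0, 2.0], 2: [1.5]}
print(select_primary_vertex(event))  # -> 0
```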

3.1 Electron Definition

Reconstruction

In the ATLAS diboson searches at √s = 13 TeV, the electron reconstruction and identification procedures exploit the results obtained by the Egamma Working Group for the 2015 physics analyses [34].

In the present analysis, the electron reconstruction phase starts from the information provided by the electromagnetic LAr calorimeter, and is therefore limited to the pseudorapidity range covered by the calorimeter, |η| < 2.47. At this stage the calorimeter is treated as a grid of $N_\eta \times N_\phi = 200 \times 256$ towers in the η × φ plane. The tower granularity is that of the middle radial layer of the EM calorimeter, $\Delta\eta_{\rm tower} \times \Delta\phi_{\rm tower} = 0.025 \times 0.025$. The tower energy is calculated as the sum of the energy released in all the sectors along the R direction within the η × φ span of the tower. The reconstruction algorithm then performs a search for a cluster seed, considering all the possible combinations of 3 × 5 adjacent towers, requiring a window energy greater than 2.5 GeV and selecting a local maximum through a sliding-window algorithm [35]. A cluster is then built starting from the selected seed, and the candidate electron is required to match at least one charged ID track extrapolated to the calorimeter middle layer. If more than one track matches the calorimeter cluster, a choice must be made as to which track to consider the "primary" electron track to be employed in all further tasks, including the electron identification. The primary track selection is based on criteria such as the hit multiplicity in the innermost layers of the ID and the ∆R distance between the extrapolated track and the cluster core.

The efficiency of the electron reconstruction was calculated using the Tag-And-Probe method on Z → ee MC events and validated on data gathered in the 2012 data-taking run [34]; the corresponding scale factors were derived as the ratio between the efficiency measured in data and in the MC sample.

Figure 3.1: Electron reconstruction efficiency as a function of the transverse energy ET (top-left), pseudorapidity (top-right) and number of primary vertices [34].

Figure 3.1 shows the reconstruction efficiency to be around 98% for electrons with ET > 30 GeV, with a slight deterioration in the end-cap η range. The reconstruction algorithm was also tested over a range of primary-vertex multiplicities, and little to no effect on the efficiency was recorded. The results show an overall increase in the electron efficiency of about 5% with respect to the 2011 measurements. This increase was brought about by implementing an improved track-cluster matching procedure and a new track reconstruction algorithm, greatly improving the reconstruction of electrons which undergo severe bremsstrahlung emission. As shown in Fig. 3.1, the aforementioned changes were especially effective in the low $p_T$ range, where the effects of bremsstrahlung emission are most severe.

Identification

Once an electron candidate has been reconstructed, an identification stage is required to reject the various sources of background to prompt electrons. Such sources include electrons produced in semileptonic decays of heavy-flavour hadrons, jets, and electrons from photon conversions taking place before the calorimeter. The signal electron identification is performed using a likelihood (LH) function built on a large set [34] of discriminating variables, accounting for the track quality, the goodness of the track-cluster matching, the longitudinal and transverse profiles of the electromagnetic shower, and the possible shower leakage into the hadronic calorimeter:

$$d_{\mathcal{L}} = \frac{\mathcal{L}_S}{\mathcal{L}_S + \mathcal{L}_B}, \qquad \mathcal{L}_{S(B)}(\vec x) = \prod_{i=1}^{n} P_{S(B),i}(x_i) \quad (3.1)$$

The signal likelihood (Eq. 3.1) is calculated using the probability density function of each of the n variables, as extracted from a pure data sample of signal electrons. Likewise, data-driven probability density functions for background electrons are used in the background likelihood $\mathcal{L}_B$. The identification decision is then based on the value of the test statistic $d_{\mathcal{L}}$ built from the signal and background likelihood functions; a minimal sketch of this discriminant is given after the list below. Three working points are provided:

• LooseLH, with signal efficiency ∼96% at ET = 100 GeV and background efficiency ∼1% for 20 GeV < ET < 50 GeV;

• MediumLH, with signal efficiency ∼94% at ET = 100 GeV and background efficiency ∼0.5% for 20 GeV < ET < 50 GeV;

• TightLH, with signal efficiency ∼88% at ET = 100 GeV and background efficiency ∼0.3% for 20 GeV < ET < 50 GeV.

The identification efficiency as a function of the transverse energy, of η and of the number of primary vertices, for each of the three working points, is shown in Figures 3.2-3.3. An electron passing the identification stage may then be tested for isolation, to further reject the contamination from jet mis-identification. This is done by evaluating the transverse energy released in a cone of variable size around the electron. The procedure was shown to have a marginal effect on the identification efficiency, and three working points were again provided: Loose, Medium and Tight.

Candidate electrons are pre-selected for the diboson analyses by requesting LooseLH identification criteria and Loose track isolation, |η| < 2.47 and ET > 7 GeV. Furthermore, two additional tracking conditions are to be met in order to strengthen the association to the primary hard-scatter vertex:

• |d0/σ(d0)| < 5, a cut on the significance of the transverse impact parameter measured with respect to the beam line;

• |z0 sin θ| < 0.5 mm, a cut on the longitudinal impact parameter.

In the present `νqq analysis, signal electrons are defined by imposing further requirements on the already pre-selected electrons. Signal electron candidates with ET < 300 GeV are required to exceed the energy threshold ET > 25 GeV, pass the TightLH identification stage and pass the Tight isolation requirements; electrons with ET > 300 GeV only need to pass the MediumLH working point at the identification stage. Furthermore, electrons detected in the calorimeter crack region, with |η| ∈ [1.37, 1.52], are excluded. Through a set of looser identification criteria, a sample of veto electrons is also created.
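This tiered, ET-dependent selection can be summarised in a short predicate; the Electron container and working-point flags below are illustrative stand-ins for the actual analysis objects, not ATLAS software.

```python
# Sketch of the E_T-dependent signal-electron selection (illustrative container).
from dataclasses import dataclass

@dataclass
class Electron:
    et: float          # transverse energy in GeV
    eta: float
    passes_id: dict    # working point -> bool, e.g. {"TightLH": True}
    passes_iso: dict   # working point -> bool, e.g. {"Tight": False}

def is_signal_electron(el: Electron) -> bool:
    if el.et <= 25.0 or abs(el.eta) >= 2.47:
        return False
    if 1.37 <= abs(el.eta) <= 1.52:          # calorimeter crack region
        return False
    if el.et < 300.0:
        # low-E_T tier: TightLH identification and Tight isolation
        return el.passes_id.get("TightLH", False) and el.passes_iso.get("Tight", False)
    # high-E_T tier: MediumLH identification is sufficient
    return el.passes_id.get("MediumLH", False)
```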

Figure 3.2: Electron identification efficiency for each working point as a function of ET (left) and η (right) [34].

The purpose of these veto electrons is to flag a possible second lepton in the event, a veto which needs to be applied in order to keep the `νqq analysis statistically independent from the other multi-lepton final state searches in the diboson channel (such as ``qq, ``ν`). An electron passing all these requirements then undergoes the overlap removal (OR) phase with the other reconstructed physics objects. The need for such a procedure arises from the fact that more than one distinct object can be reconstructed from the same detector input; to avoid double counting, an algorithm identifying the two overlapping objects and discriminating the real object from the reconstruction artefact has to be implemented. The OR procedure will:

• remove the electron candidate should it share ID tracks with a pre-selected muon;

• remove the electron candidate should it come within 0.2 < ∆R < 0.4 of a small-R jet;

• remove the jet overlapping with the candidate electron if ∆R < 0.2. This is needed as the jet reconstruction procedure (Sec. 3.3) is seeded by every calorimeter cluster, and therefore also by clusters arising from the shower of a high-energy electron, producing a fake jet.

Once the overlap removal procedure is completed without flagging the candidate, the electron is retained for use in the following stages of the analysis. A compact sketch of the three OR rules is given below.
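The sketch assumes simple objects carrying η, φ and a set of associated ID-track identifiers; both the container and the track-sharing bookkeeping are illustrative assumptions, not the ATLAS implementation.

```python
# Toy overlap-removal pass implementing the three rules above.
import math
from dataclasses import dataclass, field

@dataclass
class Obj:
    eta: float
    phi: float
    track_ids: frozenset = field(default_factory=frozenset)  # associated ID tracks

def delta_r(a, b):
    dphi = math.remainder(a.phi - b.phi, 2 * math.pi)  # wrapped to [-pi, pi]
    return math.hypot(a.eta - b.eta, dphi)

def overlap_removal(electrons, muons, jets):
    # 1) drop electrons sharing any ID track with a pre-selected muon
    electrons = [e for e in electrons
                 if not any(e.track_ids & m.track_ids for m in muons)]
    # 2) drop jets within dR < 0.2 of a surviving electron
    #    (these are typically seeded by the electron's own EM shower)
    jets = [j for j in jets
            if not any(delta_r(j, e) < 0.2 for e in electrons)]
    # 3) drop electrons within 0.2 < dR < 0.4 of a remaining jet
    electrons = [e for e in electrons
                 if not any(0.2 < delta_r(e, j) < 0.4 for j in jets)]
    return electrons, muons, jets
```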

Figure 3.3: Electron identification efficiency for each electron identification working point as a function of the number of primary vertices [34].

3.2 Muon Definition

Reconstruction

Muon reconstruction and identification [36] is performed at ATLAS using the tracking hits gathered by the Inner Detector (for |η| < 2.5) and the Muon Spectrometer (|η| < 2.7), with the occasional support of the information provided by the calorimeter, where the muon, while not generating any EM shower, releases a small fraction of its energy. Due to both the acceptance mismatch of the two main tracking devices in ATLAS and the detector inefficiencies, several identification and reconstruction criteria have been defined to allow the highest possible reconstruction efficiency:

• Standalone (SA) muons are reconstructed using only hits coming from the Muon Spectrometer (MS). The fitted track is then extrapolated to the beam line, taking into account the expected energy loss in the detector components traversed before the MS;

• Combined (CB) muons are reconstructed by matching independently reconstructed tracks in the ID and the MS. To ensure a redundant muon identification, the ID-MS track association is performed through both a statistical matching of the individual fit parameters and a global refit of the individual hits coming from the two systems;

• Segment-Tagged (ST) muons are reconstructed from ID tracks matching one or more track segments in the MS;

• Calo-Tagged (CaloTag) muons are reconstructed when an ID track with no matching MS track is associated to a calorimeter deposit with shape and energy compatible with the minimum-ionizing hypothesis, which is a reasonable assumption for muons up to the TeV scale. The calorimeter tagging is likelihood-based.

Figure 3.4: Reconstruction efficiency for the main muon authors and author combinations as a function of the candidate pT and η [36].

All ID tracks employed for muon tagging are required to meet precise quality standards, such as ≥ 1 Pixel hit, ≥ 4 SCT hits, ≥ 9 TRT hits for |η| ∈ [0.1, 1.9], and at most 2 active Pixel or SCT modules crossed by the reconstructed track without recording a hit.

The CB set of rules provides the muon sample with the greatest purity, and is supported by SA muons in the η range covered only by the spectrometer (|η| ∈ [2.5, 2.7]), and by ST and CaloTag muons in the loosely instrumented regions of the Muon Spectrometer at η ∼ 0 and |η| ∈ [1.1, 1.3], which cause efficiency losses in the CB set. The reconstruction efficiency for the various authors over the ID η range and for pT ∈ [2, 100] GeV is summarised in Figure 3.4, showing that the muon efficiency can be recovered up to 97% over the whole pseudorapidity spectrum by supporting the CB+ST combination with CaloTag muons in the central region, where the CB+ST efficiency alone drops to 65%. The efficiency measurements were carried out using the Tag-And-Probe method, and efficiency scale factors are generated taking the ratio between the efficiency measured in data and in the MC sample.

Identification

Reconstructed muons are sorted into four identification quality levels according to the recommendations of the Combined Performance Group [37], providing different efficiency and purity working points for the offline analysis:

• Tight working point, accepting only CB muons with specific requirements on the q/P significance and the goodness of the combined track fit;

• Medium working point, accepting both CB muons with looser requirements than in the Tight working point and SA muons outside the ID acceptance;

• Loose working point, accepting Tight and Medium tagged muons plus CaloTag+ST muons in |η| < 0.1;

• Very Loose working point, accepting all reconstructed muon candidates.

The ATLAS diboson searches at √s = 13 TeV perform a muon pre-selection based on the muon momentum (pT > 7 GeV), identification (Loose working point), track isolation (Loose working point) and primary vertex association variables (|d0/σ(d0)| < 3 and |z0 sin θ| < 0.5 mm).

An Overlap Removal procedure is then performed on the pre-selected muons according to the following guidelines:

• the muon is removed if a small-R jet lies within ∆R = 0.4 and shares > 2 tracks with it;

• the overlapping jet is removed if it lies within ∆R = 0.4 of the muon but shares ≤ 2 tracks with it.

After the OR stage, the sample of signal muons for the `νqq analysis is built by adding the following requirements to the pre-selected muons (a selection sketch follows the list):

1. pT > 25 GeV;

2. Medium identification;

3. Tight isolation;

4. |η| < 2.5.
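As with the electrons, the four requirements translate into a simple predicate; the Muon container below is an illustrative stand-in for the actual analysis object.

```python
# Sketch of the signal-muon selection (illustrative container).
from dataclasses import dataclass

@dataclass
class Muon:
    pt: float                # transverse momentum in GeV
    eta: float
    passes_medium_id: bool   # Medium identification working point
    passes_tight_iso: bool   # Tight isolation working point

def is_signal_muon(mu: Muon) -> bool:
    return (mu.pt > 25.0
            and abs(mu.eta) < 2.5
            and mu.passes_medium_id
            and mu.passes_tight_iso)
```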

3.3 Jet Definition

In a hadron collider such as the LHC, jets of collimated hadrons are a major feature in an overwhelmingly large fraction of the events. Devising an accurate, precise and theoretically sound procedure to reconstruct such objects is therefore a paramount step in the event analysis, whether online or offline.

Unlike other physics objects such as leptons and photons, jets are not supported by a unique theoretical definition. Consequently, the critical variables of a jet, such as its transverse momentum, mass and area, are not only determined by the chain of reconstruction and clustering algorithms employed, but can vary dramatically across the various possible choices.

The `νqq analysis employs three different reconstruction procedures, with varying inputs and parameters, to produce three jet collections from the information gathered in the EM and hadronic calorimeters. An overview of these procedures is provided in this section.

3.3.1 Small-R Jets

Calorimetric Cluster Reconstruction

The earliest stage of the jet reconstruction procedure focuses on the identification of the calorimeter cells showing a significant amount of deposited energy, which are then joined into groups of cells generally referred to as clusters. Depending on the algorithm of choice, clusters can be of fixed or variable size in the η × φ plane. The algorithm used to reconstruct the jets used in the diboson analyses creates three-dimensional clusters of variable size (named topological clusters, or topo-clusters), joining together nearby cells according to the value of the cell signal significance [38]:

$$\varsigma_{\mathrm{cell}}^{\mathrm{EM}} = \frac{E_{\mathrm{cell}}^{\mathrm{EM}}}{\sigma_{\mathrm{cell}}^{\mathrm{EM}}} \qquad (3.2)$$

where all energy measurements are carried out at the electromagnetic scale [39], without any correction accounting for the hadronic energy loss due to the non-compensating nature of the ATLAS calorimeter. The standard deviation of the cell noise σ_cell^EM is computed as the quadrature sum of two contributions (a numerical sketch follows the list):

• the RMS of the electronics noise of the cell calculated for the current operational working point;

• the RMS of the expected energy deposit coming from pile-up events, as estimated using randomly triggered events.
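The significance of Eq. 3.2 and its noise term can be sketched numerically as follows; all values below are illustrative.

```python
# Sketch of the cell signal significance of Eq. 3.2 (illustrative values).
import math

def cell_noise(sigma_electronics, sigma_pileup):
    """Quadrature sum of electronics and pile-up noise RMS, in MeV."""
    return math.hypot(sigma_electronics, sigma_pileup)

def cell_significance(e_cell_em, sigma_electronics, sigma_pileup):
    """Cell energy at the electromagnetic scale divided by the cell noise."""
    return e_cell_em / cell_noise(sigma_electronics, sigma_pileup)

# A 400 MeV deposit over 50 MeV electronics and 80 MeV pile-up noise:
print(cell_significance(400.0, 50.0, 80.0))  # ~4.2, enough to seed a cluster (S = 4)
```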

The clustering then proceeds in three steps:

1. All calorimeter cells with |ς_cell^EM| > S are identified as cluster seeds and sorted in descending order of signal significance. The seed threshold is rather high, suppressing the formation of noise-triggered clusters.

2. The immediate neighbours of each seed are examined. Such cells are defined as the 8 cells adjacent to the seed in the same layer, plus the cells from adjacent calorimeter layers (or adjacent calorimeters altogether) overlapping with the seed cell area in the η × φ plane. If such a cell has |ς_cell^EM| > N, it is added to a list of proto-cluster cells specific to each seed cell. This stage is carried out iteratively until no more neighbouring cells with |ς_cell^EM| > N are found. If at any iteration a neighbouring cell is also a seed, the two proto-clusters are merged.

3. The cluster is finalised by adding to the cluster perimeter any adjacent cell with |ς_cell^EM| > P.

The values of S, N and P are hierarchical (S > N ≥ P) to match the expected spatial profile of a physical energy deposit, and the ATLAS-optimised configuration for the reconstruction of hadronic final states is S = 4, N = 2, P = 0. The algorithm as described is however inherently limited, as a cluster could become too large through successive merging stages to provide a good measurement of the energy flow in the shower. This would lead to a critical loss of the information concerning the substructure of the hadronic event, most significantly in the highly irradiated, high-occupancy environment of the HEC. To prevent information loss from excessive cluster merging, the topological clustering procedure is equipped with a splitting algorithm that searches for local maxima above ET > 500 MeV inside the clusters reconstructed in the EM calorimeter (exploiting its higher granularity). These cells then become the seeds of a procedure aimed at re-clustering the cells without any possibility of cluster merging. In the event of a cell shared by two clusters, the energy deposited in it is split according to a weighting parameter sensitive to the distance between the cell and the two clusters. A toy implementation of the seed/grow/perimeter logic is sketched below.
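The following toy reduces the procedure to a 2D grid of cell significances with the S = 4, N = 2, P = 0 thresholds; real ATLAS clustering works in three dimensions across calorimeter layers and includes the splitting step, so this is only an illustration of the core logic.

```python
# Toy 4-2-0 topological clustering on a 2D grid of |significance| values.
import numpy as np
from collections import deque

S, N, P = 4.0, 2.0, 0.0

def topo_clusters(sig):
    """sig: 2D array of |cell significance|. Returns a label map (0 = unclustered)."""
    labels = np.zeros(sig.shape, dtype=int)
    # seeds sorted by descending significance
    seeds = sorted(zip(*np.where(sig > S)), key=lambda c: -sig[c])
    n_rows, n_cols = sig.shape
    label = 0
    for seed in seeds:
        if labels[seed]:   # seed already absorbed: stands in for proto-cluster merging
            continue
        label += 1
        labels[seed] = label
        queue = deque([seed])
        while queue:       # iterative growth over neighbours above N
            i, j = queue.popleft()
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < n_rows and 0 <= nj < n_cols
                            and not labels[ni, nj] and sig[ni, nj] > N):
                        labels[ni, nj] = label
                        queue.append((ni, nj))
        # perimeter: attach adjacent cells above P, without letting them grow further
        for i, j in np.argwhere(labels == label):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < n_rows and 0 <= nj < n_cols
                            and not labels[ni, nj] and sig[ni, nj] > P):
                        labels[ni, nj] = label
    return labels
```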

Once the reconstruction stage is over, topo-clusters are treated as physical objects with vanishing mass, energy equal to the sum of the energies of the cells included in the cluster, and η and φ calculated as the energy-weighted averages of the centres of said cells. At this point, the topological cluster is ready to serve as the input to the clustering algorithm elected to reconstruct jets out of the energy clusters.

Jet clustering algorithm

The vast majority of jet clustering algorithms fall into two categories with opposing underlying philosophies: cone jet algorithms [40] and sequential recombination algorithms. This section will focus on the general idea behind sequential recombination algorithms and provide an in-depth description of the specific algorithm used in the diboson analyses.

Sequential recombination algorithms build jets by joining together input elements (be they clusters, particles or tracks) considered likely to originate from a single parton through QCD radiation. As the cross section for the emission of a gluon by a parton diverges for both soft and collinear radiation, recombination between two nearby clusters, or between two clusters showing a pronounced energy imbalance, is greatly favoured.

This line of reasoning is implemented by considering an ad hoc definition of the distance between two clusters:

$$d_{ij} = \min\left(k_{Ti}^{2p},\, k_{Tj}^{2p}\right) \frac{\Delta R_{ij}^2}{R^2} \qquad (3.3)$$

where i, j are indices running over the cluster list, k_Ti is the transverse momentum of the i-th cluster, p a weighting parameter, and ∆R_ij the geometrical distance between the two clusters in the y × φ plane. A sketch of this distance is given below.
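The distance of Eq. 3.3, together with the standard beam distance that completes the recombination scheme, can be sketched as follows; the choice p = 1 yields the kt algorithm, p = 0 Cambridge/Aachen and p = -1 anti-kt.

```python
# Sketch of the sequential-recombination distances of Eq. 3.3.
import math

def d_ij(kt_i, y_i, phi_i, kt_j, y_j, phi_j, p=-1, R=1.0):
    """Inter-cluster distance; p selects the algorithm (kt, C/A, anti-kt)."""
    dphi = math.remainder(phi_i - phi_j, 2 * math.pi)  # wrapped to [-pi, pi]
    dr2 = (y_i - y_j) ** 2 + dphi ** 2
    return min(kt_i ** (2 * p), kt_j ** (2 * p)) * dr2 / R ** 2

def d_iB(kt_i, p=-1):
    """Beam distance: when smaller than every d_ij, cluster i is promoted to a jet."""
    return kt_i ** (2 * p)

# At each iteration the smallest distance is found: if it is a d_ij the two
# clusters are merged; if it is a d_iB the cluster becomes a final jet.
print(d_ij(100.0, 0.0, 0.0, 50.0, 0.2, 0.1, p=-1, R=1.0))
```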
