
Studies of characteristics and performances of the new tracker of MEG II experiment

Academic year: 2021


Università di Pisa

FACOLTÀ DI SCIENZE MATEMATICHE, FISICHE E NATURALI

Corso di Laurea Magistrale in Fisica delle Interazioni Fondamentali

Master's thesis

Studies of characteristics and performances of the new tracker of the MEG II experiment

Candidate:

Beatrice Pruneti

Supervisor:


Contents

1 Theory and phenomenology of Charged Lepton Flavour Violation (CLFV)
1.1 Brief review of the Standard Model
1.2 The Muon decay in the Standard Model
1.3 Lepton Flavour Violation in the Standard Model
1.3.1 Neutral Lepton Flavour Violation: neutrino oscillations
1.3.2 Charged Lepton Flavour Violation in the µ sector
1.4 Charged Lepton Flavour Violation beyond the SM: the muon decay in SUSY-GUT theories
1.5 Overview of Charged Lepton Flavour Violation experiments

2 The MEG experiment
2.1 Event signature and background for µ+ → e+ + γ
2.1.1 Correlated background
2.1.2 Accidental background
2.1.3 Single Event Sensitivity (SES)
2.2 Experimental apparatus of MEG 1st phase
2.2.1 Positron spectrometer
2.2.2 The Drift Chamber (DCH)
2.2.3 The Timing Counter
2.3 The Liquid Xenon Photon Detector (LXe)
2.4 The DAQ and trigger system
2.5 Performance of MEG 1st phase

3 The MEG II experiment: upgrade motivations
3.1 Target and µ+ beam
3.2 The new MEG Cylindrical Drift Chamber (CDCH)
3.3 The Timing Counter (TC)
3.4 The Liquid Xenon Photon Detector (LXe)
3.5 TDAQ system
3.6 Radiative Decay Counter (RDC)

4 The new tracker of MEG 2nd phase: the Cylindrical Drift Chamber (CDCH)
4.1 Tracker description
4.1.1 The CDCH wires
4.1.2 Gas flushing system and Cluster Timing
4.1.3 Tracker electronics
4.2 Drift Chamber construction
4.2.1 Description of the wiring robot and wiring procedures
4.2.2 Wire acceptance tests
4.2.3 Mounting procedures
4.2.4 Radius and end-plate mechanical alignment measurements
4.2.5 After assembly
4.3 Problems with the wires

5 MEG 2nd phase software
5.1 Brief review of the software architecture
5.2 The algorithm of positron reconstruction: the Pattern Recognition
5.2.1 Introduction and reconstruction of positron coordinates
5.2.2 Pattern recognition operating principle and methods
5.3 Correlation between CDCH Drift Chamber and Timing Counter: a possible method to improve positron tracking
5.3.1 Formation of true track segments and ∆Φ-∆Z correlation scatter plot
5.3.2 The true CDCH-TC correlation matrix, the True-Rec matrix and the distance (δd) cut
5.3.3 The β cut optimisation
5.3.4 The δz cut optimisation
5.3.5 Next step: applying the selection algorithm to the track fitting information
5.3.6 Possible application of the ∆Z-∆Φ algorithm to a second-level trigger


Introduction

The MEG (Mu to E Gamma) experiment was designed and built to search for Charged Lepton Flavour Violation in the muonic sector; the investigated process is the decay

µ+ → e+ + γ  (1)

a process totally forbidden in the Minimal Standard Model (MSM) and practically forbidden also in its extensions. The evidence of this specific process would have a strong impact on particle physics, because it would be very strong support for theories beyond the Standard Model. The first phase of the experiment, called MEG, collected data from 2009 to 2013, obtaining a final result of BR(µ+ → e+ + γ) < 4.2 × 10⁻¹³ at 90% Confidence Level and establishing a new upper limit on this branching ratio. The second phase of the experiment, the MEG Upgrade or MEG II, is designed to reach a final sensitivity on BR(µ+ → e+ + γ) of ≈ 5 × 10⁻¹⁴; it is under construction and provides a significant upgrade of all sub-detectors, accompanied by an increase of the muon stopping rate by a factor > 2. Both the MEG experiment and its upgrade are located at the Paul Scherrer Institute (PSI), near Zurich; PSI hosts the most intense continuous low-energy muon beam in the world, well suited for this kind of experiment.

This master thesis will develop as follows:

• The first chapter introduces briefly the Standard Model and the Lepton Flavour Violation physics, and describes the global overview of the Charged Lepton Flavour Violation experimental scene with particular emphasis on the muonic sector;

• The second and third chapters are dedicated to the description of the MEG experiment and of its upgrade (MEG II), providing a complete and global description of all sub-detectors;

• The fourth chapter is entirely dedicated to the description of the Cylindrical Drift Chamber (CDCH), the new tracker of the MEG II experiment, and to its assembly procedures, together with the description of the tests on the tracker wires performed in Pisa. The tracker is mounted and tested in the INFN facility in San Piero a Grado;


• The fifth chapter provides a brief description of the MEG II software; the chapter then focuses on the description of the Pattern Recognition algorithm (the first step of the positron track reconstruction) and of a new selection program applied to the portions of the positron trajectory (called track segments) before sending them to the Kalman Filter for the final reconstruction of the positron kinematic variables. This new selection algorithm exploits the φ − z correlation between the CDCH track segments and the TC ones, in order to perform a pre-selection on the reconstructed segments and single out those corresponding to more energetic tracks;

• The last chapter provides the conclusions and envisages the future developments of the positron reconstruction and a possible implementation of this selection algorithm in a second-level trigger.


Chapter 1

Theory and phenomenology of Charged Lepton Flavour Violation (CLFV)

1.1 Brief review of the Standard Model

Particle physics phenomenology is very well described by the Standard Model of Particle Physics [1][2][3]. This theory describes three of the four fundamental interactions in the universe: the electromagnetic force, the weak force (later unified with the electromagnetic one into the electroweak interaction by Sheldon Glashow, Steven Weinberg and Abdus Salam [4][5][6]) and the strong force. The Standard Model of Particle Physics (SM) is based on the gauge group

SU(3)C ⊗ SU(2)L ⊗ U(1)Y  (1.1)

SU(3)C is the symmetry group that describes the strong interactions (colour symmetry, with coupling constant gs), while SU(2)L (weak isospin symmetry, with coupling constant g) and U(1)Y (hypercharge symmetry, with coupling constant g′) together are related to the electroweak interaction. The minimal SM is based on the following fundamental components:

• Leptons: leptons are six fermions, three charged (electron, muon, tau) and three neutral (the corresponding neutrinos), plus their respective antiparticles. The electron, muon and tau with their respective neutrinos form three different families, each characterized by a quantum number called the Leptonic Family Number. This quantum number was added a posteriori, since processes where leptons change generation have never been observed. Every generation is described by two left-handed spinorial fields (SU(2)L doublets) and one right-handed spinorial field, a singlet under the SU(2)L symmetry.


(νe, e)L,  (νµ, µ)L,  (ντ, τ)L  (1.2)

eR, µR, τR  (1.3)

The SM is left-right asymmetric, and left-handed and right-handed fermions have different quantum numbers. Right-handed neutrinos don't exist in the minimal SM.

• Quarks: quarks are elementary particles, constituents of the hadrons, and interact both electroweakly and strongly. Like leptons, quarks are gathered in three generations, arranged in doublets of SU(2)L and in six right-handed singlets.

QαL = (uα, dα)L  (1.4)

uRα = uR, cR, tR  (1.5)

dRα = dR, sR, bR  (1.6)

where uL(R)α represents the up-type quarks (up, charm and top), while dL(R)α indicates the down-type quarks (down, strange and bottom). The index α indicates the three colours (usually denoted as red, green and blue, or 1, 2 and 3). All doublets include an up-type quark and a down-type quark. The up-type quarks have electric charge equal to +2/3, while the down-type quarks have −1/3. The quark strong eigenstates are equivalent to the mass eigenstates, while the quark weak eigenstates are connected to the mass eigenstates through the Cabibbo-Kobayashi-Maskawa (CKM) matrix, a unitary matrix that describes the transition probability from one quark i to another quark j.

(d′, s′, b′)ᵀ = VCKM (d, s, b)ᵀ,  with VCKM = ((Vud, Vus, Vub), (Vcd, Vcs, Vcb), (Vtd, Vts, Vtb))  (1.7)

• Boson force carriers: there are 12 bosons of this type: one massless boson for the electromagnetic force (γ), three massive bosons for the weak force (W± and Z0) and eight massless gluons for the strong interaction. The photon, W± and Z0 are produced by the spontaneous breaking of the electroweak symmetry from SU(2)L ⊗ U(1)Y to U(1)em, caused by the Higgs field. In this way W±, Z0 and γ are linear combinations of the gauge bosons of the weak isospin symmetry (W1, W2 and W3) and of the weak hypercharge symmetry (B).


W± = (1/√2)(W1 ∓ iW2)  (1.8)

(Z0, γ)ᵀ = ((cos θW, −sin θW), (sin θW, cos θW)) (W3, B)ᵀ  (1.9)

where θW is the Weinberg angle, related to the coupling constants by tan θW = g′/g. The Weinberg angle describes the rotation of the W3 and B gauge fields into the observable particles Z0 and γ. Moreover, it connects the Z0 mass to the W± mass through the relationship

mZ = mW / cos θW  (1.10)

• Higgs boson: the particle associated with the Higgs field, a scalar doublet field with a non-zero vacuum expectation value, which through spontaneous symmetry breaking (the Higgs mechanism) gives mass to fermions and weak bosons. On 4th July 2012 it was announced, at a CERN conference, that the data collected during Run I of the ATLAS and CMS experiments highlighted the presence of a Higgs-like particle with a mass of 125 GeV [7][8].
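The relation mZ = mW / cos θW of equation (1.10) can be checked numerically; a minimal sketch, using the on-shell definition sin²θW = 1 − m²W/m²Z and PDG boson masses rounded for illustration:

```python
import math

# Approximate boson masses in GeV (PDG values, rounded for illustration).
m_W = 80.38
m_Z = 91.19

# On-shell definition of the weak mixing angle: sin^2(theta_W) = 1 - mW^2/mZ^2.
sin2_theta_W = 1.0 - (m_W / m_Z) ** 2
theta_W = math.asin(math.sqrt(sin2_theta_W))

# Eq. (1.10): mZ = mW / cos(theta_W). With the on-shell definition this
# closes exactly, which is precisely the content of the relation.
m_Z_pred = m_W / math.cos(theta_W)

print(f"sin^2(theta_W) = {sin2_theta_W:.3f}")  # ~0.223 (on-shell)
print(f"mZ predicted   = {m_Z_pred:.2f} GeV")
```

Using instead a mixing angle measured in a different renormalisation scheme would reproduce mZ only at the half-percent level, the residual being higher-order electroweak corrections.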

The Higgs field can be written as:

H(x) = (φ+, φ0)ᵀ = (0, (ν + h(x))/√2)ᵀ  (1.11)

where ν is the vacuum expectation value (VEV) of the Higgs field, defined as ⟨0|H(x)|0⟩ and equal to ∼ 246 GeV: this produces the spontaneous symmetry breaking that, as already mentioned, gives mass to fermions and vector bosons. h(x) is the dynamical field, whose vacuum expectation value ⟨0|h(x)|0⟩ is equal to zero. The Standard Model lagrangian can be written as the sum of three terms

L = Lgauge + LYukawa + LHiggs  (1.12)

where

Lgauge = −(1/4) Σ_{SU(3)C,SU(2)L,U(1)Y} Faµν Faµν + i Σ_{quarks,leptons} ψ̄i γµ Dµ ψi + (DµH)†(DµH)  (1.13)

LYukawa = −Ylij L̄i,L H lj,R + h.c.  (1.14)

LHiggs = −µ²|H|² + λ|H|⁴  (1.15)

Lgauge is the lagrangian that describes the free lepton and boson fields and their interactions with the Higgs field. Fµν is the field strength tensor for a spin-1 gauge field:

Faµν = ∂µAaν − ∂νAaµ + g fabc AbµAcν  (1.16)

The index a runs over all generators of the symmetry groups of the SM. Aµ is the gauge field with coupling constant g, and fabc are the structure constants of the SU(3)C and SU(2)L gauge groups (for an abelian group, like U(1), the structure constants are zero). Dµ is the covariant derivative, defined as:

Dµ = ∂µ + i gs (λa/2) Gaµ + i g (τb/2) Wbµ + i g′ (Y/2) Bµ  (1.17)

where τb (with b = 1, 2 and 3) are the Pauli matrices, representing the SU(2)L generators; λa (a = 1–8) are the Gell-Mann matrices, representing the SU(3)C generators; g, gs and g′ are respectively the weak isospin coupling constant, the strong coupling constant and the weak hypercharge coupling constant; Wb are the gauge bosons generated by SU(2)L (the symmetry group associated with weak isospin) and B is the gauge boson generated by the U(1) symmetry group (associated with weak hypercharge). Equation (1.15) defines the Higgs potential, where µ is the Higgs mass parameter and λ is the Higgs self-coupling parameter. The Higgs field is a doublet of complex scalar fields with definite hypercharge and weak isospin.

H = (φ+, φ0)ᵀ = (φ1 + iφ3, φ2 + iφ4)ᵀ  (1.18)

The weak hypercharge, third weak isospin component and charge values of the complex fields φ+ and φ0 are shown in the table below.

H(x) Y I3 Q = I3+ Y/2

φ+ 1 1/2 1

φ0 1 -1/2 0

Because µ² < 0, all minima are connected by a U(1) transformation, and the choice of a particular minimum leads to the spontaneous breaking of three degrees of freedom of the SU(2)L ⊗ U(1)Y symmetry. The choice to set φ3 and φ4 equal to zero also imposes φ1 equal to zero, because the vacuum has zero charge. Therefore the Higgs field in the vacuum is:

H = ⟨0|φ|0⟩ = (0, ν)ᵀ = (0, √(−µ²/λ))ᵀ  (1.19)


where ν is the previously mentioned VEV. By means of the coupling to the Higgs field, vector bosons and fermions acquire mass.

The fermion mass lagrangian is given by LYukawa. After the spontaneous symmetry breaking, equation (1.14) becomes:

LYukawa = −l̄i,R Mlij lj,L + h.c.  (1.20)

where li,R are the right-handed leptons and li,L are the left-handed leptons. As one can see in equation (1.20), the lagrangian is expressed in terms of the mass matrices Mlij, and therefore the coupling of the fermions to the Higgs field is proportional to the fermion mass. The elementary particles of the SM are shown in Figure 1.1.

Figure 1.1: The elementary particles of matter

Though it is clear that the SM provides a good description of particle physics, it leaves some issues unsolved, both observational and theoretical:

• The Standard Model doesn’t take into account the gravitational force, described by the general relativity;

• The total mass–energy of the universe contains 26.8% dark matter of non-baryonic origin and 68.3% dark energy; only 4.9% of the universe is composed of ordinary matter [9];

• The SM doesn’t explain the matter-antimatter asymmetry, because the CP violation in the weak interaction is too tiny to solve the asymmetry; • In the Minimal Standard Model the neutrinos are massless, but the neutrino oscillations prove it isn’t true. The right-handed neutrino νR

(11)

and his coupling with the Higgs field could be added manually, but anyway the neutrino oscillations aren’t explained yet. Moreover neu-trino oscillations violate Leptonic Family Number, a quantum number preserved in the minimal SM;

• The SM doesn’t explain why three generations of leptons and quarks do exist and their mass hierarchy;

• Hierarchy and fine-tuning problems are also unresolved.

1.2 The Muon decay in the Standard Model

Since the muon decay µ → e + γ is searched for in the MEG experiment, it is useful to preliminarily analyse in deeper detail the muon and its interactions in the Standard Model. The muon was discovered in 1937 by Neddermeyer and Anderson in a cloud chamber exposed to cosmic rays [10]. Initially it was interpreted as the short-range strong mediator of the Yukawa interaction, but it was later proved to have the wrong properties. The muon is a lepton with lifetime and mass equal to [11]

τµ = 2.1969811 ± 0.0000022 µs  (1.21)

mµ = 105.6583715 ± 0.0000035 MeV  (1.22)
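These two numbers are consistent with the tree-level Michel prediction for the total width, Γµ = G²F m⁵µ / (192 π³), a standard result not derived in this thesis; a minimal numerical check (neglecting the electron mass and radiative corrections, which shift the result at the per-mille level):

```python
import math

G_F = 1.1663787e-5      # Fermi constant, GeV^-2
m_mu = 0.1056583715     # muon mass, GeV
hbar = 6.582119569e-25  # GeV * s

# Tree-level Michel decay width: Gamma = G_F^2 * m_mu^5 / (192 * pi^3)
gamma = G_F**2 * m_mu**5 / (192.0 * math.pi**3)

tau_mu = hbar / gamma   # lifetime in seconds
print(f"tau_mu = {tau_mu * 1e6:.4f} us")  # ~2.19 us, vs measured 2.1970 us
```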

The interaction lagrangian of the muon in the SM is

L = e µ̄ γµ µ Aµ − (g/√2)(ν̄µL γµ µL W+µ + µ̄L γµ νµL W−µ) + √(g² + g′²) [ µ̄L γµ (sin²θW − 1/2) µL + µ̄R γµ sin²θW µR ] Z0µ − (mµ/v) µ̄ µ H.  (1.23)

and describes the interactions of the muon with Aµ, Wµ± and Zµ0 fields. The main muon decay modes are

µ− → e− + ν̄e + νµ  (1.24)

µ+ → e+ + νe + ν̄µ  (1.25)

These decay modes have a branching ratio of ≈ 99% and are also called Michel decays. They are explained by the second term of equation (1.23), where the muon couples to the W± bosons. Another important muon decay is the radiative decay:


µ− → e− + ν̄e + νµ + γ  (1.26)

µ+ → e+ + νe + ν̄µ + γ  (1.27)

In this process the usual muon decay is accompanied by the emission of a photon through inner bremsstrahlung. The branching ratio of this process is BR(µ → e + νe + ν̄µ + γ) = (1.4 ± 0.4)% for Eγ > 10 MeV [12]. This process constitutes a background for the searched event µ → e + γ.

1.3 Lepton Flavour Violation in the Standard Model

1.3.1 Neutral Lepton Flavour Violation: neutrino oscillations

As explained before, neutrinos in the Minimal Standard Model are massless. However, the observation of neutrino oscillations (predicted by Bruno Pontecorvo in 1957 [13]) in many experiments, like Homestake [14], MACRO [15], KamLAND [16], GALLEX [17], SNO [18], SuperKamiokande [19], T2K [20], RENO [21], Daya Bay [22] and Double CHOOZ [23], proves that at least two of the three neutrinos have mass. The SuperKamiokande spokesperson Takaaki Kajita and the Sudbury Neutrino Observatory spokesperson Arthur McDonald were awarded the 2015 Nobel Prize for the discovery of neutrino oscillations. This discovery is clearly in contrast with the Minimal Standard Model (MSM) assumption of massless neutrinos. Indeed, in the Minimal Standard Model the interaction between leptons and bosons doesn't have a mass term, and the interaction lagrangian is the following:

−L = (g/√2) JµL WLµ  (1.28)

where JµL is the leptonic current

JµL = Σ_{lepton flavour l} ν̄lL γµ lL  (1.29)

In (1.29) the index l denotes the lepton flavour, while the index L refers to the left-handed projection. From (1.28) and (1.29) it is clear that the right-handed neutrino doesn't appear in the interaction, and the lepton flavour is automatically conserved. If a mass term which is not diagonal in the flavour eigenstate basis is added by hand to the lagrangian (1.28), a mixing between the flavour eigenstates is induced. The interaction eigenstates |νl⟩, which are observed directly in the experiments, are then linear combinations of the mass eigenstates |ν1⟩, |ν2⟩, |ν3⟩, expressed in terms of a 3 × 3 unitary mixing matrix, analogous to the CKM matrix of the quark sector. So each member of the flavour eigenbasis can be expressed by the relation:

|νl⟩ = Σi U*li |νi⟩  (1.30)

where |νl⟩ represents the flavour eigenstates and |νi⟩ the mass eigenstates. If we analyse the simplest case of two-family mixing, the flavour eigenstates are related to the mass eigenstates through a 2 × 2 rotation matrix:

(νe, νµ)ᵀ = ((cos θ, −sin θ), (sin θ, cos θ)) (ν1, ν2)ᵀ  (1.31)

θ is the mixing angle, generated by the off-diagonal terms of the mass term added to equation (1.28). Assuming the ultrarelativistic limit (perfectly valid, since the present upper limit on the electron neutrino mass is 2.2 eV [25]), the transition probability in vacuum for a neutrino of energy Eν and flavour α to oscillate into flavour α′ is

P(|να⟩ → |να′⟩) = sin²(2θ) sin²(1.27 ∆m²[eV²] L[m] / E[MeV])  (1.32)

where ∆m² = m²1 − m²2 is the squared-mass difference of the two mass eigenvalues m1 and m2. It is clear that the transition probability goes to zero when θ or ∆m² vanishes. As already stated, many experiments have searched for neutrino oscillations. The neutrino oscillation search is based on the observation (direct or indirect) of neutrinos with a well defined flavour at a given distance L from the production point: experiments which measure the appearance of neutrinos of a flavour different from that of the neutrinos produced at the source are called appearance experiments, while experiments which compare the number of produced neutrinos with the number of observed neutrinos of the same flavour in a detector are called disappearance experiments. Moreover, there are several types of neutrino sources:

• Solar neutrinos: in the late 1960s, the Homestake experiment measured for the first time the flux of solar neutrinos [14] and observed only one third of the theoretical value predicted by the Standard Solar Model [26]. This experimental observation was named the solar neutrino problem. After Homestake, three other experiments measured the solar neutrino flux: GALLEX, SAGE and Kamiokande. Like Homestake, GALLEX and SAGE were radiochemical experiments based on inverse β decay reactions, and they confirmed the deficit observed by Homestake, with a measured neutrino rate about 60% of that predicted. SuperKamiokande also confirmed this deficit, but only with the Sudbury Neutrino Observatory (SNO) experiment, which measured the neutrino flux in a flavour-independent way, was it possible to definitively solve the solar neutrino problem by means of the neutrino oscillation mechanism. However, the situation in the Sun is more complicated than in the case of the vacuum oscillations schematically described previously. Indeed the electron neutrinos, produced within the Sun, can interact with the electrons of matter not only through neutral current scattering, but also through charged current coherent forward scattering (differently from muon and tau neutrinos, which can interact only via neutral current reactions). This means that electron neutrinos in matter have a different effective mass than in vacuum, causing a difference between the oscillations in vacuum and those in matter. This effect is called the Mikheyev-Smirnov-Wolfenstein effect [27][28];

• Atmospheric neutrinos: the primary hadronic cosmic rays interact with the Earth's atmosphere producing a large number of particles, mainly charged mesons like π and K. The charged mesons decay in flight producing µ+(µ−) and νµ(ν̄µ); the subsequent muon decay, if it takes place, produces e+(e−), νe(ν̄e) and ν̄µ(νµ). Then, if all decays occur, one expects to observe a ratio (νµ + ν̄µ)/(νe + ν̄e) ≈ 2. However, when the π (and consequently µ) energy increases, the µ produced by the π decay can arrive at the Earth's surface without decaying, and therefore (νµ + ν̄µ)/(νe + ν̄e) > 2. Many underground experiments (like Kamiokande [29], IMB [30], Soudan 2 [31] and MACRO [15]) have shown a νµ flux deficit. This result was interpreted as due to neutrino oscillations νµ → ντ or νµ → νe. In 1998 SuperKamiokande showed strong evidence of νµ → ντ oscillation, since the measured atmospheric muon neutrino flux showed a deficit, while the measured electron neutrino flux was in agreement with calculations. This fact ruled out the νµ → νe option, providing convincing evidence of νµ → ντ oscillation;

• Nuclear reactor neutrinos: nuclear reactors are intense isotropic sources of electron antineutrinos, clean from muon antineutrino contamination; indeed they are used for disappearance experiments. By means of these sources it is possible to investigate the oscillation of electron antineutrinos into another type of neutrino. The reactor experiments are also well suited for the determination of the θ13 mixing angle, because in a 3-ν mixing scheme, with a distance of ∼ 2 km between the production point and the detector, the ν̄e → ν̄x oscillation is dominated by the term of the PMNS mixing matrix involving θ13. The antineutrino mean energy ⟨Eν̄e⟩ is ≈ 3 MeV. Recently the Daya Bay [22], RENO [21] and Double CHOOZ [23] experiments measured the electron antineutrino disappearance, and they succeeded in measuring θ13, obtaining sin²(2θ13) = 0.090 +0.008 −0.009 for the Daya Bay experiment [53] and sin²(2θ13) = 0.097 ± 0.034(stat) ± 0.034(syst) for Double CHOOZ [23]. Previously, the CHOOZ and Palo Verde experiments had established significant upper limits on θ13 and ∆m² (CHOOZ: sin²(2θ13) < 0.17 and ∆m² < 8 × 10⁻⁴ eV² for maximal mixing [24]);

• Accelerator neutrinos: two neutrino energy intervals can be investigated: intermediate energy (Eν ≈ 50 MeV) and high energy (Eν > 1 GeV). In the intermediate interval one can search for the ν̄µ → ν̄e oscillation by means of appearance experiments (such as KARMEN [32] and LSND [33]). In the high energy interval, instead, one can study:

– disappearance of νµ;

– appearance of νe;

– appearance of ντ.

As an example, the OPERA experiment, located at the Laboratori Nazionali del Gran Sasso, in Abruzzo, Italy, used a muon neutrino beam from CERN [52]. The experiment started taking data in 2008 and the beam from CERN was stopped in 2012. In its lifetime the OPERA experiment measured 5 ντ-like vertices, which were interpreted as oscillation events of νµ neutrinos into ντ neutrinos [52].

The neutrino oscillation evidence has driven a necessary change in the formulation of the Standard Model; indeed, its discovery opened the possibility of Lepton Flavour Violation also in the charged sector, as we will see in the next section.

1.3.2 Charged Lepton Flavour Violation in the µ sector

The neutrino oscillation discovery shows that the neutrino mass is non-vanishing, and this fact allows the possibility of charged lepton flavour violation within the SM. We consider for example the µ → e + γ process, searched for by the MEG experiment. In the Feynman diagram reported on the top of Figure 1.2, the muon couples with a virtual W (that radiates a real γ by inner bremsstrahlung) and a muon neutrino. The muon neutrino, by oscillation, becomes an electron neutrino, which finally interacts with the virtual W boson to generate the electron detected in the final state. The branching ratio of this process is [34][35]

BR(µ → e + γ) = (3α/32π) | Σ_{i,j=1,2,3} U*µi Uej (∆m²ij / M²W) |²  (1.33)


where one can recognize a γ vertex (∼ α, the fine structure constant) and the neutrino oscillation contribution, which enters the branching ratio through the squared mass differences ∆m²ij and the mixing parameters Uli between the l flavour and the i-th mass eigenstate. The process energy scale is of the order of MW and, by Heisenberg's uncertainty principle, the oscillation takes place during the virtual W's lifetime. So we obtain L = ct ≈ 1/MW, and the dependence of the BR on ∆m²ij/M²W makes the process strongly suppressed: indeed, inserting the numerical values in equation (1.33), we obtain BR(µ → e + γ) ≈ 10⁻⁵⁴, far too small to be ever experimentally measured. For this reason, if CLFV takes place, its origin is outside the Standard Model. Other possible diagrams are also shown in Figure 1.2, where the photon is emitted by the initial muon or by the final electron.

Figure 1.2: Feynman diagrams of Charged Lepton Flavour Violation in the MSM extension including the neutrino oscillations.
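To see why the neutrino-oscillation contribution is hopelessly small, one can plug rough numbers into the scaling of equation (1.33); a back-of-the-envelope sketch (the mixing factor and ∆m² below are illustrative assumptions, not precise PMNS fit values):

```python
import math

alpha = 1.0 / 137.036  # fine structure constant
m_W = 80.38e9          # W mass in eV
dm2 = 2.5e-3           # atmospheric squared-mass difference, eV^2

# Illustrative PMNS mixing factor |U*_mu_i U_e_i|^2, of order 10^-2 (assumed).
mixing2 = 1e-2

# Scaling of eq. (1.33): BR ~ (3 alpha / 32 pi) * |U*_mui U_ei|^2 * (dm2 / mW^2)^2
br = (3.0 * alpha / (32.0 * math.pi)) * mixing2 * (dm2 / m_W**2) ** 2
print(f"BR(mu -> e gamma) ~ {br:.1e}")
# Of order 1e-55 .. 1e-54, consistent with the ~1e-54 quoted in the text:
# the suppression comes entirely from (dm2 / mW^2)^2 ~ 1e-49.
```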

1.4 Charged Lepton Flavour Violation beyond the SM: the muon decay in SUSY-GUT theories

Since many questions are still unresolved in the Standard Model, the general opinion among theorists is that the Standard Model is only a low-energy effective theory, and for this reason theories beyond the Standard Model, able to answer the unresolved problems left behind by the SM, have been sought for decades. A class of these are the Grand Unification Theories (GUTs), which assume that at high energy the three interactions described by the SM merge into a single force. This means that the gauge symmetry group enlarges with the energy. The electromagnetic, weak and strong forces appear as different aspects of a unique force, with a simple gauge symmetry group and a single coupling constant. The unification of the three interactions takes place at a very large energy, called the GUT scale, ΛGUT ≈ 10¹⁵ ÷ 10¹⁶ GeV. The Standard Model doesn't explain the hierarchy problem, the unresolved question of why the weak force is 10²⁴ times stronger than gravity. In technical terms, it is a mystery why mH, the mass of the Higgs field responsible for the spontaneous breaking of the SU(2) × U(1) symmetry, and mΣ, the mass of the Higgs field responsible for the breaking of the GUT group, have such different values: indeed mH/mΣ ≈ 10⁻¹⁴ ≪ 1. To justify this difference and solve other problems, Supersymmetry (SUSY) models were introduced, which also include gravity in the force unification. In particle physics there are two main particle groups: bosons, which have integer spin, and fermions, which have half-integer spin. In the SUSY framework, a supersymmetric partner belonging to the opposite group is associated with each fundamental particle: a SUSY fermion is associated with every ordinary boson, and vice versa. Therefore SUSY introduces a symmetry between bosons and fermions, which has the fundamental property of producing cancellations, at each order, of divergent diagrams, thus solving the hierarchy problem. However, in nature the symmetry between bosons and fermions is clearly broken, and this breaking (for the Minimal Supersymmetric Standard Model, MSSM) happens at the ∼ 1 TeV energy scale [36][37].

In SUSY and SUSY-GUT theories, LFV processes like µ → e + γ naturally arise. At the GUT energy scale the slepton mass matrix is diagonal in flavour space, but in the evolution from the GUT to the electroweak scale, i.e. with decreasing energy, the off-diagonal terms of the slepton mass matrix become relevant: this slepton flavour mixing generates the CLFV. The predicted BRs of CLFV processes in SUSY theories are within the reach of experimental techniques: for instance, the prediction for the µ+ → e+ + γ decay


Figure 1.3: Feynman diagrams of Charged Lepton Flavour Violation in SUSY-GUT theories. On the left, the muon couples to a slepton and a chargino, while on the right it couples to a slepton and a neutralino.

1.5 Overview of Charged Lepton Flavour Violation experiments

The CLFV search is one of the fundamental keys in the field of particle physics. The positive observation of a CLFV process such as µ → e + γ would be clear evidence of physics beyond the Standard Model, and it would show that lepton flavour violation takes place both in the neutral and in the charged sector. CLFV is experimentally studied through many channels, which are (l and h(s) denote generic leptons and hadrons):

• Rare decays of muon and tau: µ+ → e+ + γ, µ+ → e+e−e+, τ± → l± + γ (above all τ± → µ± + γ), τ → 3l and τ → l + h(s);

• Rare kaon decays: K0L → eµ, K+ → π+e±µ∓;

• Direct conversion between leptons of different flavours in a nuclear field: µ−A → e−A, µ → τ.

Moreover, the search for SUSY particles at hadronic colliders like the LHC opens the possibility of searching for their CLFV decays. In searches for CLFV processes, a fundamental role is played by the Muonic Sector, where experiments search for rare µ decays and µ conversion. Because of energy-momentum conservation, only a few channels are available for CLFV studies in reactions involving muons, which are:

• µ+ → e+ + γ: this decay is the historical channel for studying CLFV processes. The first studies in the muon sector date back to 1948, when Hincks and Pontecorvo searched for this anomalous decay by using cosmic rays [13], and since then this search has been repeated several times. The positive muons are produced by the decay of positive pions, which in turn originate from the collision of protons on a fixed target. Muons decay at rest emitting simultaneously a photon and a positron in back-to-back directions, so the photon and the positron have the same energy, equal to Eγ = Ee ≈ mµ/2 = 52.83 MeV. Negative muons are not used because they could be captured in nuclear matter when stopped within it. These types of experiments are characterized by a clear signature determined by the simple kinematic relation between the positron and the gamma. The µ+ → e+ + γ decay is searched for in the MEG experiment, at the Paul Scherrer Institute (PSI). The decay and the MEG experiment will be described in detail in the next chapter;

• µ → 3e: in many models (for example SUSY-GUT theories) the BR of this process is connected to the µ → e + γ BR by an α factor, because the positron-electron pair is produced by a virtual photon. This process is characterized by only charged particles in the final state. The experiments searching for the µ+ → e+e−e+ process are based on kinematical criteria: the momenta of all electron triplets are reconstructed, and the selected events must have zero total three-electron momentum, with total charge equal to +e. The invariant mass of the three particles must be equal to the muon mass and the three simultaneous tracks must originate from a common vertex. In 2013 PSI approved a new experiment, called Mu3e [55][56], with the challenge of observing the triple electron decay, aiming at a BR sensitivity of 10⁻¹⁶. The present upper limit on BR(µ → 3e) is 10⁻¹², as shown in Figure 1.4 [38]; therefore the Mu3e experiment has the challenge of lowering the current upper limit by four orders of magnitude;

• µ−A → e−A: this process takes place when negative muons are stopped in nuclear matter, creating muonic atoms in the ground state. The negative muons can decay in orbit into an electron, emitting one electronic neutrino and one muonic neutrino, or they can be captured by the nucleus, emitting only a muonic neutrino. But if CLFV takes place, the muon can convert into a single electron. In the SUSY framework BR(µ → e) is 2 ÷ 3 orders of magnitude below BR(µ+ → e+ + γ), because the conversion is dominated by the exchange of a virtual photon, even if there is a large variety of SUSY schemes which predict rates for the µ → e conversion larger by hundreds of times. Two experiments will search for the neutrinoless muon to electron conversion: one is Mu2e [57][62], located at Fermilab, in the USA; the other is COMET, located at J-PARC, Japan [58][59][60][61]. The present upper limit on BR(µ−A → e−A) is 10−12: the Mu2e target sensitivity is 10−17, while COMET's is 10−16, namely five and four orders of magnitude better than the present limit. The COMET physics run is expected to start in 2018/2019, while the Mu2e team plans to have preliminary results in 2020.

Figure 1.4 shows a summary of the experimental upper limits on CLFV in the muonic sector as a function of time.


The Tauonic Sector comprises a group of experiments searching for rare τ decays. This sector is in principle very promising because the large τ mass (mτ ≈ 1.777 GeV ≈ 17 mµ) allows many decay channels in addition to the purely leptonic ones (τ → l + γ, τ → 3l, τ → l). In several SUSY and SUSY-GUT models BR(τ → µ + γ), dominated by a (mτ/mµ)^γ factor with γ ≥ 3, is 10³ ÷ 10⁵ times greater than BR(µ → e + γ), depending on the modelling of the slepton mixing matrix. However, many difficulties arise experimentally: the tau is an unstable particle with a very short lifetime (ττ ≈ 2.91 × 10−13 s), which makes it hard to handle. Indeed it is impossible to produce τ beams, and τ's are produced in pairs in e+e− or hadronic collisions. The most efficient machines are the B-factories, like SuperKEKB [39], since the cross section for producing τ+τ− at the Υ(4S) energy is 0.9 times that for producing b¯b. In each collision several particles are emitted, so it is difficult to identify τ events and to single them out from a large background. The present best experimental upper limits on CLFV in the τ sector are shown in Figure 1.5.

Figure 1.4: Evolution of the search for CLFV in muonic sector as a function of time. Open symbols represent future projects.


Figure 1.5: The 90% C.L. upper limits on the branching ratios of tauonic CLFV processes as a function of the τ decay channel. The Belle, BaBar and LHCb results can be viewed in the references therein [40][41][42].


Chapter 2

The MEG experiment

2.1 Event signature and background for µ+ → e+ + γ

The µ+ → e+ + γ signature is very simple and effective in discriminating the signal from the background. Positive muons decay at rest, producing two daughter particles, the photon and the positron, which are simultaneously emitted in back-to-back directions. Because of kinematical constraints, the two daughter particles have the same energy, equal to half of the muon mass: Eγ = Ee = mµ/2 = 52.83 MeV. Therefore it is very important to measure with high resolution the variables that identify the event: the relative time between photon and positron (teγ), the photon energy (Eγ), the positron energy (Ee), and the stereo angle between the directions of the two daughter particles (Θeγ ≈ π). In order for the muons to stop efficiently in the thin target, soft muons are used, produced by the decay at rest of charged pions in their production target. The main backgrounds which can mimic the µ+ → e+ + γ process are the Radiative Muon Decay, also called correlated background (or physical background), and the accidental background.
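As an illustration of the event signature, the four discriminating variables can be combined in a simple window cut; the window half-widths used below are hypothetical placeholders, not the actual MEG resolutions:

```python
import math

M_MU = 105.658  # muon mass [MeV]

def is_signal_like(E_e, E_gamma, t_egamma, theta_egamma,
                   dE_e=0.5, dE_gamma=1.0, dt=0.3e-9, dtheta=0.02):
    """Toy signal-box cut on the four kinematic variables.

    The window half-widths (dE_e [MeV], dE_gamma [MeV], dt [s],
    dtheta [rad]) are illustrative placeholders only.
    """
    e_half = M_MU / 2.0  # 52.83 MeV expected for both daughters
    return (abs(E_e - e_half) < dE_e and
            abs(E_gamma - e_half) < dE_gamma and
            abs(t_egamma) < dt and
            abs(theta_egamma - math.pi) < dtheta)

# a perfectly back-to-back, on-energy, in-time candidate passes:
print(is_signal_like(52.83, 52.83, 0.0, math.pi))
```

The actual analysis is of course far more refined, but the sketch shows why the four resolutions listed above drive the background rejection.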

2.1.1 Correlated background

The correlated background is given by the Radiative Muon Decay (RMD), a process where the muon decays into a final state formed by two neutrinos, one positron and one photon. Since the photon is emitted by Inner Bremsstrahlung, the γ of this process is also referred to as IB photon. The RMD can mimic a signal event when the photon and the positron are emitted back to back and the neutrinos carry away little energy, making the photon and positron energies close to mµ/2. The radiative muon decay rate is given by RRMD = Rµ × BR(µ+ → e+ + νe + ν̄µ + γ), and it is proportional to the muon beam rate Rµ. The differential decay width is expressed as a function of the adimensional variables x = 2Ee/mµ and y = 2Eγ/mµ (which equal 1 when the corresponding energy reaches its maximum value mµ/2) and of the relative emission angle, defined as z = π − Θeγ. As we can see in Figure 2.1, in the signal region (x, y = 1 and z = 0) the differential decay width vanishes, but the finite detector resolutions introduce background events, reducing the experimental sensitivity. To quantify the detector effect on the experimental sensitivity, we use the concept of Signal Box¹: we define a signal region centered on the nominal values, with widths equal to the positron energy resolution δx, the photon energy resolution δy and the relative angle resolution δz. By integrating the differential RMD width over that multi-dimensional box, we obtain the RMD branching ratio as a function of δx and δy, shown in the left panel of Figure 2.1. As one can see from that plot, at least one of the two energy resolutions must be very good to keep the branching ratio of the physics background at least one order of magnitude below the experiment sensitivity. For MEG, the effective branching ratio of the correlated background was 3 × 10−14 [44].

Figure 2.1: Left: the RMD branching ratio shown as a function of positron reduced energy (δx) and photon reduced energy (δy) resolutions. Right: the RMD branching ratio plot as a function of the photon reduced energy y after integration on positron energy and positron-photon relative angle [54].

2.1.2 Accidental background

This kind of background takes place when a photon and a positron, coming from two different processes, are in random temporal coincidence and are emitted collinearly, with a stereo angle Θ ≈ π. This is possible because

¹ This calculation is reported here to estimate the experimental sensitivity, but the actual search is based on a likelihood approach, not on a box approach.


there are many sources of high energy photons, such as the RMD described above, the various interaction processes between the positrons and the detector elements (annihilation-in-flight, bremsstrahlung) and the neutron interactions with materials near the detectors. The accidental background rate is given by Racc = Rµ × Bacc, where Rµ is the muon beam rate. More precisely, the accidental background rate Racc can be expressed with the formula:

Racc ∼ Rµ² δx (δy)² δteγ (δθeγ)²    (2.1)

where δx and δy are respectively the reduced energy resolutions of the positron (x = 2Ee/mµ) and of the photon (y = 2Eγ/mµ), as defined in Paragraph 2.1.1, and δteγ and δθeγ are respectively the relative time and relative angle resolutions between the two particles. As we can see in equation (2.1), the accidental background rate increases quadratically with the muon beam rate Rµ, because both positron and photon come from the muon beam. At high muon rates the accidental background becomes a serious problem, and for this reason it is preferable to use a continuous muon beam rather than a pulsed one, so as to reduce the number of accidental timing coincidences. The physical background rate RRMD is linearly proportional to the muon beam rate Rµ, while the accidental background rate Racc is proportional to Rµ²: therefore at high Rµ the accidental background dominates over the correlated one. Since the signal rate is also proportional to Rµ, the muon beam rate must be chosen carefully in order to optimise the signal to background ratio, taking into account all the resolutions. Moreover, due to the quadratic dependence of Racc on the relative angle resolution δθeγ and on the photon energy resolution δy, it is necessary to improve the resolutions on those variables (in addition to the timing and positron energy resolutions) to obtain a better rejection of uncorrelated events.
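The scaling of equation (2.1) can be checked numerically; the function below returns Racc only up to an unspecified proportionality constant, so only relative comparisons between calls are meaningful:

```python
def acc_rate(R_mu, dx, dy, dt, dtheta):
    """Accidental rate up to a constant: Racc ~ Rmu^2 * dx * dy^2 * dt * dtheta^2."""
    return R_mu**2 * dx * dy**2 * dt * dtheta**2

# reference rate with arbitrary unit resolutions:
base = acc_rate(3e7, 1.0, 1.0, 1.0, 1.0)

# doubling the beam rate quadruples the accidental background:
assert acc_rate(6e7, 1.0, 1.0, 1.0, 1.0) / base == 4.0

# halving the photon energy resolution also gains a factor 4:
assert base / acc_rate(3e7, 1.0, 0.5, 1.0, 1.0) == 4.0
```

This is why the MEG II upgrade (Chapter 3) targets the photon energy and relative angle resolutions in particular: they enter quadratically.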

2.1.3 Single Event Sensitivity (SES)

For a given branching ratio BR(µ+ → e+ + γ), the number of signal events N(µ+ → e+ + γ) is given by:

N(µ+ → e+ + γ) = BR(µ+ → e+ + γ) × k    (2.2)

k = Rµ T (Ω/4π) εe+ εγ εsel    (2.3)

where Rµ is the muon beam rate, T is the data-taking time, Ω/4π is the angular detector acceptance, εe+ and εγ are the detection efficiencies for e+ and γ respectively, and εsel is the signal selection efficiency, which takes into account the trigger and offline selection efficiencies. The Single Event Sensitivity is defined as the signal branching ratio for which the expected number of events is equal to 1, in absence of background. Therefore the SES is the inverse of the normalization parameter k in equations (2.2) and (2.3).
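Equations (2.2)-(2.3) translate directly into code; the numbers below (beam rate, live time, acceptance, efficiencies) are placeholder values chosen only to show the mechanics, not the actual MEG figures:

```python
def single_event_sensitivity(R_mu, T, omega_over_4pi, eff_e, eff_gamma, eff_sel):
    """SES = 1/k, with k = Rmu * T * (Omega/4pi) * eps_e * eps_gamma * eps_sel."""
    k = R_mu * T * omega_over_4pi * eff_e * eff_gamma * eff_sel
    return 1.0 / k

# illustrative values only (not the actual MEG numbers):
ses = single_event_sensitivity(R_mu=3e7, T=2e7, omega_over_4pi=0.11,
                               eff_e=0.4, eff_gamma=0.6, eff_sel=0.9)
print(f"SES ~ {ses:.1e}")
```

With these placeholder inputs the SES comes out in the 10−14 range, which illustrates the interplay between beam rate, live time and efficiencies.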


2.2 Experimental apparatus of MEG 1st phase

The goal of the MEG experiment [46] was to look for the µ+ → e+ + γ muon decay, whose identification would prove that CLFV exists and that physics beyond the Standard Model is a reality. To detect this extremely rare process, high resolution is needed on the photon and positron energies and on the relative time and relative angle between the two particles, to discriminate the wanted two-body decay from the background. For this reason it is fundamental for the experiment to have high resolution on the four variables which describe the event. The MEG experiment became operative in 2008 and took data from 2009 to 2013. The full dataset collected by the MEG experiment corresponds to 7.5 × 10¹⁴ µ+ stopped on the target. The experiment was located at the Paul Scherrer Institut (PSI) in Villigen, Switzerland. PSI has the most intense continuous muon beam in the world (10⁸ µ+/s) which, as described in the previous section, is the best suited for this kind of experiment. To produce the muon beam, the cyclotron belonging to the High-Intensity Proton Accelerator (HIPA) accelerates protons to a kinetic energy of 590 MeV. The protons then collide on two rotating graphite targets, producing π's, since the proton energy is below the K meson production threshold. The pions produce muons and positrons by decaying in flight and inside the target, and the positrons are eliminated from the beam by a combination of an electrostatic filter (Wien filter) and a lead collimator. The muon beam used by MEG was formed by surface muons, i.e. muons produced by π+ decays near the surface of the production target, with a momentum of 29.7 MeV/c.

The experiment was composed of three sub-detectors: the Drift Chamber (DCH), the Timing Counter (TC) and the Liquid Xenon Photon Detector (LXe). The first two, together with the COBRA magnet (Constant Bending Radius), formed the positron spectrometer, which reconstructed the trajectory and measured the momentum vector of the positrons with high precision. The total geometrical acceptance of the apparatus for a signal event was ≈ 11%. The MEG reference system is reported in Figure 2.2: the origin of the system is the centre of the magnet and the z-axis is parallel to the muon beam and the magnet axis. The region with positive z is referred to as downstream, that with negative z as upstream. The x-axis is directed opposite to the photon detector, so the LXe is located in the half-space with x < 0, while the y-axis is directed upwards. Because of the cylindrical symmetry of the apparatus, it is convenient to use cylindrical coordinates (r, φ) in the plane transverse to the beam axis. The system was completed by the z coordinate for the TC and the θ spherical coordinate for the DCH. The positrons have trajectories with φ decreasing into the spectrometer. The stopping target, which allowed the muons to decay at rest, was a thin target placed in the centre of the spectrometer. It was designed to minimise multiple scattering, Bremsstrahlung and annihilation-in-flight (AIF) of the positrons produced by the muon decays, and at the same time to maximise the muon stopping efficiency. The target was composed of a 205 µm thick sandwich of polyethylene and polyester; it had an elliptical shape with seven cross marks and eight holes on its surface, used for software alignment and to check the target position with respect to the reference frame measured by an optical survey. The angle between the muon beam and the target was 20.5°, the best compromise between the opposite needs of a high muon stopping efficiency and a small amount of material traversed by the outgoing positrons. The target was kept in a He atmosphere to reduce the production of photons coming from AIF and Bremsstrahlung.

Figure 2.2: Sketch of the MEG experiment showing the MEG reference system: the reference system origin is in the centre of the COBRA magnet; the z axis is parallel to the muon beam; the y-axis is directed upwards, while the positive x-axis is opposite to the photon detector. In this figure a simulated event is also shown in both the top (left) and lateral (right) view.

2.2.1 Positron spectrometer

The goal of the positron spectrometer was to reconstruct the positron trajectory from the point where the particle was emitted by the muon stopped on the target up to the point where it hit the Timing Counter. The Drift Chamber (DCH) and the Timing Counter (TC) were immersed in a mainly longitudinal magnetic field, generated by a superconducting magnet with an axial gradient, called COBRA (Constant Bending Radius). This kind of magnetic field was chosen because a positron produced with a small longitudinal momentum had a smaller latency in the spectrometer with respect to particles moving in a pure solenoidal field, reducing the probability of pile-up events and allowing stable operation in a high-rate environment; moreover, the magnetic field was designed so that the positron trajectories in the spectrometer had a bending radius weakly dependent on the emission polar angle from the target.

Figure 2.3: Overview of the MEG experiment.

2.2.2 The Drift Chamber (DCH)

The Drift Chamber was the positron tracker and was formed by sixteen identical trapezoidal modules, placed in a semi-circular arrangement with a step of 10.5°. Each module was composed of two detector planes, each formed by two cathode foils separated by 7 mm and filled with a 50:50 mixture of helium and ethane (C2H6). Between the two foils, anode wires and field wires were placed alternately with a pitch of 4.5 mm. Since each DCH module was made up of two identical layers, the two wire arrays were staggered to resolve the left-right position ambiguities. A double pad with Vernier structure was attached to both cathodes. The z coordinate was determined both by the charge division method, which consisted in taking the ratio of the charges collected at the two ends of the wire and returned a preliminary z estimate, and by the Vernier pad method, which improved that estimate using the cathode information: a double wedge (or Vernier pad structure) etched on the cathode foil collected the charge induced on each pad, which depends periodically on the z coordinate. The time difference between the wire signals and the absolute track time (provided by the TC) gave the positron radial coordinate, while the φ coordinate was determined by the electronic address of the hit wire.
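A minimal sketch of the charge-division idea: for a resistive wire of length L read out at both ends, the longitudinal coordinate follows from the charge asymmetry (idealized model, ignoring the Vernier-pad refinement and calibration effects):

```python
def z_charge_division(q_up, q_down, wire_length):
    """Estimate z from the charge sharing between the two wire ends.

    Idealized model: z = (L/2) * (q_down - q_up) / (q_down + q_up),
    with z = 0 at the wire centre and positive z towards the downstream end.
    """
    return 0.5 * wire_length * (q_down - q_up) / (q_down + q_up)

assert z_charge_division(1.0, 1.0, 80.0) == 0.0    # hit at the centre
assert z_charge_division(0.0, 1.0, 80.0) == 40.0   # hit at the downstream end
```

The Vernier pads then refine this coarse estimate by exploiting the periodic dependence of the induced pad charge on z.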


Figure 2.4: Top left: a view of the MEG Drift Chambers from the downstream side. The muon stopping target was placed at the center and the sixteen trapezoidal modules were arranged semicircularly. Top right: close-up view of the MEG Drift Chambers. Bottom: view of the modules installed in the DCH.


The positron angular resolutions (on the θ and φ coordinates) were measured by means of positron events making a double turn inside the Drift Chamber, while the resolution on the positron momentum was measured by fitting the Michel spectrum of the positron at the end point, folded with the detector response [47]. The coherent Mott scattering of positrons on the polyethylene target atoms was another tool to measure the acceptance and the angular and momentum resolutions [65]. Thanks to the low-mass DCH design, the average material intercepted by a typical signal positron crossing the full DCH volume was 2.0 × 10−3 X0.

Figure 2.6: View of a disassembled DCH module. The zig-zag structure of the Vernier pad is visible in the right part of the figure. The cathode foils are divided into two sub-cathode foils for aluminum deposition.

2.2.3 The Timing Counter

The Timing Counter [48] was dedicated to precisely measuring the signal positron time and to providing coarser position information. It was composed of two sectors, called upstream and downstream, corresponding to the negative z and positive z half-spaces respectively. Each sector was barrel-shaped and made up of 15 scintillating bars with a 10.5° pitch between them. The scintillating bars had a 4 × 4 cm² square section and a length of ≈ 80 cm; they were placed along the z axis and covered the interval 25 cm ≤ |z| ≤ 95 cm and a φ interval of 147°. In the scintillating bars the z measurement was obtained from the difference of the arrival times of the signal at the two ends of the scintillators. The bar signals provided the reconstruction of the positron impact point on the Timing Counter. Thanks to the magnetic field generated by COBRA, the positrons emitted with | cos(θ)| < 0.35 hit the TC after 1.5 turns (on average) in the Drift Chambers. The inner radius of a TC sector was 29.5 cm, so that only positrons with an energy close to that of the signal could hit the TC, while many low-energy Michel positrons did not reach the scintillator bars. The Timing Counter was isolated from the DCH He atmosphere in order to preserve the PMT lifetime.
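The bar z measurement from the arrival-time difference can be sketched as follows; the effective light propagation speed used here is a placeholder value, not a measured MEG constant:

```python
def bar_hit_z(t_up, t_down, v_eff=16.0):
    """z of the hit along a scintillating bar from the two PMT arrival times.

    v_eff: effective light propagation speed in the bar [cm/ns], a
    placeholder value for illustration. z = 0 at the bar centre,
    positive z towards the downstream PMT.
    """
    return 0.5 * v_eff * (t_up - t_down)

# light reaching the downstream PMT 1 ns earlier -> hit 8 cm downstream:
assert bar_hit_z(t_up=1.0, t_down=0.0, v_eff=16.0) == 8.0
```

The mean of the two arrival times, corrected for propagation, analogously gives the hit time used for te+γ.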


Figure 2.7: Schematic view of a Timing Counter sector. The scintillating bars were read out by PMTs positioned at each bar end.

2.3 The Liquid Xenon Photon Detector (LXe)

The MEG photon detector [50] had the aim to detect the photons and to measure their energy, timing and first impact point position. It was a C-shaped homogeneous calorimeter with a cylindrical symmetry around the stopping target, which contained 800 liters of pure liquid xenon and covered ≈ 11% of the solid angle viewed from the center of the stopping target (| cos(θ)| < 0.35 and 120° in φ). Liquid xenon was chosen because this medium, with its high density in the liquid phase (ρLXe = 2.95 g/cm³) and short radiation length (X0 = 2.77 cm), is very efficient in detecting photons. The LXe calorimeter had a radial extension of 38.5 cm (corresponding to ≈ 14 X0) and was designed to fully contain the shower induced by a 52.83 MeV photon. The detector was placed completely outside the COBRA magnet. In the high-rate MEG environment, only the scintillation light was detected (thanks to its very fast signal), measured by 846 photomultipliers immersed in the liquid xenon. The MEG calorimeter was designed to combine the advantages of inorganic and organic scintillators: the high light yield and the short scintillation time (4.2 ns and 22 ns for α's and 45 ns for lightly ionizing particles, such as e± and γ). Thanks to the calorimeter fast response, the probability of pile-up did not exceed 15%. The liquid xenon was maintained at a temperature of −108 °C; the PMTs, which were placed on all six faces of the calorimeter, had to work at low temperature. The range of temperatures in which xenon is liquid is small; therefore the LXe calorimeter needed an efficient cryogenic system, and the xenon temperature had to be controlled in each phase of the cryogenic circuit. The choice of liquid xenon was based on its optical and physical properties: it has a high atomic number (Z = 54) and a small radiation length (X0 = 2.77 cm) at the temperature of −108 °C (165 K).

Figure 2.8: Downstream view (left) and top view (right) of the MEG LXe detector.

Figure 2.9: Schematic view of the MEG first phase Liquid Xenon Calorimeter. In the left figure all the LXe Calorimeter faces covered by PMTs are visible. On the right, an electromagnetic shower simulation in the detector is shown.

The scintillation process of liquid xenon occurs via the formation of excited dimers Xe₂* (an excimer formed by two species, with at least one in an electronically excited state). The dimers then decay, producing two unbound xenon atoms in the ground state and emitting scintillation photons whose wavelength has a Lorentzian distribution centred at 178 nm with FWHM = 13 nm (Vacuum Ultra-Violet region, VUV) [66]. Because the energy needed to excite a Xe atom is high, xenon is transparent to its own scintillation light, with an absorption length ≫ 1 m. The photomultipliers must also work in conditions of high pulse rate. The photon energy was directly proportional to the amount of light detected by the photomultipliers, while the photon impact point was extracted from the pattern of scintillation light on the detector inner face. The photon direction was not directly measured by the LXe calorimeter, but it was obtained in the offline analysis from the line of flight connecting the photon impact point in the calorimeter to the intercept of the positron trajectory on the stopping target. The photon interaction was reconstructed from the information of all photomultipliers. Even small concentrations of impurities in the liquid xenon (O2 and H2O) could considerably worsen the γ energy resolution and the detector response uniformity, because they absorb the scintillation light. For this reason a complex Xe purification system was built, designed to operate both in the gaseous and in the liquid xenon phase.
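The offline line-of-flight reconstruction described above amounts to normalizing the vector from the positron vertex on the target to the photon conversion point in the calorimeter; a minimal sketch with illustrative coordinates:

```python
import math

def photon_direction(vertex, impact):
    """Unit vector from the decay vertex (positron track intercept on
    the target) to the photon first-interaction point in the LXe."""
    d = [i - v for i, v in zip(impact, vertex)]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]

# illustrative points in the MEG frame [cm]: LXe sits at x < 0
u = photon_direction(vertex=(0.0, 0.0, 0.0), impact=(-30.0, 0.0, 40.0))
assert abs(sum(c * c for c in u) - 1.0) < 1e-12  # normalized
```

Comparing this direction with the reconstructed positron direction gives the stereo angle Θeγ used in the analysis.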

2.4 The DAQ and trigger system

The DAQ system was designed to record the analog signals of all detectors with VME boards, arranged in nine crates, each crate having a dedicated online readout machine. The digitisation and data acquisition system was based on a high frequency digitiser implementing the switched capacitor array technique, the Domino Ring Sampler 4 (DRS4) [67]. Each of the ≈ 3000 read-out channels recorded a waveform of 1024 samples when the corresponding signal exceeded the threshold. The TC and LXe sampling rate was 1.6 GHz, while the DCH one was 0.8 GHz. Each waveform was processed offline to optimise the extraction of the variables by applying various methods (spectral analysis, baseline subtraction, noise filtering, digital constant fraction discrimination). The read-out time from the VME boards to the online disks (t ≈ 24 ms/event) was much larger than the DRS4 read-out time (≈ 625 µs) and gave the main contribution to the DAQ dead-time. This problem was mitigated by the use of a triple buffer read-out scheme.
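The dead-time figures quoted above can be put together in a naive one-line estimate: without buffering, each event blocks the DAQ for the full VME read-out time. This is only a back-of-the-envelope sketch, not the actual MEG dead-time model:

```python
readout_time = 24e-3   # VME read-out per event [s], from the text
drs_time = 625e-6      # DRS4 read-out per event [s], from the text
trigger_rate = 10.0    # maximum trigger rate [Hz], from the text

# fraction of live time lost without buffering (naive estimate):
dead_fraction_unbuffered = trigger_rate * readout_time
print(f"{dead_fraction_unbuffered:.0%}")

# with a multi-buffer scheme the VME transfer overlaps with acquisition,
# so only the DRS4 read-out blocks the front-end:
dead_fraction_buffered = trigger_rate * drs_time
assert dead_fraction_buffered < 0.01
```

This illustrates why the triple buffer scheme was needed: it moves the slow VME transfer off the critical path.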

Figure 2.10: Scheme of the DAQ and Trigger systems of the MEG experiment.

The MEG experiment, searching for an ultra-rare event in the presence of a huge background due to the high muon stopping rate, needed a fast and efficient trigger system, with a high efficiency for signal events and a high background rejection [51]. The trigger system, based on FPGA (Field Programmable Gate Array) technology, operated in strict connection with the DAQ system, through a fast processing of the analog signals. The trigger performed a fast reconstruction of some kinematic variables, very effective in reducing the background level:

• Photon energy;

• Relative e+− γ timing;

• Relative e+− γ direction.

Due to the latency of the read-out electronics it was not possible to extract at trigger level any information from the DCH, because the electron drift time to the anode wires was too long (drift time ≈ 250 ns).

The trigger system was arranged in a three-layer structure, composed of custom VME boards: at the lowest layer, the boards (called Type 1) received the input signals from all the different sub-detectors; the input signals were then digitised by AD9218 ADCs, with a sampling speed of 100 MHz. The digitised signals were processed by on-board Xilinx VirtexII XC2VP20 FPGAs. The upper layer boards (called Type 2) received the processed signals and merged their information; the final layer was composed of the Master Boards, which collected the complete event information coming from the intermediate layer, reconstructed the physical observables relevant to the event selection, and then generated the trigger signal. A set of ancillary boards distributed a common 100 MHz clock signal to all the boards. The trigger rate was required to be ≤ 10 Hz so as not to overload the DAQ system. The most important variable to be reconstructed was the photon energy, because the γ energy spectrum decreases quickly close to the end point. The photon energy was estimated by the weighted linear sum of all PMT pulse amplitudes. The estimator of the photon interaction vertex in the detector was given by the position (index) of the PMT that had collected the highest charge on the LXe calorimeter inner face.

Only the TC information could be used to evaluate the positron variables at trigger level. The positron radial coordinate was extracted from the radial location of the hit TC bar, while the positron φ coordinate was given by the index of the first bar hit, following the rotation direction expected for a positively charged particle. The z coordinate on the hit bars was instead given by the charge ratio of the PMTs at the two bar ends. Under the assumption of signal events (well defined positron momentum, photon and positron in back-to-back directions), each LXe calorimeter PMT index had an associated TC region, determined by means of Monte Carlo simulations: if the online TC coordinates fell in this region, the back-to-back condition was satisfied. The relative time between positron and photon was obtained as the difference between the photon interaction time (with the LXe calorimeter) and the positron interaction time (with the TC). The interaction times of the two particles were estimated by fitting the leading edges of the PMT signals.
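The online direction match can be sketched as a precomputed look-up table from the LXe PMT index to the set of compatible TC regions; the table contents below are purely hypothetical (the real table came from the Monte Carlo simulations mentioned above):

```python
# hypothetical look-up table: LXe inner-face PMT index -> TC regions
# (bar index, z bin) compatible with a back-to-back topology
DIRECTION_LUT = {
    0: {(3, "z_bin_2"), (4, "z_bin_2")},
    1: {(4, "z_bin_3"), (5, "z_bin_3")},
}

def direction_match(max_pmt_index, tc_bar, tc_z_bin):
    """True if the online TC coordinates fall in the region associated
    with the PMT that collected the highest charge."""
    return (tc_bar, tc_z_bin) in DIRECTION_LUT.get(max_pmt_index, set())

assert direction_match(0, 3, "z_bin_2")
assert not direction_match(1, 3, "z_bin_2")
```

A table look-up like this is well suited to FPGA logic, where the full angular reconstruction would be too slow.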

2.5 Performance of MEG 1st phase

Concerning the DCH performance, the positron energy resolution was extracted from a fit of the energy spectrum of Michel positrons: the obtained resolution was ≈ 330 keV. The positron angle and production vertex resolutions were obtained using positron tracks making two turns in the spectrometer: the two turns were treated as two independent tracks, and the differences in position and direction between them defined the positron angular and position resolutions. The resolutions obtained in this way had to be corrected (using Monte Carlo corrections) because the two-turn tracks were a biased sample: in fact, double-turn tracks were possible only for small initial θ. The corrections corresponded to a multiplicative factor (from 0.75 to 1.20) applied to the resolutions extracted in this way. The relative time between photon and positron (te+γ) was defined as the difference between the photon arrival time in the LXe calorimeter and the positron arrival time on the Timing Counter, calculated at the target by a back-extrapolation of the positron track. The RMD peak in real data was used to determine


the relative time resolution and the common time offset between LXe and TC. To extract the te+γ resolution for signal events, the obtained resolution had to be corrected for the photon energy dependence and the positron energy dependence. The te+γ resolution was dominated by three main contributions: the positron track length uncertainty (∼ 75 ps), the Timing Counter intrinsic time resolution (∼ 65 ps) and the LXe photon detector time resolution (∼ 64 ps). In Table 2.1 we report the resolutions on the positron and photon variables, and in Table 2.2 the positron and photon reconstruction efficiencies and the trigger efficiency.
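Assuming the three contributions are independent, they combine in quadrature and reproduce the ≈ 120 ps overall te+γ resolution quoted in Table 2.1:

```python
import math

# the three te+gamma contributions quoted in the text [ps]
contributions_ps = {
    "positron track length": 75.0,
    "Timing Counter intrinsic": 65.0,
    "LXe photon detector": 64.0,
}

# quadrature sum, valid for independent Gaussian contributions
total_ps = math.sqrt(sum(v**2 for v in contributions_ps.values()))
print(f"combined resolution: {total_ps:.0f} ps")  # ~118 ps, consistent with 120 ps
```

The small residual difference with the quoted 120 ps is within the precision of the individual estimates.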

Variable name      Expected value   Final value
δEe                200 keV          330 keV
δθe+               5.5 mrad         9.4 mrad
δφe+               5.5 mrad         8.4 mrad
δEγ/Eγ             1.2%             1.7%
δγ position        4 mm             5 mm
δteγ               65 ps            120 ps

Table 2.1: MEG detector resolutions, expected values and final values.

                    Expected efficiency   Final efficiency
e+ reconstruction   90%                   40%
γ reconstruction    > 40%                 60%
trigger             ∼ 90%                 ∼ 99%

Table 2.2: Reconstruction efficiencies for photon and positron, and trigger efficiency.

The final efficiencies and resolutions (especially for the positron reconstruction) were worse by about a factor of 2 than the expected ones [43]. Because of the lower resolutions, the systematic uncertainties dominated the MEG experiment sensitivity, and for this reason the data-taking was stopped in 2013: continuing to take data would not have led to any significant improvement on the experimental limit. It is therefore necessary to improve the detector resolutions and efficiencies to increase the sensitivity to the µ+ → e+ + γ process. This requirement led to the experiment upgrade and to the MEG 2nd phase (MEG II). The MEG upgrade was proposed in 2012 and accepted by the PSI committee to overcome the effects which caused the worsening of the sub-detector performances.


Chapter 3

The MEG II experiment: upgrade motivations

The MEG experiment final result, based on the full dataset collected in the period 2009-2013, is BR(µ+ → e+ + γ) ≤ 4.2 × 10−13 at 90% C.L. [44]. This upper limit is the best experimental result on this CLFV muon decay to date, better by a factor 1.4 than the previous constraint (BR(µ+ → e+ + γ) ≤ 5.7 × 10−13 at 90% C.L. [44]) established using the MEG 2009-2011 dataset, and by a factor of 30 than the limit set by previous experiments (MEGA 2001, [63][64]). At the end of 2013 the data taking was stopped because the sensitivity of the experiment had reached its ultimate limit, for the given experimental resolutions on the photon and positron kinematic variables. The accidental background events extend into the signal region, and the experimental sensitivity does not increase linearly with the statistics: by collecting new data the sensitivity gain would become asymptotically flat. For this reason the MEG upgrade was proposed, and in 2013 it was approved by the PSI research committee. The MEG upgrade (also called MEG II) has a target sensitivity of 4 × 10−14 on the µ+ → e+ + γ branching ratio, one order of magnitude better than the sensitivity reached by MEG. The MEG II sensitivity as a function of the DAQ time is shown in Figure 3.1. To achieve this experimental sensitivity it is necessary:

• To build a new detector with enlarged acceptance, better performance and higher efficiencies;

• To make changes to the beam line to increase the muon stopping rate by more than a factor 2;

• To build new and optimised trigger and DAQ electronic systems;

• To improve the kinematic variable resolutions, to keep the accidental background under control.


Figure 3.1: Expected sensitivity of the MEG II experiment on the µ+ → e+ + γ branching ratio as a function of running time, at 90% C.L. The MEG result [44] is shown as a comparison.

In particular, in the MEG II experiment the following changes were made with respect to MEG:

• The muon stopping rate on the target was increased from 3 × 10⁷ µ+/s up to 7 × 10⁷ µ+/s;

• The material traversed by photons and positrons was reduced, by choosing a thinner positron target and a light gas mixture;

• The MEG tracker, the Drift Chamber system, was replaced by a new tracker, a single-volume cylindrical drift chamber with a radiation length lower than that of the previous tracker;

• A new segmented timing counter, with higher granularity, replaced the previous scintillator-bar Timing Counter;

• The photon detector acceptance was enlarged, by changing the shape of the lateral faces;

• The γ kinematic variables and time resolutions were improved by replacing the PMTs on the inner face with SiPMs;

• New trigger and DAQ electronic boards were built, tailored to the increased bandwidth requirements.


The comparison between MEG and MEG II is shown schematically in Figure 3.2.

Figure 3.2: MEG and MEG II in comparison.

3.1 Target and µ+ beam

The beam used in the MEG experiment comes from the low-energy muon beam line called πE5. This line was chosen because it has the highest acceptance for surface muons. The muons are produced by soft π's, which come from the interaction of a 590 MeV proton beam on a graphite target. The pions decay near the surface of the target, and for this reason the produced muons are called surface muons. The πE5 channel is composed of two sets of quadrupole magnets, with a Wien filter placed between them to remove the residual contamination of pions and positrons in the beam. A schematic view of the MEG beam line is shown in Figure 3.3. Downstream, the muon momentum is reduced by means of a Mylar degrader (300 µm thick) placed at the center of the superconducting beam transport solenoid (BTS). The muon momentum distribution in the πE5 channel is shown in Figure 3.4. As mentioned in the previous section, to achieve a sensitivity better than that of MEG it is crucial to increase the stopped-muon rate: it is therefore necessary to maximise the stopping density, i.e. to obtain the maximal stopping rate in the thinnest possible target.

To reach the desired sensitivity we also need to exploit the full beam intensity, paying attention to the accidental background and keeping it one order of magnitude below the target value. An increase in the muon rate translates into an increase of the momentum bite ∆p/p, and thus into an increase of the range straggling. The range straggling is proportional to p^3.5 and is composed of two contributions: the energy-loss straggling in the traversed material (which amounts to about 9% of the range) and the momentum-bite term. Because of the different weight of the two terms, the range straggling is reduced more effectively by lowering the muon momentum than by narrowing the momentum bite.

Figure 3.3: Schematic view of the MEG beam line. In the figure one can see the πE5 channel and the two triplets of quadrupole magnets which focus the muon beam; the Wien filter (which removes the pion and positron contamination) is inserted between them; the Mylar degrader is placed before the beam entry into the MEG experiment.

Two configurations of beam momentum and target parameters were studied: the first considers a 28 MeV/c surface muon beam with a 140 µm thick polyethylene target placed at an angle of 15° to the beam axis; the second a 25 MeV/c sub-surface beam with a 160 µm thick polyethylene target, also placed at 15° to the axis. The most likely choice is the first configuration.
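The interplay between the two straggling contributions can be made concrete with a short Python sketch. The 9% energy-loss term and the R ∝ p^3.5 scaling are taken from the text; the 5% momentum bite used below is a hypothetical placeholder, not the actual πE5 figure.

```python
import math

def rel_range_straggling(dp_over_p, eloss_frac=0.09):
    """Relative range straggling: the fixed energy-loss term (~9% of the
    range) and the momentum-bite term are combined in quadrature.
    Since R is proportional to p^3.5, a momentum spread dp/p maps to a
    range spread of 3.5 * dp/p."""
    return math.hypot(eloss_frac, 3.5 * dp_over_p)

# Compare the two candidate beam momenta at a hypothetical 5% bite.
# The absolute straggling scales with the range itself (R ∝ p^3.5),
# so lowering the momentum shrinks it even at a fixed relative bite.
for p in (28.0, 25.0):
    rel = rel_range_straggling(0.05)
    print(f"p = {p} MeV/c: relative = {rel:.3f}, absolute ∝ {rel * p**3.5:.0f}")
```

With these numbers the momentum-bite term (3.5 × 0.05 = 0.175) dominates over the 9% energy-loss term, which is why reducing the momentum itself (shrinking the range, and with it the absolute straggling) is the more effective handle.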


Figure 3.4: Momentum spectrum of muons coming from the πE5 channel. The data are fitted with a p^3.5 power law, folded with a Gaussian momentum resolution. The blue and red bands in the figure show the ±3σ spread for surface and sub-surface muons respectively.

3.2 The new MEG Cylindrical Drift Chamber (CDCH)

The decision to build a new tracker for the MEG upgrade was driven by the unsatisfactory positron tracking performance of the previous experiment, as can be seen in Tables 2.1 and 2.2: while the TC and the LXe calorimeter almost met their requirements, the DC efficiency and resolutions were far below the design values, making the final positron tracking performance significantly worse than expected. The main contribution to this inefficiency came from the DC front-end electronics boards and the mechanical supports, which intercepted and stopped many positrons on their path to the TC. Moreover, when a positron exited from the DCH it induced a low-amplitude signal on the cathode foils, leading to a poor-quality reconstruction of the z coordinate, since the measurement was easily spoiled even by moderate noise. Furthermore, the DC suffered from ageing-related problems, because of the high-radiation environment in which the tracker had to operate. To overcome these limitations it was decided to build a new cylindrical single-volume tracker with low mass and high granularity, called the Cylindrical Drift CHamber (CDCH). The tracker was designed to have a low material budget (a small fraction of a radiation length), high transparency and a low multiple Coulomb scattering contribution, and to increase the tracking efficiency and the quality of the positron reconstruction. This new tracker (schematically shown in Figure 3.5) will be described in detail in the next chapter.
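The benefit of a low material budget can be quantified with the standard Highland/PDG multiple-scattering formula. The x/X0 values used below are illustrative placeholders, not the actual CDCH or DCH budgets; the 52.8 MeV/c momentum is that of the signal positron.

```python
import math

def highland_theta0(p_mev, x_over_X0, beta=1.0, z=1):
    """RMS multiple-scattering angle (radians) from the Highland/PDG
    formula: theta0 = 13.6 MeV / (beta * c * p) * z * sqrt(x/X0)
                      * (1 + 0.038 * ln(x/X0))."""
    return (13.6 / (beta * p_mev)) * z * math.sqrt(x_over_X0) \
        * (1.0 + 0.038 * math.log(x_over_X0))

# Illustrative material budgets: a tracker worth 2e-3 X0 versus one
# worth 2e-2 X0, crossed by a 52.8 MeV/c signal positron.
for x in (2e-3, 2e-2):
    print(f"x/X0 = {x:g}: theta0 ≈ {1e3 * highland_theta0(52.8, x):.1f} mrad")
```

At these low momenta the scattering angle scales roughly with √(x/X0), so an order-of-magnitude reduction of the material budget shrinks the multiple-scattering contribution to the angular resolution by roughly a factor of three.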


Figure 3.5: Picture of the CDCH Drift Chamber with the pixelated Timing Counter.

3.3 The Timing Counter (TC)

The time coincidence of the e+-γ pair is a fundamental variable which has to be measured as precisely as possible. The Timing Counter detector was built to give a precise measurement of the positron time te+. The MEG Timing Counter [49] was composed of 30 scintillator bars, each read out by two PMTs, one at each end, which collected the scintillation light. This detector showed an intrinsic time resolution of σt ≈ 40 ps during laboratory tests, but during the data taking the time resolution worsened to σte+ ≈ 70 ps. The reasons for this worsening are:

• The COBRA magnetic field caused a lowering of the PMT gain and an increase of the Transit Time Spread (TTS);

• The optical photon path had a large dispersion due to the large size of the scintillator;

• The electronic time jitter gave a contribution to the time resolution of ≈ 40 ps;

• Uncertainties in the inter-bar time calibration produced a further worsening of the timing resolution.

The positron could hit the same bar twice, and this double hit produced a tail component in the timing response function, causing an additional worsening of the time resolution.
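Assuming the contributions listed above are independent, they add in quadrature. The small Python sketch below uses the ~40 ps figures quoted in the text and infers the residual term needed to reach the observed ~70 ps; this decomposition is illustrative, not a measured error budget.

```python
import math

def quadrature(*sigmas):
    """Combine independent timing contributions in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

sigma_intrinsic = 40.0  # ps, laboratory measurement quoted in the text
sigma_jitter    = 40.0  # ps, electronics contribution quoted in the text
sigma_observed  = 70.0  # ps, in-situ resolution quoted in the text

# Residual term (field effects, light-path dispersion, inter-bar
# calibration) inferred by quadrature subtraction:
known = quadrature(sigma_intrinsic, sigma_jitter)
residual = math.sqrt(sigma_observed**2 - known**2)
print(f"known terms combine to {known:.0f} ps, "
      f"leaving ≈ {residual:.0f} ps unaccounted for")
```

Since terms add in quadrature, the largest contribution dominates: halving a 40 ps term only helps if the other ~40 ps terms are reduced as well, which is the rationale for the highly segmented pixelated TC of MEG II.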
