
M. Lüscher(1)(2)

(1) CERN, Theoretical Physics Department - 1211 Geneva 23, Switzerland

(2) Albert Einstein Center for Fundamental Physics - Sidlerstrasse 5, 3012 Bern, Switzerland

received 27 November 2017

Summary. — Lattice field theory and lattice QCD, in particular, are areas of research in which Roberto was strongly interested practically since the beginning of his scientific career. He contributed to the development of lattice QCD through his ideas and publications, but also through his engagement in the APE project and in many other ways as well. Some of the work we did together is recalled in this talk and put in the context of the situation in the field at the time.

1. – Introduction

My collaboration with Roberto started in 1993, when he invited Rainer Sommer, Peter Weisz, Ulli Wolff and me to Rome to make plans for some common research work. Our immediate scientific goal was to further develop and apply the step-scaling technique in lattice QCD, using numerical simulations and the APE100 computers. At this memorable first collaboration event, Roberto, Nicola Cabibbo and Simone Cabasino introduced us to the hardware and software of these massively parallel machines, which had just become available for physics computations.

As far as I know, Roberto’s first paper on lattice field theory and numerical simulations appeared in 1981 [1]. In the following years, he got more and more interested in this field of research, gave lectures on lattice QCD at Cargèse in 1987, engaged himself in the APE project and created a group of young lattice physicists at Tor Vergata. His main motivation was no doubt the perspective of obtaining quantitative results for hadron matrix elements and related quantities, but he had a broad approach to the subject, where the development of new field-theoretical concepts, for example, was considered to be an essential part of the research effort.

2. – Lattice QCD in the early ’90s

A hot research topic at the time was non-perturbative renormalization, i.e. the question of how exactly the properties of the theory at low energy are related to the renormalized parameters and fields defined at high energies. Lattice QCD had already been shown to be a valid ultraviolet regularization of QCD to all orders of perturbation theory, but the existence and universality of the continuum limit at the non-perturbative level was still a largely open issue. Another closely related topic that received much attention was the Symanzik improvement programme, where one attempts to accelerate the convergence to the continuum limit by adding irrelevant terms to the action and the local fields.

Computations of quantities of phenomenological interest, on the other hand, were hampered by various purely technical difficulties. The inclusion of the sea quarks in the numerical simulations was practically infeasible, for example, and the known simulation algorithms scaled poorly with the quark masses and the lattice spacing. Moreover, high-performance computing was still largely based on expensive vector-processing machines.

3. – Step scaling

Calculations of hadron masses and matrix elements in lattice QCD require lattices with a spatial extent of at least a few fm to be simulated. The properties of the theory at high energies can, in principle, be determined on the same lattices, provided the lattice spacings are significantly smaller than the shortest distances considered. Such lattices would however have hundreds if not thousands of points in each direction and would, at present, be quite impossible to simulate.

Step scaling is a non-perturbative renormalization technique that does not require a wide range of physical scales to be accommodated on a single lattice [2]. It makes use of an intermediate renormalization scheme for the gauge coupling, the quark masses and the local operators of interest, where the lattice size L is taken to be the renormalization scale. The dependence on the latter can then be determined by simulating a sequence of matching lattices (see fig. 1). Moreover, contact with the fundamental low-energy scales can easily be made at large L, while at physically small L (corresponding to values of 1/L of, say, 100 GeV) perturbation theory may be used to convert from the intermediate to any standard infinite-volume scheme.
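To make the recursion concrete, the following Python sketch chains a step-scaling function σ(u) = g²(2L) given g²(L) = u. Everything here is a toy: σ is obtained from the two-loop perturbative running of a pure SU(2) gauge coupling rather than from lattice simulations, and the starting value of the coupling is an arbitrary assumption. In the actual calculations σ is measured non-perturbatively on pairs of matching lattices.

```python
import math

# Two-loop beta-function coefficients for pure SU(N) gauge theory, N = 2
N = 2
b0 = 11.0 * N / (3.0 * (4.0 * math.pi) ** 2)
b1 = 34.0 * N ** 2 / (3.0 * (4.0 * math.pi) ** 4)

def sigma(u, s=2.0, nsteps=1000):
    """Toy step-scaling function: value of g^2(s*L) given u = g^2(L).

    Obtained here by integrating the two-loop evolution
    d g^2 / d ln L = 2 g^2 (b0 g^2 + b1 g^4); in the ALPHA programme sigma is
    instead computed from simulations of pairs of matching lattices."""
    g2, dt = u, math.log(s) / nsteps
    for _ in range(nsteps):          # simple Euler integration of the flow in ln L
        g2 += dt * 2.0 * g2 * (b0 * g2 + b1 * g2 ** 2)
    return g2

# Chain the recursion: start from a weak coupling in a small box of size L0
# (high energies) and double the box repeatedly towards hadronic sizes.
u = 1.0                              # assumed value of g^2 at the smallest box
for k in range(9):
    print(f"L = 2^{k} L0:  g^2 = {u:.4f}")
    u = sigma(u)
```

In practice the recursion is run the other way round: one starts from a box size fixed in physical units at low energies and halves it until the perturbative regime is reached.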

Roberto was very interested in this new technique and tried it out with his team in the SU(2) Yang-Mills theory, using a particular intermediate scheme [3]. A plot of the running coupling obtained in the course of this work and of another coupling previously calculated in ref. [4] is reproduced in fig. 2. The fact that the two schemes could be matched by a rescaling of the renormalization scale and that both accurately followed the perturbative evolution down to fairly low energies came as a bit of a surprise. Relating the low- to the high-energy perturbative regime of the theory thus proved to be easier than suspected, at least in the simple theory considered in these early studies.

Fig. 2. – Scale dependence of the running coupling computed by Roberto et al. [3] (points labeled “Twisted Polyakov loops”) and another coupling defined in a completely different way [4] (points labeled “Schrödinger functional”). The renormalization scale of the latter has been scaled by a constant factor so as to match the two couplings at high energies.

There was, however, still a lot of work to be done before the systematic errors were fully understood. The continuum limit had to be taken carefully, and further verification of the independence of the final results from the choice of the intermediate scheme was desirable. Filling these gaps was the first goal of the ALPHA Collaboration, the collaboration that started with the meeting in Rome previously mentioned. In a paper entitled “Universality and the approach to the continuum limit in lattice gauge theory” [5], many details were sorted out and the viability of step scaling was definitely established. A nice quantitative result obtained in this paper is the value

(1) αs(q) = 0.1288(15)(21) at q = 200/r0 ≈ 80 GeV

of the strong coupling in the MS scheme of dimensional regularization at a high momentum q given in units of the Sommer radius r0 [6] (a characteristic low-energy scale defined through the heavy-quark potential). The value of the coupling cannot be compared with the experimental one, since the simulated lattice theory did not include the quarks and the gauge group was set to SU(2) instead of SU(3). But the calculation demonstrated that step scaling can deliver results with small systematic and statistical errors.
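As a quick check of the quoted momentum, converting 200/r0 to physical units with the conventional value r0 ≈ 0.5 fm (an assumption; the paper defines r0 through the heavy-quark potential) indeed gives roughly 80 GeV:

```python
hbar_c = 0.1973   # GeV * fm
r0 = 0.5          # fm, conventional value of the Sommer radius (assumption)
q = 200.0 / r0 * hbar_c
print(f"q = 200/r0 = {q:.0f} GeV")   # about 79 GeV, i.e. roughly 80 GeV
```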

The required simulations have all been carried out on APE100 machines at Tor Vergata and at DESY, where the first of these computers were installed in 1994 (see fig. 3). DESY bought many APE machines in the following years and also participated in the development of the next generations of APE computers. Step scaling later became an industry, with applications mainly in QCD and technicolour theories. The ALPHA Collaboration changed its composition over time, expanded the scope of its scientific programme and established itself as a most successful international lattice QCD collaboration.


Fig. 3. – (Left to right) Zoltan Fodor, Stefan Sint, Istvan Montvay, Martin Lüscher, Hubert Simma, Karl Jansen, Marcus Speh and Rainer Sommer.

4. – Overcoming the quenched approximation

Since the virtual effects of the quarks are difficult to include in the simulations, lattice QCD was often studied neglecting them. The valence quarks (and thus the quark propagators connected to the local operators in the correlation functions considered) were however not dropped, nor were their interactions with the gluons or the self-interactions of the gluons. The approximation (referred to as the “quenched” or “valence” approximation) works pretty well at large quark masses, but violates unitarity and is known to fail badly near the chiral limit.

The principal technical difficulty in lattice QCD simulations derives from the fact that the quarks are fermions. At present their effects can only be taken into account using a pseudo-fermion representation such as

(2) det D ∝ ∫ D[φ] exp{−(φ, (D†D)^(−1/2) φ)}

of the determinant of the Dirac operator D. The field φ integrated over in this formula is a Dirac field with complex rather than anti-commuting components and is therefore referred to as a pseudo-fermion field. No approximation is made at this point, and the integral (2) is accessible to standard importance sampling algorithms, but the pseudo-fermions add a large amount of statistical noise to the theory, which slows down the simulations to the extent of making them practically infeasible at small quark masses.
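The identity behind eq. (2), namely that the Gaussian integral over φ is proportional to det((D†D)^(1/2)) = |det D|, can be verified directly for a small matrix. The sketch below uses a random 6×6 matrix as a stand-in for the Dirac operator; it is purely illustrative and not part of any simulation code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
D = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # toy "Dirac operator"

# (D^dagger D)^{1/2} from the eigen-decomposition of the Hermitian matrix D^dagger D
w, U = np.linalg.eigh(D.conj().T @ D)
sqrt_DdD = U @ np.diag(np.sqrt(w)) @ U.conj().T

# The Gaussian integral over the pseudo-fermion field equals
# pi^n / det((D^dagger D)^{-1/2}) = pi^n * det((D^dagger D)^{1/2}),
# which is proportional to |det D|:
print(np.linalg.det(sqrt_DdD).real)   # det (D^dagger D)^{1/2}
print(abs(np.linalg.det(D)))          # |det D| -- the two numbers agree
```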

In 2001 Hasenbusch [7] suggested separating the high modes of the Dirac operator by factorizing the quark determinant according to

(3) det D = det Dhigh × det(D/Dhigh)

and to use an independent pseudo-fermion field for each factor of the determinant. While more noise is added to the system in this way, the net effect of the factorization is a significant damping of the random fluctuations of the forces that drive the simulation. An acceleration of the simulation was thus achieved, but further ideas and a few years of algorithm development were still required until the quenched approximation was eventually overcome [8, 9].

Fig. 4. – Typical shape of the low end of the spectral density of the Wilson-Dirac operator in QCD with a doublet of quarks of mass m [11]. The tail at eigenvalues smaller than m is a lattice artifact that disappears in the continuum limit. At fixed lattice spacing and small masses, the tail may however extend to zero, in which case accidental zero modes may occur.
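Why the factorization (3) helps can be illustrated schematically. In Hasenbusch's scheme Dhigh is essentially a mass-shifted operator, so each factor has a much smaller condition number than D itself, and the pseudo-fermion forces associated with the factors consequently fluctuate less. The toy spectrum, the value of the shift and the matrix size below are invented for illustration only.

```python
import numpy as np

# Invented toy spectrum of a lattice Dirac operator: eigenvalues between a small
# quark mass and a cutoff-scale value (in lattice units).
lam = np.linspace(0.005, 2.0, 200)
D = np.diag(lam)

mu = 0.1                                # assumed Hasenbusch mass shift
D_high = D + mu * np.eye(len(lam))      # factor containing the high modes
D_low = np.linalg.solve(D_high, D)      # remaining factor D_high^{-1} D

# The factorization (3) is exact ...
print(np.linalg.slogdet(D)[1],
      np.linalg.slogdet(D_high)[1] + np.linalg.slogdet(D_low)[1])

# ... and each factor is far better conditioned than D itself (about 20 versus 400
# here), which is what damps the pseudo-fermion force fluctuations.
print(np.linalg.cond(D), np.linalg.cond(D_high), np.linalg.cond(D_low))
```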

Roberto was enthusiastic about this important progress, and together with Luigi Del Debbio, Leonardo Giusti and Nazario Tantalo (who was then a PhD student of Roberto) we decided to perform some significant simulations of QCD with a doublet of light quarks. In view of its simplicity and conceptual clarity, we agreed to use the formulation of lattice QCD introduced by Wilson in 1974 [10]. Before 2004 Wilson’s formulation was however considered to be a particularly difficult case for numerical simulations, because chiral symmetry is only preserved up to lattice effects. The Wilson-Dirac operator is therefore not protected from having accidental zero modes, and while these cancel in the functional integral, they can trigger instabilities in the simulations (see fig. 4).

The issue thus had to be addressed first and it turned out that the width σ of the tail of the spectral density plotted in fig. 4 is approximately related to the spacing a and volume V of the lattice through [11]

(4) σ ∝ a/√V.

Apart from the expected suppression of the tail in the continuum limit, the formula reveals that the low end of the spectrum also becomes sharper when the lattice volume increases. Some further investigation then showed that stability on lattices of size 2L × L³ is guaranteed provided a, L and the pion mass Mπ are in the range

(5) a ≤ 0.1 fm, L ≥ 2 fm, MπL ≥ 3.

In practice these bounds should anyway be satisfied if the lattice and finite-volume effects are to be small.
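For definiteness, the bounds (5) can be encoded in a small helper; the example lattice parameters below are hypothetical, and the factor 197.3 MeV·fm simply converts MπL into a dimensionless number.

```python
HBARC = 197.3   # MeV * fm

def stable(a_fm, L_fm, mpi_mev):
    """Check the stability bounds (5): a <= 0.1 fm, L >= 2 fm, Mpi*L >= 3."""
    return a_fm <= 0.1 and L_fm >= 2.0 and mpi_mev * L_fm / HBARC >= 3.0

# Hypothetical example: a 64 x 32^3 lattice with a = 0.08 fm and a 300 MeV pion
print(stable(a_fm=0.08, L_fm=32 * 0.08, mpi_mev=300.0))   # True: L = 2.56 fm, Mpi*L ~ 3.9
```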

We could then safely proceed with the QCD simulations we had planned to do. The simulation algorithm we used was based on a factorization (3) of the quark determinant with

(6) Dhigh = ∏_{blocks Λ} DΛ,

where the lattice is divided into blocks Λ of points, as in fig. 5, and DΛ denotes the Dirac operator on Λ with Dirichlet boundary conditions [8]. With respect to the algorithms that did not include a mode separation, the DD-HMC algorithm (as it was called) achieved an improved scaling behaviour as a function of the lattice spacing a and the quark mass m [12]. Moreover, it turned out to be much more efficient already at fairly coarse spacing and large mass (by roughly two orders of magnitude at a = 0.1 fm and m = 20 MeV, for example).
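The construction of Dhigh can be mimicked with a toy one-dimensional nearest-neighbour operator standing in for the four-dimensional Wilson-Dirac operator (all sizes and couplings below are invented): dropping the couplings that connect different blocks, i.e. imposing Dirichlet conditions at the block boundaries, yields a block-diagonal operator, and the factorization (3) then holds exactly.

```python
import numpy as np

# Toy 1-D "lattice operator": mass term plus nearest-neighbour hopping on 16 sites
# with open boundaries (a stand-in for the Wilson-Dirac operator).
n, m, kappa = 16, 0.5, 0.3
D = (m + 2 * kappa) * np.eye(n) - kappa * (np.eye(n, k=1) + np.eye(n, k=-1))

# D_high of eq. (6): keep only the couplings inside blocks Lambda of 4 sites each,
# i.e. impose Dirichlet boundary conditions on every block boundary.
block = 4
mask = np.zeros((n, n), dtype=bool)
for b in range(0, n, block):
    mask[b:b + block, b:b + block] = True
D_high = np.where(mask, D, 0.0)

# det D = det D_high * det(D_high^{-1} D), cf. eqs. (3) and (6):
print(np.linalg.slogdet(D)[1])
print(np.linalg.slogdet(D_high)[1] +
      np.linalg.slogdet(np.linalg.solve(D_high, D))[1])
```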

Apart from mastering the technical difficulties in these simulations, our aim was to study the behaviour of two-flavour QCD at small quark masses and, if possible, to make contact with chiral perturbation theory. In a series of two papers entitled “QCD with light Wilson quarks on fine lattices” many interesting results were obtained [12]. Particularly impressive was a plot showing the dependence of the square of the pion mass on the quark mass (see fig. 6). Leading-order chiral perturbation theory predicts a linear dependence,

[Fig. 6, legend and axes: (Mπ/MK,ref)² versus 2m/(mref + ms,ref); data at a = 0.072 fm, a = 0.052 fm and a = 0.078 fm (O(a) improved); fitted curve y = 0.975x + 0.050x².]

Fig. 6. – Dependence of the square of the pion mass Mπ on the quark mass m at fixed lattice spacing [12]. Both masses are given in units of some reference masses so that the points obtained at different spacings a must lie on the same curve up to lattice effects. The data points shown range from Mπ ≈ 377 MeV up to Mπ ≈ 830 MeV in the plot on the left and up to Mπ ≈ 500 MeV in the scaled figure on the right.


…counterterms required to cancel the leading lattice effects (i.e. the ones proportional to a). A theoretical reason for the surprisingly linear dependence of Mπ² on the quark mass in two-flavour QCD, however, remains to be found.

5. – Working with Roberto

The research work Roberto was interested in usually had to include some new physics ideas or at least be technically innovative. He very much appreciated discussions at the blackboard, particularly in the phase where a project was in the process of being defined and many things were still unclear.

Roberto had a profound knowledge of field theory and the phenomenology of elementary particles. Collaborating with him was inspiring and not complicated. In April 2013, when I spent a few days at Tor Vergata to give a talk on the Yang-Mills gradient flow, Roberto immediately got interested in the subject and started to speculate on the possible use of the flow for computations of the nucleon structure functions. The discussion then went on and it quickly became clear that electroweak transition matrix elements are likely to be a promising field of application too. What happened in those days illustrates well how we interacted on many occasions and the generosity with which Roberto shared his insights and ideas.

REFERENCES

[1] Martinelli G., Parisi G. and Petronzio R., Phys. Lett. B, 100 (1981) 485.
[2] Lüscher M., Weisz P. and Wolff U., Nucl. Phys. B, 359 (1991) 221.
[3] de Divitiis G. M., Frezzotti R., Guagnelli M. and Petronzio R., Nucl. Phys. B, 422 (1994) 382; 433 (1995) 390.
[4] Lüscher M., Sommer R., Weisz P. and Wolff U., Nucl. Phys. B, 389 (1993) 247.
[5] de Divitiis G. M., Frezzotti R., Guagnelli M., Lüscher M., Petronzio R., Sommer R., Weisz P. and Wolff U., Nucl. Phys. B, 437 (1995) 447.
[6] Sommer R., Nucl. Phys. B, 411 (1994) 839.
[7] Hasenbusch M., Phys. Lett. B, 519 (2001) 177.
[8] Lüscher M., Comput. Phys. Commun., 165 (2005) 199.
[9] Urbach C., Jansen K., Shindler A. and Wenger U., Comput. Phys. Commun., 174 (2006) 87.
[10] Wilson K. G., Phys. Rev. D, 10 (1974) 2445.
[11] Del Debbio L., Giusti L., Lüscher M., Petronzio R. and Tantalo N., JHEP, 02 (2006) 011.
[12] Del Debbio L., Giusti L., Lüscher M., Petronzio R. and Tantalo N., JHEP, 02 (2007) 056; 02 (2007) 082.
[13] PACS-CS Collaboration (Aoki S. et al.), Phys. Rev. D, 79 (2009) 034503; 81 (2010) 074503.
