
Simplicial quantum gravity with dynamical gauge fields


Academic year: 2021


Simplicial Quantum Gravity

with

Dynamical Gauge Fields

Dipartimento di Fisica E. Fermi
Corso di Laurea Magistrale in Fisica

Curriculum Fisica Teorica

Candidate: Alessandro Candido
Supervisor: Prof. Massimo D'Elia




inal LaTeX package classicthesis of André Miede, inspired by Robert Bringhurst's work "The Elements of Typographic Style".

On the previous page is reproduced Print Gallery by M. C. Escher, 1956 (this and other works of Escher are shown on http://www.mcescher.com/).



felix qui potuit rerum cognoscere causas [...] fortunatus et ille deos qui nouit agrestis

— P. Vergilius Maro, Georgicon


CONTENTS

i literature

1 asymptotic safety
  1.1 The problem of Quantum Gravity
  1.2 Renormalizable theories of gravitation
    1.2.1 Extended theories of gravitation
    1.2.2 Resummation
    1.2.3 Effective Theory (composite gravitons)
    1.2.4 Other options
  1.3 Asymptotic Safety
  1.4 Effective Theory

2 causal dynamical triangulation
  2.1 Path Integral
    2.1.1 Feynman's Path Integral
    2.1.2 Wilson's RG
    2.1.3 Gravity
  2.2 Lattice and Gravity
    2.2.1 Triangulations
  2.3 Lorentzian vs Euclidean
  2.4 Causal Dynamical Triangulation
    2.4.1 Links
    2.4.2 k-Simplices
    2.4.3 Wick Rotation in CDT
  2.5 CDT phase structure

3 numerical methods
  3.1 Markov Chain Monte Carlo
    3.1.1 Markov Chain
    3.1.2 Metropolis algorithm
    3.1.3 Heat-Bath
  3.2 CDT moves
    3.2.1 Pachner moves
    3.2.2 Alexander moves
    3.2.3 Pachner-Alexander connection
  3.3 Standard Algorithm

ii original developments

4 gauge fields and cdt
  4.1 Existing analytic results
  4.2 Gauge fields implementation
    4.2.1 Gauge action


    4.2.2 Gauge freedom
  4.3 Markov Chain for Gauge Fields
    4.3.1 CDT moves upgrade
    4.3.2 Gauge move introduction

5 results
  5.1 The theory without gauge fields
    5.1.1 Total volume
    5.1.2 Profile's Correlation Length
  5.2 Effect of Gauge Fields for fixed β
    5.2.1 Critical Indices
    5.2.2 Comparison between β = 0 and β ≠ 0
  5.3 The correct Continuum Limit
    5.3.1 Topological charge
    5.3.2 Topological susceptibility
    5.3.3 Continuum limit: β scaling
  5.4 Gauge Correlation Length
    5.4.1 Why a gauge correlation length?
    5.4.2 A candidate: torelon

Appendices

a miscellanea
  a.1 Asymptotic Safety
    a.1.1 Unphysical poles in propagators
    a.1.2 Inessential coupling test
    a.1.3 AS condition and inessential coupling
    a.1.4 An example of a non-AS theory
    a.1.5 Dimensionality of the critical surface and critical exponents
  a.2 Causal Dynamical Triangulations
    a.2.1 Triangle volume

b gauge related topics
  b.1 Analytic Solution for 2D CDT coupled to Gauge Fields
  b.2 Gauge Action
  b.3 Algorithm Upgrades for Gauge Content
    b.3.1 Duality for Moves with a Gauge Structure
    b.3.2 Normalization Integrals for U(1) field extraction
    b.3.3 Heat-Bath U(1)


LIST OF FIGURES

Figure 1 Example of paths between two events.
Figure 2 RG flow in a space with 2 parameters.
Figure 3 Fluctuations towards a ξ = 0 fixed point.
Figure 4 Fluctuations towards a ξ = ∞ fixed point.
Figure 5 Two examples of torus triangulations.
Figure 6 Wick rotation in the complex time plane.
Figure 7 Regge definition of curvature by missing angle.
Figure 8 Comparison between a sliced triangulation and a non-sliced one.
Figure 9 Some examples of d-simplex kinds.
Figure 10 CDT phase diagram.
Figure 11 4D CDT phase A.
Figure 12 4D CDT phase B.
Figure 13 4D CDT phase C.
Figure 14 Ergodicity.
Figure 15 Von Neumann accept-reject.
Figure 16 Attempt to perform a Pachner move.
Figure 17 Move (2 → 2).
Figure 18 Move (2 → 4) and its inverse (4 → 2).
Figure 19 3D Alexander Moves.
Figure 20 Triangulation slab.
Figure 21 Dual graph.
Figure 22 Plaquettes comparison.
Figure 23 Move 22 with gauge.
Figure 24 Move 22, gauge randomization.
Figure 25 Move 22, plaquettes numbers.
Figure 26 Move 24 with gauge.
Figure 27 Heatbath distributions.
Figure 28 Move 24, plaquettes numbers.
Figure 29 β = 0 volume divergence.
Figure 30 Space profiles and correlations.
Figure 31 β = 0 profile length divergence.
Figure 32 λc(β) plot.
Figure 33 Volume critical indices for β ≠ 0.
Figure 34 Profile length critical indices for β ≠ 0.
Figure 35 Q history (topological charge).
Figure 36 Space profiles and correlations.
Figure 37 χ(β) (topological susceptibility).
Figure 38 χ(β) (topological susceptibility).
Figure 39 Freezing.
Figure 40 Torelon sketch.


Figure 43 Triangle volume.
Figure 44 Tetrahedron volume.
Figure 45 Ambjorn Heat Kernel additivity.
Figure 46 Overheatbath.

LIST OF TABLES

Table 1 Points in fit for β = 0 volume divergence.
Table 2 Fit parameters for β = 0 volume divergence.
Table 3 Points in fit for β = 0 profile length divergence.
Table 4 Fit parameters for β = 0 profile length divergence.
Table 5 λc(β) data table.
Table 6 µ(β) data table.
Table 7 ν(β) data table.


INTRODUCTION

Since the Einstein-Hilbert action is non-renormalizable, there is no natural candidate for a quantum theory of gravitation. There are therefore several different attempts to define a consistent theory of Quantum Gravity; one of these is Asymptotic Safety (Weinberg 1979).

Asymptotic Safety is still a Quantum Field Theory approach; the difference is that another notion of renormalizability is taken into account, one that generalizes the usual one: it consists in searching for generic Renormalization Group (RG) ultraviolet fixed points, rather than requiring renormalizability by perturbing around the Gaussian one.

Causal Dynamical Triangulation (CDT, Ambjorn, Goerlich, et al. 2012) is a non-perturbative regulator for gravity used for defining the RG flow, in a way similar to that of the lattice in flat-space QFT: the ultraviolet divergences are cut off by limiting the configuration space (in general the quotient set of differentiable manifolds under diffeomorphisms) to piecewise-flat geometries, where the geometry is defined by an abstract triangulation and the length of the simplex edge, which acts as a lattice spacing.

Moreover a suitable discrete version of the action must be defined, to be evaluated on a given triangulation, with the requirement of reproducing the Einstein-Hilbert one in the continuum limit. This is the Regge action (Regge 1961), defined by associating the curvature term with the deficit (missing) angles (while the volume term is trivially the triangulation's volume).

The CDT regulator thus allows the use of numerical simulations, because the new configuration space has a discrete structure that can be represented in a finite amount of memory.

This has been done employing Monte Carlo methods to create a sample distributed according to the probability defined by the Euclidean action, obtaining a discrete version of the path integral. Some successful results have been found in a particular phase; e.g. it has been shown that the shape of the simulated quantum universe is consistent with a de Sitter one.

The main goal of the thesis is to extend the dynamical variables from the bare geometry, i.e. a pure gravity theory, to include gauge fields living on the triangulation.

The first step consists in identifying an appropriate representation for the new degrees of freedom in the framework of Dynamical Triangulations. We did this by localizing gauge fields on the links of the graph dual to the triangulation; the reasons for this choice will be discussed in the thesis.


Two other fundamental ingredients are then needed: a suitable (and efficient) Monte Carlo algorithm, and an appropriate action that accounts for the newly introduced degrees of freedom.

The Monte Carlo algorithm for sample extraction is a local Markov Chain, in the sense that transition probabilities are non-zero only between configurations that differ in a small region. The small regions considered are called cells, and are the smallest regions that allow jumping from a given triangulation into another legal one. These jumps, i.e. the set of pairs of "adjacent" triangulations on which the Markov Chain has non-zero values, are called moves, and are already defined in the absence of gauge fields.

The presence of gauge fields requires mainly two upgrades of the Markov Chain: the definition of a gauge-structure update during each triangulation move, and a dedicated gauge move (performed on a fixed triangulation) that updates a single gauge field, for which we chose to employ the heat-bath algorithm. The second upgrade is needed in order to ensure the ergodicity of the upgraded Markov Chain.
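For the U(1) case, a heat-bath update of a single link admits exact sampling. The sketch below is a minimal illustration (not the thesis code, whose details may differ): it assumes the complex sum of the staples around the chosen dual link has already been computed (the hypothetical `staple_sum` input), in which case the conditional distribution of the link angle is a von Mises distribution, for which NumPy provides a sampler.

```python
import numpy as np

def heatbath_u1_link(staple_sum, beta, rng):
    """Draw a new U(1) link angle from its exact conditional distribution
    under a Wilson-like plaquette action.

    The local weight is exp(beta * Re[e^{i theta} conj(staple_sum)])
    = exp(beta * rho * cos(theta - alpha)), with staple_sum = rho e^{i alpha}:
    a von Mises distribution with mean alpha and concentration beta * rho.
    """
    rho = abs(staple_sum)
    alpha = float(np.angle(staple_sum))
    return rng.vonmises(alpha, beta * rho)

# Example: a strong staple along the real axis pulls the link towards theta = 0.
rng = np.random.default_rng(0)
angles = np.array([heatbath_u1_link(2.0 + 0.0j, beta=5.0, rng=rng)
                   for _ in range(10000)])
```

Since the draw is exact, every update is accepted, unlike a Metropolis step; on a triangulation, the only geometry-dependent part is the staple sum over the plaquettes shared by the dual link.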

We define the second ingredient, the action for gauge-including triangulations, by merging the most similar objects already in use: the Regge action for triangulations and the lattice action for gauge theories.

While the Regge action is already appropriate for these objects, the lattice action needs to be generalized to be applied to triangulations, because its definition is for regular flat lattices. This generalization has been chosen so as to still have a correct naïve continuum limit, and to be proportional to the original action on a flat triangulation; this is done by multiplying the preexisting term by the inverse length of the plaquette, which, contrary to what happens for hypercubic lattices, is not fixed for generic triangulations and, according to the Regge action, is the measure of the local curvature.

We used the defined algorithm to test the behavior of gauge fields in a dynamical geometry, and their back-reaction on the geometry itself. Both gauge and triangulation observables are therefore analyzed to characterize the physics of the system; in particular, the causal structure of CDT gives a straightforward definition of correlation lengths, and taking one length from the geometry (from the slice volumes' correlation) and one from the gauge sector (torelons, for example) it is possible to define a common continuum limit, where the two lengths scale with a fixed ratio while taking triangulations of increasing volume.

The thesis is structured in five chapters: the first three are an introduction to the theory and the methods we used, while the following two contain our developments.

In particular:

the first chapter is a review of the Asymptotic Safety physical principle, and some related concepts are introduced;



the second chapter is a review of the Causal Dynamical Triangulation framework, explaining how and why it can be used as a theoretical and numerical instrument for studying quantum gravity;

the third chapter is an introduction to some numerical methods employed in our simulations;

the fourth chapter is about the introduction of gauge field dynamics in CDT, and it describes both the theoretical principles and the new Monte Carlo algorithm;

the fifth chapter collects the results of the simulations performed with the new algorithm, discussing the effects of gauge fields on triangulation dynamics and vice versa;

appendix a analyzes in detail some concepts introduced in chapters 1 and 2;

appendix b contains a detailed description of topics related to gauge fields.


Part I

INTRODUCTORY MATERIAL AND


1 ASYMPTOTIC SAFETY

1.1 The problem of Quantum Gravity
1.2 Renormalizable theories of gravitation
  1.2.1 Extended theories of gravitation
  1.2.2 Resummation
  1.2.3 Effective Theory (composite gravitons)
  1.2.4 Other options
1.3 Asymptotic Safety
1.4 Effective Theory

1.1 the problem of quantum gravity

An open problem in Theoretical Physics is how to reconcile Quantum Mechanics (QM) and General Relativity (GR).

The two theories achieved fundamental results in their respective domains¹ (atomic and particle physics for QM, astrophysics and cosmology for GR), leading to the confirmation of their validity and their capability in the description of Nature.

The huge distance between these domains is a good reason why this reconciliation has not happened yet: the two regimes are poorly related, because at QM scales gravity is too weak to have relevant effects, while at GR scales Standard Model (SM) interactions are not relevant either, because bodies have almost null overall charges and so there are no macroscopic interactions (even for charged bodies the relevant effects are well described by classical theories).

Moreover it is worth noticing that gravity is quite different from the other interactions: its typical length is the Planck scale (ℓ_Pl = M_Pl^(-1) ≈ 1.6 × 10^(-33) cm), but it is known from its effects on much larger scales, because at the typical lengths of the other interactions it is hidden by them, owing to its weakness.

1. The domain in which they show relevant differences from the classical theory.


So gravity becomes manifest only on even larger (astronomical) scales, because its charges cannot be screened (the interaction is always attractive) and therefore sum coherently.

It is reasonable that gravity has to be regarded differently, because the known interaction is quite peculiar compared with the others (which are short-range and not purely attractive).

unification The ambition to describe all interactions in a unique framework leads to the attempt of extending the successes of Quantum Field Theory (QFT) to all energy scales (or of replacing QFT in its role of most general framework with something that reproduces its results at the already explored energy scales).

Extending the range of validity of the current quantum theory up to the Planck scale (M_Pl ≡ √(1/G) ∼ 10^19 GeV) is a big step of 15 orders of magnitude (from the TeV to M_Pl), so it is very likely that in between there are many new degrees of freedom not yet accounted for. Nevertheless gravity has to be included in the end, and it is the theory governing the space-time geometry (which in QFT is assumed to be flat Minkowski), so it is expected that something new will happen.

Non-renormalizability of Einstein gravity

A natural approach to the problem is trying to quantize the gravitational field in a way similar to the electromagnetic one.

This naïve attempt is not enough to solve the quantum gravity problem, because the Einstein-Hilbert (EH) action is non-renormalizable by power counting, so it cannot be a good UV candidate.

Indeed the EH action is:

S_EH = (1/16πG) ∫ d⁴x √−g (R − 2Λ)

where G is the Newton constant, g = det(g_µν) is the determinant of the metric, R is the Ricci scalar and Λ is the cosmological constant.

In ordinary perturbation theory the requirement of renormalizability imposes that all couplings have non-negative mass dimension, because at high external momenta the amplitude of a Feynman diagram of order N is proportional to ∫ p^(A − Nd) dp, where d is the mass dimension of the coupling and A is a coefficient not depending on N.
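For Einstein gravity specifically, the expansion parameter is G itself, with mass dimension d = −2 (a standard observation, consistent with the Planck-scale discussion above):

```latex
\int^{\Lambda} p^{\,A - Nd}\,\mathrm{d}p
\;=\; \int^{\Lambda} p^{\,A + 2N}\,\mathrm{d}p
\;\sim\; \Lambda^{\,A + 2N + 1}
```

so the superficial degree of divergence grows with the order N, and an infinite tower of higher-dimension counterterms is generated.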

So the Ultraviolet (UV) divergences would generate an infinite number of counterterms, making the theory no longer predictive.

Therefore the quantum theory of gravity cannot be the naïvely quantized Einstein theory; the latter has to be regarded as an effective theory (in the low-energy limit) of a more fundamental one still to be found, as in the case of the Fermi theory of beta decays.



Moreover a perturbative approach on a flat space-time, the way it would be done for the SM, is not sufficient to describe the main features of GR (e.g. the description of the Universe's geometry in cosmology, and strong phenomena like black holes), so a non-perturbative theory must be developed².

The renormalization properties of the EH action are different in the theory of pure gravity and in that coupled to matter:

pure gravity without matter, counterterms generated by one-loop Feynman diagrams would have the form √−g (αR_µν R^µν + βR²), but it is possible to absorb these contributions with a redefinition of the metric field, preserving the form of the action:

S_EH → S_EH − ∫ √−g (αR_µν R^µν + βR²) = S_EH − ∫ (δS_EH/δg_µν) Δg_µν = S_EH(g_µν − Δg_µν) + ...

Instead, it has been proved that at two loops genuinely new counterterms appear (Goroff and Sagnotti 1985), whose form is √−g R_µν^ρσ R_ρσ^αβ R_αβ^µν.

matter coupled in the presence of matter (e.g. a single scalar field) also one-loop divergences generate non-absorbable counterterms ('t Hooft and Veltman 1974).

1.2 renormalizable theories of gravitation

In section 1.3 a possible scenario for the solution to the problem of the non-renormalizability of the quantized theory of gravity will be described, introduced in Weinberg 1979 in the context of QFT.

In this section, instead, some preexisting alternatives are reviewed (also highlighted in Weinberg 1979), together with some objections to these theories.

1.2.1 Extended theories of gravitation

One way in which a theory of gravity could avoid the ultraviolet divergences present in the quantized EH action is to be symmetric under some transformations that involve also matter fields (otherwise it would be a symmetry also in the absence of matter), preventing all ultraviolet divergences on symmetry grounds.

2. Not exactly: the main issue is that a perturbative theory on a flat space-time begins very distant from the phenomenology of GR, so the needed feature to solve this issue is background independence, which should be enough to approximate the phenomenology of the classical theory even if the theory is still perturbative (because it makes it possible to perturb on a given background, typically taken as the solution of the classical equations).


In this way a quantum theory including gravity could be renormalizable in the usual sense³.

Considering that the main idea of GR is to build a theory invariant under diffeomorphisms, the requirement for a successful theory of gravity is to fulfill this request.

So if a Lagrangian L_g is built invariant under a group including diffeomorphisms (which preserves also the measure √−g d⁴x), there will be an infinite series of possible terms (powers of the Lagrangian: L_g², L_g³, ...) not ruled out by the symmetry group; therefore a non-renormalizable theory of this kind would have an infinite number of couplings.

The only chance is to find a symmetry of the action S = ∫L that is not a symmetry of the Lagrangian, which is possible if the Lagrangian (actually √−g L) shifts under this symmetry by a total derivative:

√−g L → √−g L + ∂A

or equivalently if the Lagrangian shifts only by a total covariant derivative, δL = DA.

A well known and interesting example of such a symmetry is supersymmetry. Introduced by Wess and Zumino at the beginning of the '70s (Wess and Zumino 1974a,b), supersymmetry is a very promising approach that enlarges the group of space-time symmetries (as opposed to the group of inner symmetries, which fully commute with the Poincaré group) with some fermionic transformations, replacing the symmetry algebra with a superalgebra in which anticommutation rules take the place of some commutation rules.

contra the main disadvantage of the application of supersymmetry to the Standard Model (MSSM, ...) is that it introduces a lot of new unobserved degrees of freedom, so it is not minimal in the context of all quantum field theories.

It has to be said that it is minimal in some sense: in the perspective of Coleman and Mandula 1967; Haag et al. 1975, supersymmetry is the maximal set of compatible space-time symmetries (in the non-conformal case), so it is minimal⁴ in the same sense in which one could desire to justify the Poincaré group assuming only some basic principles (such as a finite maximum speed for interactions and so on).

1.2.2 Resummation

Another chance is that, even if there is an infinite number of divergent diagrams in the original perturbative expansion, they may be resummed into a finite number of them by rearranging the series.

3. A priori it could be possible that a theory does not generate a certain divergence even if it is not ruled out by any symmetry, but this is rarely the case; rather, if a theory does not exhibit a divergence, this is a clue of some hidden symmetry.

4. This is a bit confusing: I use the word minimal to underline that there is the least freedom possible in arbitrary choices, and this corresponds to the maximally symmetric case.



This has to be done by selecting and summing together some infinite subclasses of graphs, so that they become more regular in the UV and can be used as new building blocks for the perturbative expansion.

contra This is certainly a difficult task, and seems a very natural solution, but in some cases it could also lead to unphysical singularities in propagators, which violate the unitarity of the theory. An example is shown in appendix A.1.1.

1.2.3 Effective Theory (composite gravitons)

Another possible idea (more popular in other sectors) is that the graviton may be an effective degree of freedom of a more fundamental theory, like pions in the case of Quantum Chromodynamics (QCD).

So the ultraviolet divergences are not to be solved in the framework of a quantized version of GR (the equivalent of the low-energy theory for pions), but inside the fundamental theory (still to be found).

It would be surprising if the graviton were an effective particle, because the theory describing it in the infrared is extremely elegant and symmetric.

This apparent suggestion, of considering GR as fundamental by virtue of its elegance, may seem less compelling in light of Weinberg's soft theorem (Weinberg 1965), which states that each massless spin-two particle (which must be coupled to a conserved energy-momentum tensor) must be described by an effective theory satisfying the Equivalence Principle, and must be non-interacting with any other spin-two particle (decoupling it into a completely different universe)⁵.

If one is only interested in long-range dynamics, the only possible theory of this kind is GR (which is constructed exactly as the unique theory satisfying the Equivalence Principle, considering only terms with a minimal number of derivatives).

As an effective theory, it no longer matters whether it is divergent or not, because at sufficiently low energies (mainly with respect to the Planck mass M_Pl) it is possible to calculate any observable with the required finite precision.

contra The main reason not to consider this as a promising alternative is that it is unknown what kind of symmetry may cause the graviton (in general a bound state of spin two) to be massless⁶.

1.2.4 Other options

Since the second half of the 20th century many theories have been proposed to deal with the quantum gravity issue, and they are not limited to the framework of quantum field theory.

5. This issue will be analyzed in more depth in section 1.4, also from the quantum point of view.

6. A candidate in this sense could again be supersymmetry, putting the graviton in a massless multiplet of composite particles, but this is true only if supersymmetry is exactly realized.


The principles adopted in the quest for a correct quantum theory of gravity are not the same for all approaches, and the focus is on different features according to the specific approach.

Two popular different approaches are:

string theory string theory, originally formulated as a theory of strong interactions (subsequently replaced completely by QCD), considers the possibility that the elementary constituents of matter are not point-like objects, but extended along a single dimension, hence called strings (actually the theory also contains higher-dimensional objects, generically called branes); string theory is essentially the quantum mechanics of string-like objects, but from this simple feature a rich scenario comes out, with a lot of insights but also a number of open issues, and an extended phenomenology not proved (nor disproved) by any experiment; it is considered a candidate for quantum gravity because of the existence of a spin-2 massless excitation in the spectrum of the closed string, and because the low-energy effective theory for this kind of excitation reproduces Einstein's equations; an analogue of quantum field theory has also been formulated for strings, in which fields are defined on the space of available string positions, and is called string field theory (Polchinski 1998a,b).

loop quantum gravity this is another proposal, whose purpose is to realize a minimal synthesis between QM and gravity; the main assumption made in this formulation is that at short distances space-time can no longer be considered a smooth manifold, but has a discrete quantum structure;

this structure is imposed by quantizing the elements of space-time (grains), and describing regions of space as a set of interconnected grains, where connections express the adjacency of grains (the so-called spin-networks), while the evolution in time is obtained via quantum propagation (so the calculated quantities are quantum amplitudes) by summing over the possible histories that lead from one spin-network to another; each history of adjacencies (with appropriate labels) is called a spin-foam (Rovelli and Smolin 1990).

Many further approaches have been proposed, including: causal sets (Bombelli et al. 1987), twistors (Penrose 1967), quantum graphity (Konopka et al. 2006) and many more.

1.3 asymptotic safety

Asymptotic safety deals with the scenario in which it is not possible at all to describe gravity with a quantum field theory that is renormalizable in the usual sense.

The aim is precisely to find a way to generalize the usual perturbative renormalization, in the framework of the Wilsonian Renormalization Group (RG).



Wilson's RG

The Renormalization Group was initially formulated in the context of low-energy physics, where the idea of regularization is more intrinsic (due to the physical discreteness of the structure of matter).

The idea is to define a field theory with an ultraviolet cutoff (in momentum space), that is, a coarse-grained theory on a lattice. The fields can be further coarse-grained by reducing the UV cutoff, defining in this way the RG flow.

A more in-depth explanation of the RG and the RG flow will be given in section 2.1.2.

UV fixed points

So it is useful to consider the dimensionless coupling parameters ḡ_i(µ), defined from the dimensionful renormalized couplings g_i(µ) as:

ḡ_i(µ) = µ^(−d_i) g_i(µ)

where d_i is the dimension of g_i(µ), and µ is the energy scale of the renormalization point.
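A standard instance (not specific to the thesis): Newton's constant has mass dimension d_G = −2, so its dimensionless counterpart grows quadratically with the scale,

```latex
\bar{g}_G(\mu) \;=\; \mu^{-d_G}\, G(\mu) \;=\; \mu^{2}\, G(\mu)
```

and even a tiny G gives ḡ_G ∼ 1 once µ approaches M_Pl = G^(−1/2): the dimensionless coupling becomes strong near the Planck scale.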

Physical quantities, like reaction rates, may be written as:

R = µ^d f(E/µ, X, ḡ(µ))

where d is the ordinary dimension of R, E is the characterizing energy scale of the process and X are all other dimensionless variables.

But a physical quantity cannot depend on the renormalization scale, so the latter can be chosen arbitrarily, as is most convenient. In this case, choosing µ = E, the generic rate becomes:

R = E^d f(1, X, ḡ(E))

so, apart from the dimensional scaling, the dependence of R on E is through the dimensionless couplings ḡ(E). The high-energy behavior of R is therefore governed by the behavior of ḡ(µ) as µ → ∞.

essential couplings An important observation is that on-shell matrix elements (and thus the derived quantities) do not depend on the inessential couplings, those that change with a redefinition of fields, while the off-shell Green's functions do depend on them, because they reflect the definition of the fields involved (so they embed the wave-function renormalization constants Z)⁷.

From now on g_i(µ) will be used to indicate a generic essential coupling.

The change of a dimensionless coupling ḡ_i(µ) under a change of µ is a dimensionless quantity, so it depends on the other dimensionless couplings, but not directly on µ (because µ is chosen as the only energy scale, factorizing all other scales with dimensionless proportionality parameters, the ḡ_i).


Thus it is possible to rewrite this consideration as a Gell-Mann-Low equation:

µ dḡ_i(µ)/dµ = β_i(ḡ(µ))

which defines the flow in coupling space.
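The flow defined by this equation can be illustrated numerically with a toy one-coupling beta function (a hypothetical example, not derived from gravity): β(ḡ) = 2ḡ − ḡ² has a Gaussian fixed point at ḡ = 0 and a non-Gaussian, UV-attractive one at ḡ* = 2, so integrating towards large µ drives the coupling to ḡ*.

```python
def beta(g):
    # Toy beta function: Gaussian fixed point at g = 0,
    # non-Gaussian UV fixed point at g* = 2 (hypothetical, for illustration).
    return 2.0 * g - g * g

def flow(g0, t_max, dt=1e-3):
    """Integrate dg/dt = beta(g), with t = ln(mu), by 4th-order Runge-Kutta."""
    g, t = g0, 0.0
    while t < t_max - 1e-12:
        k1 = beta(g)
        k2 = beta(g + 0.5 * dt * k1)
        k3 = beta(g + 0.5 * dt * k2)
        k4 = beta(g + dt * k3)
        g += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return g

# Ten e-folds of running from a small coupling land on the UV fixed point.
g_uv = flow(0.1, 10.0)
```

Any starting coupling in (0, 2) flows to ḡ* = 2 as µ → ∞: a trajectory lying on the critical surface of a non-Gaussian fixed point, in the sense discussed below.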

asymptotic safety It is possible to identify an actually fundamental theory if this flow leads to a fixed point g* for µ → ∞.

The reason to search for such a theory is that an actually fundamental one is expected to be well-defined without any UV cutoff, and so able to describe the physics at any energy scale⁸. This is exactly what happens for a fixed point of the RG flow.

Indeed, in general it is possible that a trajectory approaches such a point asymptotically only if:

β_i(g*) = 0

The set of points lying on a trajectory of this kind is called the critical surface of the fixed point⁹ g*. In the present case it is a UV surface, whereas in the theory of critical phenomena one frequently uses IR surfaces.

Definition (Asymptotic Safety). The condition of asymptotic safety is that the couplings must lie on the ultraviolet critical surface of some fixed point.

Moreover it is required that such a surface be of finite dimensionality.

This condition is intended as a generalized renormalization condition. Indeed:

perturbative renormalization The usual condition is defined if couplings are small, and holds in the UV limit only if the couplings remain small in this limit. This condition is verified for theories that are called asymptotically free, such as QCD.

asymptotic safety Does not require the couplings to be small, but only that they remain within a neighbourhood of a fixed point g*. This includes the usual case, which is simply the case g* = 0, i.e. the Gaussian fixed point.

The capability of doing perturbation theory around a fixed point different from the Gaussian one requires that one is able to solve the theory at that point, finding the exact propagator (as happens in the free theory).

AS is regarded as a sufficient and not a necessary condition to be fulfilled in order to avoid UV divergences¹⁰.

8. Even if it is probably not the most convenient one, as in the case of QCD used to describe light hadrons.

9. Actually this point is taken in the space of essential couplings, otherwise it is in general impossible to find a UV fixed point, as shown in appendix A.1.3. But this is unimportant, because on-shell quantities depend only on the essential couplings.



The requirement that the critical surface has a finite dimension ensures that the number of free parameters is finite. Indeed, it is the UV-attractive surface in coupling space, so it describes the subspace of eligible theories for the given fixed point g*; therefore in the general AS case the free parameters are the coordinates on this surface. More specifically, if the dimensionality of the surface is:

∞ nothing can be learned from this kind of theory; indeed one needs an infinite number of measurements to fix the theory's free parameters;

0 there is no proper surface, the requirement of AS cannot be fulfilled, and the fixed point is isolated;

finite, say C, there is a single dimensionful parameter, describing the position along the trajectory, and C − 1 dimensionless ones, which take precisely the role of the free parameters of the usual perturbative renormalization (thus the optimal condition C = 1 corresponds to the case in which there are no free parameters).

So the condition of asymptotic safety serves to fix all but a finite number of the parameters of the theory, provided that a possible fixed point has been identified.

The dimensionality of the critical surface can be determined from the beta functions β_i(ḡ) in a neighbourhood of the fixed point. The proof will be given in appendix A.1.5.

Redefinition invariance

The way in which the dimension of the critical surface is determined from the behavior of the beta functions has to be invariant under redefinitions of the coupling parameters, because the RG flow is not related to the choice of a coordinate system, and the dimension of a surface is invariant under diffeomorphisms.

This is really useful, because it is possible to redefine the couplings in whatever way is most convenient for the chosen renormalization scheme. The outcome will be a physical quantity invariant under diffeomorphisms in coupling space.

Critical exponents The diffeomorphisms in coupling space leave invariant not only the dimension of the critical surface, but the whole set of eigenvalues of the matrix

$$B_{ij} = \left.\frac{\partial \beta_i(\bar g)}{\partial \bar g_j}\right|_{\bar g = g_*}$$

These eigenvalues are called critical exponents, and depend only on the nature of the degrees of freedom of the system.
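As a concrete illustration, the critical exponents can be obtained numerically by linearizing the beta functions around a fixed point. The two-coupling beta functions below are a toy assumption chosen for illustration (not the gravitational ones); the fixed point and its real eigenvalues are specific to this toy model.

```python
# Critical exponents as eigenvalues of B_ij = d(beta_i)/d(g_j) at a fixed point.
# Toy two-coupling beta functions (an assumption, purely illustrative).

def beta(g):
    g1, g2 = g
    eps = 0.1
    return (-eps * g1 + g1**2, g2 - g1 * g2)

def jacobian(beta, g, h=1e-6):
    """Central finite-difference Jacobian B_ij = d beta_i / d g_j at g."""
    n = len(g)
    B = [[0.0] * n for _ in range(n)]
    for j in range(n):
        gp, gm = list(g), list(g)
        gp[j] += h
        gm[j] -= h
        bp, bm = beta(gp), beta(gm)
        for i in range(n):
            B[i][j] = (bp[i] - bm[i]) / (2 * h)
    return B

def eigenvalues_2x2(B):
    """Closed-form eigenvalues; assumes they are real (true for this toy case)."""
    tr = B[0][0] + B[1][1]
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    disc = (tr**2 - 4 * det) ** 0.5
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

g_star = (0.1, 0.0)                    # fixed point of the toy flow: beta(g*) = 0
assert max(abs(b) for b in beta(g_star)) < 1e-12
exponents = eigenvalues_2x2(jacobian(beta, g_star))
print(exponents)                       # the critical exponents of the toy model
```

A redefinition of the couplings changes the matrix B but not its spectrum, which is the invariance stated above.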

1.4 effective theory

In this last section the aim is to improve the explanation of why the EH action is expected to be a good description of the IR behavior of the gravitational interaction, even without knowing the exact underlying UV theory.

In the following discussion it is not relevant whether the graviton is a composite or an elementary particle, so the EH action will be shown to be the IR theory of gravity even in the case of a non-fundamental graviton.

Diffeomorphisms invariance

It has already been said that the strongest requirement for a theory of gravity is diffeomorphism invariance. Employing this it is possible to strongly constrain the Lagrangian.

Thus let the Lagrangian be taken of the form:

$$\mathcal{L} = -\frac{1}{16\pi G}\sqrt{-g}\,R - f\sqrt{-g}\,R^2 - f'\sqrt{-g}\,R_{\mu\nu}R^{\mu\nu}$$

and the theory is assumed to be AS.

Gaussian fixed point The low-energy physics is governed by the behavior of the couplings ḡi(µ) for µ → 0.

The simplest and usual possibility is that for µ → 0 the couplings' trajectory approaches another fixed point, often the Gaussian one (this is frequent, but not guaranteed: for an asymptotically free theory the perturbative expansion is no longer reliable in the IR limit, so it is possible not to approach a fixed point in this limit, and it certainly cannot be the Gaussian one).

In the case of the assumed Lagrangian the Gaussian fixed point is a really good candidate: indeed every term in the Lagrangian is non-renormalizable, thus the Gaussian fixed point is entirely repulsive flowing towards the UV, and attractive in the IR¹¹. Near ḡi = 0 the loop contributions are suppressed by the magnitude of the couplings, so their scaling is expected to approximate the classical one:

$$g_i(\mu) \approx M_i^{d_i} \qquad \text{for } \mu \ll M_i$$

and if there is no reason to have very different energy scales it is reasonable to assume that $M_i \approx M \approx M_{\rm Pl}$ for all i.

Detailed IR analysis

Now an explanation will be given of the success of the EH theory in the IR, in terms of Feynman diagrams' dominance.

Consider a connected Green's function for a set of gravitational fields at points with typical space-time separation r. The renormalized couplings gi(µ) are defined at renormalization points with momenta of order 1/r.

11 Following Weinberg 1979, the cosmological constant Λ is ignored here. It is a super-renormalizable coupling (the interaction has the form Λ√−g), so it would be entirely repulsive while flowing towards the IR near ḡi = 0.



According to the previously established couplings' magnitudes, a graph with $N_i$ vertices of type i yields a factor proportional to N powers of $M_{\rm Pl}^{-1}$, where:

$$N = -\sum_i N_i d_i$$

so on the grounds of ordinary dimensional analysis the contribution of such a graph will be suppressed by a factor:

$$(r M_{\rm Pl})^{-N}$$

and considering the value of the Planck length, $\ell_{\rm Pl} = 1.6 \times 10^{-33}$ cm, at the scales of GR each power of $r M_{\rm Pl}$ brings a factor of $10^{40}$ (at least; for the Earth radius the value is $10^{42}$).
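As a quick sanity check of these orders of magnitude, the ratio r/ℓPl can be computed directly; the Planck length is the value quoted in the text, while the Earth radius used below is an assumed round value.

```python
import math

# Each power of r*M_Pl is one power of r / l_Pl: check the quoted magnitude.
l_planck = 1.6e-33          # cm (value quoted in the text)
r_earth = 6.4e8             # cm (approximate Earth radius, an assumed value)

print(math.log10(r_earth / l_planck))   # ~41.6, i.e. r*M_Pl ~ 10^42
```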

The dimensionality $d_i$ is given by:

$$d_i = 4 - p_i - g_i$$

where $p_i$ is the number of derivatives and $g_i$ the number of graviton fields in the interaction of type i.

Furthermore, it is well known that the following relations hold for a graph:

$$L = \ell - \sum_i N_i + 1 \qquad\qquad 2\ell + E = \sum_i N_i g_i$$

where L is the number of loops, ℓ the number of internal lines and E the number of external ones. The first equation is related to the Euler characteristic, and the second is simply a counting (internal lines are connected to 2 vertices, external ones only to 1).

The previous relations can be combined to give:

$$N = \sum_i N_i (p_i - 2) + 2L + E - 2$$

so for a given kind of graph (fixed external lines) N is minimal if the diagram is at tree level and the interactions involved have the minimum number of derivatives.
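The combination can be checked explicitly by eliminating ℓ and the total number of vertices from the two graph relations (same notation as above):

```latex
\begin{align*}
N &= -\sum_i N_i d_i = \sum_i N_i \,(p_i + g_i - 4)\\
  &= \sum_i N_i p_i + (2\ell + E) - 4\sum_i N_i
     \qquad\text{using } \sum_i N_i g_i = 2\ell + E\\
  &= \sum_i N_i p_i + 2\Big(L + \sum_i N_i - 1\Big) + E - 4\sum_i N_i
     \qquad\text{using } \ell = L + \sum_i N_i - 1\\
  &= \sum_i N_i \,(p_i - 2) + 2L + E - 2
\end{align*}
```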

So (continuing to leave aside the cosmological constant term √−gΛ) the gravitational interactions with the smallest number of derivatives ($p_i = 2$) are those derived from the R term, that is the EH action.

For the next terms $R^2$ and $R_{\mu\nu}R^{\mu\nu}$ the number of derivatives is $p_i = 4$, so each vertex is suppressed by a factor of $10^{80}$ on astrophysical scales; therefore these corrections to GR are negligible.

conclusion Classical General Relativity with the Einstein-Hilbert action is a very efficient theory in describing macroscopic (astronomical, human or even below) gravitational phenomena, because at these scales the quantumness is hidden by the huge suppression of loop diagrams, and the same happens for the higher-derivative terms.

Matter It is also possible to extend the previous arguments to theories including matter, with some precautions, as is done in Weinberg 1979.


2

C A U S A L D Y N A M I C A L T R I A N G U L A T I O N

2.1 Path Integral 15

2.1.1 Feynman’s Path Integral 16

2.1.2 Wilson’s RG 17

2.1.3 Gravity 21

2.2 Lattice and Gravity 22

2.2.1 Triangulations 23

2.3 Lorentzian vs Euclidean 24

2.4 Causal Dynamical Triangulation 27

2.4.1 Links 30

2.4.2 k-Simplices 30

2.4.3 Wick Rotation in CDT 32

2.5 CDT phase structure 34

2.1

path integral

The lattice regularization of a Yang-Mills (YM) theory (or of other field theories) allows one to study the physics by means of numerical simulations.

The advantage gained is the possibility to explore the non-perturbative scenario. The main element of this setup is the path integral formulation of QFT (the extended version of the same tool used in QM).

Using path integral concepts it is possible to calculate expectation values of observables by averaging over a sample of the system's configurations representing the distribution defined by the (Euclidean) action, extracted employing numerical techniques and the computing power provided by machines.


2.1.1 Feynman's Path Integral

The idea of the path integral¹ derives from the concatenation property of the propagator:

$$U(x_B, t_B; x_A, t_A) = \int dx'\; U(x_B, t_B; x', t')\, U(x', t'; x_A, t_A) \qquad t_B > t' > t_A$$

$$U(x_B, t_B; x_A, t_A) \equiv \langle x_B| U(t_B, t_A) |x_A\rangle = \langle x_B| T e^{-\frac{i}{\hbar}\int_{t_A}^{t_B} H(t)\,dt} |x_A\rangle$$

which is related to the completeness of the position eigenstates as a basis. Using this property it is possible to show in QM that the propagator $U(x_B, t_B; x_A, t_A)$ can be rewritten as a sum (integral) over the whole set of possible paths X(t) from the event $(x_A, t_A)$ to $(x_B, t_B)$:

$$U(x_B, t_B; x_A, t_A) = \int_{X(t_A)=x_A,\; X(t_B)=x_B} \mathcal{D}X(t)\; e^{\frac{i}{\hbar}S[X(t)]}$$

where S[X(t)] is the action evaluated on the path X(t).

The proof follows the idea of reiterating the sum in the first relation for each intermediate time t'.
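The concatenation property can also be checked numerically; a minimal sketch, with the Euclidean free-particle (heat) kernel standing in for the propagator and ℏ = m = 1 (both assumptions made only for this illustration):

```python
import math

# Numerical check of the concatenation property of the propagator, using the
# Euclidean free-particle kernel as a stand-in (hbar = m = 1).
def kernel(xb, xa, t):
    """Free-particle (heat) kernel between positions xa and xb in time t."""
    return math.exp(-(xb - xa) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)

def concatenated(xb, xa, t1, t2, cutoff=20.0, n=4000):
    """Midpoint-rule integral over the intermediate position x'."""
    dx = 2 * cutoff / n
    total = 0.0
    for i in range(n):
        x = -cutoff + (i + 0.5) * dx
        total += kernel(xb, x, t2) * kernel(x, xa, t1) * dx
    return total

direct = kernel(1.0, 0.0, 2.0)               # one step of total "time" 2
glued = concatenated(1.0, 0.0, 1.2, 0.8)     # two steps: 1.2 then 0.8
print(direct, glued)                          # the two values agree
```

The glued value reproduces the direct kernel for any split t1 + t2 of the total time, mirroring the insertion of a complete set of position eigenstates.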

Figure 1: Example of paths between two events.

1 Here a quantum system composed of a single particle is described, only as an example. For a general description see textbooks or the original work Feynman 1948.



Semiclassical limit

Each path contributes to the path integral with a weight given by the value of the action S on that path. Each weight is a pure phase (due to the i in the exponent), so no path is weighted more than the others; but there are groups of them that are more decisive in the sum: those on which the action is stationary.

Indeed if the action varies rapidly the contributions from nearby paths cancel: they interfere destructively.

Instead, in those regions (in path space) where the variation of the action is small (with respect to ℏ), the contributions are aligned, and the interference among paths is constructive. Paths corresponding to stationary values of the action satisfy Hamilton's principle, so they are classical paths.

So in the semiclassical limit the paths that contribute most are those lying in a neighbourhood of the classical ones (in path space), while those far from the classical ones are suppressed.

In conclusion the semiclassical limit is a saddle-point expansion of the path integral, and can be organized in powers of ℏ (which corresponds to the number of loops in Feynman diagrams).

Path Integral for fields

The path integral can be generalized to QFT, where the role of paths is taken by field configurations, i.e. the sum has to run over all possible values of a field ϕ for each space-time point: ϕ(x), x ∈ M4.

Vacuum expectation values of observables can be found using the path integral in the same way in which observables are evaluated in statistical mechanics:

$$\langle O \rangle = \frac{1}{Z}\int \mathcal{D}\varphi\; O[\varphi]\, e^{\frac{i}{\hbar}S[\varphi]}, \qquad Z = \int \mathcal{D}\varphi\; e^{\frac{i}{\hbar}S[\varphi]}$$

where the role of the Hamiltonian is taken by the action S, and the temperature is taken to be imaginary (see section 2.3).

As previously stated, Asymptotic Safety is defined in the framework of Wilson's RG. Before going on, its main ideas will be summed up.

2.1.2 Wilson’s RG

The Renormalization Group as it is defined is actually a semigroup of transformations. The idea is to replace a theory with a new one, describing the same physics at a larger scale.


Figure 2: RG flow in a space with 2 parameters.

In this context indeed a theory is defined by a UV cutoff in addition to its couplings. Transformations are defined acting on fields in real or in momentum space. While the first alternative involves a discrete procedure, the second one can be performed also in the continuum, and in the following the latter will be chosen.

So the action for a theory with cutoff Λ is defined by integrating out its modes above the cutoff, i.e. the integral is performed only over the modes ϕ(p) with |p| > Λ:

$$e^{\frac{i}{\hbar}S[\varphi;\Lambda]} = \int_{|p|>\Lambda} \mathcal{D}\varphi\; e^{\frac{i}{\hbar}S_0[\varphi]}$$

The transformations that bring $S[\varphi;\Lambda] \to S'[\varphi;\Lambda]$ are defined in the following way:

• integrate out the high-momentum modes, Λ/ζ < p < Λ;

• rescale the momenta, p' = ζp;

• rescale the fields so that the gradient term remains canonically normalized;

from which it is evident that this is a semigroup, because it is impossible to "deintegrate" the modes to go back to the previous action.



RG flow Integrating out modes, a flow in the space of theories is established: for each ζ ∈ [1, ∞) a function from the space of couplings into itself is defined by the trajectories starting from each given theory (a point in coupling space), parameterized by ζ.

So the flow is a function g(ζ, g0)on that space.
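A toy numerical version of the flow g(ζ, g0), with an assumed single-coupling beta function (purely illustrative), also makes the semigroup structure explicit:

```python
def beta(g):
    """Toy beta function (an assumption): g = 0 is an attractive zero."""
    return -g * (1.0 - g)

def flow(g0, log_zeta, steps=100_000):
    """Euler integration of dg/dlog(zeta) = beta(g), starting from theory g0."""
    g = g0
    dt = log_zeta / steps
    for _ in range(steps):
        g += beta(g) * dt
    return g

print(flow(0.5, 10.0))   # driven close to the attractive fixed point g = 0
print(flow(0.5, 0.0))    # zeta = 1 leaves the theory unchanged: 0.5

# semigroup property: flowing by zeta1 and then zeta2 equals flowing by
# zeta1 * zeta2 (log(zeta) values add)
print(abs(flow(flow(0.5, 2.0), 3.0) - flow(0.5, 5.0)))   # ~ 0
```

Only ζ ≥ 1 makes sense here, matching the impossibility of reverting the transformations.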

Flowing in the UV Notice that even if it is impossible to revert the RG transformations (so that values ζ ∈ (0, 1) are not allowed), once the flow has been formulated it is meaningful to flow towards the UV, assuming that there exists a UV theory generating the flow.

Indeed, keeping track of the original theory, it is possible to choose an arbitrary value of ζ and, instead of analyzing the behavior when ζ → ∞, which corresponds to reiterating the RG transformations flowing towards the IR, to analyze the opposite regime, ζ → 1. There is no intuitive way to interpret this in terms of a sequence of RG transformations, but it is nevertheless well-defined.

A fundamental quantum field theory should be able to describe physics at all energy scales, without the introduction of new fields, so it has to be an RG fixed point.

fixed points If the system admits the definition of a typical length scale ξ (like a correlation length), not all values of ξ are available at a fixed point, because ξ has to be invariant under RG transformations. So, the only possible values for such a ξ at a fixed point are ξ = 0, ∞.

ξ = 0 this case is perfectly admissible for a fixed point, but usually it is not interesting: interpreting ξ as a correlation length, it corresponds to the infinite-temperature limit, in which each point acts independently from the others and there is no space-time coherence in the fluctuations;

Figure 3: Fluctuations towards a ξ = 0 fixed point.


ξ = ∞ this is the actually interesting case: for ξ = ∞ there is space-time correlation in the fluctuations, but it has no preferred length scale, so fluctuations are expected at all scales.

Figure 4: Fluctuations towards a ξ = ∞ fixed point.

(source: Statistical Field Theory - D. Tong)

relevant operators As previously discussed in section 1.3, each fixed point is characterized by a basin of attraction, called the critical surface, and this happens both for IR fixed points and for UV ones³.

The UV behavior of all theories in the critical surface of a UV fixed point is the same (by definition), so to characterize the UV behavior it is sufficient to consider the directions that are orthogonal to the surface itself. The Lagrangian operators associated to such directions are called relevant (UV relevant in this case; from the point of view of the opposite flow they are IR irrelevant), because they are exactly those that determine the asymptotic behavior.

RG recap It is useful to sum up the concepts of RG exposed above in a short list of "practical" steps aimed at identifying the correct UV theory in the context of QFT.

select the space

The coupling space is formed by all the couplings needed to describe the most general Lagrangian compatible with a certain symmetry group; so the first steps are to identify the degrees of freedom involved and the most appropriate symmetries, and to collect the couplings selected by those symmetries.

define the flux

Integrating out the high-momentum modes the flow is retrieved, either directly (choosing the extremes of integration dependent on a parameter ζ and performing the integral) or by integrating on small intervals, obtaining the RG equations, and finding the flow from the solution of these equations.

identify the fixed points

3 There is no real difference between IR and UV fixed points: if they are fixed, this is true also under the opposite flow. The difference is from the point of view of the other points (theories), which can lie either in the IR or in the UV critical surfaces, which instead are different (aut aut).



Exactly what the title says: find the set of points that are fixed under the retrieved flow.

select the best fit

Select the best UV fixed point by comparison with experimental observations.

Lattice Observables

When the lattice regularization is chosen to define the theory at finite energy scales, the renormalized couplings are determined by the way the relevant couplings approach the critical surface, rather than by their values on the surface.

Example 2.1.1. Consider an adimensional correlation length ξ(g0) (in units of the lattice spacing) defined by an observable O(x_n) according to:

$$-\log \langle O(x_n) O(x_m) \rangle = \frac{|n-m|}{\xi(g_0)} + o(|n-m|) \qquad x_n = na$$

where a is the lattice spacing.

When approaching the critical surface, $g_0 \to g_0^c$, the correlation length ξ(g0) will diverge, as in a second-order phase transition, with some critical exponent ν:

$$\xi(g_0) \propto \frac{1}{|g_0 - g_0^c|^{\nu}}$$

So it is possible to remove the regulator in a way that identifies a real physical constant:

$$a(g_0) \propto |g_0 - g_0^c|^{\nu} \implies m_{\rm ph}\, a(g_0) = \frac{1}{\xi(g_0)}$$

such that in the final theory $\langle O(x_n) O(x_m) \rangle$ falls off exactly like $e^{-m_{\rm ph}|x_n - x_m|}$ when $g_0 \to g_0^c$ and $a(g_0) \to 0$.
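The extraction of ξ in the example can be sketched in code; the correlator below is synthetic (a pure exponential, an assumption made for illustration), and ξ is read off from the log-ratio of the two-point function at neighbouring separations:

```python
import math

# Synthetic two-point function <O(x_0) O(x_n)> with a known correlation length.
xi_true = 8.0
corr = [math.exp(-n / xi_true) for n in range(32)]

# adimensional correlation length from -log<O O> = |n - m| / xi + ...
xi = 1.0 / math.log(corr[5] / corr[6])
print(xi)                      # recovers xi_true

# removing the regulator: a(g0) -> 0 while m_ph = 1 / (xi(g0) a(g0)) is fixed
a = 0.05                       # lattice spacing at this g0 (an assumed value)
m_ph = 1.0 / (xi * a)
print(m_ph)                    # physical mass surviving the continuum limit
```

In a real simulation the correlator is a noisy Monte Carlo estimate, so ξ would come from a fit over a range of separations rather than from a single log-ratio.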

2.1.3 Gravity

The descriptions of path integral observables and of the Wilsonian scenario were given in the case of a flat space-time; they have to be generalized to a curved dynamical one to be applied to the study of gravity. To make this generalization some non-trivial aspects of gravity have to be taken into account:

• not only coordinates, but even coordinate distances have no meaning, because of diffeomorphism invariance;

• each local observable has to be averaged over all space-time points, because there is no way to mark any specific point (not even a group of them) throughout all the ensemble copies.

These observations are of fundamental relevance: in order to find RG fixed points it is important to identify a divergent correlation length, but every length has to be treated carefully in the gravitational scenario.


with matter In a theory of gravity with a coupled scalar field a possible correlation length may arise from the two-point function, but it has to be appropriately redefined to obtain a diffeomorphism invariant quantity. A possible definition of this kind is:

$$\langle \phi\phi(R)\rangle = \int \mathcal{D}g_{\mu\nu}\, e^{iS[g_{\mu\nu}]} \iint dx\, dy\, \sqrt{-g(x)}\sqrt{-g(y)}\; \langle \phi(x)\phi(y)\rangle^{[g_{\mu\nu}]}_{\text{matter}}\; \delta\big(R - d_{g_{\mu\nu}}(x,y)\big)$$

where $\langle \phi(x)\phi(y)\rangle^{[g_{\mu\nu}]}_{\text{matter}}$ is the matter fields' correlator, calculated at fixed geometry gµν(x). The δ-function fixes the geodesic distance between x and y to be R for all the geometries, and the double integral integrates out the whole coordinate dependence, leaving the correlator dependent only on the distance R, which is the unique diffeomorphism invariant information on the pair of points x and y.

The previous quantity is not local, but this is due to the requirement of diffeomorphism invariance: insisting that metric-dependent continuum observables should be invariant, there exist no such quantities which are local. This is a general feature in (quantum) gravity.

pure gravity The previous definition (⟨φφ(R)⟩) is not immediately useful as a graviton propagator, but it is still possible to extract information from it. Indeed, dropping the connection with the matter field by putting $\langle \phi(x)\phi(y)\rangle^{[g_{\mu\nu}]}_{\text{matter}} = 1$, it is still a diffeomorphism invariant expression, which expresses the volume of a geodesic ball of radius R. For small R it determines the fractal dimension of space-time, which in a theory of quantum gravity can be different from the canonical one, put in by hand.

2.2

lattice and gravity

As has already been said, lattice regularization is a powerful tool used to study QFT with the employment of numerical techniques, to perform analytic calculations, and to formulate a non-perturbative definition of QFT. Thus it would be desirable to use this last feature of the lattice to construct a non-perturbative theory of quantum gravity too.

So the strategy will be to define a gravitational path integral over the possible space-time geometries, introducing a lattice regulator as a UV cutoff to make sure it is convergent. Once the path integral is defined, it will be possible to explore the space of bare couplings of the lattice action, searching systematically for a fixed point which could realize the Asymptotic Safety condition.

Note that even in the case in which AS were not the correct answer to the quantum gravity issue, the lattice theory may still provide a good description of the effective quantum gravity theory (obtained from the fundamental one by integrating out all the degrees of freedom but the massless spin-two field), down to the scale of its validity, which in the absence of other indications is presumably the Planck scale.

diffeomorphisms invariance One could worry about whether the lattice regulator breaks the diffeomorphism invariance of the continuum theory. It could be conceivable that symmetries lost in the discrete regularization are restored once the regulator is removed; however this is not the situation here: triangulations are a particular subclass of space-time differential manifolds, not a set of coordinate systems on some of them (see the next section 2.2.1 for the definition), and the invariance is not lost from the beginning.

2.2.1 Triangulations

From now on definitions and key facts are given following Ambjorn, Goerlich, et al. 2012, which was the main reference already in the previous discussions about the path integral in gravity (section 2.1.3) and about the lattice.

To recap: the idea of the lattice regulator is to introduce a real-space length cutoff to avoid UV divergences; in flat space this is done precisely by introducing a discrete hypercubic lattice to replace the continuum flat space-time. Now the goal is to reproduce this scenario for a generic curved space-time, so as to make it possible to define a path integral over geometries.

Instead of defining a regularized copy of each continuum geometry, the strategy will be to identify a subset of already regular geometries for each value of the lattice spacing a, such that this set becomes dense once the cutoff is removed, a → 0.

A natural choice for the already regular geometries is the set of piece-wise linear geometries. A piece-wise linear geometry is defined by:

• taking a generic d-dimensional topological manifold;

• choosing an abstract triangulation on it (a set of points and curves among them, with the requirement to represent a pure simplicial complex);

• assigning lengths to its links;

• insisting that the interior of each d-simplex is flat (either Minkowski M4 or Euclidean E4);

in this way a manifold is equipped with a continuous geometry without having to introduce any coordinate system.

Keeping the data which characterize the piece-wise linear geometry, namely the triangulation together with the link length assignments, the coordinate gauge redundancy of the continuum theory is no longer present.

A given linear geometry can be changed in two ways: by changing either the lengths assigned to its links or the abstract triangulation itself. In the path integral that will be used, the topological manifold M on which triangulations are defined will be fixed, and so will the link length a, so that it can play the role of a UV lattice cutoff.

Figure 5: Two examples of torus triangulations.

(source: Wikipedia)

So each abstract triangulation is associated with a piece-wise flat geometry (a "simplicial manifold"), which is unique up to graph automorphisms. Thus performing the path integral amounts to summing over the set of abstract triangulations of M, choosing a suitable gravitational action S[T]:

$$Z_a = \sum_{T} \frac{1}{C_T}\, e^{iS[T]}$$

where a refers to the cutoff, and $C_T$ is the order of the automorphism group of T (because the idea is to sum over piece-wise geometries, not over the triangulations themselves).

Using triangulations (and piece-wise linear geometries) in the way just described goes by the name of Dynamical Triangulation (DT).

Observables One can use the partition function $Z_a$ to calculate the expectation values of certain observables O as:

$$\langle O \rangle_a = \frac{1}{Z_a} \sum_T \frac{1}{C_T}\, e^{iS[T]}\, O[T]$$

and relate these regularized observables to observables defined in the continuum according to standard scaling relations of the type:

$$\langle O \rangle_a = a^{-\Delta} \langle O \rangle_{\rm cont} + O(a^{-\Delta+1})$$

In general this is not always the scaling, but it is expected close to an RG fixed point.

2.3

lorentzian vs euclidean

In Minkowski space-time the metric is the well-known:

$$ds^2 = -dt^2 + dx^2 + dy^2 + dz^2$$

The Wick rotation consists in an analytic continuation of the time t on the complex plane, i.e. using τ ≡ it as the time, redefining the other quantities in agreement with this transformation. Therefore the metric becomes the Euclidean one:

$$ds_E^2 = d\tau^2 + dx^2 + dy^2 + dz^2$$



The other quantities redefined⁴ are:

$$x \to x_E \equiv (\tau \equiv it, \mathbf{x}) \qquad p \to p_E \equiv (ip^0, \mathbf{p})$$

$$iS \equiv i\int d^4x\, \mathcal{L}(x) \to -S_E \equiv -\int d^4x_E\, \mathcal{L}_E(x_E)$$

$$A_\mu(x) \to A_{E,\mu}(x_E) \equiv \big(iA^0(x(x_E)),\, \mathbf{A}(x(x_E))\big) \qquad \gamma^\mu \to \gamma_E \equiv (-\gamma^0, i\boldsymbol{\gamma})$$

where the reasons behind these definitions are:

• the 4-momentum is defined to preserve the scalar products $x \cdot p = x_E \cdot p_E$ (where on the lhs there is the Minkowski product, and on the rhs the Euclidean one);

• the definition of the action is simply derived, defining $\mathcal{L}_E$ so that the potential part (the non-derivative one) is unchanged (while the kinetic one changes sign in the case of canonical terms, resembling the form of a Hamiltonian);

• the fields are redefined to transform according to their tensor structure;

• the gamma matrices are also redefined according to their "vector" structure, but with an additional i to make the Dirac Lagrangian real and to account for the extra minus sign $i\slashed{\partial} \to -\slashed{\partial}_E$ (where on the rhs both the slash and the product are Euclidean).

Note that the Wick rotation is a counterclockwise rotation in the complex plane of time. The sign is important: it is chosen to be consistent with the Feynman prescription for the time-ordered propagator.

Figure 6: Wick rotation in the complex time plane.

4 Here the notation is not so clear: the → means that in the equations the quantities on the left have to be replaced with those on the right. For example $\slashed{p} = \gamma^\mu p_\mu \to \slashed{p}_E = \gamma_E \cdot p_E = i(-\gamma^0 p^0 + \boldsymbol{\gamma}\cdot\mathbf{p}) = i\gamma^\mu p_\mu$, so $\slashed{p}_E = i\slashed{p}$.

Graviton There is a problem in performing the Wick rotation when one has to deal with gravity, because there is no known general map between Lorentzian metrics $g^{(M)}_{\mu\nu}$ and real positive metrics $g^{(E)}_{\mu\nu}$ such that the associated EH actions satisfy:

$$iS_M = i\int \sqrt{-g^{(M)}}\, R(g^{(M)}_{\mu\nu}) \to -\int \sqrt{g^{(E)}}\, R(g^{(E)}_{\mu\nu}) \equiv -S_E$$

Intuitively the motivations of this issue are:

• there is no preferred time in a Euclidean space;

• the Wick rotation doesn't commute with diffeomorphisms;

• Wick rotating the metric, if performed according to its tensor structure as for spin-1 fields, in general produces a complex and not a real metric: so the path integral would not be guaranteed to be real.

In numerical simulations it is very useful to employ the Euclidean version of the path integral (instead of the Lorentzian one), because if the weight is a complex phase the sum can be numerically unstable. For example, if the action varies rapidly in a certain region (with respect to the observables' values) the phases tend to cancel among themselves, and small numerical errors can become relevant, because one could end up considering differences of almost equal numbers (which suffer in numerical precision). Instead with Euclidean weights the addends all have the same sign (if the observables' values have the same sign), and if the action varies rapidly there will be some smaller and some bigger weights, but no oscillating phases.

Moreover in this way the weights in the path integral may be interpreted as a probability distribution $P[\varphi] = e^{-S_E[\varphi]}/Z$, exactly as happens for the Gibbs ensemble in statistical mechanics. So the definition of observables is:

$$\langle O \rangle = \int \mathcal{D}\varphi\; O[\varphi]\, P[\varphi]$$

or, for discrete degrees of freedom,

$$\langle O \rangle = \sum_\varphi O[\varphi]\, P[\varphi]$$

where ϕ is now a random variable distributed as P[ϕ].
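For discrete degrees of freedom the last formula can be realized by sampling; a minimal Metropolis sketch for a toy one-variable "system" with an assumed action S[ϕ] = ϕ/2 (both the system and the action are illustrative assumptions; this anticipates the methods of chapter 3):

```python
import math, random

# Sample a discrete degree of freedom phi with probability
# P[phi] = exp(-S[phi]) / Z via Metropolis, then estimate <phi> as
# a sample average.  Toy one-variable system, purely illustrative.
random.seed(1)

def S(phi):
    return 0.5 * phi                 # toy Euclidean action

states = list(range(8))
Z = sum(math.exp(-S(p)) for p in states)
exact = sum(p * math.exp(-S(p)) for p in states) / Z    # exact <phi>

phi, total, n = 0, 0.0, 200_000
for _ in range(n):
    trial = random.choice(states)                       # propose a new state
    if random.random() < math.exp(-(S(trial) - S(phi))):
        phi = trial                                     # Metropolis acceptance
    total += phi

print(exact, total / n)    # the two estimates agree within the MC error
```

With the oscillating Lorentzian weights no such probabilistic interpretation, and hence no such sampling, would be available.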

A similar procedure will be applied to the Regge action in CDT, and will be presented in section 2.4.3.



2.4

causal dynamical triangulation

In a previous section (section 2.2.1) the concept of triangulation has already been discussed. Here it will be discussed instead how the information on curvature is stored in such triangulations, even if their basic elements are forced to be flat, and in which way the triangulations considered are causal.

Curvature

The notion of curvature in CDT is based on that formulated in Regge calculus (Regge 1961): for a d-dimensional triangulation the curvature is encoded in the deficit angle around the (d − 2)-simplices.

Figure 7: Regge definition of curvature by missing angle.

(source: G. Clemente)

2-dimensional surfaces This idea may be intuitively understood considering a 2-dimensional surface. If a vertex (the (d − 2)-simplex in this context) is surrounded by 6 equilateral triangles, the hexagon they form is a planar polygon, while to close a circle of more or fewer than 6 triangles they have to be arranged in 3-dimensional space, and the surface element they represent has finite curvature radii. The deficit angle is:

$$\epsilon = 2\pi - n\,\frac{\pi}{3}$$

and its sign is in agreement with the usual convention on curvature (the sphere has positive curvature, because it needs fewer than 6 triangles per vertex, while a saddle point has negative curvature).
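The formula and its sign convention can be checked in a couple of lines (equilateral triangles in 2d, as in the text):

```python
import math

# Deficit angle around a vertex shared by n equilateral triangles in 2d.
def deficit(n):
    return 2 * math.pi - n * math.pi / 3

print(deficit(6))                      # ~0: six triangles tile the plane
print(deficit(5) > 0, deficit(7) < 0)  # positive (sphere-like), negative (saddle)
```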


Once it is possible to express the curvature with the triangulation's information, the EH action too can be reproduced using the triangulation's data. So starting from the continuum action:

$$S_{EH} = \underbrace{\frac{1}{16\pi G}\int d^dx\, \sqrt{-g}\, R}_{\text{total curvature}} - \underbrace{\frac{\Lambda}{8\pi G}\int d^dx\, \sqrt{-g}}_{\text{total volume}}$$

the following action is derived, expressed only in terms of the triangulation's data:

$$S_{\rm Regge} = \frac{1}{8\pi G}\left[\sum_{\sigma^{d-2} \in T} \epsilon_{\sigma^{d-2}}\, V_{\sigma^{d-2}} - \Lambda \sum_{\sigma^d \in T} V_{\sigma^d}\right]$$

where T is the triangulation, $\sigma^k$ is an index running over the k-simplices, $\epsilon_{\sigma^{d-2}}$ is the deficit angle around $\sigma^{d-2}$ and $V_{\sigma^k}$ is the k-dimensional volume of $\sigma^k$.

total curvature the total curvature term is derived following the Regge definition of curvature, where each $\sigma^{d-2}$ contributes with $2\epsilon_{\sigma^{d-2}} V_{\sigma^{d-2}}$;

total volume this is exactly the space-time volume of the manifold in the continuum, so it is reproduced in the triangulation setting by summing the volumes of the flat elements, the maximal simplices $\sigma^d$.

In order to find an action convenient for numerical computations it is useful to introduce dimensionless quantities to take the place of the dimensionful coefficients in the previous expression:

$$\tilde V_{\sigma^k} \equiv \frac{V_{\sigma^k}}{a^k} \qquad \kappa \equiv \frac{a^{d-2}}{8\pi G} \qquad \lambda \equiv \frac{\Lambda a^d}{8\pi G}$$

where a is an arbitrary length scale (e.g. it can be chosen to be the lattice spacing) and κ, λ are the new action parameters.

Therefore the final form of the action is:

$$S_{\rm Regge} = \kappa \sum_{\sigma^{d-2} \in T} \epsilon_{\sigma^{d-2}}\, \tilde V_{\sigma^{d-2}} - \lambda \sum_{\sigma^d \in T} \tilde V_{\sigma^d}$$
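In d = 2 with equilateral triangles this action can be evaluated directly from the vertex coordination numbers and the triangle count; a sketch with toy couplings and a toy triangulation (both assumptions made for illustration):

```python
import math

# Dimensionless 2d Regge action for an equilateral triangulation (a = 1):
# S = kappa * sum_v eps_v * V_v - lam * sum_t V_t, with vertex volumes
# V_v = 1 and triangle area V_t = sqrt(3)/4.  kappa, lam are toy values.
def regge_action_2d(coordination, n_triangles, kappa=1.0, lam=0.1):
    curvature = sum(2 * math.pi - n * math.pi / 3 for n in coordination)
    volume = n_triangles * math.sqrt(3) / 4
    return kappa * curvature - lam * volume

# boundary of a tetrahedron (topology S^2): 4 vertices of coordination 3
# and 4 triangles; the curvature term gives 4*pi = 2*pi*chi(S^2)
print(regge_action_2d([3, 3, 3, 3], 4))
```

Consistently with the Gauss-Bonnet theorem discussed next, the curvature part of this example depends only on the topology, not on the number of triangles used.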

gauss-bonnet theorem In 2 dimensions the overall integral of the Ricci scalar R (the total curvature) is a diffeomorphism invariant, so the first term in the Regge action is a constant if the sum is performed over surfaces with the same topology; thus it will be ignored in numerical simulations (where only action differences are relevant).



Causality

To restrict the set of triangulations to those which can describe a Lorentzian geometry it is necessary to introduce a further requirement: the triangulations considered have to exhibit a sliced structure.

A triangulation is sliced if it is composed of a set of spatial hypersurfaces (the slices), which are themselves (d − 1)-dimensional triangulations, connected by suitable sets of d-dimensional simplices.

Figure 8: Comparison between a sliced triangulation (on the right) and a non-sliced one (on the left), with their respective building blocks.

(source: adapted from Jordan and Loll 2013)

The triangulations considered are constrained to have the same topology for each spatial slice. Let a space slice be homeomorphic to a manifold S; then the whole triangulation is homeomorphic to S × S¹, having chosen periodic boundary conditions in time.

Differently from the general case, in sliced triangulations there exists a preferred time: it is the integer label of the slices that acts as a time itself. However this doesn't correspond to a gauge fixing; indeed the restriction operated is on the set of abstract triangulations, which are not correlated with the choice of coordinates, because they describe only the adjacency relations among the space-time elements.

dynamical triangulations CDT is the evolution of the concept of DT. The need for this step forward comes from two main issues discovered during the study of DT itself:

• the first one is the concrete a posteriori observation that DT has no appropriate phase that can describe the observed universe;

• the second one is the difficulty in defining observables related to the time evolution, because the absence of an enforced foliation makes it impossible to define a consistent notion of causality in the general context.


2.4.1 Links

Furthermore the sliced structure allows one to distinguish between two types of links:

space-like those which connect two vertices of the same slice;

time-like those which connect vertices on two adjacent slices.

Being able to separate links into these two classes makes it possible to select a different length for each kind. So the space-like ones are chosen to have squared length (i.e. interval) $a^2$, while the time-like squared length will be $-\alpha a^2$ (the α parameter will also be useful in the next section 2.4.3, in the discussion of the Wick rotation in triangulations).

In this way it is possible to assign to each d-simplex (the maximal ones) a flat metric, as if it were a piece of (embedded) Minkowski space.

2.4.2 k-Simplices

In addition to imposing that links connect vertices on the same or on adjacent slices, this constraint is extended to all k-simplices, limiting the number of possible simplices of different kinds. With this further restriction it is feasible to label k-simplices as (m, k + 1 − m), where m is the number of vertices of that simplex that lie on the slice with the smaller time label.

d - simplices Maximal simplices (d-simplices) cannot have m = 0, because they are always space-time volume elements, so they cannot be purely spatial. Example 2.4.1.

2d so in two dimensions there will be only:

• 2 types of triangles: (2, 1) and (1, 2)

• 2 types of links: (2, 0) space-like, and (1, 1) time-like (when m = 0 it is (m, k − m) ∼ (k − m, m), because such simplices lie on only one slice)

4d and in four dimensions:

• 4 types of pentachorons: (4, 1), (3, 2), (2, 3) and (1, 4)

• 4 types of tetrahedrons: (4, 0), (3, 1), (2, 2) and (1, 3)

• 3 types of triangles: (3, 0), (2, 1) and (1, 2) (notice that these are obviously the same as in 2D, with the addition of the purely spatial one)

• the same 2 types of links: (2, 0) space-like and (1, 1) time-like
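The counting of Example 2.4.1 can be reproduced programmatically; the following is a hedged sketch (the function name and the identification rule for purely spatial simplices are inferred from the example above, not from the thesis code):

```python
def simplex_types(n_vertices, maximal=False):
    """Allowed (m, n - m) labels for a simplex with n_vertices vertices.

    m counts the vertices on the slice with the smaller time label.
    Purely spatial simplices are identified, (0, n) ~ (n, 0), and maximal
    simplices (space-time volume elements) can never be purely spatial.
    """
    types = []
    for m in range(n_vertices + 1):
        n = n_vertices - m
        if maximal and 0 in (m, n):
            continue  # maximal simplices cannot lie on a single slice
        # identify the two purely spatial labels with a single representative
        pair = (max(m, n), min(m, n)) if 0 in (m, n) else (m, n)
        if pair not in types:
            types.append(pair)
    return types

# Reproduce the 2D and 4D countings of Example 2.4.1:
assert simplex_types(3, maximal=True) == [(1, 2), (2, 1)]  # 2D triangles
assert simplex_types(2) == [(2, 0), (1, 1)]                # links
assert len(simplex_types(5, maximal=True)) == 4            # 4D pentachorons
assert len(simplex_types(4)) == 4                          # 4D tetrahedrons
assert len(simplex_types(3)) == 3                          # 4D triangles
```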


Figure 9: Some examples of kinds of d-simplices.

Regge action rewritten

It is useful to rewrite the Regge action in terms of simple properties of the triangulation, such as the total numbers of the various kinds of simplices.

This is trivial for the total volume term, because it’s simply proportional to the total number of d-simplices (maximal ones).

missing angle While it may seem less intuitive, for the missing-angle term the situation is not so different: the key fact is that only the overall sum of angles is needed, and not a more complicated function of the whole set of σd−2.

As usual it is simple to visualize in 2D, but there is no further difficulty in proving it in higher dimensions.

Indeed, with reference to the two-dimensional case, the missing angle at a given vertex is related to the number of adjacent triangles and to their orientation; but whenever a triangle is present, it contributes with all of its vertices (hence all of its angles) to the global sum.

\begin{align*}
\sum_{\sigma_{d-2}\in T} \varepsilon_{\sigma_{d-2}}\, V_{\sigma_{d-2}}
&= V_{\sigma_{d-2}} \sum_{\sigma_{d-2}\in T} \biggl( 2\pi - \sum_{\sigma_d \ni \sigma_{d-2}} \theta_{\sigma_d}(\sigma_{d-2}) \biggr) \\
&= V_{\sigma_{d-2}} \biggl( \sum_{\sigma_{d-2}\in T} 2\pi \;-\; \sum_{\sigma_d \in T} \sum_{i} \theta_{\sigma_d,\,i} \biggr)
= V_{\sigma_{d-2}} \biggl( \sum_{\sigma_{d-2}\in T} 2\pi \;-\; \sum_{\sigma_d \in T} \pi \biggr)
\end{align*}
where $\varepsilon_{\sigma_{d-2}}$ is the missing angle, the index $i$ runs over the angles of $\sigma_d$, and the last equality holds in $d = 2$, where the three interior angles of each triangle sum to $\pi$.
