
Department of Physics
Master's Thesis
Academic Year 2018/2019

A Stochastic Approach to Nucleosynthesis and Turbulent Mixing Mechanisms in Classical Nova Thermonuclear Runaways

Author: Giovanni LEIDI
Supervisor: Prof. Steven N. SHORE


Summary

Classical Novae (CNe) are explosions occurring in short-period (< 1.5 days) binary systems composed of a degenerate compact star (a CO or ONe white dwarf) and a low-mass main sequence (MS) or only slightly evolved star. When the MS companion overfills its critical tidal (Roche) lobe, the external hydrogen-rich layers flow through the inner Lagrangian point L1. This flow collimates into a stream with high angular momentum and ultimately forms and feeds an accretion disk around the white dwarf. A fraction of this material, whose composition is expected to be solar (at least for disk population systems), spirals toward the white dwarf, where it is gradually compressed and heated by the high surface gravity. Pressures and densities at the base of the accumulated layer become so high that the envelope turns degenerate. As temperatures rise, nuclear reactions initiate (mostly pp chains and proton captures on the CNO group), increasing the thermal energy. However, since the electrons are governed by a degenerate equation of state, the envelope cannot hydrostatically readjust to this energy excess through expansion. The baryons, whose equation of state obeys the perfect gas law, begin convecting as a consequence of the internal energy release, and a subsonic deflagration front propagates outward, which soon becomes structurally unstable (closer to what happens during the helium flash than to normal nuclear burning). When the temperature at the base of the envelope finally reaches the Fermi level (0.03-0.05 GK), the expansion begins, but at this point the average nuclear burning time scale is much shorter than the sound crossing time and a thermonuclear runaway (TNR) develops. Within this interval of time, considerable amounts of unstable nuclei such as ¹³N, ¹⁴O, ¹⁵O and ¹⁷F are created. The subsequent energy release by these β decays is sufficient to lift the accreted envelope (10⁻⁷-10⁻⁴ M⊙), leading to an explosion, with typical ejecta velocities of several thousand km s⁻¹ (the detailed dynamical evolution and the total amount of mass ejected depend on the white dwarf mass and composition).

Recent 2D and 3D models have shown that during the TNR stage, convective transport of energy becomes turbulent. Turbulence introduces a broad range of time and length scales for element mixing that can influence the nucleosynthesis of some fragile species, such as ⁷Be, which is fundamental for calculating the final lithium yield. In the context of galactic chemical evolution, the reliability of lithium as a cosmological indicator depends on the efficiency of CNe in processing this element.

This thesis uses the results of new 3D dynamical simulations to model mixing processes with the velocity probability distribution functions, along with 1D hydrodynamical energetic profiles of the explosion, to better treat light element nucleosynthesis from the beginning of the TNR to the latest stages of mass ejection. A post-processing approach is used to study two nova models: 1 M⊙ CO and 1.25 M⊙ ONe. This method assumes that the overall description of the energetics and dynamics given by 1D models is essentially correct, but that the more detailed information on the velocity distribution arising from 3D calculations allows for a more physically realistic simulation of the nucleosynthesis. This assumption is valid if the final yields of the most important isotopes contributing to the energy generation (e.g. the CNO group) do not deviate much from the results of 1D models. This can be verified within the approach, for self-consistency. To this end, nuclear networks are computed on top of the temperature and density profiles,


while turbulent mixing rates are directly sampled from the 3D velocity spectra to get the diffusive evolution and the fraction of mass exchanged between adjacent meshes. Both processes are treated with a fully stochastic algorithm, the Stochastic Simulation Algorithm (a.k.a. the Gillespie algorithm), widely and successfully used in kinetic chemistry over the last 40 years. These simulations show that, after ⁷Be is created, it is broadly spread within the convectively unstable envelope. However, the rare but high velocity events coming from the non-Gaussian wings of the distribution keep bringing ⁷Be into the inner and hotter zones, where it is destroyed by proton captures. This restricts the final lithium overabundance to a range from 0.1 to 10 times solar, contrary to what is predicted by most nova simulations to date (100 to 10000 times solar), which are still based on the mixing-length theory (MLT) prescription for convectively unstable media. However, not only has MLT been developed for describing hydrostatic systems, which CNe are not, but it is also incapable of predicting the natural velocity spectrum of turbulent eddies. Any modification to the so-called mixing-length parameter (α) would not correctly encompass the physics behind these phenomena and would be potentially misleading.

The thesis is structured as follows:

In Chapter 1, the theory regarding TNRs and Classical Nova explosions is reviewed. In Chapter 2, the Stochastic Simulation Algorithm for diffusive-reacting systems is described and implemented in a post-processing method, which is capable of coupling nucleosynthesis and turbulent mixing mechanisms in Classical Novae, with the help of 1D profiles of the explosion and the time dependent velocity spectrum resulting from 3D models.

In Chapter 3, the post-processing approach is applied to two nova systems: 1 M⊙ CO and 1.25 M⊙ ONe. First, an extensive nuclear network hosting hundreds of isotopic species, ranging from ¹H to ⁴⁰Ca, is used to evaluate the differences between the post-processed models and 1D outputs. Lithium production efficiency is then tested against variations in the thermodynamic profiles, accretion stage nucleosynthesis, and the possible presence of an unburnt helium layer on top of the white dwarf. Finally, the systematic effects of the parametric representation and of nuclear rate uncertainties are constrained.

In Chapter 4, the main results are summarized and possible directions for further work on Classical Novae are suggested.

Appendix A contains the code used for the stochastic description of turbulent mixing and nucleosynthesis.

In Appendix B the nuclear reactions of the extensive network used in Chapter 3 are listed.


Contents

Summary

1 Thermonuclear Runaways in Classical Nova Explosions
  1.1 Hydrodynamical modeling of the Thermonuclear Runaway Stage
  1.2 Classical Nova Nucleosynthesis
  1.3 Necessity of Turbulent Mixing Effects on Light Isotopes Production

2 Methods and Algorithms
  2.1 Stochastic Chemical Kinetics
    2.1.1 The Chemical Master Equation
    2.1.2 The Stochastic Simulation Algorithm
          Example: Decaying System
    2.1.3 Non-Stirred Systems
          Example: Thermal Diffusion
          Example: Diffusive-Reacting System
  2.2 Stochastic Simulation Algorithm for Nuclear Networks
    2.2.1 Application: ppI chain
    2.2.2 Time Dependent Profiles
  2.3 Post-Processing Approach to Classical Novae
    2.3.1 123 Approach
    2.3.2 321 Approach: Post-Processed Profiles
    2.3.3 Turbulent Mixing Representation
    2.3.4 Algorithm Outline

3 Post-Processing Results
  3.1 Post-Processing Nucleosynthesis Simulations
  3.2 Lithium Production in Classical Novae
    3.2.1 Dependence on Thermodynamic Profile Variations
    3.2.2 Light Element Nucleosynthesis During the Accretion Stage
    3.2.3 Lithium Yield Dependence on ³He Left-Over Abundance from the Accretion Stage
    3.2.4 Dependence of the Lithium Yield on the He/H Ratio
  3.3 Systematic Effects of Parametric Representations and Nuclear Rates Uncertainties

4 Conclusions

A Routines
  A.1 Main.f90
  A.2 lib.h

B Nuclear Network


Chapter 1

Thermonuclear Runaways in Classical Nova Explosions

In stars, a thermonuclear runaway (TNR) results from a positive feedback between nuclear energy generation and thermal energy content (see, e.g., Mocák et al., 2009). Under normal burning conditions, where the system is in both hydrostatic and thermal equilibrium and its equation of state resembles the perfect gas law, an excess in energy deposition increases both temperature and pressure. This drives an expansion that cools the system and decreases the rate of nuclear energy generation, which is very sensitive to temperature variations (Iliadis, 2015), leading to a new equilibrium configuration. This process acts like a stellar thermostat and is the basis for evolution theory calculations (see, e.g., Kippenhahn and Weigert, 1990; Salaris and Cassisi, 2005; Iben, 2012). In contrast, if the pressure is almost independent of temperature, as happens in a degenerate medium supported by the electrons, the increase in nuclear energy production does not cause the expansion and the subsequent cooling. All the energy is converted into thermal content of the ions, which further increases nuclear energy generation, and the system enters a vicious cycle. As the temperature continues to rise, it eventually reaches the Fermi level, but at that point the timescale set by nuclear burning can be much shorter than the time required for hydrostatic adjustment, leading to a supersonic expansion and potentially producing an explosion.

Even though classical nova outbursts had been observed for centuries, their thermonuclear origin was only hypothesized in the 1950s and 1960s (see, e.g., Schatzman, 1949; Schatzman, 1951; Gurevitch and Lebedinsky, 1957; Cameron, 1959). The binary nature of these systems (Walker, 1954; Kraft, 1963; Kraft, 1964; Paczyński, 1965), along with photometric and spectroscopic studies and hydrodynamical simulations (see Payne-Gaposchkin, 1958; Bode and Evans, 2008; José, 2016), has led to the currently accepted scenario, in which classical novae result from accretion of hydrogen-rich material onto the surface of a CO or ONe white dwarf in a close binary system with orbital periods mostly ranging between 1.5 and 30 hr (Diaz and Bruch, 1997; Warner, 2002). When the low-mass main sequence companion (usually a K-M type star, M ∼ 0.6 M⊙, see Paczynski, 1976) overfills its critical tidal radius (or Roche lobe), the outermost layers flow from the inner Lagrangian point L1 and form an accretion disk. Ultimately, a fraction of this material accumulates on the dwarf surface in ∼ 10⁵ yr, at typical rates of 10⁻¹⁰-10⁻⁸ M⊙ yr⁻¹, where it is gradually compressed and heated by the high surface gravity. Higher accretion rates would increase the temperature due to compressional heating, decreasing the degeneracy and potentially inhibiting the development of the TNR. The thermodynamical conditions are such that the accreted envelope soon enters an electron-degenerate state. As temperatures rise, as a consequence of continuing accretion, nuclear burning begins, mostly through pp chains and the CNO cycle.

Proton captures dominate nuclear reactions, since even for extragalactic sources there is no evidence that the gas is hydrogen-poor. The initiation of nuclear reactions in a degenerate medium leads to a TNR. In particular, ¹²C(¹H, γ)¹³N is the most important triggering reaction for the TNR (Starrfield et al., 1972; José and Hernanz, 2008), even if in low metallicity systems it is more likely to be ¹⁴N(¹H, γ)¹⁵O (José et al., 2007; Shen et al., 2009). When temperatures finally reach the Fermi level (around 0.02-0.05 GK), the expansion begins, but at this point nuclear burning timescales are much shorter than the sound crossing time (a few 10² s). During the final stages of the TNR, when temperatures are higher than 0.1 GK, nuclear activity is dominated by the hot CNO cycle (see Iliadis, 2015; José, 2016), which generates considerable amounts of short-lived β-decay isotopes, such as ¹³N (τ = 866 s), ¹⁴O (τ = 102 s), ¹⁵O (τ = 176 s) and ¹⁷F (τ = 93 s). In more massive ONe novae, nucleosynthesis further extends to the NeNa-MgAl region (José et al., 2001). The flux generated by nuclear reactions is so high that radiative and conductive processes cannot transport it efficiently to the overlying, colder regions, so vigorous convective (buoyant turbulence) motions set in (with typical turn-over time scales of a few seconds) that spread the newly created unstable species throughout the envelope. The subsequent decays generate enough energy to expel the envelope, leading to an explosion with typical ejecta velocities of a few thousand km s⁻¹ and ejected masses ranging between ∼ 10⁻⁷ M⊙ and ∼ 10⁻⁴ M⊙ (Starrfield et al., 1972; Starrfield et al., 2016; José, 2016). Since proton captures on these unstable species are almost inhibited, classical novae are expected to produce very peculiar overabundances (relative to solar) of ¹³C, ¹⁴N, ¹⁵N and ¹⁷O (see e.g. Zinner, 1998; Starrfield et al., 1998; José and Hernanz, 1998; José et al., 2006; Starrfield et al., 2016; Nittler and Ciesla, 2016). However, the rate of classical nova explosions inferred from observations within the Galaxy (30-50 yr⁻¹, Shafter, 2002; Shafter, 2017) reduces the role played by these systems in the context of galactic chemical evolution (José and Hernanz, 1998).

The detailed evolution from the accretion stages to mass ejection depends on the particular choice of initial conditions (Starrfield et al., 2012; Starrfield et al., 2016). For instance, higher dwarf masses achieve higher pressures at the base of the envelope (due to the M-R relation, see Salaris and Cassisi, 2005), leading to a more violent explosion. Moreover, in ONe novae the development of the TNR is delayed, since ¹²C(¹H, γ)¹³N releases less energy, so a higher mass is accreted, leading again to a more violent outburst with respect to the CO counterpart. Similar effects are due to lower dwarf luminosities (or temperatures), which increase the inward conductive flux and the overall degeneracy at the base of the envelope, and to lower accretion rates, which limit the temperature rise due to compressional heating.

1.1 Hydrodynamical modeling of the Thermonuclear Runaway Stage

Most hydrodynamical models to date are based on 1D, spherically symmetric and non-rotating simulations (Starrfield et al., 1972; Prialnik et al., 1978; José and Hernanz, 1998; Yaron et al., 2005; Denissenkov et al., 2013; Starrfield et al., 2016; Rukeya et al., 2017). These simulations are able to reproduce the overall energetic and thermodynamical history of the event along with a detailed nucleosynthesis, from the early hydrostatic accretion stages to mass ejection, and to reproduce the main observed properties of a nova outburst. However, these models face several limitations. For instance, even if the accreted material is expected to have solar composition (at least for disk population systems), the high metal content inferred in the observed ejecta (Z ∼ 0.2-0.5, see Livio and Truran, 1990; Shore et al., 1994; Gehrz et al., 1998; José et al., 2004) cannot be explained only by CNO-breakout reactions at the temperatures set by the TNR (0.1-0.4 GK). For decades, some sort of mixing mechanism between the accreted (solar) envelope and the underlying white dwarf has been thought to take place, such as diffusion-induced mixing (see, e.g., Prialnik and Kovetz, 1984; Fujimoto and Iben, 1992), shear mixing (Durisen, 1977; Kippenhahn and Thomas, 1978; MacDonald, 1983; Livio and Truran, 1987; Kutter and Sparks, 1987; Fujimoto, 1988), convective overshoot-induced flame propagation (Woosley, 1986) and mixing by gravity wave breaking on the white dwarf surface (Rosner et al., 2001; Alexakis et al., 2004), but none of these models can reproduce the metallicity enhancements. This observational constraint is usually addressed in the 1D models by artificially varying the chemical composition of the accreted solar material, mixing in various amounts of white dwarf matter without specifying any mechanism. This pre-mixing approximation is justified post hoc by the explosion evolution. Nonetheless, a higher initial abundance of ¹²C also increases the energy generated by ¹²C(¹H, γ)¹³N, which ultimately leads to a much less violent TNR (José et al., 2016).

Moreover, some issues regarding how the TNR actually initiates (probably a point-like ignition over some mass zones) and propagates throughout the envelope as a deflagration front cannot be addressed with 1D spherically symmetric codes.

To overcome this problem, multi-dimensional codes have been implemented since the 1990s to study mixing processes at the core-envelope interface during the TNR stage. In particular, Glasner and Livne, 1995 were the first to note buoyancy-driven vortical mixing induced by temperature fluctuations at the base of the envelope, but it was not until Casanova et al., 2010 that Kelvin-Helmholtz instabilities, induced by the buoyant convective motion triggered by the TNR, were proven to be efficient enough to dredge up high metal content material from the underlying white dwarf at levels that agree with observations (followed by Glasner and Truran, 2012; Glasner et al., 2014, even if these authors did not consider a realistic composition for the dwarf substrate).

Within the context of partial differential equations involving time derivatives, the discretization procedure (see Press et al., 1992; Richtmyer and Morton, 1994; Toro, 2009) can follow two main approaches, explicit or implicit. In explicit schemes, variables are updated by using quantities calculated at the previous time step. This method is simple to implement, but the time step ∆t must satisfy the Courant-Friedrichs-Lewy (CFL) condition (i.e. it has an upper limit), which requires that acoustic signals not cross more than one spatial resolution element within ∆t; otherwise the homogeneity of the zoning cannot be assured. In contrast, implicit schemes are unconditionally stable, but they are much harder to implement, since an iterative procedure is required to solve the system of differential equations. The multi-D models used to simulate classical nova explosions are based on codes that work with time-explicit schemes and mesh-refinement techniques to resolve critical features along the computational domain (FLASH and VULCAN, see José, 2014 for a recent review). Since the time steps are usually very short (as imposed by the CFL condition), they can only be used to study limited time domains. Furthermore, to limit the overall cost of the simulation, these models work with restricted nuclear networks, which consider just a handful of isotopic species to account for the energy generation, so they are not suitable for studying the nucleosynthesis in detail.
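For reference, the CFL constraint for an explicit hydrodynamical scheme can be written in its standard form (added here as an illustration, not quoted from the thesis; C_CFL denotes the Courant factor, ∆x the size of the resolution element, u the local fluid velocity and c_s the sound speed):

$$\Delta t \;\le\; C_{\rm CFL}\,\frac{\Delta x}{|u| + c_s}, \qquad C_{\rm CFL} \lesssim 1.$$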

1.2 Classical Nova Nucleosynthesis

Classical nova explosions can erupt on either a CO or an ONe white dwarf, which is the result of the post-AGB evolution of a star with initial mass ≲ 11 M⊙ (see, e.g., Ritossa et al., 1996; Garcia-Berro et al., 1997; Ritossa et al., 1999). In binary systems, these objects may experience a common envelope event that shrinks the orbit and eventually strips the outermost layers of ¹H and ⁴He from the compact object (see, e.g., Paczynski, 1976; Iben and Tutukov, 1985; Halabi et al., 2018). Left-over CO cores are composed of ¹²C and ¹⁶O, while ONe cores are mainly ¹⁶O and ²⁰Ne, with minor traces of ²³Na, ²⁴Mg, ²⁵Mg and ²⁷Al (Ritossa et al., 1996). Stellar evolution calculations have shown that CO dwarfs should be less massive than those with ONe composition, with a threshold for the final mass set somewhere near 1.1 M⊙ (Gil-Pons et al., 2003). The initial mass threshold is also not well constrained (∼ 7-8 M⊙, Althaus et al., 2010; Karakas and Lattanzio, 2014).

The difference in mass means that CO novae should achieve lower temperatures and densities than their ONe counterparts during the final stages of the TNR. For the same reason, though, the expansion phase should be longer for CO than for ONe white dwarfs, which also means a longer nucleosynthesis duration. Moreover, the higher concentration of ¹²C in CO systems also leads to the development of less violent TNRs.

For CO models, the cold accretion stage is dominated by pp chains (mostly ¹H(¹H, e⁺νₑ)²H) with a contribution from the pep reaction (¹H(¹H e⁻, νₑ)²H, see Starrfield et al., 2009) and ¹²C(¹H, γ)¹³N, followed by ¹³N decay into ¹³C. When temperatures reach the Fermi level (0.02-0.05 GK), nuclear energy is mainly released by the cold CNO cycle (Iliadis, 2015), especially ¹²C(¹H, γ)¹³N, ¹³N(β⁺)¹³C and ¹³C(¹H, γ)¹⁴N. By the time the TNR is fully developed (T ∼ 0.1 GK), proton capture on ¹³N is faster than β⁺ decay and the hot CNO1, CNO2 and CNO3 cycles dominate the chemical evolution (see, e.g., Iliadis, 2015; José, 2016). Nuclear activity in the A > 20 region is mainly restricted to proton capture on ²⁰Ne and, to a minor extent, ²¹,²²Ne, ²³Na, ²⁴,²⁵,²⁶Mg and ²⁷Al. At the thermodynamic conditions found in novae, ²⁶Al must be treated as two separate species: the long-lived ground state (τ₁/₂ = 0.72 Myr) and a short-lived isomeric state (τ₁/₂ = 6.4 s). This distinction is fundamental for a correct study of ²⁶Al in nova explosions (José et al., 2001).

In ONe novae, the evolution is quite similar during the accretion stage, but the higher temperatures achieved during the final stages of the TNR (usually T ∼ 0.2-0.4 GK, depending on the particular system) make considerable nuclear activity beyond neon possible. In fact, it extends up to ⁴⁰Ca for the most massive ONe novae (Casanova et al., 2018). The higher initial concentration of ¹⁶O increases the rate of ¹⁶O(¹H, γ)¹⁷F, which is followed by ¹⁷F(¹H, γ)¹⁸Ne. In the NeNa-MgAl region, the nuclear path starts with ²⁰Ne(¹H, γ)²¹Na, followed by ²¹Na(¹H, γ)²²Mg. In the SiCa mass region, important leakages from the NeNa-MgAl region are mainly due to ²⁶gAl(¹H, γ)²⁷Si and ²⁷Al(¹H, γ)²⁸Si. The principal path to higher mass element synthesis is through ³⁰P(¹H, γ)³¹S, followed by ³¹S(¹H, γ)³²Cl and the subsequent decay ³²Cl(β⁺)³²S, or by ³¹S(β⁺)³¹P followed by ³¹P(¹H, γ)³²S, depending on the thermodynamic conditions.

A detailed discussion of nova nucleosynthesis is given in Chapter 3 for two systems (1 M⊙ CO and 1.25 M⊙ ONe).

1.3 Necessity of Turbulent Mixing Effects on Light Isotopes Production

Light element nucleosynthesis starts with ³He(⁴He, γ)⁷Be, which increases the ⁷Be concentration during the accretion stage. Both ⁶Li and ⁷Li are very fragile at temperatures > 0.01 GK and get depleted before the beginning of the TNR. As temperatures rise above 0.05 GK, both ⁷Be(¹H, γ)⁸B and ³He(³He, 2¹H)⁴He limit ⁷Be production. At T > 0.1 GK, the photo-disintegration reaction ⁸B(γ, ¹H)⁷Be is particularly efficient and balances the destruction reactions, resulting in a pseudo-equilibrium abundance for ⁷Be, which is then spread throughout the accreted envelope by convective motions (see Hernanz et al., 1996). At the beginning of the expansion, when both temperatures and densities start decreasing, the only important reaction for beryllium nucleosynthesis is ⁷Be(¹H, γ)⁸B. Finally, after the ejection stage, the remaining ⁷Be decays through electron capture (τ₁/₂ ∼ 53 days) to ⁷Li.

Recent 3D models of the TNR stage have shown that convective transport of energy in the semi-degenerate envelope enters a not fully developed turbulent regime, which presents an anisotropic component along the gravity axis and is continuously driven by nuclear energy production, mostly proton captures on CNO nuclei (see Fig. 1.1, Casanova et al., 2016). Turbulent eddies undergo a vorticity cascade (whose energy transport is still directed downscale) along with a broad spectrum of time and length scales for fluid motions, which alters energy transport and compositional mixing. The velocity spectrum (see Fig. 1.2) is characterized by broad wings which account for less frequent but large-deviation events that partially mix the hot inner regions and the cold outer ones. Light isotopes, whose Coulomb barriers against proton capture are the weakest, should be strongly affected by this property. The usual picture given by the Cameron-Fowler mechanism (see Cameron, 1955; Cameron and Fowler, 1971), in which ⁷Be is created at the base of the envelope and then gently dredged up to where it eventually decays to ⁷Li, must be revisited. In a more realistic physical representation, during the final stages of the TNR the pseudo-equilibrium abundance of ⁷Be is spread throughout the envelope, but once it reaches the outer regions it is gradually forced to mix back into high-temperature zones, where it is eventually destroyed. Moreover, even if the subsequent decays of ¹⁴O and ¹⁵O, which are not yet treated by 3D models, lower the temperature gradient and shut down convection, turbulent mixing is expected to still be very efficient, at least until the envelope detaches from the white dwarf surface. The timescale for the decay of non-driven turbulence is set by molecular viscosity, and can be as long as months or years according to the thermodynamic conditions of the envelope (José, 2016), even if more 3D models of this dynamical phase are needed to better constrain the decay timescale. This further mixing, which is not included in 1D models, increases the number of proton capture reactions on ⁷Be. During this phase, temperatures are well below 0.1 GK and the ⁷Be abundance is no longer replenished by photo-disintegration of ⁸B. Thus, the final lithium overabundance relative to solar, as predicted by most published 1D studies of classical novae (100-10000, see, e.g., Arnould and Norgaard, 1975; Starrfield et al., 1978; Boffin et al., 1993; Hernanz et al., 1996; José and Hernanz, 1998; Starrfield et al., 2016; Starrfield et al., 2019), should be considerably reduced.

A more precise treatment of turbulent mixing and light element production is not important for the overall dynamical and energetic history of the explosion. These isotopes have very low abundances (< 10⁻⁶ in mass fraction), so they do not contribute in any way to nuclear energy generation. However, understanding lithium production in classical nova explosions is important for cosmological parameter studies. In particular, ⁷Li is the only light element (among Li, Be and B) synthesized during Big Bang nucleosynthesis (BBN, see Alibes et al., 2002; Fields et al., 2014). Its primordial abundance has been calculated within the range 2.66-2.73 dex (Coc and Vangioni, 2014), and it is three times higher than the Spite plateau value (2.2 dex) inferred from observations of halo dwarfs (see, e.g., F. Spite and M. Spite, 1982; Bonifacio and Molaro, 1997). This discrepancy is better known as the lithium problem (see, e.g., Chiappini and Matteucci, 2001; Matteucci, 2010). During the history of the Galaxy, the ⁷Li abundance has increased to its present value (3.26 dex, see Lodders and Palme, 2009), as seen in the sharp rise from the Spite plateau in the (A(Li), [Fe/H]) diagram. Lithium is produced both by Galactic cosmic rays (GCRs, see, e.g., Romano et al., 2001; Prantzos, 2012) and by stellar sources (mainly low-mass stars, novae and type II supernovae, see Rukeya et al., 2017 and references therein). For classical novae, considering the predicted overall lithium overabundance, the ejected masses (10⁻⁵-10⁻⁴ M⊙) and the rate of explosive events observed in the Galaxy, the amount of lithium produced by these systems during the history of the Galaxy should be ∼ 20 M⊙. Since the Galactic content of ⁷Li is estimated to be about 150 M⊙ (Hernanz et al., 1996), their role cannot yet be ruled out. Should turbulent mixing mechanisms turn out to be very effective in reducing the final lithium yield, classical novae may prove to be negligible contributors to the Galactic store and unimportant in resolving any cosmological tensions.
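Schematically, the estimate quoted above can be written as a single product (a restatement of the reasoning in the previous paragraph, not an independent calculation):

$$M_{\rm Li}^{\rm novae} \;\sim\; R_{\rm nova}\;\times\; t_{\rm Gal}\;\times\; \langle M_{\rm ej}\rangle\;\times\; X_{\rm ej}(^7{\rm Li}),$$

where, with R_nova ≈ 30-50 yr⁻¹, t_Gal of order 10¹⁰ yr, ⟨M_ej⟩ ≈ 10⁻⁵-10⁻⁴ M⊙ and the ⁷Li (or ⁷Be) ejecta mass fractions predicted by 1D models, the product gives the ∼ 20 M⊙ mentioned above, to be compared with the ∼ 150 M⊙ Galactic inventory.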

FIGURE 1.1: Spatial distribution of the ²⁰Ne mass fraction, as resulting from the 3D model of a 1.25 M⊙ ONe nova (2D slices in the xz plane, 635 s after the beginning of the simulation, Casanova et al. 2016).

FIGURE 1.2: Vertical velocity spectrum (frequency vs. v [km s⁻¹]) at the base of the envelope, around the time when T_base ≃ 0.14 GK, as resulting from the 3D model of a 1 M⊙ CO nova (FLASH; Casanova et al., 2016).


Chapter 2

Methods and Algorithms

Recent 3D simulations of classical nova explosions have shown that during the thermonuclear runaway stage (TNR), turbulent convection proceeds on a broad range of time and length scales for energy transport and compositional mixing (see, e.g., Casanova et al., 2016). The time dependent vertical velocity spectrum (see Fig. 2.1) shows broad wings associated with high mixing rate events that can link zones with very different temperatures and densities. These effects, which are not included in the mixing-length theory (MLT) representation of convectively unstable media (Biermann, 1932; Böhm-Vitense, 1958; Salaris and Cassisi, 2008), can be especially important for those isotopes involved in proton capture reactions and whose chance of surviving the TNR depends on how long they are exposed to the critical thermodynamic conditions found at the base of the envelope (e.g. Boffin et al., 1993). For instance, light isotopes such as ⁶Li, ⁷Li and ⁷Be are efficiently destroyed at temperatures ranging between 0.02 and 0.1 GK on the typical dynamical timescales dictated by the explosion (e.g. José and Hernanz, 1998). Therefore, the effects of turbulent mixing on nucleosynthesis need to be studied in more detail.

Multi-D models have also been fundamental to study the initiation of the TNR, the development of the deflagration front, and the time dependent dredge-up of metal-rich material from the underlying white dwarf (see Chapter 1). Because these simulations are computationally very expensive and cannot trace the final stages of the explosion, they work with minimal, optimized nuclear networks which suffice to obtain the energy generation. Instead, one-dimensional models can simulate the overall dynamical evolution from the early hydrostatic accretion phase to the final stages of the explosion, and the chemical evolution from ¹H up to ⁴⁰Ca in great detail. To explain the high metal content in most of the observed ejecta (e.g. Gehrz et al., 1998), given the impossibility of treating three-dimensional fluid-dynamical instabilities self-consistently in 1D, these models often assume that the accreted envelope has a chemical composition given by a mix of solar and white dwarf material. This mix, which renders the accreted envelope chemically homogeneous, is calculated with the so-called pre-mixing parameter (f_pre, see, e.g., José, 2016), defined as

$$X_{\rm acc} = \frac{X_{\rm WD} + f_{\rm pre}\, X_\odot}{1 + f_{\rm pre}} \qquad (2.1)$$

where X_acc is the resulting pre-mixed accretion composition and X_⊙ the solar one. However, a higher initial content of ¹²C increases the total energy released by the fastest reaction, ¹²C(p, γ)¹³N, accelerating the expansion and leading to a much less violent TNR (José et al., 2016). More importantly, most 1D models are still based on some version of MLT (José, 2016; Starrfield et al., 2016). This theory is based on the assumption that every thermodynamic quantity is constant within one pressure scale height H_p, defined (in hydrostatic equilibrium) as

$$H_p \equiv \frac{P}{\rho g} \qquad (2.2)$$

where P, ρ and g are the local pressure, density and gravity. When radiative diffusion is too slow to carry away a local excess of deposited energy (due to high opacities or high fluxes, or both), super-adiabatic temperature gradients are established and convective motions develop. The transported energy, whose origin is thermal, not kinetic, links two regions with different thermodynamic properties, so the scale on which convection globally operates must be greater than H_p. This "ignorance" is put into a parametric (not functional) form to account for the overall energetic effects, the mixing-length parameter (α). With this representation, the convective flux can be estimated as

$$F_c = \frac{1}{2}\,\rho\, C_p\, \Delta T\, v_{\rm conv}(\alpha) \qquad (2.3)$$

where C_p is the heat capacity at constant pressure, ∆T is the temperature excess and

$$v_{\rm conv}(\alpha) = \frac{1}{2^{3/2}}\,\alpha H_p \left[ -\frac{G m}{H_p\, r^2}\, (\nabla - \nabla_{\rm ad}) \left( \frac{\partial \ln \rho}{\partial \ln T} \right)_{P,\mu} \right]^{1/2} \qquad (2.4)$$

The problem is that MLT has been developed for treating hydrostatic systems, which classical novae certainly are not, and it is incapable of predicting the velocity distribution spectrum of turbulent eddies under any circumstances and physical conditions. In cases where convective turn-over motions happen on timescales similar to those of the fastest nuclear reactions, the usual procedure is to treat element mixing as a diffusive process (see e.g. Wood, 1974; Langer et al., 1985; Paxton et al., 2018) by solving

$$\left(\frac{\partial X}{\partial t}\right)_{\rm mix} = D_{\rm conv}\, \nabla^2 X \qquad (2.5)$$

D_conv can be interpreted as a diffusion coefficient for convective mixing processes, and it is often expressed as D_conv ≃ v_conv H_p. However, this representation ignores the variety of timescales that naturally arise from the statistical distribution of velocities in 3D models, which cannot furnish a unique diffusion coefficient. Moreover, this problem cannot be addressed with a systematic variation of the mixing-length parameter α, since it only determines the overall magnitude of the convective flux.

In this thesis, the effects that turbulent mixing may have on nucleosynthesis are studied by means of a post-processing approach (Wanajo et al., 1999; Iliadis et al., 2002; Denissenkov et al., 2013; Iliadis et al., 2018): nuclear networks are computed using the 1D density and temperature profiles produced by dynamical models, while the diffusive problem is treated with the 3D velocity distribution functions. This method is based on the assumption that the overall dynamical and thermodynamical evolution predicted by the 1D models is essentially correct, but the more precise information on mixing processes coming from multi-D models allows for a more physically consistent treatment of the nucleosynthesis without introducing unphysical and arbitrary parameters. The justification for this approach comes from the fact that the majority of mixing events happen in the core of the velocity spectrum, with values very similar to those predicted by MLT (see Fig. 2.1).

Both nuclear processes and mixing are treated with a fully stochastic algorithm (the Stochastic Simulation Algorithm, Gillespie, 1977), capable of simulating the different timescales for element migration set by convective turbulence and stiff networks in which reaction rates differ by several orders of magnitude, as in the case of classical nova explosions.

In §2.1 the theory behind stochastic chemical kinetics and the Stochastic Simulation Algorithm is reviewed and applied to generic diffusive-reacting systems.

In §2.2 the SSA is illustrated by solving nuclear reaction networks, using assumed time dependent thermodynamical profiles.


FIGURE 2.1: In blue: vertical velocity spectrum (frequency vs. v [km s⁻¹]) at the base of the envelope, around the time when T_base ≃ 0.14 GK, as resulting from the 3D model of a 1 M⊙ CO nova (Casanova et al., 2016). In dashed red: MLT prediction of the convective velocity from the 1D SHIVA model (José and Hernanz, 1998).

Finally, in §2.3, the implementation of the algorithm in the context of the post-processing approach is described.

2.1 Stochastic Chemical Kinetics

In principle, the time evolution of a reacting system can be described by tracing the positions and velocities of each particle in the ensemble and updating the population of each species whenever a reaction occurs. In this classical representation, the system should evolve deterministically, in that only the initial state is required to give a complete description at any later time. In practice, however, when dealing with particle reactions (e.g. atomic, molecular, nuclear), quantum effects cannot be neglected. In particular, the uncertainty principle ensures that the outcome of each reaction is only known in terms of probability distribution functions. For instance, the encounter between two particles can lead to simple scattering or to the formation of a bound system, each process weighted with a different probability. Even for internal particle changes a deterministic scheme is not feasible. For example, consider an atomic upper energy level i that can spontaneously decay to a set of lower states {j}. Even if these processes are completely spontaneous and independent of the external conditions, each transition has a different firing probability and a different decay rate, determined by the matrix element V_ij (Sakurai, 1994). Therefore, it is impossible to predict exactly not only which reaction will fire next, but also the time at which the next reaction will occur. It is clear then that generic reacting systems can evolve into different configurations with assigned probabilities, regardless of the nature of the reactions. Treating their dynamical evolution with a deterministic approach could lead to quantitatively and qualitatively wrong conclusions. A stochastic method is therefore required in order to account for the random fluctuations in the species populations and to consistently quantify the uncertainty that such stochasticity inevitably introduces.

2.1.1 The Chemical Master Equation

A reacting system is represented as composed of particle species S_i (i = 1, ..., N) with population numbers Q_i(t) ∈ ℕ that are functions of time t. Moreover, if the system is well-mixed and in thermodynamical equilibrium, particles spread all over the system volume V by construction and the vast majority of the interactions are elastic reactions. The velocity spectrum is represented by the Maxwell-Boltzmann distribution, depending only on temperature and particle mass. These considerations enormously simplify the problem, since it is possible to consider only those reactions that change the population numbers: Ω_j (j = 1, ..., K). Finally, a reaction j, whenever fired, changes each Q_i by a certain integer number ν_ji; these numbers form the so-called state-change matrix (an integer-valued K × N matrix, Gillespie, 1977).

Since the system does not evolve deterministically, it is not possible to give an analytical form that predicts the exact evolution of each single Q_i(t). One solution is to study the evolution in time of the probability of having a certain state Q = (Q₁(t), ..., Q_N(t)) at some time t, given the state Q₀ = (Q₁(t₀), ..., Q_N(t₀)) at a previous arbitrary time t₀. The only quantities required to compute this evolution are the propensity functions a_j (j = 1, ..., K), defined as the probability of having one reaction of type j fired throughout the ensemble in the infinitesimal interval of time [t, t+dt]:

$$dP_j([t, t+dt]) \equiv a_j(Q(t))\, dt \qquad (2.6)$$

The chemical master equation (CME) has been formulated precisely in order to solve this problem (see, e.g., A. McQuarrie, 1967; Gillespie, 1992a; Gardiner, 2009; Kampen, 1961):

$$\frac{\partial P(Q, t\,|\,Q_0, t_0)}{\partial t} = \sum_{j} \Big[ a_j(Q - \nu_j)\, P(Q - \nu_j, t\,|\,Q_0, t_0) - a_j(Q)\, P(Q, t\,|\,Q_0, t_0) \Big] \qquad (2.7)$$

where ν_j is the j-th row of the state-change matrix. Eq. 2.7 follows from the definition of the propensity functions and the rules of probability. P(Q, t|Q₀, t₀) gives an indication of how the knowledge of the system changes in time.

The CME requires specifying the propensity functions for each reaction. Usually, particle reactions are either unimolecular, where only the internal state of a component changes in time, or bimolecular, where two particles collide to form one or more products (higher order reactions are possible, but they are often a combination of unimolecular/bimolecular processes; one example is the 3α reaction, which is actually a two-step process: α + α → ⁸Be followed by ⁸Be + α → ¹²C + γ). For unimolecular processes such as β⁺ decays, since the number of changes in an arbitrary interval of time is distributed according to the Poisson distribution (Leo, 1994), the interval of time required for a change in the internal state of a single particle follows an exponential distribution:

$$\Delta t \sim \frac{1}{\tau}\, e^{-t/\tau} \qquad (2.8)$$

where the associated timescale τ is given by the Fermi golden rule (Bethe and Morrison, 1956). For a single particle (type i) undergoing the unimolecular reaction Ω_j, the probability of having such a change in the infinitesimal interval of time [t, t+dt] is dt/τ_j. If Q_i particles of type i are present at time t, the probability of having one such reaction in the whole ensemble is Q_i dt/τ_j. Thus, the propensity function for unimolecular processes is

$$a_j^{\rm uni} = \frac{Q_i}{\tau_j} \qquad (2.9)$$

or, expressed as a function of the reaction rate R_j ≡ 1/τ_j [s⁻¹]:

$$a_j = Q_i\, R_j \qquad (2.10)$$

For bimolecular reactions, the condition of thermodynamic equilibrium ensures the existence of a reaction rate constant c_j [s⁻¹], usually a function of temperature and density (or system volume), such that the probability of having one interaction between two particles is c_j dt (Gillespie, 1977):

$$c_j \equiv \langle \sigma_j(v)\, v \rangle\, V^{-1} \qquad (2.11)$$

where σ_j(v) is the interaction cross section for process j as a function of the relative velocity v between the colliding particles. For two interacting species (e.g. types m and n with populations Q_m and Q_n) the probability of having one reaction j in the system is Q_m Q_n c_j dt, so the propensity function for bimolecular processes is

$$a_j = Q_m\, Q_n\, c_j \qquad (2.12)$$

To prevent double counting for identical interacting particles:

$$a_j = \tfrac{1}{2}\, Q_m (Q_m - 1)\, c_j. \qquad (2.13)$$

By solving the CME it is possible to predict the time evolution of P(Q, t|Q₀, t₀) and of any quantity h(Q) that depends only on the populations. However, the CME can be viewed as a set of coupled ordinary differential equations (ODEs) (Gillespie, 1992a), one for each combination of population species (Q₁, ..., Q_N), and there are only a few cases for which it can be solved analytically. Studying the time evolution of ⟨h(Q)⟩ directly has no advantage, since it is defined as

$$\langle h(Q) \rangle = \sum_{Q} h(Q)\, P(Q, t\,|\,Q_0, t_0) \qquad (2.14)$$

In particular, the time evolution of ⟨Q⟩ can be inferred from

$$\frac{\partial \langle Q \rangle}{\partial t} = \sum_j \nu_j\, \langle a_j(Q) \rangle \qquad (2.15)$$

If {R_j} is a set of only unimolecular reactions, then ⟨a_j(Q)⟩ = a_j(⟨Q⟩) and the set of ODEs is closed. But if some of the reactions in the network are bimolecular, the correlation terms ⟨Q_m Q_n⟩ (which in general satisfy ⟨Q_m Q_n⟩ ≠ ⟨Q_m⟩⟨Q_n⟩) cannot be neglected in Eq. 2.15 and the set of ODEs for the moments of Q is not closed. In the particular case where the fluctuations in each species population are negligible (meaning ⟨Q⟩ ≃ Q) the system evolves deterministically and the set of equations becomes

$$\frac{\partial Q}{\partial t} = \sum_j \nu_j\, a_j(Q) \qquad (2.16)$$

Eq. 2.16 is also known as the reaction rate equation (RRE) and it has been used successfully in traditional kinetic chemistry since the mid 19th century (Jost, 1966). This deterministic approach, where species populations are not necessarily integers, works when the populations are so large that the fluctuations are asymptotically infinitesimal (Gillespie, 2007). However, when the populations are not very large or differ by orders of magnitude from each other, random fluctuations are not negligible and the CME must be solved instead.

2.1.2 The Stochastic Simulation Algorithm

There are no simple analytical solutions to the CME, but it is possible to simulate the histories of Q with a stochastic algorithm. The average of the realizations of such an algorithm should converge to the solution of Eq. 2.15. The stochastic simulation algorithm (SSA) has been formulated to solve the CME exactly, in the sense that the probability of occupying a point in the parameter space (Q_i, [t, t+dt]), as simulated by the algorithm, should resemble the solution of the CME (Gillespie, 1977; Gillespie, 2007). Instead of asking what the probability is of having the system in a certain state, the algorithm stochastically generates the time interval ∆t during which a single reaction occurs in the whole system. Finally, the algorithm samples the reaction of type j that takes place in that interval. Since both ∆t and j are random variables, their joint probability distribution p(∆t, j|Q, t) must be found. The probability of having the reaction j in the infinitesimal time interval [t+∆t, t+∆t+dt] is then

$$dP(\Delta t, j) = p(\Delta t, j\,|\,Q, t)\, dt. \qquad (2.17)$$

p(∆t, j|Q, t) follows from the laws of probability (Gillespie, 2007):

$$p(\Delta t, j\,|\,Q, t) = a_j(Q)\, \exp\!\left(-a_0(Q)\, \Delta t\right) \qquad (2.18)$$

where a₀ = ∑_j a_j.

One implementation of the SSA is the direct method (Gillespie et al., 2013):

• At time t compute the propensity functions a_j(Q);
• sample ∆t from an exponential distribution with rate a₀ = ∑_j a_j;
• calculate the firing probability of each single reaction j as p_j = a_j(Q)/a₀;
• sample the index of the next reaction, weighted by p_j;
• update the populations according to the fired reaction: Q → Q + ν_j;
• update the time: t → t + ∆t;
• reiterate the process until a certain imposed stopping condition is reached.

This algorithm can be applied to any reacting environment (e.g. cellular mechanisms, stellar interiors, prey-predator systems) with the condition that the reaction rates (c_j, R_j) remain constant in time and that species populations spread homogeneously within the volume V. For most physical applications, this condition implies that the temperature does not change in time (or in space) and that chemical gradients are negligible, so it is a single box model with V fixed.
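To make the loop above concrete, a minimal sketch of the direct method is given here (an illustration only, not the Fortran routines of Appendix A; the function name ssa_direct and its arguments are chosen for this example):

```python
import numpy as np

def ssa_direct(q0, nu, propensities, t_end, rng=None):
    """One realization of the direct-method SSA (Gillespie, 1977).

    q0           : initial populations, integer array of shape (N,)
    nu           : state-change matrix, integer array of shape (K, N)
    propensities : function mapping a population vector q to the K values a_j(q)
    t_end        : stopping time
    Returns the sampled times and the corresponding populations.
    """
    rng = rng or np.random.default_rng()
    t, q = 0.0, np.array(q0, dtype=int)
    times, states = [t], [q.copy()]
    while t < t_end:
        a = propensities(q)
        a0 = a.sum()
        if a0 == 0.0:                      # no channel can fire any more
            break
        t += rng.exponential(1.0 / a0)     # waiting time ~ Exp(a0), Eq. 2.18
        j = rng.choice(len(a), p=a / a0)   # index of the next reaction, p_j = a_j/a0
        q = q + nu[j]                      # apply the state change nu_j
        times.append(t)
        states.append(q.copy())
    return np.array(times), np.array(states)
```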

Example: Decaying System

To illustrate how the SSA actually works, it is possible to study one of the few systems for which Eq. 2.15 has an analytical solution. Consider a degrading system composed of a single population (A) whose particles can undergo a decay reaction with constant rate R [s⁻¹]:

$$A \rightarrow \varnothing \qquad (2.19)$$

Since the system has only one species and only one reaction channel, the propensity function is a scalar, a = Q_A R. The state-change matrix is also a scalar, ν₁₁ = −1 (see §2.1.1). For this system, Eq. 2.15 has an analytical solution for ⟨Q⟩:

$$\langle Q(t) \rangle = Q_0\, e^{-Rt} \qquad (Q(t=0) = Q_0) \qquad (2.20)$$

and for stdev(Q) (Erban et al., 2007):

$$\mathrm{stdev}(Q(t)) = \sqrt{Q_0\, e^{-Rt}\left(1 - e^{-Rt}\right)} \qquad (2.21)$$

The RRE solution is the same as that for ⟨Q⟩. Fig. 2.2 shows several realizations of the SSA along with the analytical solutions given by Eq. 2.20 and Eq. 2.21. Different histories are obtained every time the simulation is repeated. The multiple realizations yield the time dependent simulated distribution function p(Q_A = Q, t|Q₀, t₀)|_sim (see Fig. 2.3). Since the SSA has been formulated to describe a system governed by the CME, it must also simulate the random fluctuations according to Eq. 2.21, even at low population numbers (see Fig. 2.4).

The stochastic evolution ends when all particles have decayed. This is a crucial difference between the stochastic and the deterministic approach. Since the SSA treats every species population as an integer variable, once the particles have all degraded no further evolution is possible. On the other hand, the RRE approach admits non-integer solutions, so zero populations are never reached. This can lead to large deviations from the stochastic behavior (see Fig. 2.5).

As the number of particles in the ensemble drops, the signal-to-noise ratio decreases until random fluctuations dominate the evolution of the system. The simulated distribution should resemble the solution of the CME, which in this case is a binomial distribution (Erban et al., 2007):

$$p(Q_A = Q, t\,|\,Q_0, t_0) = \binom{Q_0}{Q}\, e^{-RQt} \left(1 - e^{-Rt}\right)^{Q_0 - Q} \qquad (2.22)$$

so the signal-to-noise ratio should scale as ∼ Q^{1/2}. Hence, for Q = 1 a subsequent zero can occur.
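As a consistency check (one added step, following directly from Eq. 2.22): for a binomial distribution with Q₀ trials and per-particle survival probability p = e^{-Rt},

$$\langle Q \rangle = Q_0\, p = Q_0\, e^{-Rt}, \qquad \mathrm{Var}(Q) = Q_0\, p\, (1-p) = Q_0\, e^{-Rt}\left(1 - e^{-Rt}\right),$$

which reproduces Eq. 2.20 and Eq. 2.21 and gives a signal-to-noise ratio ⟨Q⟩/stdev(Q) ≃ √⟨Q⟩ once p ≪ 1, i.e. at late times.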

As shown in Fig. 2.6, by increasing the number of repetitions, the simulated distribution converges to the analytical solution; but, since the SSA is a discrete and finite algorithm, it is restricted to regions in the (t, Q_A) plane with a frequency determined by the total number of entries. Fewer repetitions will inevitably sample regions closer to the solution of Eq. 2.20, reducing the mean deviation and introducing a bias in the population counting.
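Applied to this decaying system, a usage sketch (assuming the ssa_direct function from the sketch in §2.1.2 above; the parameters match those of Fig. 2.2, R = 1 s⁻¹ and Q_A(0) = 100) would read:

```python
import numpy as np

R, Q0 = 1.0, 100
nu = np.array([[-1]])                        # single channel, nu_11 = -1
prop = lambda q: np.array([R * q[0]])        # a = Q_A * R

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 6.0, 200)
runs = []
for _ in range(1000):                        # many independent realizations
    times, states = ssa_direct([Q0], nu, prop, t_end=6.0, rng=rng)
    idx = np.searchsorted(times, t_grid, side="right") - 1
    runs.append(states[idx, 0])              # piecewise-constant trajectory on t_grid
runs = np.array(runs)

# compare with the analytical CME results, Eq. 2.20 and Eq. 2.21
mean_exact = Q0 * np.exp(-R * t_grid)
std_exact = np.sqrt(mean_exact * (1.0 - np.exp(-R * t_grid)))
print(np.abs(runs.mean(axis=0) - mean_exact).max())   # of order a fraction of a particle
print(np.abs(runs.std(axis=0) - std_exact).max())
```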


FIGURE 2.2: Five SSA realizations (in dashed red) along with the analytical solution for the mean (in blue) and standard deviation (light blue envelope) (Eq. 2.20 and Eq. 2.21). The rate constant and initial population are respectively R = 1 s⁻¹ and Q_A(0) = 100.

FIGURE 2.3: Simulated distribution function normalized to the maximum at each time interval. 10000 trajectories have been generated for this figure.

FIGURE 2.4: Relative SSA displacement from the mean CME solution, normalized to the CME deviation, as a function of the population number. Being a degrading system, time increases as the population decreases. The white lines represent the 16-84 percentile envelope. Same parameters as in Fig. 2.2.

FIGURE 2.5: Relative displacement of the SSA realizations from the RRE solution as a function of the population number. Same parameters as in Fig. 2.2.

FIGURE 2.6: Relative displacement between the generated distribution function and Eq. 2.22. Two different numbers of entries have been tested: 10² (left), 10⁴ (right). Same parameters as in Fig. 2.2.

2.1.3 Non-Stirred Systems

The algorithm described in §2.1.2 simulates the evolution of a reacting system if the reaction rates of the unimolecular/bimolecular processes (R_j, c_j) are homogeneous and if any concentration gradient can be neglected within V. If these conditions fail, the SSA must be modified. In the case of spatially dependent distributions, one approach is to divide the original system into M sub-volumes that can be considered well mixed. Furthermore, all the physical quantities are supposed to be constant within each zone (Chaturvedi et al., 1977; Gillespie et al., 2013; Gillespie and Seitaridou, 2013). With this representation, particle species must be distinguished not only by their type i, but also by the volume element k they occupy at any moment in time. If the sub-volumes are completely decoupled from each other, M versions of the SSA run independently, since they describe different ensembles. On the other hand, if particles are free to move between these compartments, by some diffusive or mixing process, the SSA must also account for these effects.

Example: Thermal Diffusion

For thermal diffusion, particles move in space due to their thermal kinetic energy and randomly deviate in their trajectory at each collision within the ensemble. The time evolution of ⟨Q_i⟩ at each point in space is given by the diffusion equation (see, e.g., Einstein, 1905; Smoluchowski, 1906; Gillespie and Seitaridou, 2013):

$$\frac{\partial \langle Q_i \rangle}{\partial t} = D_i\, \nabla^2 \langle Q_i \rangle \qquad (2.23)$$

The diffusion coefficient D_i [cm² s⁻¹] determines the timescale required for a particle of type i to cover a length L:

$$t_L = \frac{L^2}{D_i}. \qquad (2.24)$$

This equation represents the deterministic approach to the diffusive problem and is solved by a Green function with the appropriate spatial dimension and boundary conditions (Mandelis, 2001). However, not only is diffusion a stochastic process, but, as remarked in §2.1.2, the population numbers are discrete variables. A deterministic approach is not desirable for treating the evolution of a system far from the thermodynamic limit. Again, to overcome this problem, the SSA can be used to correctly simulate the stochastic nature of diffusive systems. Once the ensemble has been divided into M compartments, particle species must be distinguished by their type and sub-system. Diffusion is then treated within the algorithm as a particular reaction that, whenever fired, moves one particle from one zone to the other:

$$Q_{ij} \rightleftharpoons Q_{ik} \qquad \forall\, (j, k) \qquad (2.25)$$

where j, k are the interacting zones. For 1D problems, the diffusive rate between two adjacent zones is estimated as

$$R_d(i) = \frac{D_i}{\Delta x^2} \ \ [\mathrm{s}^{-1}] \qquad (2.26)$$

where ∆x is their fixed spatial separation (Gillespie et al., 2013). D_i can differ for each component, and for thermal diffusion it can be calculated as λ(i) c_s(i), where λ(i) is the mean free path and c_s(i) is the thermal velocity of species i. Since R_d(i) is the probability per unit time that a single particle moves from compartment j to compartment k, diffusion can be treated as a unimolecular process. The respective propensity function is

$$a(i_j \rightarrow i_k) = Q_{ij}\, R_d(i). \qquad (2.27)$$

Every other aspect of the algorithm is the same as in §2.1.2. Each time a diffusive reaction i_j → i_k is sampled, Q_ij → Q_ij − 1 and Q_ik → Q_ik + 1. Fig. 2.7 shows the deterministic solution³ of Eq. 2.23 for a single particle species diffusing in 1D along with the stochastic evolution given by the SSA. In the particular case where particles diffuse within a box with rigid walls, the total number of propensity functions is (2×M−2)⁴. Unlike the deterministic solution, which converges smoothly to the equilibrium configuration, random fluctuations due to stochastic diffusion become more important and persist even when steady state is reached after one diffusive time scale:

$$t_{\rm diff} = \frac{L^2}{D} \qquad (2.28)$$

where L is the system length. At equilibrium, the average flux between two zones is zero, but not its mean dispersion. The distribution shown in Fig. 2.7 flattens only because the occupancy probability must be independent of position, in the absence of other processes. Thus, even if the deterministic approach can describe the overall evolution of the system, it fails in describing the details due to the inherent randomness of the diffusive process. However, since no strict condition is imposed on the total number of compartments, the systematic effects due to resolution must be checked. Fig. 2.8 shows three stochastic realizations for the same system using different spatial discretizations of the sub-zones. A greater number of compartments better resolves the fluctuations in the populations, while, in the low resolution limit, the signal-to-noise ratio increases due to the greater number of particles per zone. However, no systematic deviation is present among the different resolutions considered.

³ The deterministic solution has been obtained with a Newton-Raphson solver (Press et al., 1992).
⁴ Since particles cannot migrate beyond the boundaries.

The ability of the SSA to resolve the stochastic fluctuations comes at a cost. The number of steps required to evolve the system on the diffusive timescale t_diff is

$$N_{\rm step} \simeq t_{\rm diff} / \langle \Delta t \rangle \qquad (2.29)$$

where ⟨∆t⟩ is the mean time step sampled by the algorithm, which is proportional to 1/a₀. Since a₀ = ∑ a_j ≃ M a_j and a_j ≃ R_d ⟨Q⟩, the total number of steps required is

$$N_{\rm step} \simeq \langle Q \rangle\, M\, L^2 / \Delta x^2, \qquad (2.30)$$

where ⟨Q⟩ is the average population across the system. For the parameters used in Fig. 2.7, the algorithm requires almost 10⁶ steps to evolve. In contrast, the implicit numerical scheme used to obtain the deterministic solution only requires 10³ steps, but it completely neglects the granularity and the details that naturally arise with the stochastic approach.
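A compartment-based version of the SSA for pure 1D diffusion with reflecting walls can be sketched as follows (an illustration of Eqs. 2.25-2.27, not the thesis code; the grid size, the diffusion coefficient and the initial profile are arbitrary choices for this example):

```python
import numpy as np

def diffusion_ssa(q0, D, dx, t_end, rng=None):
    """SSA for pure diffusion on a 1D grid of M compartments with reflecting walls.

    q0 : initial populations per compartment (length-M integer array)
    D  : diffusion coefficient [cm^2/s];  dx : compartment width [cm]
    Each hop j -> j+-1 is a unimolecular channel with rate R_d = D/dx**2 (Eq. 2.26),
    so there are 2*M - 2 channels in total.
    """
    rng = rng or np.random.default_rng()
    q = np.array(q0, dtype=int)
    M, Rd, t = len(q), D / dx**2, 0.0
    while t < t_end:
        a_right = Rd * q[:-1]                  # hops from zones 0..M-2 to the right
        a_left = Rd * q[1:]                    # hops from zones 1..M-1 to the left
        a = np.concatenate([a_right, a_left])
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)
        j = rng.choice(len(a), p=a / a0)
        if j < M - 1:                          # right hop: zone j -> j+1
            q[j] -= 1; q[j + 1] += 1
        else:                                  # left hop: zone (j-M+2) -> (j-M+1)
            k = j - M + 2
            q[k] -= 1; q[k - 1] += 1
    return q

# Example: 200 particles initially in the leftmost zone, evolved for t = L^2/D
q_end = diffusion_ssa([200] + [0] * 9, D=1.0, dx=0.1, t_end=1.0)
print(q_end, q_end.sum())   # roughly flat profile; the total number is conserved
```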

FIGURE 2.7: Stochastic (in red) and deterministic (in blue) time evolution of a diffusing system (D = 1 cm²/s) with initial chemical distribution as in the upper left panel. Reflecting boundaries have been imposed. The system length and spatial resolution are respectively L = 1 cm and ∆x =

[Figure 2.8: four panels of dQ/dx [cm⁻¹] versus x/L at log₁₀(t/t_diff) = 8.000, 4.024, 2.012 and 0.018; each panel compares runs with ∆x [cm] = 0.01, 0.05 and 0.1.]

FIGURE 2.8: Stochastic evolution for the system described in Fig. 2.7 using different spatial resolutions.

Example: Diffusive-Reacting System

The generalization to a diffusive-reacting system is straightforward: every reaction contributing to the chemical evolution of the system must be included in the network along with its propensity function. Again, particle types must also be distinguished by the compartment they occupy.

Fig. 2.9 shows the time evolution of a single species in a 1D reacting-diffusing system, in which particles decay with constant rate α, but only in a sub-portion [0, a] of the spatial domain [0, L]. The corresponding RRE for <Q_i> is

∂<Q_i>/∂t = D_i ∇² <Q_i> − α <Q_i> χ_[0,a]    (2.31)

where χ_[0,a] is zero everywhere except in [0, a], where it equals 1.
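In the SSA, the localized decay simply adds one extra unimolecular channel for every compartment inside [0, a]. One possible way to organize the bookkeeping is sketched below in illustrative Python (the function names are invented here and are not taken from the thesis code); the driver loop, i.e. the sampling of the waiting time and of the firing channel, is identical to the diffusion-only sketch above.

```python
import numpy as np

def propensities(Q, R_d, alpha, n_decay):
    """Propensity vector for 1D diffusion with reflecting walls plus a decay
    channel restricted to the first n_decay compartments (the interval [0, a]
    of Eq. 2.31)."""
    return np.concatenate([R_d * Q[:-1],          # right jumps j -> j+1
                           R_d * Q[1:],           # left  jumps j -> j-1
                           alpha * Q[:n_decay]])  # localized decay

def apply_channel(Q, mu):
    """Update the populations after channel mu has fired."""
    M = Q.size
    if mu < M - 1:                  # right jump from compartment mu
        Q[mu] -= 1
        Q[mu + 1] += 1
    elif mu < 2 * M - 2:            # left jump from compartment mu - M + 2
        j = mu - M + 2
        Q[j] -= 1
        Q[j - 1] += 1
    else:                           # decay: one particle is destroyed
        Q[mu - (2 * M - 2)] -= 1
    return Q

# Parameters quoted for Fig. 2.9: 50 compartments of width 0.02 cm, so the
# decay region [0, 0.02 cm] covers a single compartment (n_decay = 1),
# R_d = D/dx^2 = 2500 s^-1 and alpha = 10 s^-1.
Q = np.full(50, 100, dtype=np.int64)
a = propensities(Q, R_d=2500.0, alpha=10.0, n_decay=1)
```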

Both the diffusive and the reacting propensity functions are formally unimolecular. When different physical processes act simultaneously, the one that dominates the evolution of the system can be identified by comparing their reaction probabilities:

p_j = a_j / a_0.    (2.32)

For this system, the ratio between the decay and diffusion probabilities is

p_dec / p_diff ≈ α / (R_DIFF M)    (2.33)

which, for the parameters used in Fig. 2.9, is approximately 10⁻³. This ratio is very small: particles disappear on a time scale much longer than the diffusive one, so they spread uniformly over the whole system while decreasing in number on the slower decay timescale.


Thus, diffusion and chemical reactions decouple, since any spatial variation in the population numbers is washed out by diffusive motions. This is the so-called instant-mixing approximation (Salaris and Cassisi, 2017): within a given time step, the chemical evolution is simulated for each compartment and then artificially mixed by substituting the average over the whole system. For instance, this approximation is often used to study the hydrogen core-burning phase of upper main sequence stars (Kippenhahn and Weigert, 1990; Iliadis, 2015; Iben, 2012). These objects burn hydrogen in non-degenerate environments through the CNO cycle, whose efficiency is very sensitive to temperature variations⁵. Hence, most of the energy is produced in a very narrow inner region and super-adiabatic temperature gradients result. The generated flux is so high that it cannot be transported by photons alone, making the medium convectively unstable. Even though convection is not the same process as particle diffusion, it is very efficient at mixing the chemical composition of different spatial regions. Since the typical hydrogen consumption timescale is much longer than any convective turnover timescale⁶, the abundance patterns are well represented by the instantaneous-mixing approximation and convective dispersive motions do not have to be studied explicitly⁸.

However, when nuclear burning timescales are about the same as mixing timescales (see Fig. 2.10), no such simplification is allowed and both nuclear and diffusive processes have to be solved simultaneously. For instance, during the final stages of the thermonuclear runaway in a Classical Nova explosion, turbulent eddies have typical turnover times of a few seconds, comparable to the timescales set by the fastest proton captures on CNO nuclei (Starrfield et al., 2016), so these two processes cannot be decoupled.
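Operationally, the instant-mixing prescription amounts to two sub-steps. The schematic Python sketch below (with a simple explicit-Euler burn update; the function and variable names are invented here for illustration) makes them explicit: each zone is burned independently over the time step, then every zone is replaced by the average composition.

```python
import numpy as np

def instant_mixing_step(X, dt, burn):
    """One operator-split step in the instant-mixing approximation.

    X    : abundances, shape (n_zones, n_species)
    dt   : time step [s]
    burn : callable burn(x, k) returning dX/dt for zone k with abundances x
    """
    # 1) burn each zone independently (schematic explicit Euler update)
    X_burned = np.array([x + dt * burn(x, k) for k, x in enumerate(X)])
    # 2) instantaneous, complete mixing: equal-mass zones assumed, so a
    #    plain arithmetic mean is used
    X_mixed = X_burned.mean(axis=0)
    return np.tile(X_mixed, (X.shape[0], 1))

# Toy usage: one species destroyed faster in the innermost zones, mimicking a
# temperature-sensitive burning rate; mixing homogenizes the zones each step.
rates = np.linspace(1.0, 0.0, 10)      # hypothetical per-zone destruction rates
X = np.ones((10, 1))
for _ in range(100):
    X = instant_mixing_step(X, dt=0.01, burn=lambda x, k: -rates[k] * x)
```

The averaging step is justified only when many convective turnover times elapse within each burning step, which is precisely the regime discussed above.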

[Figure 2.9: two colour maps of the population (0 to 120) as functions of x/L and log₁₀ t [s] (from −4 to 0): <Q> from the RRE (left) and Q from the SSA (right).]

FIGURE 2.9: Deterministic (left) and stochastic (right) evolution for a 1D diffusive-reacting system with L = 1 cm, using 50 compartments. At the beginning of the simulation 100 particles are contained in each compartment. The diffusion coefficient is D = 1 cm² s⁻¹. Particles are degraded only in [0, 0.02 cm] with decay constant α = 10 s⁻¹. The diffusive rate is R_DIFF = 2500 s⁻¹ according to Eq. 2.26.

⁵ For these reactions, the energy generation rate per unit mass ε scales with temperature as T^(15−20).
⁶ For a 2 M_⊙ main sequence star, the hydrogen consumption timescale⁷ is ∼1 Gyr, while the typical turnover timescales of turbulent convective eddies range between 1 and 10² days.


[Figure 2.10: same two-panel layout as Fig. 2.9, <Q> (RRE, left) and Q (SSA, right) versus x/L and log₁₀ t [s].]

FIGURE 2.10: Same system as in Fig. 2.9, but with decay constant α = 10⁵ s⁻¹.

2.2 Stochastic Simulation Algorithm for Nuclear Networks

The SSA can also be used to follow the time evolution of isotopic species interacting through a nuclear reaction network. This is one of the innovations of this thesis. In dealing with nuclear reactions in stellar interiors, the usual procedure is to discretize the object (e.g. a star, or a nova envelope) into a certain number of mass or radial shells (or mesh, José, 2016), such that each physical quantity (e.g. temperature, density, chemical composition) is assumed to be constant within each layer. In this context there are two main representations: in the Lagrangian description the computational grid is attached to the fluid (mass zoning), while in the Eulerian description the grid is fixed in space (radial zoning). It is important to note that equal-mass zones can have very different spatial extents. Under either choice, a discrete particle-counting algorithm such as the simple SSA cannot follow the time evolution of each single zone, since a prohibitively large number of baryons would be required⁹. A workaround is to follow a sub-portion of the layer, with a much smaller volume and number of particles, as a representation of the entire system. Instead of choosing the sub-system volume, it is more convenient to fix its mass content in terms of the number of baryons N¹⁰: even for nuclear-processed matter, this quantity is conserved (Griffiths, 2013). The volume is then implicitly defined once N is chosen. With this representation, the propensity function for bimolecular processes given in Eq. 2.11 needs some modification. Unlike unimolecular processes, these reactions are sensitive to the thermodynamic conditions of the system (or to whatever the governing parameters might be), and they also depend explicitly on V:

V = M/ρ (2.34)

⁹ Typical mass zones used in most 1D models of Classical Novae range between 10⁻¹² M_⊙ and 10⁻⁹ M_⊙. Thus, the total number of particles contained in such zones ranges between 10⁴³ and 10⁴⁶.
¹⁰ Here, the difference in mass between neutrons and protons is neglected.


[Figure 2.11: X_min versus log N for ¹H, ⁷Be, ¹²C, ¹⁶O, ²⁰Ne, ³²S, ⁴⁰Ca and ⁵⁶Fe, with X_min spanning 10⁻⁸ to 10⁻² over log N = 4 to 9.]

FIGURE 2.11: Minimum mass fraction abundance traceable by the SSA, as a function of the total number of baryons (several nuclear species are shown).

where M is the total mass contained in the volume V and ρ is the mass density. M can be well approximated as¹¹:

M = N m_amu    (2.35)

where m_amu is the atomic mass unit. The reaction rate c_j can then be expressed in terms of ρ and N:

c_j = N_A <σv> (ρ/N)    (2.36)

where N_A is Avogadro's number. However, the choice of N is completely arbitrary; it affects the average time step sampled by the SSA as well as the overall computational cost of the algorithm. Furthermore, it sets the lower limit in mass fraction abundance that the algorithm is able to trace. Recall that, for a particle species i, the mass fraction abundance is defined as the ratio between the mass content of i and the total mass of the system; the lower limit is reached when just one particle of type i remains. In terms of mass fraction:

X_MIN(i) = m_i / N.    (2.37)

where m_i is the mass number. This is one of the main limitations of the SSA: since it is a discrete algorithm, a higher number of baryons must be followed to reach a lower threshold (Fig. 2.11).
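Eqs. 2.34-2.37 translate into a few lines of code. The illustrative Python helpers below (names and numerical values are placeholders, not taken from the nova models of this thesis) show how V, c_j and X_MIN follow from the choice of N.

```python
M_AMU = 1.660539e-24      # atomic mass unit [g]

def sub_volume(N, rho):
    """Implicit volume of the representative sub-system (Eqs. 2.34-2.35):
    V = N * m_amu / rho  [cm^3]."""
    return N * M_AMU / rho

def bimolecular_rate(na_sigma_v, rho, N):
    """Stochastic rate c_j of Eq. 2.36, given the tabulated N_A<sigma v>
    [cm^3 mol^-1 s^-1], the density rho [g cm^-3] and the baryon number N."""
    return na_sigma_v * rho / N

def x_min(A, N):
    """Minimum traceable mass fraction (Eq. 2.37): one particle of mass
    number A left among N baryons."""
    return A / N

# Illustrative numbers (placeholders):
N = 1e8                                       # baryons in the sub-system
print(sub_volume(N, rho=1e3))                 # ~1.7e-19 cm^3
print(bimolecular_rate(1e-10, rho=1e3, N=N))  # c_j per reactant pair
print(x_min(A=7, N=N))                        # e.g. 7Be threshold: 7e-8
```

Because the baryon number is conserved, fixing N keeps the particle bookkeeping unchanged even when ρ, and hence V, varies during the runaway.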

In order to treat the stochastic evolution of a generic nuclear network, the SSA needs some input parameters:

• T and ρ;

• The total number of baryons N;
