
Politecnico di Milano

SCHOOL OF INDUSTRIAL AND INFORMATION ENGINEERING Master of Science – Nuclear Engineering

Computational simulation of lattice defects in graphite

Supervisor

Prof. Marco G. BEGHI

Candidate

Violetta TOTO – 878472


Abstract

The High Luminosity LHC upgrade will increase the energy stored in the LHC circulating beams by almost a factor of two: from 360 to 680 MJ. A consequent rethinking of the components with which the beam is in contact is of vital importance for the accelerator operation. A comprehensive and integrated R&D activity is devoted to the study and development of graphitic materials and electrically conductive coatings able to withstand the impact of high intensity particle beams. In fact, a material able to fulfill all the requirements for the HiLumi upgrade does not exist. In particular, novel collimator material candidates were proposed.

This thesis project stems from the necessity of computer-aided modeling of radiation damage in carbon-based materials.

Modeling radiation damage in materials, and macroscopic behavior in general, is a complex task relying on a careful multi-scale approach. At this preliminary stage of the research a rather small scale simulation method is employed. A Frenkel Pair Accumulation method within a Molecular Dynamics simulation is performed in order to obtain a damaged structure and study its properties. Both perfect cells and grain boundaries were explored. The responses to tensile stresses are investigated. A novel methodology for the introduction of intercalated atoms (H and He) is proposed.

Keywords: Graphite, Radiation Damage Simulation, Molecular Dynamics, Frenkel Pair Accumulation


Sommario

In the coming years the LHC upgrade - named High Luminosity - will increase the energy stored in the accelerated beam by almost a factor of two: from 360 to 680 MJ. Consequently, particular attention must be paid to all components in contact with the beam, in order to allow the operation of the accelerator itself.

An extensive and integrated research and development activity is devoted to the study of graphitic materials and conductive coatings able to withstand the impact of high-intensity particle beams. In fact, a material possessing all the requirements needed for the HiLumi upgrade does not exist. In particular, new materials have been proposed as candidates for the collimators.

This thesis project stems from the need for computational modeling of radiation damage in carbon-based materials. Radiation damage in materials, and more generally the modeling of the macroscopic behavior of a material, is a demanding task that relies on a multi-scale approach. At this preliminary stage of the research it was decided to employ a methodology at a rather small scale. The Frenkel pair accumulation methodology, within a Molecular Dynamics framework, was used to obtain a damaged structure and study its properties. Both perfect graphite cells and cells containing a grain boundary were studied. The response to tensile stresses was simulated. A new methodology for the introduction of intercalated atoms (H and He) is proposed and tested.


Estratto

The largest and most powerful particle accelerator ever designed, built and operated is the Large Hadron Collider at CERN. Several scientific fields benefit directly from the experiments made possible by the accelerator: high energy physics of course, but also chemistry, materials science, medicine and biology. However, the design and production of the components for the accelerator itself also have a great impact on scientific and industrial progress, even in very different fields such as security, healthcare, energy, the environment and the preservation of cultural heritage, to name just a few.

Over the years the performance of the LHC has been steadily improved, both in terms of beam energy and of luminosity. A major upgrade of the machine - called High-Luminosity LHC - is now being prepared, with the goal of increasing the luminosity by a factor of 10 with respect to the current nominal value. Its main purpose is to favor the observation of rare phenomena: since the luminosity is proportional to the number of collisions occurring in a given time interval, increasing this parameter means being able to extract a larger amount of data from the experiments.

Such an increase of the energy stored in the particle beams of such a complex machine requires a careful evaluation at the design stage. Several of the systems currently installed will not be able to operate correctly under the conditions foreseen for HL-LHC, so their improvement or replacement becomes necessary to guarantee the operability of the accelerator itself.

The ARIES project (Accelerator Research and Innovation for European Science and Society) is the framework encompassing the research dedicated to the development of materials and technologies able to satisfy the new requirements of the upgrade.

Among the systems requiring modification is the collimation system, whose main functions are beam cleaning and the passive protection of the instrumentation and of the other components needed for the operation of the accelerator. It is indeed important to underline that, already in the current version of the LHC, the energy stored in the beams at nominal operation would be sufficient to melt 500 kg of copper, and 10⁻⁹ of this energy would be enough to quench a superconducting magnet.

Given the very high energies of the particles, the cross sections are very small, so the collimators are macroscopic components of the order of 1 m in length. For HL-LHC a new structure has been designed to easily accommodate the absorbing material: this allows some flexibility in developing and adopting new absorbing materials in the future and guarantees a simpler production of the components. The choice of these materials in direct contact with the beam is a demanding task based on many criteria: ease of production, availability of the material, cost and production time, but also resistance to radiation damage, mechanical robustness against the impact of high-energy beams, electrical conductivity and geometrical stability against thermal shocks. A material possessing all these (sometimes contradictory) requirements does not exist: for this reason an important research and development phase on new composite materials has started. Several materials have been proposed and tested; the two most promising candidates are the copper-diamond (CuCD) and molybdenum-graphite (MoGr) composites. The latter is obtained by Spark Plasma Sintering and the final product consists mostly of graphite with MoC inclusions. The role of molybdenum is crucial for a better graphitization and promotes a better orientation of the grains.

The experimental verification of the behavior of these newly produced materials under nominal conditions is impossible. This thesis work stems from the need for computational modeling of radiation damage in these graphitic materials and is part of the broader framework of simulations ongoing within Work Package 17. The modeling of the macroscopic effects of irradiation, or more generally the simulation of materials, is a very wide research field composed of many methodologies and algorithms (and their implementations) that operate at different temporal and spatial scales and by their nature describe different phenomena. Over the years the scientific community has become increasingly aware of the need for an integrated, multiscale approach for the correct reproduction of experimental data, and consequently for the reliability of the model. This consideration is all the more true for irradiation simulations, where the phenomena are complex and often contribute in a non-linear way to the modification of a macroscopic property: the continuous exchange of information, parameters and indications on the relevant phenomena, coupled with a thorough experimental validation, has proven effective in several areas of nuclear research, such as fuel elements.

Since this thesis work is a first approach to the problem, a relatively small simulation scale was chosen. The main objective is to obtain some 'irradiated' geometries and study some of their characteristics.

First of all, the simulated geometries are purely graphitic, either perfect cells or cells with grain boundaries: this generalization from MoGr to graphite is justified by the large predominance of graphitic grains over carbides in the produced material.

Irradiation simulations in materials generally involve the computation of the collision cascade. The accumulation of several collision cascades in a material cell then simulates the progressive damaging of the structure. However, this kind of simulation is computationally very demanding.

The Frenkel defect accumulation methodology has recently been proposed in the literature and used for some ceramics and for graphite. It takes advantage of the assumption underlying the models for the evolution of radiation damage in graphite: namely, that radiation damage in graphite essentially consists of point defects. The methodology is implemented within a Molecular Dynamics scheme and consists in choosing an atom at random, moving it to another random point of the cell, letting the structure relax, and repeating these steps until the desired dpa level is reached. The simulation is carried out with the LAMMPS code, using its implementations of the Nosé-Hoover and Berendsen barostats and thermostats. The interatomic potential chosen and used for this thesis work is LCBOP: the accuracy of its description has been widely demonstrated in the literature.

A substantial part of this work consisted in the correct reproduction of the results reported in the literature through the optimization of the required input parameters. Only the optimized procedure will be shown and employed in the text.

This procedure was applied to cells of different sizes and, as mentioned, to grain boundary cells. The literature reports results up to dpa values of 0.14. In this work the Frenkel defect accumulation procedure was used up to 0.5 dpa, showing the limitations due to the choice of the Berendsen barostat. The latter takes the bulk modulus as a constant input parameter and, as Frenkel defects are progressively inserted, it is no longer able to describe the cell accurately. A preliminary study of possible solutions has started: the Berendsen barostat is employed to allow a fast relaxation of the structure between defect injections, and its removal inevitably implies the need for longer simulation times.

The well-known phenomenon of elongation along the direction perpendicular to the planes, with a simultaneous reduction of the basal area, was verified. The implementation of a curvature criterion allows the distinction between ordered and disordered zones and proves better at discriminating amorphized structures than the mere coordination number. After an initial phase of accumulation of isolated point defects, a progressive rippling of the graphene planes occurs, driven by the interstitials interconnecting two adjacent planes and by the residual stresses due to the vacancies.

Figure 1. Dimensional changes for 12800- and 102400-atoms supercells with Frenkel Pair Accumulation up to 0.12dpa


(a) Fraction of atoms per θi (curvature criterion) for 12800- and 102400-atom supercells up to 0.12 dpa. (b) Fraction of atoms per CN for 12800- and 102400-atom supercells up to 0.12 dpa.

Two different methodologies for computing the response to unidimensional deformation were implemented and tested: the first consists in gradually deforming the simulation cell and minimizing the energy, the second in deforming the cell while imposing its evolution under a Nosé-Hoover barostat and thermostat.

The application of these two methods to geometries 'irradiated' up to 0.26 dpa clearly shows the dependence of the response on the dpa.

Finally, a methodology is proposed for the insertion of hydrogen and helium atoms up to 860 appm, a value compatible with the most recent estimates. Hybrid potentials were set up and tested: for the insertion of hydrogen, the carbon-carbon interactions are modeled with LCBOP and the carbon-hydrogen and hydrogen-hydrogen interactions with REBO; for the insertion of helium, the C-C interactions are again modeled with LCBOP, the He-He interactions with a Beck potential and the C-He interactions with a standard Lennard-Jones potential.

The Nosé-Hoover thermostat and barostat are used together with a Berendsen barostat for a fast relaxation: the interval between successive insertions cannot be shorter than 1 ps. A minimum distance of the newly inserted atom from its first neighbor of at least 0.5 Å for hydrogen and 1 Å for helium must be guaranteed to avoid the divergence of the structure. The inserted hydrogen atoms immediately form a bond with the graphene planes or with the amorphous zones and keep oscillating around their equilibrium position.

The helium atoms, although they have no kinetic energy at the moment of insertion, can in some cases cause a distortion of the graphene planes. Their mobility between the graphene planes or in the most damaged regions of the 'irradiated' geometries is not negligible. Larger computational resources will be needed to study their evolution beyond 200 ps.


Contents

Abstract
Sommario
Estratto
Contents

1 Introduction
1.1 The High Luminosity LHC upgrade and its challenges
1.2 Focus on collimators
1.2.1 Materials choice
1.3 Motivations and goals of the Thesis work

2 Theory and Methodology
2.1 Materials
2.1.1 Graphite
2.1.2 MoGr
2.1.3 MoGr and the need of generalization to graphitic materials
2.2 How to model?
2.2.1 Multiscale modeling
2.2.2 Irradiation damage simulation in literature
2.3 Molecular dynamics
2.3.1 Equation of motion
2.3.2 MD algorithm
2.3.3 Molecular mechanics codes
2.4 Potential
2.4.1 Graphite: LCBOP
2.4.2 H intercalation: adding REBO
2.4.3 He intercalation: customized potential
2.5 Timestep
2.6 Quenching
2.7 Defects injection
2.7.1 Frenkel pair accumulation
2.7.2 Intercalating H and He
2.8 Analysis tools
2.8.1 Curvature criterion
2.8.2 Stress-strain analysis
2.8.3 Thermal conductivity

3 Results and Discussion
3.1 Quench
3.2 Frenkel pair accumulation
3.2.1 Pristine
3.2.2 Grain boundary geometry
3.2.3 Discussion
3.3 Stress-strain
3.3.1 Pristine
3.3.2 Grain Boundary
3.4 H intercalation
3.5 He intercalation

Conclusions
3.6 Summary and Remarks
3.7 Perspective and open issues

Acknowledgements

A Hardware and Software used
B Script quench
C Script FPA

Acronyms
List of Figures
List of Tables
Bibliography


Chapter 1

Introduction

1.1 The High Luminosity LHC upgrade and its challenges

The largest and most powerful particle accelerator ever designed, built and operated is the CERN Large Hadron Collider (LHC). Several branches of scientific research are constantly pushed to the frontiers of human knowledge thanks to the various experiments and facilities present at CERN. Many examples can be listed: high energy physics of course, but also biological, chemical, materials science and medicine studies are performed.

Not only the experiments carried out at this facility contribute to the scientific advance, but also the design and manufacturing process of the accelerator itself explores and delivers new technologies and materials that have broad applications in fields such as energy, environment, cultural heritage, industry, healthcare, security.


The LHC performance in terms of both peak luminosity and beam energy has been continuously improved since 2010. In order to extend the discovery potential, however, a major upgrade is needed. In the 2020s the machine will increase its luminosity, thus its collision rate, by a factor of five beyond its design value; the integrated luminosity design goal is an increase by a factor of ten. Such an upgrade of a highly complex machine must be carefully studied and designed: several systems will need to be improved or changed, because they might not be able to withstand the new, harsher conditions, becoming vulnerable to aging or faults or even a bottleneck to normal operation. It is foreseen that the necessary developments will require about 10 years to prototype, test and realize. The novel configuration is called High Luminosity Large Hadron Collider (HL-LHC) and will heavily rely on a number of key innovative technologies representing exceptional technological challenges. These include, among others: cutting-edge 11–12 T superconducting magnets; very compact superconducting cavities with ultra-precise phase control for beam rotation; long high-power superconducting links with zero energy dissipation; and new technology for beam collimation.

The Accelerator Research and Innovation for European Science and Society (ARIES) project is the integrating activity project dedicated to developing new materials and technologies to enable accelerator infrastructures to cope with the new requirements.

In particular the Work Package 17 (WP17): Materials for extreme thermal management (PowerMat) is devoted to the study and development of graphitic materials and electrically conductive coatings able to withstand the impact of high intensity particle beams. It is also required to analyze applications of the developed technologies beyond collimators to more industrial domains such as aerospace, avionics, high-end electronics, gas turbines, advanced braking systems.

The research activity of this Work Package is organized in 5 tasks: coordination and communication, materials development and characterization, dynamic testing and online monitoring, simulation of irradiation effects and mitigation methods, broader accelerator and societal applications.

1.2 Focus on collimators

The collimation system has the obvious primary function of cleaning the particle beam: during regular machine operation several mechanisms — such as collisions at the interaction points (beam burn-up), dynamics changes during the operation cycle (orbit drifts, optics changes, beam-beam effects, ...) and dynamic instabilities — cause a number of particles to drift away from the beam core and to populate the beam halo, i.e. the tails of the radial distribution of the particles around the beam axis. In addition to these unavoidable and continuously created regular beam losses, when an error in the operation or a device failure causes a sudden loss of the beam we have an abnormal loss.

Both types of losses are carefully monitored all around the accelerator ring, and the beam is dumped if dangerous losses above a certain threshold are detected.

Another fundamental function of the collimation system is to passively protect sensitive equipment from losses caused by an erroneous machine operation or a faulty device. Lastly, in order to ensure clean data acquisition in the experiments, the minimization of collimation-related background is needed.

The energy stored in the nominal operation beams in the present version of the LHC would be sufficient to melt about 500 kg of copper, and a superconducting magnet quench would need just approximately 10⁻⁹ of the total beam energy. The collimation system is therefore absolutely needed to prevent serious damage to the machine. [1,2]

Because of the very high energies, cross sections are very small: hence collimators are massive devices of the order of up to 1m of length. Also, to fulfill the various functionalities, the collimation system is implemented as multi-stage, relying on various technologies and families, as primary, secondary and tertiary collimators.

The primary collimators require a low-Z material for which the energy deposition rate is low: with a high-Z material the energy deposition rate would be so high that it would severely compromise the device operation. On the other hand, the tertiary collimators rely on high-Z materials in order to effectively stop the low fraction of escaping particles.

For the HL-LHC operation a new design of collimators has been studied and proposed: it can be used for primary, secondary and tertiary collimators by simply replacing the absorber material. This flexibility in the implementation enables further improvements over the years while assuring a straightforward manufacturing. It employs a 1 m long active jaw consisting of smaller blocks (8 or 10) of the desired material clamped against a supporting housing with screws. The beam directly interacts with two parallel collimation jaw assemblies, and these are cooled by means of a cooling circuit. Lastly, a precise movement mechanism is needed to align and center the jaws at any time with the actual orbit of the beam, which constantly changes during the energy ramp. [3]

Figure 1.2. New HL-LHC collimator design [3]

1.2.1 Materials choice

The choice of materials for Beam Intercepting Devices (BID) in the context of the LHC, and in view of the HiLumi upgrade, is based on numerous criteria: production feasibility, material availability, cost and production timeline, but also resistance to radiation damage, mechanical robustness against high energy beam impacts, electrical conductance for a low resistive-wall impedance contribution, and geometrical stability to withstand thermal shocks.

It is apparent that meeting these requirements cannot be achieved by simply optimizing single parameters, not to mention that some of them are contradictory, e.g. low resistivity materials are mostly high density materials not adequate to withstand high beam loads without damage. A series of indicators were formalized [4,5], such as the Thermo-mechanical Robustness Index (TRI), the Thermal Stability Index (TSI), the RF impedance Index (RFI), the Radiation Damage Index (RDI) and the Cleaning Efficiency Index (CEI). Analyzing these indicators and the other choice-driving parameters individually is beyond the scope of this work; nonetheless, a mere list helps provide a picture of the complexity of the collimation materials choice.

As of today, the materials used for the active part of the LHC jaws are:

1. Carbon Fiber Carbon (CFC) composite: carbon fiber reinforcement in a graphite matrix; widely used in many fields of application for its thermal resistance and ability to maintain strength at high temperatures, high specific strength and modulus of elasticity, thermal shock resistance, high thermal conductivity and low neutron absorption cross section. For the LHC primary and secondary collimators a commercial type is used: the fibers are randomly disposed in the y-z plane and create several layers parallel to each other along x.

2. Tungsten Heavy Alloy: W grains (95 %wt) are cemented by a NiCu matrix that confers a much better machinability compared to plain tungsten and does not significantly deteriorate its properties. Such a high-Z material is used for LHC beam absorbers and tertiary collimators in order to efficiently stop the beam from damaging downstream experimental instrumentation. [5]

3. Copper-based alloy strengthened by aluminium oxide ceramic particles (Al2O3 0.3 %wt): the precipitates increase the radiation damage resistance (especially from neutrons), the mechanical strength at high temperature and the resistance to thermal softening of oxygen-free copper; they also hinder dislocation movement and prevent grain growth. This alloy is widely used in particle accelerator components where both high temperature and high radiation conditions are present: it is used in the LHC collimation system for the cooling system. [6]

A material able to fulfill all the requirements for the HiLumi upgrade does not exist.

Hence CERN launched a large and ambitious R&D program in collaboration with industries and researchers to develop novel composite materials for this purpose, ideally to combine the properties of metal and transition metal-based ceramics — i.e. high mechanical strength, low resistivity — with those of diamond or graphite — good thermal properties, low density.

As of the beginning of this thesis work, an extensive overview of explored novel composite materials was published [5] and two promising candidates were identified: Copper-Diamond composite (CuCD) and Molybdenum-Graphite composite (MoGr).

CuCD is produced starting from diamond particles (60 %v) and copper powder (39 %v) with a small addition of boron, by means of conventional Hot Pressing - the boron being introduced to cement the two main components by forming a stable boron carbide on the diamond external layer and by dissolving into the copper matrix. In this composite, copper is chosen for its excellent electrical and thermal conductivity and its good ductility; diamond also contributes to the thermal conductivity while reducing the density and the coefficient of thermal expansion; the boron carbides contribute to the mechanical strength as well as to the thermal conductivity by assuring perfect adhesion between the two main phases.

MoGr is developed by CERN and Brevetti Bizz and is produced via Spark Plasma Sintering starting from molybdenum powder, graphite flakes and, in some grades, carbon fibers as a reinforcing phase and a nucleation site for enhanced graphitization. Mo and graphite show chemical affinity for each other: when heated up to 1000°C the hexagonal Mo2C can be formed. During this reaction, carbon atoms diffuse into the interstitial sites of the Mo BCC lattice and a grain is totally transformed when the amount of carbon reaches about 33 %at; as the carbon concentration increases, the MoC carbide is formed. The starting molybdenum powder fraction is low and at the end of the sintering process no metallic Mo is left: the final product is graphite with MoC carbide inclusions. Molybdenum carbides are dispersed in the graphite matrix and can help the catalytic graphitization process of carbon under particular conditions of temperature and pressure [7]. In fact, the main function of Mo is to catalyze the re-crystallization process of the graphite flakes, resulting in a better grain orientation.

More than 30 different grades of MoGr were produced in the last years, with a broad range of compositions and production cycle parameters. Several experimental thermo-physical, mechanical and electrical characterizations were carried out, both at CERN and at other research institutes, on pristine samples of both candidates in various grades. In addition to this, experimental tests were carried out at the CERN HiRadMat facility [8] to replicate the exposure to high-intensity short beam impacts as a result of accidental scenarios: the loss of a full injection batch is considered the most severe failure event for collimators. For the Hi-Lumi upgrade the injection train is expected to double its intensity and lower its emittance with respect to the LHC one (288 bunches, 1.1×10¹¹ p/b at 440 GeV with σ = 0.35 mm).

However, long-term exposure to radiation has a destructive effect on materials too: altering the microstructure through defect formation, gas production by nuclear reactions and trapping, phase changes and chemical bond rupture can lead to property degradation, hence reduced performance, and even to premature failure of the device. According to some preliminary simulations [9], a total of about 10¹⁶ protons is expected to be lost in a primary collimator over one year of operation. Radiation hardness test campaigns were performed at the GSI and BNL facilities for the described novel composite materials, and guided further tweaking of the composition or production parameters in order to optimize and improve their stability.

Despite the massive efforts in these irradiation campaigns and the promising insights they offer on the material behavior under various beam types, fluences and energies, it is of fundamental importance to link the experimental results obtained with low-energy, high-fluence beams to the LHC case. In fact the LHC collimators, and even more the HL-LHC ones, operate in an environment where a small fraction of the jaw volume is subjected to smaller proton fluences but much larger energies.

1.3 Motivations and goals of the Thesis work

This thesis work stems from the necessity of computer-aided modeling of radiation damage in carbon-based materials, in particular MoGr.

It is manifest that an experimental facility able to replicate the HL-LHC beam conditions is not achievable, and that it is not plausible to wait for the construction of the upgrade to test these novel materials, since collimators are a key component of the accelerator operation itself. Hence a large dedicated simulation campaign within WP17 has started, to understand and extend the knowledge of the materials' response to this very harsh environment. It will be extensively reviewed in the next chapter that modeling the macroscopic behavior of a material is undoubtedly a complex task, which needs a careful multi-scale approach and benefits from a continuous flow of information — i.e. parameters or relevant phenomena — between scales.


At this preliminary stage of the research, it is also of fundamental importance to focus on the general behavior of the material, meaning the dominant component of the composite: graphite. In fact, in the produced MoGr grades the MoC carbides are a low fraction, hence the material properties are mostly determined by the graphite behavior.

It is very well known that graphite is one of the most extensively studied nuclear materials, and that the models for the evolution of irradiation damage rely on the assumption that the primary damage produced by irradiation consists of point defects.

Hence the goal of this thesis is to tackle the effect of the irradiation dose starting from a rather small scale. Exploiting some recent methods and results reported in the literature, the modeling is performed by means of a Frenkel pair accumulation methodology in a Molecular Dynamics (MD) simulation.


Chapter 2

Theory and Methodology

In this chapter an extensive review of the studied material, modeling strategies and tools, irradiation damage simulations, molecular dynamics algorithms and codes is outlined.

Also the specific codes, methodologies, potential and post-processing tools used are examined.

2.1 Materials

2.1.1 Graphite

Graphite is a very widely used material: lubricants, electrodes, batteries, nuclear reactors and many other applications. It is an allotropic form of carbon showing a layered planar structure: the in-plane atoms are covalently bonded and arranged hexagonally — i.e. graphene layers — and the layers are stacked along the perpendicular direction. The cohesive energy of graphene — characterizing the intralayer bonding — is 8.06 eV/atom, while the interlayer binding energy of bilayer graphite is only 0.02263 eV/atom.

Due to its structure, graphite shows very different properties in the directions parallel and perpendicular to the graphene layers. The delocalized electrons in the basal plane are free to move, providing high thermal and electrical conductivity. The bonding between layers is very weak, because it is only provided by Van der Waals forces; hence graphene planes can easily slide on top of each other.

It is usually produced by means of a catalytic graphitization: graphitic materials show low density, excellent thermal conductivity in the direction aligned with the basal planes, low coefficient of thermal expansion and high operational temperatures.

Figure 2.1. Unit cell of graphite

The place of extraction or the production process can heavily influence the macroscopic behavior of the material. The size distribution and the relative orientation of the crystallites can have a large impact on the device properties due to the inherent anisotropic nature of graphite. Real graphite also differs from the ideal structure because of the presence of several defects due to thermal and chemical changes, stacking faults or foreign atoms.

2.1.2 MoGr

As introduced in the previous chapter, several grades of MoGr were developed and are still undergoing further improvements, both in composition and in production parameters. In this type of composite, molybdenum carbides are dispersed in the graphite matrix and under specific conditions can improve the catalytic graphitization of carbon.

From optical and Raman spectroscopy measurements and diffraction pattern analysis, valuable information about phases, atomic arrangement, crystallography, residual stresses, grain size and texture of the samples can be obtained. The term texture refers to the distribution of crystallographic orientations of a polycrystalline sample: no texture means the relative grain orientation is fully random, while some degree of texture means that grains are preferentially oriented along a direction. All of these aspects are very sensitive to the production process, e.g. pressure, temperature and powder diameter. An extensive review of the properties of the graphitic materials investigated at CERN has been published. [10]

Figure 2.2. Optical microscope image of perfect graphite in MoGr (MG-6530Aa grade) [10]

2.1.3 MoGr and the need of generalization to graphitic materials

Raman spectroscopy measurements performed on the materials investigated at CERN, at various positions inside the unirradiated samples, showed the simultaneous presence of

• perfect graphite

• polycrystalline graphite

• polycrystalline graphite and disorder

• polycrystalline graphite, disorder and metal carbides

thanks to the identification of the G, D and D' peaks - or their broadening. Optical microscope images of early MoGr grades depicting the above-mentioned situations show perfect graphite grains larger than 20 µm in diameter but also much finer meshes — e.g. for the


Since this is a first approach to irradiation damage simulation, and graphite is the main component of this composite, it was chosen to limit the research to purely graphitic geometries. The methodology, with an appropriate change of geometry and potential, can be applied to carbide structures too.

2.2 How to model?

Studying materials and trying to predict their behavior under irradiation can mean studying almost anything: such an endeavour to understand their properties can naturally include broad and disparate goals, phenomena and physics. At the same time, modeling can have many different meanings in science, from abstract mathematics to analytical effective theories to computer simulation.

It is of fundamental importance to recognize that materials exhibit phenomena on an extremely broad range of both temporal and spatial scales. This disparity in scales and the interdisciplinary nature of the field make modeling materials challenging — and incidentally exciting.

2.2.1 Multiscale modeling

Furthermore, throughout the years the scientific community has become more and more aware of the absolute need for a multiscale modeling paradigm: not only because a single researcher or lab with sufficient background to cover all phenomena and all time and space scales cannot exist, but also because a complete understanding of a material does not rely only on a rigorous treatment of the phenomena at each of these scales alone, but rather on the analysis of the interactions between scales. This is even more important for the study of irradiation damage, where atomic scale phenomena can add up non-linearly to macroscopic properties or behavior, and where experimental data are not always easily obtained, nor is it always possible to decorrelate the effects of different concurring mechanisms. The complete and correct study of irradiated materials is then composed of various and numerous interlinked steps.

Starting from 10²-10³ atoms, electronic structure methods can be used, the most common one for solid systems being Density Functional Theory (DFT). Ab initio DFT calculations allow the prediction and calculation of material behavior on the basis of quantum mechanical considerations, and are used to investigate the electronic structure of atoms and molecules in the condensed state, normally in the ground state. This kind of computation is performed both on known materials, in order to extract useful information about their theoretical behavior and ultimately lead to a better formulation of empirical potentials, and on theoretically possible novel molecules or nanostructures, in order to study their stability and configurations. Scaling up to larger systems, up to 10⁶-10⁹ atoms if computationally available, the above-mentioned empirical potentials are used: atoms interact through parameterized analytical potentials, and the system evolution is calculated by integrating the equations of motion. In the most common version of MD the trajectories of atoms and molecules are determined by solving Newton's equations numerically, where the forces between particles and their potential energies are calculated via interatomic potentials or molecular mechanics force fields.

From atomic scale methods the fundamental result obtained is the total internal energy of the system: from this energy, its derivatives and the simulation of the system evolution in time, various data can be extracted. Just to mention a few: relative stability of various configurations or phases, equilibrium structures, cohesion energies and formation enthalpies, formation energies of point and extended defects in materials, activation energies for defect migration, mechanical properties (elastic constants, bulk modulus), vibration modes (phonons), solute dynamics (incorporation, migration activation energies, precipitation and re-solution).

Climbing up the scales, relatively recent computational techniques such as rate theory, point defect models and dislocation dynamics can give precious insights on phenomena up to several grains and tens of µm.

Finally, finite element analysis and multiphysics simulations can provide insights on the overall performance of the studied device.

It is at this point important to clarify a somewhat trivial point: each spatial and temporal scale, methodology and code implementation focuses on just a few aspects — namely physics and phenomena — of the problem. In fact, computational feasibility is a major bottleneck for simulations, and it can easily take up to thousands of core-hours to simulate just a few picoseconds of a large atomic system. And physically, especially in the radiation damage field, numerous complex coupled phenomena intervene at very different scales.

This is the reason why an integrated and complete approach to multiscale modeling of irradiated materials relies heavily on the synergy between separate simulations (atomic, mesoscale, thermodynamic) and is constantly bridged to the experimental results, both for the indispensable code validation and to possibly guide the experimental activity. Simulations at higher scales can only rely on data, parameters and mechanisms obtained from smaller scales. On the other hand, higher scale models can give insights and indications on selecting the significant phenomena and can orient the investigations.

Figure 2.3. Multiscale modeling scheme

Through this continuous flow of information one can identify and decorrelate phenomena at the relevant scale and produce the data, necessary for the development and validation of macroscopic models, that would be complex and long to obtain experimentally, complementing the examinations of irradiated materials and the irradiation experiments. In particular, MD simulations are used to expose mechanisms and processes, explain experiments and suggest directions of research.

2.2.2 Irradiation damage simulation in literature

Given the diversity and complexity of the materials commonly used in irradiated environments, the characterization and understanding of their evolution under irradiation is a challenging task. Simulating radiation damage using MD is not a new concept: radiation damage has been studied through the simulation of cascades, both in metals and in ionic solids and oxides [11,12], as soon as it became computationally feasible.

For metals, either first principles calculations or MD can provide a fairly accurate description of the formation of point defects and extended defects. These atomic level descriptions allow the elaboration of predictive thermodynamic models to describe the evolution of these materials under irradiation. For ceramics this approach is complicated for various reasons:


firstly, this category includes a wide range of materials — which makes a generalization of thermodynamic models and calculations difficult — and secondly, their chemical bonds are of a complex nature — possibly partly ionic and partly covalent, and for metal carbides even metallic. Hence an accurate computation of the band gap in ceramics is a difficult task, but a necessary one to predict formation energies and concentrations of defects. In contrast to metals, impurities in ceramics have an important influence on the calculation of the position of the Fermi level.

Huge efforts in the last decades went into the reliable calculation — and its validation against difficult-to-perform measurements — of threshold displacement energies and of the topology of displacement cascades.

The latter can be explored with widely used MonteCarlo codes or via MD. It is of course of fundamental importance to understand the cascade topology in order to understand defect production: the Primary Knock-on Atom (PKA) can transfer enough energy to displace other atoms from their sites, and those can undergo similar collisions, producing a branching structure of successive collisions. The final structure clearly depends on the PKA energy, and several attempts have been made at relating the cascade energy to the number of defects.

As an example, some snapshots of atoms displaced by more than 2 Å from their initial position during a displacement cascade are shown in Figure 2.4. Usually one can recognize three major phases: the collision phase (here 0.10 and 0.15 ps), where at least half of the PKA kinetic energy is transferred; next, the number of displaced atoms increases in each subcascade while the average distance covered by the displaced atoms drops (here 1.85 ps); then no new subcascades are formed, which usually corresponds to the thermal spike, and finally the restoration phase (here at 4.60 ps), where many displaced atoms return to their initial position or fit into another crystalline site.

Figure 2.4. Snapshots of atoms displaced more than 2 Å during the displacement cascade by a PKA of 80 keV. [13]

Various publications on displacement cascades and their dependence on energy, especially for nuclear fuel, can be found, for example [13–18]. In recent years an alternative has been developed: Ab Initio Molecular Dynamics (AIMD) computes the acting forces from electronic structure calculations that are performed "on-the-fly" as the molecular dynamics trajectory is generated. This type of algorithm of course has a much higher computational cost than classical MD, but can account for charge transfer phenomena not adequately described by empirical potentials. [19,20]


2.3 Molecular dynamics

As mentioned, MD is a computational technique for the computation of equilibrium and transport properties of a classical many-body system [21]. Starting from initial coordinates and velocities, it integrates the equations of motion numerically, resulting in a new set of coordinates and velocities at each time step. The ensemble of simulated particles interact in an explicit manner, implemented via the Interatomic Potential (IP), and under specified conditions.

It is often pointed out that MD simulations share some operative aspects with real experiments: a sample of the material is prepared, connected to a measuring instrument, and the property of interest is measured over a certain time interval. In the MD case, preparing a sample means selecting a model system consisting of N particles and solving Newton's equations of motion for it until the system is equilibrated, i.e. until the properties of the system no longer change with time. The next step is the measurement of the property of interest: if it is subject to statistical noise, the longer it is averaged the more accurate the measurement.

2.3.1 Equation of motion

As said, the MD simulation method is based on Newton's second law: the acceleration of the atoms is calculated starting from the forces acting on them. The direct result of integrating the equations of motion is the atom trajectories, hence positions and velocities, evolving in time.

\[ \mathbf{F}_i = m_i \mathbf{a}_i \tag{2.1} \]

where $\mathbf{F}_i$ is the total force acting on atom $i = 1, 2, 3, \dots, n$ with mass $m_i$.

The force can be expressed as the gradient of the potential energy $V$, assumed to depend only on the atom positions,

\[ \mathbf{F}_i = -\nabla_i V \tag{2.2} \]

and combining eq. 2.1 and eq. 2.2 we simply obtain

\[ -\frac{dV}{d\mathbf{r}_i} = m_i \frac{d^2 \mathbf{r}_i}{dt^2} \tag{2.3} \]

Hence, given the initial positions of the atoms and the initial distribution of velocities, and calculating the accelerations from the gradient of the potential energy function, one can compute the trajectory of an atom.

In order to measure an observable quantity, it has to be expressed as a function of the positions and momenta of the particles in the system. As an example, the temperature in a (classical) many-body system is expressed via the equipartition of energy over all degrees of freedom that enter quadratically in the Hamiltonian of the system. The theorem of equipartition of energy states that molecules in thermal equilibrium have the same average energy associated with each independent degree of freedom of their motion, and that the average kinetic energy is

\[ \left\langle \tfrac{1}{2} m v_a^2 \right\rangle = \tfrac{1}{2} k_B T \tag{2.4} \]

where $m$ is the mass, $v_a$ the velocity of the particle, $a$ labels the translational degree of freedom, $k_B$ is the Boltzmann constant and $T$ is the temperature.

Equation 2.4 can be used as an operational definition of the temperature. However, in practice, the total kinetic energy of the system would be measured and divided by the number of degrees of freedom.

During an MD simulation, the instantaneous temperature fluctuates with the total kinetic energy:

\[ T(t) = \sum_{i=1}^{N} \frac{m_i v_i^2(t)}{k_B N_f} \tag{2.5} \]

These fluctuations occur as a direct result of the finite size of the cell and to get an accurate estimate of the temperature one should average over many fluctuations.
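As an illustration, the following minimal LAMMPS input fragment (a hedged sketch; the compute name and the output interval are arbitrary choices) monitors the instantaneous kinetic temperature of eq. (2.5) together with the potential and kinetic energies during a run:

    # monitor the instantaneous kinetic temperature (eq. 2.5) during a run
    compute       ktemp all temp
    thermo_style  custom step c_ktemp pe ke etotal
    thermo        100            # print every 100 timesteps (arbitrary interval)

Averaging the printed temperature over many outputs gives the estimate discussed above.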

2.3.2 MD algorithm

The general and main stages of an MD simulation are highlighted schematically in Figure 2.5. The core steps are of course the computation of the forces on all particles and the integration of the equations of motion.


Figure 2.5. A flow chart symbolizing the different stages in a MD simulation

There are several methods of integrating the equations of motion during an MD simulation; in this work a standard velocity-Verlet integrator is employed. The starting point for a Verlet algorithm is a Taylor expansion of the coordinate of a particle around time t:

\[ r(t + \Delta t) = r(t) + v(t)\,\Delta t + \frac{F(t)}{2m}\,\Delta t^2 + \frac{\Delta t^3}{3!}\,\dddot{r}(t) + \mathcal{O}(\Delta t^4) \tag{2.6} \]

\[ r(t - \Delta t) = r(t) - v(t)\,\Delta t + \frac{F(t)}{2m}\,\Delta t^2 - \frac{\Delta t^3}{3!}\,\dddot{r}(t) + \mathcal{O}(\Delta t^4) \tag{2.7} \]

where $F(t)$ refers to the force and $r$ is the Cartesian coordinate. The sum of equations 2.6 and 2.7 gives us

\[ r(t + \Delta t) + r(t - \Delta t) = 2 r(t) + \frac{F(t)}{m}\,\Delta t^2 + \mathcal{O}(\Delta t^4) \tag{2.8} \]

or

\[ r(t + \Delta t) \approx 2 r(t) - r(t - \Delta t) + \frac{F(t)}{m}\,\Delta t^2 \tag{2.9} \]

The estimate of the new position contains an error that is of order $\Delta t^4$. The subtraction of equations 2.6 and 2.7 can be used to derive the velocity:

\[ r(t + \Delta t) - r(t - \Delta t) = 2 v(t)\,\Delta t + \mathcal{O}(\Delta t^3) \tag{2.10} \]

or

\[ v(t) = \frac{r(t + \Delta t) - r(t - \Delta t)}{2\,\Delta t} + \mathcal{O}(\Delta t^2) \tag{2.11} \]

The velocity-Verlet algorithm starts with the position and velocity expansions:

\[ r(t + \Delta t) = r(t) + v(t)\,\Delta t + \tfrac{1}{2} a(t)\,\Delta t^2 \tag{2.12} \]

\[ v(t + \Delta t) = v(t) + \tfrac{1}{2}\,\Delta t \left[ a(t) + a(t + \Delta t) \right] \tag{2.13} \]

Each integration cycle:

1. Calculate the velocities at midstep: $v(t + \tfrac{\Delta t}{2}) = v(t) + \tfrac{1}{2} a(t)\,\Delta t$
2. Calculate the positions at the next step: $r(t + \Delta t) = r(t) + v(t + \tfrac{\Delta t}{2})\,\Delta t$
3. Calculate the accelerations at the next step from the potential
4. Update the velocities: $v(t + \Delta t) = v(t + \tfrac{\Delta t}{2}) + \tfrac{1}{2} a(t + \Delta t)\,\Delta t$

2.3.3 Molecular mechanics codes

Several implementations of molecular mechanics codes have been proposed and produced in recent years. They differ from each other in their main focus (biomolecular simulations, generic, ...), proprietary or open source license, integrated molecule builder and visualizer, implemented algorithms and potentials, and computational scalability. For the MD simulations of this thesis work the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) software was used [22]. It is distributed by Sandia National Laboratories, is an open source code and includes potentials for solid-state materials, soft matter and coarse-grained or mesoscopic systems. It is implemented to run on anything from single processors up to highly parallel High Performance Computing (HPC) infrastructures. In the following, the LAMMPS implementation of a specific algorithm will be marked with a different font for clarity purposes.

In the following, the employed ensembles will be referred to as nvt (canonical), npt (isothermal-isobaric) and nve (microcanonical): these are intended as their LAMMPS implementation. fix nve creates a system trajectory consistent with the microcanonical ensemble, performing constant NVE integration to update the positions and velocities of the atoms in the group at each timestep. fix nvt and fix npt perform time integration on Nosé-Hoover style non-Hamiltonian equations of motion, which are designed to generate positions and velocities sampled from the canonical and isothermal-isobaric ensembles respectively.
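As an illustration, a minimal sketch of how these fixes appear in a LAMMPS input script is given below (the fix ID, target temperature and pressure, and damping times are assumed values for the example; only one time-integration fix is normally active for a given group of atoms):

    # choose ONE of the following time-integration fixes for the whole system
    fix integrate all nve                                          # microcanonical ensemble
    #fix integrate all nvt temp 300.0 300.0 0.01                   # canonical: Nose-Hoover thermostat (Tdamp in ps)
    #fix integrate all npt temp 300.0 300.0 0.01 iso 0.0 0.0 0.1   # isothermal-isobaric: thermostat + barostat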

2.4 Potential

One key aspect for the success and accuracy of MD simulations is the type of interatomic potential used, i.e. how to approximate the interatomic interactions. Although classical MD methods are a valuable tool for understanding defect production mechanisms, the simulation is strongly dependent on the quality of the interatomic potential used.

In fact a conclusion drawn from several studies for the Threshold Displacement Energy (TDE) is that the quality of the used potential is the main source of error, rather than the simulation algorithm.

2.4.1 Graphite: LCBOP

Graphite shows strong sp² covalent bonds within a layer and weak dispersion and orbital repulsion interactions between layers. IPs for carbon systems have been the object of several efforts. Early studies are represented by the bond-order Tersoff [23,24] and reactive empirical bond order (REBO) [25–27] potentials. Although they can correctly describe strong covalent bonds, they do not take into account dispersion interactions. This is the reason why they are short-ranged.

Various IPs have been proposed to extend the description. The adaptive intermolecular reactive empirical bond order (AIREBO) [28] potential implements a 6-12 Lennard-Jones (LJ) [29] term to describe dispersion. Another strategy is to add a Morse [30] term, as has been done for the Long-range Bond-Order Potential for Carbon (LCBOP) [31] and AIREBO-M [32] potentials. Another IP worth mentioning is the complex and more complete Reactive Force Field (ReaxFF) [33]. The above-mentioned IPs are known to be inadequate for the description of energy variations for different relative alignments of the layers, because the short-range Pauli repulsion between overlapping π orbitals of adjacent layers is not included. The Kolmogorov-Crespi (KC) [34] and the dihedral-angle-corrected registry-dependent interlayer potential (DRIP) [35] can address this, but they require an a priori fixed assignment of atoms into layers, hence they cannot be used for the purposes of this thesis, as well as for other similar applications. [36] In recent years machine learning potentials have been developed, but generally speaking their prediction accuracy outside their training set — also referred to as transferability — is low. For carbon systems both Gaussian approximation potentials (GAP) [37,38] and neural network (NN) potentials [39,40] have been developed, but these do not account for interlayer interactions. New hybrid potentials have been proposed to overcome this limitation [41].

In this work C-C interactions are only calculated via the LCBOP potential. This IP was designed to describe different coordination states of carbon, enabling the study of the liquid phase or of phase transformations. It is known to have some limitations and it has been refined quite profoundly since its first formulation [42].

Its ability to describe amorphous phases, high temperature behavior and, most of all, sp² to sp³ or sp changes is of fundamental importance for the goals of this work: in fact radiation promotes bonding modifications, and it has been verified that the formation energy changes for common defects are reasonably consistent with ab initio data [43,44].

It has been used for graphitizations of amorphous carbons [45]. It has also been benchmarked on thermal expansion coefficient of free-standing graphene, suggesting that it is a promising candidate for radiation-induced dimensional changes. [46]

A new and improved version called LCBOP-II [47] offers a much-improved description and has been successfully applied to a range of challenging systems such as the liquid state, the triple point of carbon [47] and ripples in graphene [48]; unfortunately it is not available in LAMMPS, and hence LCBOP-I is used in this work.

2.4.2 H intercalation: adding REBO

One of the phenomena occurring in the studied device is the introduction of H atoms in the material, hence the necessity for a suitable potential. As already mentioned, I decided to always describe C-C interactions with LCBOP. The remaining H-C and H-H interactions are described via a modified REBO potential in which the C-C interactions are switched off. This customized potential is built with a hybrid/overlay pair style in LAMMPS.
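A minimal sketch of such a declaration in a LAMMPS input is shown below (assuming atom type 1 = C and type 2 = H; the potential file names are illustrative, and CH.rebo_noCC stands for a copy of the standard REBO parameter file edited so that its C-C terms are switched off, as described above):

    pair_style hybrid/overlay lcbop rebo
    pair_coeff * * lcbop C.lcbop C NULL        # LCBOP handles carbon only
    pair_coeff * * rebo CH.rebo_noCC C H       # modified REBO handles C-H and H-H

Whether the two many-body sub-styles can be overlaid exactly in this form may depend on the LAMMPS version; the snippet only illustrates the hybrid/overlay construction.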

2.4.3 He intercalation: customized potential

In the radiation damage field one of the most frequently observed and studied phenomena is the embrittlement of structural materials by He. Nonetheless, with the development of advanced nuclear reactor concepts, both for fission and for fusion (plasma facing materials, PFM), new and more extensive studies on the complex atomic-level mechanisms that affect the microstructural evolution of materials were recently published.

Just as an example, molecular dynamics simulations on the He interaction with Ni [49] or W [50] or Pd alloys [51] have been carried out using Embedded Atom Model (EAM) potentials.

Focusing on the He-C interaction, a tight-binding model was proposed in 2008 [52] for He in carbonaceous systems, but to the best of my knowledge it has never been implemented in LAMMPS nor used in any other MD study.

For the demonstrative purposes of this work, it has been decided to build a very rough customized potential, via the LAMMPS hybrid pair style as follows:

- for the carbon-carbon interaction the usual and aforementioned LCBOP

- for the helium-helium interaction a well known and established potential, the Beck potential [53]

- for the helium-carbon interaction a very basic LJ potential, as extensively described by Carlos and Cole [54] and exploited in several studies in its various versions.

Table 2.1. He-C customized potential cutoff values

Interaction Potential Cutoff for interaction

He-He Beck 8.0Å

He-C Lennard-Jones 6.85Å

The LJ potential has the usual form \(E(r) = 4\epsilon\left[(\sigma/r)^{12} - (\sigma/r)^{6}\right]\) for \(r < r_c\), where \(\epsilon = 1.4\cdot10^{-3}\) eV and \(\sigma = 2.74\) Å.
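As a minimal illustration, the following Python sketch evaluates this He-C pair term with the parameters above (plain truncation at the cutoff, without the energy shifting or smoothing that an MD engine may apply):

```python
import numpy as np

# He-C Lennard-Jones parameters used in this work
EPS = 1.4e-3    # eV
SIGMA = 2.74    # Angstrom
RCUT = 6.85     # Angstrom

def lj_he_c(r):
    """12-6 Lennard-Jones energy (eV) for a He-C pair at distance r (Angstrom)."""
    r = np.asarray(r, dtype=float)
    sr6 = (SIGMA / r) ** 6
    return np.where(r < RCUT, 4.0 * EPS * (sr6 ** 2 - sr6), 0.0)

# energy at the potential minimum, r = 2^(1/6) * sigma: about -1.4e-3 eV
print(lj_he_c(2.0 ** (1.0 / 6.0) * SIGMA))
```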

2.5 Timestep

The chosen timestep is 0.1 fs [55–59]. It is known that for liquid simulations a timestep of 0.01 fs is needed [45].


2.6 Quenching

An amorphous state has been generated by quenching a supercell from 10’000K to 1K.

The perfect supercell is first relaxed via a static energy-minimization procedure. The 10^4 K temperature can then be reached via various procedures; in this work two different methods were used: (i) artificially creating an ensemble of velocities at the specified temperature with a uniform random distribution (and zeroed linear momentum), or (ii) ramping the temperature up from 1 K under a canonical (nvt) ensemble with a damping parameter of 0.01 ps (100 timesteps). The structure is equilibrated in two phases: after the artificial creation of the 10'000 K temperature, the system is equilibrated under an nve ensemble with a thermostat for 10 ps and then under an npt ensemble for 50 ps. Finally, the structure is quenched with a cooling rate of 10^14 K/s.

This might seem an unrealistic rate, but computational time is an obvious limit. In fact, very high rates are employed in many MD procedures; in this work the reader will encounter a similar issue in the stress-strain simulations. In the literature one may find rates of 10^13 K/s for metals [60], or between 1 and 100 K/ps in carbides [61], obtained with a different potential with respect to the one used in this work.
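For reference, a quick arithmetic check of what the chosen cooling rate implies in terms of simulated time and number of MD steps, using the values quoted above:

```python
# quench schedule implied by the chosen cooling rate
T_START = 1.0e4       # K
T_END = 1.0           # K
RATE = 1.0e14         # K/s
TIMESTEP = 0.1e-15    # s (0.1 fs)

quench_time = (T_START - T_END) / RATE    # ~ 1e-10 s, i.e. about 100 ps
n_steps = quench_time / TIMESTEP          # ~ 1e6 MD steps

print(f"quench time: {quench_time * 1e12:.1f} ps, MD steps: {n_steps:.2e}")
```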

2.7 Defects injection

MD studies of radiation damage in materials have heavily concentrated on cascade simulation: as mentioned, the cascade topology and the TDE have been studied. Within the framework of ARIES, and in particular of WP17, it was clear that this work should not overlap with already ongoing and advanced Monte Carlo simulations. The research direction was instead pointing towards a longer temporal scale, concentrating on the modified behavior following the damage.

This is the reason why I decided to use the Frenkel pair accumulation method for this work.

2.7.1 Frenkel pair accumulation

As already stated, displacement cascades account for single irradiation events. In order to study the effect of irradiation dose or dose rate in a material two methodologies can be applied.


The first, trivial one is to simply overlap cascades, starting several cascades within the same sample, i.e. the same simulation box. This is manifestly time consuming in terms of core-hours, but it can actually provide useful insights into the clustering of point defects [62].

The second one consists in the accumulation of point defects (Frenkel pairs). This methodology has proven effective in oxide simulations [63,64] and can be applied to graphite because a single displacement cascade does not induce direct impact amorphisation [43]. In fact, it has been shown that only Frenkel pairs are created and survive; hence, as a computational shortcut, it is assumed that accumulating cascades can be simplified to accumulating point defects.

Figure 2.6. Frenkel pair accumulation algorithm

The algorithm is briefly summarized in Figure 2.6. A graphite supercell is created, initialized and equilibrated. A carbon atom is randomly picked in the cell and instantaneously moved to a new randomly-generated position. To avoid non-physical effects, and hence a simulation instability, the new position must be at least 0.5 Å away from any other atom.
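A minimal standalone sketch of this displacement step follows (numpy, hypothetical function and variable names; in the actual workflow the same logic is driven through LAMMPS commands and the subsequent relaxation is handled by the MD engine):

```python
import numpy as np

def inject_frenkel_pair(positions, box, rng, d_min=0.5, max_tries=1000):
    """Pick a random atom and move it to a random position at least d_min
    Angstrom away from every other atom (orthogonal periodic box)."""
    i = rng.integers(len(positions))
    others = np.delete(positions, i, axis=0)
    for _ in range(max_tries):
        trial = rng.random(3) * box
        d = others - trial
        d -= box * np.rint(d / box)               # minimum image convention
        if np.linalg.norm(d, axis=1).min() >= d_min:
            new_positions = positions.copy()
            new_positions[i] = trial
            return new_positions, i               # relaxation (npt) follows in the MD run
    raise RuntimeError("no admissible position found")

# toy usage: 10 atoms in a 20 Angstrom cubic box
rng = np.random.default_rng(0)
box = np.array([20.0, 20.0, 20.0])
pos = rng.random((10, 3)) * box
new_pos, moved = inject_frenkel_pair(pos, box, rng)
```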

This modification introduces local stresses, and the potential energy artificially increases around each defect: to compensate for the increased strain and temperature around each defect, the system is relaxed at constant temperature and pressure. In this work a Berendsen barostat and thermostat, chosen for fast relaxation, are used in an npt ensemble.

The use of two thermo- and barostats, namely Nosé-Hoover and Berendsen, instead of the usual coupling of Berendsen with fix nve or nvt, has already been exploited in the literature [65]. After many simulation campaigns testing different ensembles, this combination was chosen for this work because it proved to better reproduce the literature results.

The time interval between Frenkel pair injections ranges from 0.1 ps up to 1 ps, corresponding to very high dose rates, much higher than the experimental dose rate (10^-3 dpa/s). This temporal scale cannot account for long-range thermal diffusion, but it is compatible with the low defect mobility below 200 °C.

2.7.2 Intercalating H and He

A similar methodology was developed for the insertion of H and He atoms into the supercell. In a pristine or already damaged structure, a random new position is generated and a new atom is created at that position. A minimum distance from the nearest neighbor of at least 0.5 Å for H and 1 Å for He is enforced. Following each insertion, a 1 ps relaxation at constant temperature and pressure (npt + Berendsen) is performed.


2.8 Analysis tools

In this section, the analysis tools that were implemented or simply investigated are outlined and briefly discussed. Both post-processing tools and additional MD simulations are included.

2.8.1 Curvature criterion

A rough indicator of local disorder is the Coordination Number (CN): in perfect graphite (sp2) each atom has CN 3; atoms around a single vacancy have CN 2; CN is 4 for sp3 bonds in defects like the Wallace interstitials, 1 for dangling atoms and 0 for isolated ones. The cut-off radius for the CN calculation is set to 1.8 Å.
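A simple, non-optimized numpy sketch of this CN count (O(N²) distances, orthogonal periodic box assumed):

```python
import numpy as np

def coordination_numbers(positions, box, rcut=1.8):
    """Neighbour count within rcut (Angstrom) for each atom; orthogonal periodic box."""
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.rint(d / box)            # minimum image convention
    dist = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(dist, np.inf)         # ignore self-distances
    return (dist < rcut).sum(axis=1)       # CN = 3 for perfect sp2 graphite
```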

However, many types of defects in graphite cannot be discriminated on the basis of CN alone: point defects can wrinkle graphene layers [66], and some defects, like the Stone-Wales, keep sp2 bonds. Hence the curvature parameter defined by Chartier et al. [43] is implemented as follows via OVITO's Python scripting interface [67].


For each atom with a CN equal to 3:

1. list the three first neighbours

2. define the vectors between neighbours 1-2 and 1-3, namely \(\vec{n}_{12}\) and \(\vec{n}_{13}\)

3. calculate the cross product \(\vec{n}_{12} \times \vec{n}_{13}\)

4. store the result \(\vec{a}_i\) for each atom

Then, for each atom i and its j neighbours, the local curvature is calculated as the average cosine of the angle \(\theta_i\):

\[
\cos\theta_i = \frac{1}{3}\sum_{j=1}^{3}\frac{|\vec{a}_i \cdot \vec{a}_j|}{\lVert\vec{a}_i\rVert\,\lVert\vec{a}_j\rVert} \tag{2.14}
\]

and the curvature radius as:

\[
R_i = \frac{d_r}{\tan\theta_i} \tag{2.15}
\]

where \(d_r = 1.42\) Å is the average distance between two atoms.

In perfect graphite \(R_i\) is infinite and \(\theta_i\) is zero; for the smallest fullerene C36, \(\theta_i \sim 29°\) [68]. A threshold angle of 32° is chosen to discriminate between ordered and disordered graphite. Atoms not possessing 3 neighbours are assigned a fictitious value of \(\theta_i = 89°\).
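A standalone numpy sketch of the criterion follows (this is not the OVITO modifier actually used in this work; the handling of atoms whose neighbours are not themselves 3-coordinated is an assumption of the sketch):

```python
import numpy as np

def curvature_angles(positions, neighbors, box):
    """Per-atom curvature angle theta_i in degrees.  neighbors[i] is the list of
    first-neighbour indices of atom i (e.g. from the CN calculation above);
    box gives the orthogonal periodic box lengths."""
    def mic(v):                                  # minimum image convention
        return v - box * np.rint(v / box)

    n = len(positions)
    normals = np.zeros((n, 3))
    for i in range(n):
        if len(neighbors[i]) == 3:               # only 3-coordinated atoms get a local normal
            p1, p2, p3 = (positions[j] for j in neighbors[i])
            normals[i] = np.cross(mic(p2 - p1), mic(p3 - p1))

    theta = np.full(n, 89.0)                     # fictitious value for CN != 3
    for i in range(n):
        ai = normals[i]
        if len(neighbors[i]) != 3 or np.linalg.norm(ai) == 0.0:
            continue
        cosines = [abs(np.dot(ai, normals[j]))
                   / (np.linalg.norm(ai) * np.linalg.norm(normals[j]))
                   for j in neighbors[i] if np.linalg.norm(normals[j]) > 0.0]
        if cosines:                              # skip atoms with no 3-coordinated neighbours
            theta[i] = np.degrees(np.arccos(np.clip(np.mean(cosines), -1.0, 1.0)))
    return theta

# atoms with theta_i above the 32 degree threshold are flagged as disordered
```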


2.8.2 Stress-strain analysis

Many methods have been proposed and explored in the recent literature in order to accurately simulate the stress response to an imposed deformation. A rather intuitive one is to simply apply a velocity to the atoms at the ends of a sample: this typically yields an abrupt change in the sample, such as necking. An alternative is the artificial deformation of the box and its equilibration via minimization or thermo- and barostatting. Two main methods to study the stress-strain curve as a function of dpa have been implemented in this work.

For the first method the main steps are the following:

1. the pristine or damaged supercell is retrieved and set as the initial geometry

2. an initial equilibration is performed using npt at a constant temperature of 300 K and a pressure of 0 bar for 10 ps

3. the value of the initial supercell length along the deformation axis L0 is stored

4. a constant temperature of 300 K is maintained and the pressure along the non-deformed axes is set to 0 bar under fix npt

5. the simulation box is deformed with an engineering strain rate (erate) of 10^-2 ps^-1 (i.e. 10^10 s^-1) for 20 ps

LAMMPS calculates this using L(t) = L0(1 + erate · Δt), where Δt is the time elapsed since the start of the deformation, and the fix deform used here applies the strain homogeneously to the entire simulation box.
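As an illustration of how the output of this first method can be turned into a stress-strain curve, a small post-processing sketch follows; the two-column log file and its layout are hypothetical, and the sign convention assumes that the tensile stress along the pulled axis is the negative of the corresponding pressure component:

```python
import numpy as np

ERATE = 1.0e-2   # engineering strain rate, 1/ps (as in step 5)

# hypothetical post-processed log: column 0 = elapsed time (ps), column 1 = pxx (bar)
data = np.loadtxt("deform_thermo.dat")
t, pxx = data[:, 0], data[:, 1]

strain = ERATE * t            # engineering strain, L(t)/L0 - 1
stress = -pxx * 1.0e-4        # tensile stress in GPa (1 bar = 1e-4 GPa)

# crude Young's modulus estimate from the initial quasi-linear part (< 2% strain)
mask = strain < 0.02
young = np.polyfit(strain[mask], stress[mask], 1)[0]
print(f"estimated Young's modulus: {young:.1f} GPa")
```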

The second method consists of the following steps:

1. select two regions at the boundaries of the simulation box along the deformation axis

2. set fixed boundary conditions on these edge atoms by zeroing their forces

3. gradually and incrementally deform the box along the desired direction by increasing the box boundaries, performing at each increment an energy minimization that iteratively adjusts the atom coordinates with the conjugate gradient method

Lebedeva et al. [69,70] tested the accuracy of the available carbon empirical potentials in calculating the elastic properties of graphene, using an algorithm very similar to the one just described.

There, LCBOP is indicated as the most adequate potential for simulations of in-plane deformations. It has also been shown [71] that the newer implementation LCBOP-II [47], which includes torsional terms, better describes ripples in graphene.

For visualization purposes, in section 3.3 the compute stress/atom results will be used. This compute outputs a 6-element stress tensor for each atom, arising from a kinetic energy contribution and from the virial contribution due to interatomic interactions [72]. As defined in LAMMPS, the per-atom stress is the negative of the per-atom pressure tensor and is in units of pressure times volume. In order to have units of stress, one could divide this value by a per-atom volume, which however is not easy to compute nor well defined in a deformed solid or liquid. This also means that the sum over all atoms of the diagonal components of the per-atom stress tensor, divided by the volume and the dimensionality of the system, is equal to the negative of the total pressure of the system.
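A short numpy check of that statement, assuming the six per-atom components are available as an (N, 6) array in the LAMMPS ordering xx, yy, zz, xy, xz, yz and in pressure·volume units:

```python
import numpy as np

def total_pressure(per_atom_stress, volume):
    """Global pressure from compute stress/atom output (pressure*volume units).
    per_atom_stress: (N, 6) array ordered xx, yy, zz, xy, xz, yz."""
    trace_sum = per_atom_stress[:, :3].sum()   # diagonal components, summed over atoms
    return -trace_sum / (3.0 * volume)         # negative sign: stress vs pressure convention
```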

Focus on the strain rate

Computational time is the limiting factor for every MD simulation: it is very common, if not absolutely necessary, to simulate a small sample of atoms and also to use time scales (hence rates) that are not feasible in reality. This issue has already been mentioned above for the irradiation dose rate, and it affects the elastic property simulations too.

This is why in the literature these are often referred to as ultra-high strain rate (or velocity) simulations.

Hence in the literature one can find strain velocities from 0.1 up to 6 Å/ps [73,74], or rates of 10^7–10^8 s^-1 [75], 10^6–10^7 s^-1 [76], up to 10^9 s^-1 and 10^10 s^-1 [77], 0.1 ps^-1 [78] and 0.001 ps^-1 [79,80], among many other examples.


2.8.3 Thermal conductivity

A preliminary literature review has been carried out; operative methods and insights to calculate the phononic contribution to the thermal conductivity are briefly summarized here in order to enable further development.

1. manual thermostatting

The first method simply consists in setting two regions at the boundaries of the simulation box, or in the middle and at the end of a periodic box, at two different temperatures via a thermostat: the energy added to the hot region should equal the energy subtracted from the cold one and be proportional to the heat flux flowing between the regions [81,82]. The temperatures of the hot and cold regions are output with compute temp/region. The temperature profile is obtained by computing the kinetic energy per atom and averaging it with fix ave/chunk.

2a and 2b. fix heat or fix ehex

In place of the thermostats applied to each of the two regions, the fix heat [81] or fix ehex [82] commands are used to add/subtract specified amounts of energy to the two regions. In both algorithms, (non-translational) kinetic energy is constantly swapped between regions (reservoirs) to impose a heat flux onto the system. The equations of motion are therefore modified if a particle i is located inside a reservoir: the energy swap is modelled by introducing an additional thermostatting force into the equations of motion [82].

In order to solve the full equations of motion, these algorithms must be coupled with another integrator: they only integrate the additional thermostatting forces.

3. Muller-Plathe - fix thermal/conductivity

The third method employs the Muller-Plathe algorithm [83], which induces a temperature gradient by exchanging kinetic energy between two particles in different regions of the simulation box.

This approach is referred to as reverse non-equilibrium MD (rNEMD): the NEMD methodology forces a temperature gradient and measures the resulting heat flux, whereas here a heat flux is imposed and the resulting temperature gradient is analyzed.


Fix thermal/conductivity implements this algorithm and computes the total kinetic energy transferred by the swaps: dividing this by the cross-sectional area of the simulation box and by the elapsed time, one obtains the heat flux. The ratio of the heat flux to the slope of the temperature profile is proportional to the thermal conductivity.
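A sketch of this last step; the factor of two accounts for heat flowing in both directions in a fully periodic box, and whether it applies depends on the chosen geometry:

```python
def kappa_muller_plathe(e_transferred, area, elapsed_time, dT_dz, periodic=True):
    """Thermal conductivity from the Muller-Plathe swaps: heat flux divided by
    the temperature-profile slope.  All quantities in consistent units."""
    n_dir = 2.0 if periodic else 1.0           # heat flows both ways in a periodic box
    heat_flux = e_transferred / (n_dir * area * elapsed_time)
    return heat_flux / abs(dT_dz)
```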

4. Green-Kubo - compute heat/flux

The fourth method is based on the Green-Kubo (GK) formula, which relates the ensemble average of the auto-correlation of the heat flux to the thermal conductivity. The heat flux can be calculated from the fluctuations of the per-atom potential and kinetic energies and of the per-atom stress tensor in a steady-state, equilibrated simulation; this is therefore very different from the preceding non-equilibrium methods, where energy flows continuously between hot and cold regions of the simulation box.

In this case the heat flux can be output via compute heat/flux, and its auto-correlation can then be calculated with fix ave/correlate. The thermal conductivity κ is obtained from the heat flux J as

\[
\kappa = \frac{V}{k_B T^2}\int_0^{\infty}\langle J_x(0)\,J_x(t)\rangle\,dt = \frac{V}{3 k_B T^2}\int_0^{\infty}\langle \vec{J}(0)\cdot\vec{J}(t)\rangle\,dt \tag{2.16}
\]
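A standalone sketch of Eq. (2.16) applied to a sampled heat-flux time series; the unit handling (and any constant factors required when using LAMMPS outputs) is deliberately left out, and the FFT-based autocorrelation and rectangle-rule integration are implementation choices of the sketch:

```python
import numpy as np

def green_kubo_kappa(J, dt, volume, temperature, kB=8.617333e-5):
    """Thermal conductivity from a (n_steps, 3) heat-flux series J sampled every
    dt, via the isotropic form of Eq. (2.16).  kB defaults to eV/K; all other
    quantities must be expressed in consistent units."""
    n = len(J)
    acf = np.zeros(n)
    for a in range(3):                             # x, y, z components
        f = np.fft.rfft(J[:, a], 2 * n)            # zero-padded FFT
        c = np.fft.irfft(f * np.conj(f))[:n]       # linear autocorrelation
        acf += c / (n - np.arange(n))              # unbiased normalisation
    integral = acf.sum() * dt                      # rectangle-rule time integral
    return volume * integral / (3.0 * kB * temperature ** 2)
```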

Use with caution

Many limitations on the capacity of MD simulations to provide accurate thermal conductivity values have been pointed out in the recent literature. This is due to various reasons.

The computational limitations have an obvious impact on the result: the experimental values are obtained from measurements on macroscopic samples, i.e. on a non-homogeneous material. Defects, grain boundaries and impurities are well known to have an impact on such properties, and MD simulations actually offer the opportunity to investigate large and complex (hence more realistic) structures, with computational time as the limit. But one always has to take into account that most of the empirical potentials in use are obtained by fitting macroscopic (structural, mechanical) quantities, which are not necessarily connected to the third-order (or higher) derivatives of the energy with respect to the atomic displacements. This is of course reflected in a poor description of the anharmonic forces, and hence of the conductivity properties: MD is then used to gain physical insight into properties such as the dependence on temperature, system size or structure, rather than to obtain reliable absolute values [84].
