

POLITECNICO DI MILANO

FACOLTÀ DI INGEGNERIA INDUSTRIALE E DELL'INFORMAZIONE

Corso di Laurea Magistrale in Mathematical Engineering

Barkhausen noise in random Ising models

Advisor:

Prof. Paolo BISCARI

Co-Advisor:

Prof. Maurizio ZANI

Author:

Matteo METRA

Matr. 841699


Acknowledgements

Having reached the end of this journey, I would like to thank all the people who have been part of it and who made it possible for me to achieve this goal. First of all, Professor Paolo Biscari, who passed on to me his passion for statistical mechanics and then gave me the opportunity to carry out this work, launching every day the simulations I sent him. Sincere thanks also to Professor Maurizio Zani: it was only thanks to his experimental studies and his ideas that I started this project.

Thanks to my family, who have always supported me, and put up with me, during these years.

Thanks to my lifelong friends Moru, Edo, Dami, and by now Anita too, even though "lifelong" she is not. You have been an outlet during all these years; you taught me to believe in myself and, above all, you taught me the importance of reading. If it were not for you, I would not be the person I am now. Enjoy.

Thanks to Legra and Claudia: with all the afternoons I spent studying with them this past year, they have become a second family. Thanks to Elia: had he not always gotten up early to grab the only table near a power outlet in the study hall, I would probably never have been able to work on this thesis. Thanks to all my friends, Goghi, Sofia, Marta, Sacco, Ire and Cristina, for standing by me during these demanding university years.

Finally, I would like to thank all the people I have not mentioned explicitly, but who supported and helped me throughout this course of study, or even just for part of it.


Sommario

Although the Barkhausen effect is one of the main pieces of evidence for the existence of magnetic domains, the influence of temperature on its properties has been examined experimentally only recently. The purpose of this thesis is to reproduce this relationship between the properties of the Barkhausen effect and the temperature through statistical-mechanics models, with the goal of finding a link between the microscopic phenomena inside a magnet and the macroscopic effects.

To this end, we first wrote a C++ code to numerically simulate three different Ising models with disorder: the Random Field Ising Model, the Random Bonds Ising Model and the Coercive Ising Model. We then calibrated the amount of disorder introduced in each model so as to satisfy the characteristics of the Barkhausen effect. This process was carried out following two different approaches: the analysis of the magnetic hysteresis properties, and the analysis of the discontinuities in the magnetization.

In conclusion, we highlighted the existence of a novel relationship between the temperature and the disorder needed to reproduce the Barkhausen effect. This relationship allowed us to confirm that our numerical results are consistent with both the experimental and the theoretical data.

Keywords: Barkhausen effect, Ising model, Random Field Ising Model, Random Bonds Ising Model, Monte Carlo method, statistical mechanics, hysteresis loops.


Abstract

Although the Barkhausen noise is one of the main pieces of evidence for the existence of magnetic domains, the influence of temperature on its properties has been tested experimentally only recently. In this thesis, we aim at reproducing the relationship between the Barkhausen characteristics and the temperature through different statistical-mechanical models, in order to discover a link between the microscopic details and the macroscopic effects.

To achieve this target, we first created a C++ code to study, from a numerical point of view, three random Ising models: the Random Field Ising Model, the Random Bonds Ising Model and the Coercive Ising Model. Then, we calibrated every model's disorder in order to mimic the Barkhausen noise properties. To fulfill this task, we carried out two different analyses: the study of the distribution of the magnetization jumps and the study of the hysteresis properties. Finally, for each model, we obtained a novel relationship between the temperature and the disorder needed to reproduce the Barkhausen noise. This relationship confirmed that our numerical results agree with both the experimental and the theoretical data.

Keywords: Barkhausen noise, Ising Model, Random Field Ising Model, Random Bonds Ising Model, Monte Carlo Method, statistical mechanics, hysteresis loop.


Contents

1 Introduction
2 Barkhausen noise
  2.1 Introduction to the physical phenomenon
  2.2 Avalanches
  2.3 Universality of the Barkhausen noise
  2.4 Experiments
  2.5 Influence of the temperature
  2.6 Coercive Force
  2.7 Thin films
  2.8 Negative jumps
  2.9 Targets of our study
3 Ising Model
  3.1 Introduction
  3.2 Hamiltonian and Boltzmann probability
  3.3 Expected value
  3.4 Markov chains
  3.5 Metropolis algorithm
  3.6 Lattice and boundary conditions
  3.7 Metropolis algorithm applied to Ising model
  3.8 Mean magnetization
  3.9 Convergence analysis
  3.10 Cluster algorithms
  3.11 Phase transitions
  3.12 Temperature-driven phase transitions
    3.12.1 Critical temperature
  3.13 Hysteresis loops
    3.13.1 Area analysis
    3.13.2 Slope analysis
  3.14 External field-induced transitions
  3.15 Ising model from a computational point of view
    3.15.1 Random number generator
    3.15.2 Spins storing
4 Random field Ising model
  4.1 Description of the model
  4.2 Temperature-driven phase transitions
    4.2.1 Numerical data
    4.2.2 Results
  4.3 Hysteresis loops
    4.3.1 Numerical data
    4.3.2 Hysteresis loops at fixed T
    4.3.3 Hysteresis loops at fixed R
    4.3.4 Negative jumps
  4.4 External field-induced transitions
    4.4.1 Zero-temperature simulation
    4.4.2 Out-of-equilibrium simulations
    4.4.3 Numerical Data
    4.4.4 Avalanches analysis
    4.4.5 Avalanche analysis through hysteresis properties
    4.4.6 Power-law coefficient
    4.4.7 Coercive force
    4.4.8 Convergence analysis and lack of straight line
5 Random Bonds Ising Model
  5.1 Description of the model
  5.2 Temperature-driven phase transitions
    5.2.1 Numerical data
    5.2.2 Results
  5.3 Hysteresis loop
    5.3.1 Numerical data
    5.3.2 Hysteresis loops at fixed T
    5.3.3 Hysteresis loops at fixed R
  5.4 External field-induced transitions
    5.4.1 Numerical Data
    5.4.2 Avalanches analysis
    5.4.3 Avalanche analysis through hysteresis properties
    5.4.4 Power-law coefficient
    5.4.5 Coercive force
    5.4.6 Convergence analysis
6 Coercive Ising model
  6.1 Description of the model
  6.2 Absence of the Hamiltonian in coercive Ising models
  6.3 Existence of an equilibrium
  6.4 Relation between Random Field Ising Model and Coercive Ising Model
  6.5 Temperature-driven phase transitions
    6.5.1 Numerical data
    6.5.2 Absolute value of the magnetization
    6.5.3 Critical temperature
    6.5.4 Influence of R
    6.5.5 Convergence analysis
  6.6 Hysteresis loop
    6.6.1 Numerical data
    6.6.2 Hysteresis loops at fixed T
    6.6.3 Hysteresis loops at fixed R
  6.7 External field-induced transitions
    6.7.2 Numerical comparison between Random Field Ising Model and Coercive Ising Model at T = 0
    6.7.3 Avalanches analysis
    6.7.4 Power-law coefficient
    6.7.5 Coercive force
    6.7.6 Convergence analysis
7 Conclusions and comparison with physical results
  7.0.1 Hysteresis loops properties
  7.0.2 Critical disorder and power law coefficient
  7.0.3 Coercive force
  7.1 Final Remarks
  7.2 Future developments
    7.2.1 Area Analysis
    7.2.2 Cluster Algorithm
References


1 Introduction

In 1919, Barkhausen [1] discovered that ferromagnetic materials produce a jerky noise when magnetized by an external field smoothly changing in time. The noise is irregular, in contrast with the regularity of the applied field. This phenomenon took the name of "Barkhausen noise" and it was the first indirect evidence of the existence of magnetic domains. It was only in 1949, several years later, that Williams and Shockley [2] related the Barkhausen noise to the irregular fluctuations in the motion of magnetic domain walls, which generate magnetization avalanches.

Many experiments have been carried out to study this phenomenon and the interesting characteristics related to it. The striking result is that the size and lifetime distributions of Barkhausen avalanches follow power laws. This is the typical feature of a critical phenomenon, which is independent of either the microscopic or the macroscopic details, while fully respecting the relevant symmetries of the system. For this reason, we can expect to obtain the correct large-scale behavior, such as the Barkhausen effect, from simple microscopic models. Following this idea, in 1995 Sethna [3] suggested that the Barkhausen properties could be reproduced through a model of interacting spins, the Random Field Ising Model. This model was theoretically appealing and easily studied from a numerical point of view, thanks to the simplified rules for its evolution. Recently, several different models have been proposed for the Barkhausen analysis. Most of these are based on spin lattice models with different kinds of disorder, such as random bonds [4], vacancies [5], and random anisotropy [6]. All these models have a common feature: they all refer to zero temperature.

However, Puppin and Zani [7] recently investigated the temperature dependence of Barkhausen avalanches in a thin Fe film, discovering the following relation: the amplitude of Barkhausen avalanches is on average smaller at lower temperatures than at higher temperatures.

For this reason, the main target of this thesis is the investigation of the temperature dependence of the Barkhausen noise properties in three random Ising models. The first model that we took into account is the Random Field Ising Model (chapt. 4): this was the first model considered to reproduce Barkhausen properties, and it is the simplest non-trivial variation of the Ising model that considers randomness in its parameters.

The second model that we analyzed is the Random Bonds Ising Model (chapt. 5). In this model, disorder is introduced in the complex interactions between magnetic domains.

Finally, the last model that we studied is the Coercive Ising Model (chapt. 6). In this model, we consider a random friction opposing the spin flips. The lack of a theoretical background about it makes it the most innovative model that we took into consideration.

Then, after a preliminary study of every model, our main task was to calibrate the critical disorder that is necessary to reproduce the Barkhausen noise properties. We therefore carried out two different analyses: the study of the distribution of the magnetization jumps, in order to have a better approximation of the results; and the study of the hysteresis properties, to obtain results at high values of the temperature. In this way, for every model, we obtained the novel relation between critical disorder and temperature. In addition, thanks to these relations, we have been able to evaluate some macroscopic properties of ferromagnets. In particular, we analyzed the coercive force and the power-law coefficient of the Barkhausen avalanche distribution.

All the models are coherent with the experimental results, but the Random Bonds Ising Model seems to be the closest to the physics among them. On the contrary, the Coercive Ising Model results are acceptable, but worse than the other ones. In particular, the comparison between the experimental data and its results does not justify the complexity of this new model and its lack of a theoretical background. Nevertheless, this model has several peculiarities that distinguish it from any other Ising model, above all related to temperature-driven phase transitions. Consequently, we studied them in depth, because we realized that they could be useful either to reproduce other phenomena, or for the possible future creation of a hybrid model that combines both random bonds and a coercive term.


2 Barkhausen noise

In this chapter, we will describe the Barkhausen noise from a physical point of view, reporting experiments that can be found in the literature and the physical properties related to it. We will do this in order to understand the phenomenon and to be able to reproduce it through statistical-mechanical models.

2.1 Introduction to the physical phenomenon

Ferromagnetic materials possess two stable equilibrium configurations, symmetric with respect to spin flip. External magnetic fields favour one or the other configuration, so they can be used to flip a system through its bistable states. More precisely, when an increasing external field is applied to a ferromagnetic material, the magnetization changes through the nucleation and motion of domain walls. This evolution is not smooth, in contrast with the regularity of the applied field; in particular, the evolution of the domain walls generates the so-called Barkhausen noise, which may be recorded throughout the magnetization process.

Figure 1: Experimental values of the magnetization in a ferromagnetic material subject to an external field. M is the magnetization and B the external field.

This behavior can be better understood by considering the evolution of the magnetization. Let us start from a system configuration of the kind represented in Fig. 2 (A), which shows a meta-stable equilibrium position reached by the system during its magnetization. While a fraction of the magnetization is already aligned with the external field, the other fraction is still opposing the field. The meta-stability of this state is due to the presence of pinning sites, that is, defects such as non-magnetic inclusions in the volume of a ferromagnetic material, or dislocations in the crystallographic structure. The pinning sites have a high magneto-static energy, and it is energetically favourable for a domain wall to intersect more defects. Thus, by increasing the value of the applied field, the system remains in the same state, and the only effect produced is a bowing of the domain wall, as represented in Fig. 2 (B). This configuration does not change until a particular value of the applied field is reached. When the energy of the system is sufficient to overcome the potential barrier, a jump occurs which brings the system into a new meta-stable state, as indicated in Fig. 2 (C). The external field needed to produce the wall depinning is $B_c$. The amplitude of the avalanche which brings the system from the state in Fig. 2 (B) to the state in Fig. 2 (C) is ∆S, and it causes a variation in the magnetization equal to ∆M.

Figure 2: Magnetization in a ferromagnetic material subject to an external field.

From a physical point of view, this behavior depends on the material we are working with. In crystalline materials, the disorder is due to the presence of vacancies, dislocations or non-magnetic impurities. In polycrystalline materials, we should add to these defects the presence of grain boundaries and variations of the anisotropy axis in different grains. Finally, in amorphous alloys, disorder is primarily due to internal stresses and the random arrangement of atoms. The main aim of this thesis is to reproduce some or most of the observed experimental features associated with the Barkhausen noise generation in simulations of suitably modified Ising models. In view of the above physical motivations for the origin of the intermittent evolution of domain walls, we will study models with a random disorder in their parameters. These models can be represented by nonlinear systems with many degrees of freedom, so that there will often be a large number of meta-stable states, and any combination of these local configurations will be possible. Jumps are the rearrangements that occur as our system shifts from one meta-stable state to another.

2.2 Avalanches

Avalanches occur in various sizes and in a random sequence. In particular, we are generally interested in the distribution of the avalanche sizes. Indeed, the size distribution $P(\Delta M)$ of such jumps often follows a power law, $P(\Delta M) \sim (\Delta M)^{-\tau}$, where the exponent τ characterizes the universality class of the avalanche dynamics (Fig. 3).

Figure 3: Experimental probability distribution of ∆M.

This property makes the phenomenon really interesting. An important property of power laws is their scale invariance. Given a relation $P(\Delta M) \sim (\Delta M)^{-\tau}$, scaling the argument ∆M by a constant factor c causes only a proportional scaling of the function itself:

$$ P(c\,\Delta M) \sim c^{-\tau} (\Delta M)^{-\tau} \propto P(\Delta M). $$

This behavior shows that the relative likelihood of small and large events is the same, no matter how small or large the events themselves are. In addition, as for the dynamical process that generates the power law, it has been proven via renormalization group theory that systems with the same scaling exponents share the same dynamics. For this reason, we can look for phenomena similar to this one, in order to study them through our results on the Barkhausen noise.
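In practice, the exponent τ is extracted from simulated or measured avalanche sizes by fitting a straight line to the log-log histogram. The C++ sketch below shows one simple way to do this; the logarithmic binning scheme, the function name fit_tau and all parameters are our own illustrative choices, not the thesis code.

#include <algorithm>
#include <cmath>
#include <vector>

// Estimate the power-law exponent tau of P(s) ~ s^{-tau} by a least-squares
// line fit of log(density) against log(size) over logarithmic bins of the
// avalanche sizes.
double fit_tau(const std::vector<double>& sizes, int nbins = 20) {
    double smin = sizes[0], smax = sizes[0];
    for (double s : sizes) { smin = std::min(smin, s); smax = std::max(smax, s); }

    // Logarithmic binning: bin i covers [smin * r^i, smin * r^{i+1}).
    const double r = std::pow(smax / smin, 1.0 / nbins);
    std::vector<double> count(nbins, 0.0);
    for (double s : sizes) {
        const int i = std::min(nbins - 1,
            static_cast<int>(std::log(s / smin) / std::log(r)));
        count[i] += 1.0;
    }

    // Least-squares fit of y = log(density) against x = log(bin center).
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    int n = 0;
    for (int i = 0; i < nbins; ++i) {
        if (count[i] == 0.0) continue;
        const double width = smin * (std::pow(r, i + 1) - std::pow(r, i));
        const double x = std::log(smin * std::pow(r, i + 0.5));   // geometric bin center
        const double y = std::log(count[i] / width);              // normalized density
        sx += x; sy += y; sxx += x * x; sxy += x * y; ++n;
    }
    const double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    return -slope;   // P ~ s^{-tau}  =>  tau = -slope
}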

2.3 Universality of the Barkhausen noise

There are several physical systems that share the same properties as a magnetic solid under the action of an external field. Their common feature is that they respond with discrete events of a broad range of sizes, even if pushed slowly. For example, the earth responds with violent and intermittent earthquakes as two tectonic plates rub past one another. A piece of paper emits intermittent, sharp noise as it is slowly crumpled or rumpled. Similar behavior is observed in a wide variety of other phenomena, such as superconductor vortices, microfractures and charge-density waves.

In particular, the number of earthquakes of a given size follows a power law called the Gutenberg–Richter law, which is surprisingly in accordance with predictions based on simple physical models (Fig. 4).

Figure 4: Histogram of the number of earthquakes in 1995 as a function of their magnitude.

The properties of very small earthquakes probably depend in detail on the kind of dirt in the crack. The very largest earthquakes will depend on the geography of the continental plates. However, the smooth power-law behavior indicates that there is a relation between these events, independent of either the microscopic or the macroscopic details.

Once we realize that there exists a connection between magnets and earthquakes, it is important to understand how to make the study of Barkhausen noise profitable. Short-term predictions of earthquakes are probably impossible using this approach. However, since the large-scale behavior of these phenomena relies on only a few emergent material parameters (disorder and external field for our model of magnetism), long-term predictions can be made.

2.4 Experiments

Even though our work does not aim at studying the Barkhausen noise experimentally, it is our aim to test the predictions coming from our numerical simulations against the results observed in previous experiments. It is therefore important to have an idea of how the physical results shown here have been obtained, in order to be as close as possible to reproducing them.

Mainly, all physical data in this thesis have been taken from the studies by Maurizio Zani and Ezio Puppin [7]. They studied a thin Fe film, 900 Å thick, grown on MgO. The film thickness has been measured by a quartz microbalance, and the purity has been checked by XPS and Auger spectroscopy. The sample was placed at the center of a pair of magnetizing coils in Helmholtz configuration, and the magnetic signal has been revealed by a Kerr ellipsometer in longitudinal geometry. One of the most innovative aspects of this experiment has been the study of temperature, which has been controlled using a cryostat. The main results of the paper are recalled in section 2.5.

2.5 Influence of the temperature

Most of the models for Barkhausen noise refer to zero temperature simulations, whereas most experimental data on Barkhausen noise have been obtained at room temperature. Therefore, it is important to understand if and how the temperature affects this phenomenon.

In fact, for 3D magnets, the temperature effects may strongly vary from material to material. For example, for pure metals like Fe and Ni, the Barkhausen noise properties show only slight changes with temperature, while ferromagnetic alloys have a strong dependence on T [8]. On the other hand, Puppin and Zani tested the statistical properties of the Barkhausen noise in a thin Fe film at different temperatures, and they proved that there is a significant difference between T = 10 K and T = 300 K [7]. In fact, Barkhausen fluctuations are present in both cases, but the amplitude of the jumps appears to be on average smaller at the lower temperature than at the higher one. In both cases, the experimental curves can be fitted with power laws, but the critical exponents are different: at T = 10 K, $P(\Delta M) \sim (\Delta M)^{-1}$, while at T = 300 K, $P(\Delta M) \sim (\Delta M)^{-1.8}$ (Fig. 5).

Figure 5: Probability distribution of ∆M at T = 300K and at T = 10K.

We will try to reproduce this behavior with several models. In particular, in section 7.0.2, we will discuss which model reproduces it best.


2.6 Coercive Force

Several physical systems exhibit coercive forces. These forces, just like friction in mechanical systems, oppose any modification of the system configuration. For ferromagnetic materials, the coercive force $B_c$ is the intensity of the applied magnetic field necessary to induce a sign-reversal transition in the overall magnetization, after the magnetization of the sample has been driven to saturation. The coercive force $B_c$ that is necessary to invert the magnetization of a ferromagnetic material is influenced by the temperature too. In particular, Gaunt [9] created a model predicting that $B_c$ depends on $T$ according to the following law,

$$ B_c^{1/2} = B_{c0}^{1/2}\,\left(1 - C\,T^{2/3}\right), $$

where $C$ is a constant depending on the size and shape of the potential well, while $B_{c0}$ is the coercive force at zero temperature. This result has been obtained from a model of domain wall pinning by random inhomogeneities. In this model, the domain walls separating oppositely magnetized regions are treated as deformable membranes, and they may be pinned at magnetic inhomogeneities. The coercive field is the critical magnetic field $B_c$ required to release the wall or dislocation from the pins. Experimental results from [7] confirm this behavior.

Figure 6: Temperature dependence of the coercive force.
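A convenient way to compare the Gaunt law with measured data (a simple algebraic restatement on our part, not a result taken from [9]) is to linearize it:

$$ B_c^{1/2}(T) = B_{c0}^{1/2} - \left( C\, B_{c0}^{1/2} \right) T^{2/3}, $$

so that plotting $B_c^{1/2}$ against $T^{2/3}$ should yield a straight line, whose intercept gives $B_{c0}$ and whose slope gives $C$.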

We will compare the results obtained through the several models with the physical results in section 7.0.3.

2.7 Thin films

The literature on the Barkhausen noise considers mainly three-dimensional materials, while 2D thin films are much less known and the number of papers reporting a complete set of experimental results on the latter is very limited. The main reason is that thin-film phenomena are more problematic: the influence of stray fields in the direction perpendicular to the sample is non-negligible, and it creates complex patterns in the magnetized areas. Many questions remain to be solved, and this makes 2D modeling an interesting and promising field of study. In addition, we would like to reproduce the Puppin and Zani experiment [7], which was performed on a thin Fe film. We will consequently work on a 2D model.

Since we will approximate a thin magnetic film with a 2D model, it is important to know whether the physical results depend on the film thickness or not. Wiegman [10] proved that there is no strong thickness dependence, as shown in Fig. 7.

Figure 7: Values of the critical exponents τ and α for permalloy films as a function of thickness.

2.8 Negative jumps

Normally, the final magnetization after a jump is aligned with the external field. However, the presence of so-called negative Barkhausen jumps has also been observed. After these jumps, the local sample magnetization, which is initially aligned with the external field, points against the field itself. In 1930, Becker [11] predicted the existence of negative jumps. Their first experimental evidence was published by Kirensky [12] in 1951, with a study on polycrystalline Ni. One of the most interesting results was published by Puppin and Zani [13], who proved that the statistical distribution of negative jumps is still a power law, with a critical exponent similar to that of positive jumps (Fig. 8).

Puppin and Zani also supposed that negative jumps take place only in a few regions of the sample surface, where there are local defects. Once we have verified that negative jumps are present in our models too, we can check this hypothesis, since it is easier to locate local defects in a mathematical model than in a real magnet.

Nevertheless, we also have to consider that our results might be spoiled by this phenomenon. Indeed, we might misinterpret an avalanche whose area is ∆M as the sum of a bigger avalanche and a negative one. The problems that may arise from negative jumps will be explained in depth in section 4.3.4.


Figure 8: Statistical distributions for the amplitudes of positive (∆M+) and negative (∆M−) jumps.

2.9 Targets of our study

The aim of this thesis is to reproduce some, or possibly most, of these phenomena related to the Barkhausen noise, in order to possibly create a link between the micro-structural details of the material under study and the macroscopic behavior of ferromagnetic materials.

In particular, we will study this phenomenon through three spin models, because they are the most important models to reproduce magnets. With respect to what can be found in the literature, we will consider the influence of temperature and a 2D case. Temperature has almost never been taken into account in models for the Barkhausen effect: first of all, because it requires a huge computational time compared to the zero-temperature simulation (section 4.4.1); second, because until the Zani and Puppin studies [7] it was considered not relevant [8]. On the other hand, in the literature, the analysis of 2D Barkhausen noise models is still preliminary, because renormalization group theory gives important results in more than 2 dimensions; in 2D, renormalization group theory does not fit the avalanche size distributions [14]. Thus, we will try both to reproduce the Zani and Puppin experiments (which are in 2D) and to give an analysis of this important topic, which has not been deeply studied yet.

In addition, we will also try to obtain as much information as possible about some variations of the Ising model where randomness in the parameters is taken into account. We will have a particular focus on the influence of temperature when there is no external field, and on the properties of hysteresis loops.


3 Ising Model

In this chapter, we will describe the Ising model and the Metropolis algorithm necessary to solve it from a computational point of view. We will show the importance of this algorithm and the computational choices that we made to optimize it for our aim. Finally, we will present phase transitions and what they represent from a physical point of view. We will also describe how we will analyze some characteristics of the model, such as the critical temperature and the hysteresis properties.

3.1 Introduction

The Ising model is a statistical-mechanical model of a magnetic system exhibiting a para-ferromagnetic phase transition. The basic premise behind it is that the magnetism of a bulk material depends on the combined magnetic dipole moments of many atomic spins within the material. The model postulates a lattice with a magnetic dipole or spin on each site. A spin is as simple as possible: it has two possible states, either pointing up or pointing down. From a mathematical point of view, the spins are scalar variables that can take only two values, $s_i = \pm 1$. In a real magnetic material, the interaction between neighboring spins tends to induce a parallel alignment of the neighbors. The model mimics this by including a term in the Hamiltonian that is proportional to the sum of the products $s_i s_j$ of spins that are nearest neighbors in the lattice. In the simplest case, all neighboring spins interact with the same intensity, which amounts to multiplying all products $s_i s_j$ by the same interaction energy $J$. This term appears in the Hamiltonian with a negative sign, $-J \sum s_i s_j$. Thus, we will always consider $J > 0$, in order to make the spins want to line up with one another. Instead, $J < 0$ would reproduce the behavior of an anti-ferromagnetic material, as adopted to model neural networks. We also introduce an external magnetic field $B$ coupling to the spins. Its sign represents the direction of the external field, to which the spins tend to align.

There are more complex and realistic variations of the Ising model. For example, in the XY Ising model the spins may have any orientation, so that $s_i = (\sin\theta, \cos\theta)$. In the random field Ising model (which we will consider in chapt. 4), the external field depends on the position of the spin to which it is applied; in the random bonds Ising model (which we will consider in chapt. 5), the interaction $J$ between the spins is not constant.

3.2 Hamiltonian and Boltzmann probability

The energy of a configuration ν, i.e. the Hamiltonian, takes the following form:

$$ H(\nu) = -B \sum_i s_i - J \sum_{\langle i,j \rangle} s_i s_j, $$

where ν is defined by the vector of all spins $s = (s_1, s_2, \ldots, s_N)$ and $\langle i,j \rangle$ indicates that sites $i$ and $j$ are nearest neighbors. From now on, we will consider $J = 1$ to simplify the notation, unless otherwise stated.

According to one of the basic principles of statistical mechanics, for a system in thermodynamic equilibrium in the canonical ensemble (that is, at prescribed temperature and magnetic field), the probability of finding it in the state ν is given by:

$$ P(\nu, \beta) = \frac{e^{-\beta H(\nu)}}{Z}, $$

where $\beta = \frac{1}{kT}$, $k$ is the Boltzmann constant and $Z$ is the partition function of the model, expressed by:

$$ Z = \sum_\nu e^{-\beta H(\nu)}. $$

To simplify the notation, we denote by ν a possible configuration of the system, and by $\sum_\nu$ the sum over all possible configurations.

3.3 Expected value

In principle, the knowledge of P and Z as a function of the external parameters (temperature and magnetic field) can tell us anything we might want to know about the macroscopic behavior of the system.

For example, if we were interested in computing the expected value of any observable quantity Q(ν), we would have to average the quantity of interest over all states of the system, weighting each state with its own Boltzmann probability, that is:

$$ \langle Q \rangle = \frac{\sum_\nu Q(\nu)\, e^{-\beta H(\nu)}}{\sum_\nu e^{-\beta H(\nu)}}. $$

In reality, there are a total of $2^N$ states for a lattice with $N$ spins in it, which makes it really complex to compute the expected values, even if a powerful computer is used.

Therefore, we are interested in developing a numerical method enabling us to run a simulation on a large lattice in a reasonable computational time. The first idea is to consider only $M$ possible states of the system $\{\nu_1, \ldots, \nu_M\}$, and to approximate the expected value $\langle Q \rangle$ by:

$$ Q_M = \frac{\sum_{k=1}^{M} Q(\nu_k)\, e^{-\beta H(\nu_k)}}{\sum_{k=1}^{M} e^{-\beta H(\nu_k)}}. \qquad (1) $$

Unfortunately, in most numerical calculations it is possible to sample only a very small fraction of the total number of states. The sums appearing in (1) may be dominated by a small number of states, thus making the approximation of the mean value very rough, if not incorrect most of the time.

The solution that we exploit in Monte Carlo simulations is to select the $M$ states such that a particular state $\nu_k$ is chosen with probability $P(\nu_k) = \frac{e^{-\beta H(\nu_k)}}{Z}$. In this way, our estimator for $\langle Q \rangle$ becomes

$$ Q_M = \frac{1}{M} \sum_{k=1}^{M} Q(\nu_k). $$

This process is called importance sampling. The main disadvantage of this technique is that there are statistical errors in the calculation. This is due to the fact that we do not include every state in our calculation, so that there will be statistical noise in the partition function. On the other hand, if we find a way to select the $M$ states $\nu_k$ with the proper probability, we will be able to compute average values in a reasonable computational time.

3.4 Markov chains

The following step is to understand how to select states so that each one appears with its correct Boltzmann probability. This is not trivial, since one cannot randomly choose them and then accept or reject them with a probability equal to $P(\nu) = \frac{e^{-\beta H(\nu)}}{Z}$. In fact, this would be no better than randomly sampling states as in our original scheme.

The basic idea that we will exploit to address this issue is to simulate the random thermal fluctuations of the system from state to state over the course of an experiment, according to a transition probability P(ν → µ). The advantage of this technique is that if we start from a configuration ν, and we generate each new state µ from the preceding one with a transition probability equal to the ratio of the individual probabilities, the partition function cancels out. This makes the computation feasible in an acceptable time and, when the process is run for enough iterations, it eventually produces a succession of states that appear with the exact probabilities, for any choice of the initial configuration of the system. This generating engine for the set of states is called a Markov process, and the states of the system create a Markov chain. Therefore, we may use Markov theory in order to be sure that the equilibrium distribution of states will be the Boltzmann distribution.

First of all, in a true Markov process, all transition probabilities P(ν → µ) should satisfy two conditions:

• they should not vary over time;

• they should depend only on the properties of the current states ν and µ, and not on any other state the system has passed through.


Then, since we want to make the Markov process generate some state ν when handed a system in the state µ, the transition probability will also have to satisfy the following constraint:

$$ \sum_\nu P(\mu \to \nu) = 1. $$

Finally, our Markov process has to satisfy two further conditions. The first one is ergodicity, i.e. the requirement that it should be possible for our Markov process to reach any state of the system from any other state, if we run it for long enough. This is because every state ν appears with some non-zero probability in the Boltzmann distribution. Conversely, if the state ν were not reachable from another state µ, then the probability of finding ν in our Markov chain of states would be zero.

The last, and possibly most important, condition we must ensure is detailed balance, which can be expressed by:

$$ P(\nu)\,P(\nu \to \mu) = P(\mu)\,P(\mu \to \nu). $$

The importance of this condition relies on the fact that it both defines the evolution of our system and eliminates possible limit cycles in that evolution. The detailed balance condition is equivalent to:

$$ \frac{P(\nu \to \mu)}{P(\mu \to \nu)} = \frac{P(\mu)}{P(\nu)} = e^{-\beta\,(H(\mu) - H(\nu))}. $$

It is precisely this condition which ensures that the equilibrium distribution of states in our Markov process is exactly the Boltzmann distribution.

3.5 Metropolis algorithm

The requirements indicated in section 3.4 leave us a good deal of freedom in the choice of transition probabilities. In order to simplify this problem, we express them as the product of two functions:

$$ P(\nu \to \mu) = g(\nu \to \mu)\, A(\nu \to \mu), $$

where $g(\nu \to \mu)$ is the selection probability that, given an initial state ν, our algorithm generates the new target state µ. $A(\nu \to \mu)$ is the acceptance ratio: if we start from a state ν and our algorithm generates a new state µ from it, we accept that state with probability A(ν → µ). If we accept it, we move the system to the state µ, and we repeat this process to find the next state. Thus, our algorithm starts from a given initial state ν, then chooses another random state µ with probability g(ν → µ), and the system goes into µ with probability A(ν → µ), or stays in the state ν with probability 1 − A(ν → µ).

Since we will always work with single-flip algorithms, we consider only those states µ which differ from the current one by the flip of a single spin. Moreover, the selection probabilities g(ν → µ) for each of the possible states µ are all chosen to be equal to each other. Considering that there are N different spins that we could flip, and hence N possible states µ that we can reach from a given state ν, it follows that:

$$ g(\nu \to \mu) = \begin{cases} \frac{1}{N}, & \text{if } \nu \text{ and } \mu \text{ differ by one spin}, \\ 0, & \text{otherwise}. \end{cases} $$

This choice for g gives us the result indicated below:

$$ e^{-\beta\,(H(\mu) - H(\nu))} = \frac{P(\nu \to \mu)}{P(\mu \to \nu)} = \frac{g(\nu \to \mu)\,A(\nu \to \mu)}{g(\mu \to \nu)\,A(\mu \to \nu)} = \frac{N\,A(\nu \to \mu)}{N\,A(\mu \to \nu)} = \frac{A(\nu \to \mu)}{A(\mu \to \nu)}. \qquad (2) $$

Our constraints still leave us a good deal of freedom in how we choose the acceptance ratios. For example, a simple choice could be:

$$ A(\nu \to \mu) = A_0\, e^{-\frac{\beta}{2}\,(H(\mu) - H(\nu))}. $$

The constant $A_0$ cancels out in (2), so we can choose any value we like for it, except that, $A(\nu \to \mu)$ being a probability, it should never be greater than one. However, this would not be a very good choice, because the acceptance ratios are very small for almost all moves, even if we choose $A_0$ as big as it is allowed to be. The lower the acceptance ratios, the slower our algorithm will be. Therefore, when a constraint such as (2) is given, the strategy that maximizes the acceptance ratios, and therefore produces the most efficient algorithm, is to assign to the larger of the two ratios the greatest possible value, that is one, and then adjust the other ratio to satisfy the detailed balance constraint:

$$ A(\nu \to \mu) = \begin{cases} e^{-\beta\,(H(\mu) - H(\nu))}, & \text{if } H(\mu) - H(\nu) > 0, \\ 1, & \text{otherwise}. \end{cases} \qquad (3) $$

In other words, if we select a new state which has an energy lower than or equal to the current one, we will always accept the transition to that state. If it has a higher energy, then we will accept it with the probability given in (3). This is the Metropolis algorithm for the Ising model with single-spin-flip dynamics.
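As a quick consistency check (our own verification, using only the definitions above): if $H(\mu) > H(\nu)$, then (3) gives $A(\nu \to \mu) = e^{-\beta(H(\mu) - H(\nu))}$ and $A(\mu \to \nu) = 1$, so that

$$ \frac{A(\nu \to \mu)}{A(\mu \to \nu)} = e^{-\beta\,(H(\mu) - H(\nu))}, $$

which is exactly the ratio required by (2). The case $H(\mu) \leq H(\nu)$ follows by exchanging the roles of ν and µ.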

3.6 Lattice and boundary conditions

Until now, we have described a general Ising model. This model could be in any dimension and with any kind of boundary conditions. Since we want to reproduce the experiment by Zani and Puppin [7], we will work with a 2D lattice, and every spin will be bonded to four other ones, as shown in Fig. 9.

Figure 9: On the left, the bonds between spins; on the right, an example of a possible output of the Ising model.

The systems we are computationally able to simulate are composed of a limited number of spins (typically of the order of $10^4$). Boundary and edge effects can be particularly difficult to estimate and predict in simulations of this size. In order to minimize boundary effects, we will apply periodic boundary conditions to our matrix: we will consider the first spin in a row connected to the last spin in the same row and vice versa, as if they were nearest neighbors. The same holds for spins at the top and bottom of a column, as represented in Fig. 10. This ensures that all spins have the same number of neighbors and the same local geometry; all spins are equivalent and the system is completely translationally invariant.

Figure 10: Periodic boundary conditions.
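In code, these periodic boundary conditions reduce to modular index arithmetic. The small helper below is a sketch (the function name wrap is our own choice):

// Map any integer index onto [0, L-1], so that row and column neighbors
// wrap around the lattice edges (periodic boundary conditions).
inline int wrap(int i, int L) {
    return (i % L + L) % L;   // also correct for negative i
}

// Example: the four neighbors of site (l, m) on an L x L lattice are
// (wrap(l-1, L), m), (wrap(l+1, L), m), (l, wrap(m-1, L)), (l, wrap(m+1, L)).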

3.7 Metropolis algorithm applied to Ising model

Now, we can define the Metropolis algorithm to solve the Ising model. In our simulations, we will consider square lattices of side L composed of N = L × L spins s(l, m).

Algorithm 1: Ising Algorithm

Define initial state ν;
for j = 1; j < MonteCarloIterations; j++ do
    l = random integer ∈ [1, L];
    m = random integer ∈ [1, L];
    µ is the state with all spins equal to ν, except for s(l, m) = −s(l, m);
    if H(µ) < H(ν) then
        the system goes to the state µ, i.e. s(l, m) = −s(l, m);
    else
        p = random real ∈ [0, 1];
        if p < e^{−β(H(µ)−H(ν))} then
            the system goes to the state µ, i.e. s(l, m) = −s(l, m);
        end
    end
end

This algorithm can be optimized even further: we can compute directly the variation of the Hamiltonian, instead of its full value for each state µ of the system. This is much faster. For example, in our model, instead of computing all the sums in

$$ H(\nu) = -B \sum_i s_i - \sum_{\langle i,j \rangle} s_i s_j, $$

we can define $\Delta H(s_i)$ as the variation of the Hamiltonian due to the flip of the single spin $s_i$, as indicated below:

$$ \Delta H(s_i) = 2 s_i \Big( B + \sum_{\langle i,j \rangle} s_j \Big). $$

We can finally rewrite the algorithm in a compact form:

Algorithm 2: Ising Algorithm

Define initial state ν;
for j = 1; j < MonteCarloIterations; j++ do
    (l, m) = random integers ∈ [1, L] × [1, L];
    p = random real ∈ [0, 1];
    if p < e^{−β ∆H(s(l,m))} then
        s(l, m) = −s(l, m);
    end
end
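A minimal C++ translation of Algorithm 2 might look as follows. It is our own sketch, not the thesis code: the lattice size, temperature, field, seed and iteration count are arbitrary illustrative choices.

#include <cmath>
#include <random>
#include <vector>

// Sketch of Algorithm 2: single-spin-flip Metropolis on an L x L lattice
// with periodic boundary conditions (k = J = 1 units).
int main() {
    const int L = 64;                 // lattice side
    const int N = L * L;              // number of spins
    const double beta = 0.5;          // inverse temperature 1/(kT)
    const double B = 0.0;             // external field
    std::vector<int> s(N, -1);        // all spins initially down

    std::mt19937 gen(12345);
    std::uniform_int_distribution<int> site(0, L - 1);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    auto idx = [L](int l, int m) {    // periodic (wrapped) index
        return ((l + L) % L) * L + ((m + L) % L);
    };

    for (long j = 0; j < 1000000L; ++j) {
        const int l = site(gen), m = site(gen);
        const int nn = s[idx(l - 1, m)] + s[idx(l + 1, m)]
                     + s[idx(l, m - 1)] + s[idx(l, m + 1)];
        const double dH = 2.0 * s[idx(l, m)] * (B + nn);  // energy change of the flip
        if (unif(gen) < std::exp(-beta * dH))             // >= 1 whenever dH <= 0
            s[idx(l, m)] = -s[idx(l, m)];
    }
    return 0;
}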


3.8 Mean magnetization

The most important quantity that we want to study in our model is the mean magnetization. In a given state ν, it is defined as:

$$ M(\nu) = \frac{\sum_i s_i^{\nu}}{N}, $$

where $N$ is the number of spins of our lattice and $\sum_i s_i^{\nu}$ is the sum of all spins in the state ν. However, it is much better to notice that only one spin $s_j$ at a time flips in the Metropolis algorithm, so that the change of magnetization from state ν to state µ is given by:

$$ \Delta M = M(\mu) - M(\nu) = \frac{\sum_i s_i^{\mu} - \sum_i s_i^{\nu}}{N} = \frac{2 s_j^{\mu}}{N}. $$

Thus, we can compute the mean magnetization in the first configuration, and then obtain it at each Monte Carlo iteration from its variation.
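In code, this incremental update is a single line per accepted flip; the helper below is a sketch with our own names, where s_new is the value of the flipped spin after the flip:

// Incremental update of the mean magnetization after one accepted spin
// flip: Delta M = 2 * s_j / N, with s_j the spin value AFTER the flip.
inline double update_magnetization(double M, int s_new, int N) {
    return M + 2.0 * s_new / static_cast<double>(N);
}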

3.9 Convergence analysis

A basic requirement for an efficient simulation is that our Markov chain goes through a sufficiently large series of independent states, in order to be sure that the probability of a state is exactly the Boltzmann one. Since we will work with single-flip algorithms, any configuration is strongly correlated with the previous one. Therefore, several iterations are needed before the correlation is lost. In particular, the quantity that measures how long it takes for the system to get from a state to another one that is significantly different from the first is called the correlation time, $t_c$. To compute it, we also need to define the auto-correlation function reported below:

$$ \chi(t) = \int \left[ m(\tau) - \langle m \rangle \right] \left[ m(\tau + t) - \langle m \rangle \right] d\tau, $$

where $t$ and $\tau$ are indicators of the Monte Carlo iterations and do not represent real times (we are looking for an equilibrium solution). Now, let us consider a very small value of $t$. In this case, $m(\tau + t) \approx m(\tau)$, so that $\chi(t) > 0$. On the other hand, for sufficiently long times, we expect $m(\tau + t)$ and $m(\tau)$ to be uncorrelated, and $\chi$ to decay to 0. Furthermore, the auto-correlation function decays to zero with a leading asymptotic behavior of the type

$$ \chi(t) \sim e^{-t/t_c}, $$

where $t_c$ is exactly the correlation time that we are looking for. Once we have estimated $t_c$, we will consider as independent two states that are more than $3t_c$ iterations apart, and we will therefore sample our observables every at least $3t_c$ iterations.

The negative aspect of the auto-correlation function is that it is really slow to compute, and usually it is possible to compute it only at the end of a simulation. For this reason, we will consider as independent states that have $N$ iterations between them, and we will verify a posteriori that $3t_c \leq N$ on some test cases. This choice speeds up our simulations significantly, and eventually turns out to be certainly on the safe side, as our results typically produced $t_c \sim 10^2$, while $N \geq 10^3$. In addition, writing results is the part of the algorithm that requires the most computational time. Thus, it is much faster to save just a few results or important quantities, rather than saving all of them and stopping the algorithm a few iterations earlier.
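As an illustration, the auto-correlation function can be estimated from a recorded magnetization time series by replacing the integral with a discrete sum; the sketch below uses our own names and normalization choices.

#include <vector>

// Sample estimator of chi(t) from a magnetization series m[0..T-1];
// the discrete sum replaces the integral in the definition above.
double autocorrelation(const std::vector<double>& m, int t) {
    const int T = static_cast<int>(m.size());
    double mean = 0.0;
    for (double v : m) mean += v;
    mean /= T;

    double chi = 0.0;
    for (int tau = 0; tau + t < T; ++tau)
        chi += (m[tau] - mean) * (m[tau + t] - mean);
    return chi / (T - t);   // normalize by the number of pairs
}

// The correlation time t_c can then be estimated from the decay
// chi(t) ~ exp(-t / t_c), e.g. by fitting log(chi(t)) against t.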

3.10 Cluster algorithms

The Metropolis algorithm with single-spin-flip dynamics is a great Monte Carlo algorithm for simulating the Ising model when the spins are uncorrelated, so that only a few neighboring spins flip together. On the other hand, we will study temperature-driven phase transitions (section 3.12), where the correlation length diverges, and magnetically driven avalanches (section 3.14), where our aim is precisely to observe and study avalanches of large domains flipping simultaneously. It is quite difficult for the algorithm to flip over one of these large domains, because it would have to do it spin by spin, and each move has quite a high probability of being rejected because of the ferromagnetic interactions between neighboring spins.

It turns out that by using different algorithms, we can greatly reduce the correlation time. The basic idea is to look for clusters of spins with the same orientation and then flip them all in one go. Algorithms of this kind are referred to as cluster algorithms. For classical Ising models, the most effective cluster algorithms are the Wolff algorithm and the Swendsen-Wang algorithm (for a full explanation, see [15]). Cluster algorithms would seem to be a great improvement for our algorithm, but we did not create any cluster algorithm to study the Coercive Ising Model (chapt. 6). Thus, if we worked with a cluster algorithm for just two of the three models that we will consider, we would not be able to compare the results.

3.11 Phase transitions

In our simulations we will often face situations where a magnetic material abruptly changes its macroscopic behavior when external conditions, such as the temperature or the magnetic field, are varied. The points at which this happens are called critical points, and they mark a phase transition from one state to another.

There are mainly two kinds of phase transitions. First-order phase transitions involve a discontinuous behavior of the thermodynamic properties. This happens because the two states coexist at the critical point that separates them, and usually the correlation length is finite. The classical example of a first-order phase transition is the Ising model in an increasing external field B at $T < T_c$: when B is large enough, the mean magnetization reverses abruptly. From Fig. 11, it is possible to see that the magnetization is discontinuous and changes sign when B's contribution to the Hamiltonian becomes greater than the contribution of the nearest neighbors.

Figure 11: Lower branch of a hysteresis loop for T = 0.

On the other hand, second-order phase transitions have a continuous behavior, because the two phases become identical at the critical point, and the correlation length becomes infinite. Another feature of second-order phase transitions is that their critical properties fall into a limited number of universality classes, which are defined not by detailed material parameters, but by the fundamental symmetries of the system. This way, for example, we can understand the nature of the liquid-gas transition by studying the Ising model. An example of a second-order phase transition is the transition from the paramagnetic phase to the ferromagnetic one by varying the temperature, see Fig. 12. This will be explained better in section 3.12.

3.12 Temperature-driven phase transitions

The first phenomenon that we will study with the Ising model is the temperature-driven phase transition from the ferromagnetic phase (where the mean magnetization is $M \approx \pm 1$) to the paramagnetic one (where $M \approx 0$), in the absence of an external field. First of all, we know that if $T \to \infty$ the state will be one in which all spins are random and uncorrelated, so that they will be randomly oriented up or down and the net magnetization will be zero on average. Instead, if $T \to 0$, the state will be one in which all spins line up with one another, either all up or all down, because the interaction between them encourages nearby spins to point in the same direction. Consequently, the magnetization per site will be either +1 or −1. From the literature [16], we know that there will be a sharp change in the magnetization, called a phase transition, and it will take place at the Curie temperature. We call it the critical temperature, whose value in the particular case of the two-dimensional Ising model is known to be $T_c = 2.269$ [16]. Above $T_c$, the system is in the paramagnetic phase, in which the average magnetization is zero. Below it, the system is in the ferromagnetic phase and develops a spontaneous magnetization.

Figure 12: Mean magnetization as a function of the temperature in a 2D Ising model.

In particular, Onsager (see [16]) proved that the exact value of the mean magnetization in the absence of an external field is

$$ M(T) = \left[ 1 - \sinh^{-4}\!\left( \tfrac{2}{kT} \right) \right]^{1/8}, \qquad (4) $$

with $T_c$ given by:

$$ T_c = \frac{2}{k \log\left( \sqrt{2} + 1 \right)}. $$

To be strictly correct, if we consider a finite-size lattice, the mean magnetization of the Ising model below the critical temperature is still zero, since the system is equally happy to have most of its spins pointing either down or up. Thus, if we average over a long period of time, we will find that the magnetization is close to +1 half of the time and close to −1 for the other half, with occasional transitions between the two, so that the average might still be close to zero. This is not physical, but it can easily be solved by averaging the magnitude |M| of the magnetization, which will almost always be close to +1. The idea is that this would be the magnetization that we would obtain by imposing an external field B and letting it go to zero. The problem with this technique is that |M| will not be exactly zero for temperatures above the phase transition. This explains why, in our results, the magnetization above the transition temperature is still slightly greater than zero. The average magnetization in this phase is zero, but the average of M's magnitude is always greater than zero, because we are taking the average of a number which is never negative. Nevertheless, as we increase the lattice dimension, we expect this quantity to tend to zero, so that the numerical result and the exact solution will agree.

The study of temperature-driven phase transitions may seem unrelated to the study of Barkhausen noise, but it gives us valuable a priori information on the model. This will help us save computational time for simulations with B ≠ 0. For example, thanks to these simulations, we can obtain the critical temperature for the phase transition as a function of the parameters of the system. This way, we avoid running useless simulations at a temperature higher than the critical one. It is important to remember that these simulations with no external field require about 100 times less computational time than the other ones, thus they are worth doing. In addition, Ising models with no external field have been widely studied, so it is possible to verify that our code and the model agree.

3.12.1 Critical temperature

Now that we have defined phase transitions, it is interesting to study how we can obtain $T_c$ from our simulations. This is a non-trivial task, because when the systems are not infinitely large, the critical behavior is smeared out. This happens because our model's magnetization depends on the lattice size. For example, the finite-size magnetization of an Ising system close to the transition, defined on a hyper-cube with $L^d$ spins, is asymptotically given by:

$$ m_L \sim L^{-\alpha/\nu}\, \mathcal{M}\!\left[ L^{1/\nu} (T - T_c) \right]. $$

This shows that data for $m_L\, L^{\alpha/\nu}$ simulated for different system sizes should cross in the large-$L$ limit at one point, namely at $T = T_c$. Performing the finite-size scaling analysis with the magnetization is not very practical, because neither $\alpha$ nor $\nu$ is known a priori. Consequently, we will study an adimensional quantity known as the Binder cumulant, expressed by:

$$ U_4(L, T) = 1 - \frac{\langle m^4 \rangle}{3 \langle m^2 \rangle^2}. $$

As the system size goes to infinity,

$$ \lim_{L \to \infty} U_4(L, T) = \begin{cases} \frac{2}{3}, & \text{if } T \leq T_c; \\ 0, & \text{if } T > T_c. \end{cases} $$

For large enough lattice sizes, the $U_4(L, T)$ curves cross at a fixed point with value $U^*(T^*)$, where $T^*$ is exactly the critical temperature. Hence, by making such plots for several lattice sizes, we can identify the universality class from the value of $U^*$ and obtain an estimate of $T_c$ from the location of the crossing point.

Fig. 13 shows the Binder cumulant as a function of temperature for several system sizes. The vertical dashed line indicates the value of the critical temperature.

Figure 13: $U_4$ for different lattice sizes.
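Numerically, the Binder cumulant is straightforward to estimate from a set of (roughly independent) magnetization samples at fixed L and T; the sketch below uses our own names.

#include <vector>

// Binder cumulant U4 = 1 - <m^4> / (3 <m^2>^2) from magnetization samples.
double binder_cumulant(const std::vector<double>& m_samples) {
    double m2 = 0.0, m4 = 0.0;
    for (double m : m_samples) {
        const double mm = m * m;
        m2 += mm;
        m4 += mm * mm;
    }
    m2 /= m_samples.size();
    m4 /= m_samples.size();
    return 1.0 - m4 / (3.0 * m2 * m2);
}

// Repeating this for several lattice sizes L while scanning T, the curves
// U4(L, T) should cross near T = Tc.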

3.13 Hysteresis loops

Another interesting phenomenon that we will study is the formation of hysteresis loops. When a ferromagnetic material is magnetized in one direction, it will not relax back to zero magnetization when the imposed magnetizing field is removed: it must be driven back to zero by a field in the opposite direction. This property of ferromagnetic materials can be exploited to create magnetic memory, for example for audio tape recording and for the magnetic storage of data on computer disks.

This phenomenon can be reproduced and studied through the Ising model. In order to do so, we will consider a lattice starting with all spins aligned ($s_i = -1$ for all $i$) in the presence of an external magnetic field B increasing slowly from B = −∞ to B = ∞ and back, with a fixed temperature for each simulation.


3.13.1 Area analysis

The hysteresis loop area is a particularly important quantity. Indeed, since the magnetic work is precisely the area underlying the magnetization vs. magnetic field curves, the area of a hysteresis loop represents the energy dissipated due to the internal friction of the material. If we evaluate this property with the Ising model, it is interesting to notice that it depends on the temperature of the system, as we can see in Fig. 14.

Figure 14: Hysteresis loops for different temperatures.

At $T = 0$, the hysteresis loop is discontinuous and all spins flip when $B = 4$. This is in agreement with theoretical results: indeed, all the spins are initially equal to −1 and, at $T = 0$, a spin $s_i$ flips if and only if $\Delta H(s_i) < 0$, i.e.

$$ 0 > \Delta H(s_i) = 2 s_i \Big( B + \sum_{\langle i,j \rangle} s_j \Big) = -2 (B - 4). $$

This implies that the spins flip exactly when the external field exceeds 4. As the temperature increases, the hysteresis loop areas decrease, until they become almost zero at the critical temperature. It is interesting to notice the relation between hysteresis areas and temperature, as reported in Fig. 15.

Figure 15: Plot of the hysteresis areas as a function of temperature.

The area follows an exponential law, in particular the relation $\text{Area} \sim e^{-2.15\,T}$. Fig. 16 is a semi-logarithmic plot of the fitted values compared to the results, and it confirms this behavior. Nevertheless, it is important to remember that, at $T = T_c$, the hysteresis area should become zero, so that we will probably lose the exponential behavior at $T_c$. Unfortunately, we cannot study the area's behavior close to this critical value, since the hysteresis branches often overlap, so that the resulting area may be negative.

Figure 16: Semi-logarithmic plot of the hysteresis areas as a function of temperature.
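For reference, the loop area can be computed from the two recorded branches with the trapezoidal rule; the sketch below assumes both branches were sampled at the same field values (the names are our own).

#include <cstddef>
#include <vector>

// Hysteresis loop area from the upper and lower magnetization branches,
// both sampled at the same fields B[0..n-1] in ascending order:
// area = integral of (M_upper - M_lower) dB, by the trapezoidal rule.
double loop_area(const std::vector<double>& B,
                 const std::vector<double>& M_upper,
                 const std::vector<double>& M_lower) {
    double area = 0.0;
    for (std::size_t i = 1; i < B.size(); ++i) {
        const double gap_prev = M_upper[i - 1] - M_lower[i - 1];
        const double gap_curr = M_upper[i] - M_lower[i];
        area += 0.5 * (gap_prev + gap_curr) * (B[i] - B[i - 1]);
    }
    return area;   // dissipated energy per cycle, in model units
}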


3.13.2 Slope analysis

The slope of the magnetization curves provides the magnetic susceptibility:

$$ \chi = \frac{dM}{dB}. $$

Indeed, close to $M = 0$, the mean magnetization is linear with respect to $B$, and this is exactly the susceptibility relation. In the classical Ising model, we have a sharp discontinuity: $M$ is continuous up to a certain value of the external field, and then increases abruptly. This behavior will change in the random models.
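Numerically, the susceptibility can be estimated from a sampled branch M(B) with central finite differences; a minimal sketch with our own names:

#include <cstddef>
#include <vector>

// chi = dM/dB along one hysteresis branch, by central differences.
std::vector<double> susceptibility(const std::vector<double>& B,
                                   const std::vector<double>& M) {
    std::vector<double> chi(B.size(), 0.0);
    for (std::size_t i = 1; i + 1 < B.size(); ++i)
        chi[i] = (M[i + 1] - M[i - 1]) / (B[i + 1] - B[i - 1]);
    return chi;   // endpoints left at zero in this sketch
}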

3.14 External field-induced transitions

In the last kind of simulations that we will study, we will try to reproduce the Barkhausen noise experiments. As in the simulations used to study hysteresis loops, we will consider a lattice starting with all spins aligned ($s_i = -1$ for all $i$) in the presence of an external magnetic field B increasing slowly from B = −∞ to B = ∞. This kind of simulation is not interesting if we work with the classical Ising model. For example, at T = 0 this would be a first-order phase transition: the magnetization reverses abruptly as B's contribution to the Hamiltonian becomes greater than the contribution of the nearest neighbors. In addition, if we studied the probability of a Barkhausen avalanche, we would not get a power-law distribution, since basically only large jumps are found.

Experimental results from physical samples do not exhibit such an abrupt behavior: a solid material moving from one magnetic configuration to another shows no sharp transition at all. Therefore, in order to model the behavior of real materials, the classical Ising model must be supplemented by some sort of quenched randomness. In this thesis we will consider three different types of random Ising model: the random field Ising model (see chapt. 4), the random bonds Ising model (see chapt. 5) and the coercive Ising model (see chapt. 6). Our main result will be to find that disorder, and indeed a sufficient amount of disorder, is essential to generate the fluctuations which produce the intermittent Barkhausen noise. The nature of the disorder can be inferred from the microscopic structure of the material under study.

All the models we will analyze exhibit a continuous transition at a critical point with disorder $R_c$ and external field $B_c$. Indeed, at very large disorder, the spin couplings constitute a small perturbation and are not relevant compared to B and the random source. For all disorders above $R_c$, this qualitative behavior persists: all avalanches are finite, dying out after a number of spin flips that does not depend on the total number of spins N, giving rise to smooth hysteresis loops. For a very small disorder, an early spin will trigger almost the entire system to flip over. This can also be qualitatively extended to all disorders below $R_c$, where all spins in the system (even if the system size becomes large) flip in a single event, thus generating an infinite or spanning avalanche. The critical disorder $R_c$ is defined as the disorder at which an infinite avalanche may arise.

3.15 Ising model from a computational point of view

3.15.1 Random number generator

Monte Carlo simulations rely on the random choice of a real number $p \in (0, 1)$ and on the random choice of a spin. The random choice of spins does not create any problem in any programming language, since we just need to draw two random integers in $\{0, \dots, L-1\}$. On the other hand, generating random real numbers is not at all a trivial problem.

The standard library of the C programming language contains a random number generator (RNG) that produces (quasi-)random numbers uniformly distributed between 0 and 1. It can be coded as,

srand(time(NULL));                   /* seed with the current time */
RNG = rand() / (double) RAND_MAX;    /* cast needed to avoid integer division */

where the first line gives the current time as a seed for the pseudo-random number generator. This is a quick solution, but unfortunately it is not acceptable for our purposes. In fact, rand() gives us an integer, which we then divide by its maximum possible output; since $RAND\_MAX = 2^{15}$, rand() is equal to zero with a probability $p = 2^{-15} \sim 3 \cdot 10^{-5}$.

Figure 17: Histogram of rand()’s output and two enlargements. It is possible to see that the accuracy is quite low.

Therefore, all events with a probability lower than $\sim 10^{-5}$ will occur with the same frequency, regardless of their actual probability. This is not acceptable for our simulations, where we want to study events with low probability. A solution still using C could be to implement a linear congruential generator of pseudo-random numbers, as indicated below:


R = time(NULL);                  /* seed */
for (i = 1; i < MCMiter; i++) {
    R = (a * R + b) % m;         /* linear congruential step */
    RNG = (double) R / m;        /* uniform number in [0, 1) */
}

Clearly, whatever the choice of a, b and m, the sequence repeats itself after at most m steps. The sequence is therefore periodic, hence we must ensure that the total number of random numbers required by any single simulation is less than the period; usually m is chosen very large to permit this. We can always get a sequence with full period if a, b and m are properly selected. In particular, we have to ensure the following conditions (a concrete example is sketched below):

• m and b are relatively prime to each other, i.e. gcd(m, b) = 1;
• a ≡ 1 (mod p) for every prime factor p of m;
• a ≡ 1 (mod 4) if m ≡ 0 (mod 4).
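For instance, the classic "Numerical Recipes" parameters $a = 1664525$, $b = 1013904223$, $m = 2^{32}$ satisfy all three conditions: b is odd, and a − 1 is divisible by 2 (the only prime factor of m) and by 4 (since m ≡ 0 mod 4). A minimal sketch of the resulting full-period generator:

#include <cstdint>

std::uint32_t lcg_state = 12345u;    // seed

double lcg_uniform() {
    // the reduction mod 2^32 is implicit in 32-bit unsigned arithmetic
    lcg_state = (std::uint32_t)(1664525u * lcg_state + 1013904223u);
    return lcg_state / 4294967296.0; // uniform in [0, 1)
}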

Nevertheless, the best solution is probably to work with C++ and to use the boost library:

#include <boost/random/mersenne_twister.hpp>  // boost::mt19937
#include <boost/random/uniform_01.hpp>        // boost::uniform_01
using namespace boost;

float RNG(mt19937 generator) {
    static uniform_01<mt19937> dist(generator);
    return dist();
}

In this way, we are using the Mersenne Twister random number generator, which produces a pseudo-random sequence $\{x_i\}$ that is 623-distributed to 32-bit accuracy, with a full period of $P = 2^{19937} - 1$ elements. This means that, if we consider a 623-dimensional unit cube and divide each axis into $2^{32}$ pieces, then the points formed by 623 consecutive elements of a full sequence $\{x_i\}$ are equally partitioned among the resulting $2^{19936}$ small cubes. For further information you can refer to [17].
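Since C++11, the same generator is also available in the standard library, with no dependency on boost; a minimal sketch:

#include <random>

// Uniform real in [0, 1) from the standard-library Mersenne Twister.
double rng_uniform() {
    static std::mt19937 generator(std::random_device{}());
    static std::uniform_real_distribution<double> dist(0.0, 1.0);
    return dist(generator);
}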

3.15.2 Spin storage

As we previously mentioned, we will work on a 2D square lattice, and thus we have always referred to the spins as s(i, j). Nevertheless, from a computational point of view, we decided to store the spins in a single vector instead of a matrix. In this way, spins are addressed as s(i), and the function to find their nearest neighbors is,


int Nearest_neighbor(int i, int dim, short opt) {
    switch (opt) {
    case 0:                                   /* left neighbor */
        if (i % dim == 0) return i + dim - 1;
        else              return i - 1;
    case 1:                                   /* upper neighbor */
        if (i < dim) return i + dim * (dim - 1);
        else         return i - dim;
    case 2:                                   /* right neighbor */
        if ((i + 1) % dim == 0) return i - dim + 1;
        else                    return i + 1;
    case 3:                                   /* lower neighbor */
        if (i >= dim * (dim - 1)) return i - dim * (dim - 1);
        else                      return i + dim;
    }
    return -1;                                /* invalid opt */
}

where s(i) and s(i + 1) are horizontal neighbors and opt represents the direction of the next spin (a usage sketch follows the list):

• opt == 0 → left neighbor;
• opt == 1 → upper neighbor;
• opt == 2 → right neighbor;
• opt == 3 → lower neighbor.
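As a usage example, the local field entering $\Delta H(s_i)$ can be assembled by looping over the four directions; this is a fragment building on the function above, where s, i and dim are assumed to be in scope and neighbor_sum is a hypothetical variable name:

// Sum of the four nearest-neighbor spins of site i, as needed by the
// Hamiltonian variation Delta H(s_i).
int neighbor_sum = 0;
for (short opt = 0; opt < 4; ++opt)
    neighbor_sum += s[Nearest_neighbor(i, dim, opt)];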

This choice was made because we wanted to keep the code open to future improvements. In particular, thanks to it, we might move to a 3D simulation or change the topology of the lattice, for example to a triangular lattice. This change would be really simple: we would only have to define a different nearest-neighbors function, while all the other functions would remain unchanged. For example, if we wanted to work in 3D, the nearest-neighbor function would be:


int Nearest_neighbor_3D(int i, int dim, short opt) {
    switch (opt) {
    case 0:                                   /* left neighbor */
        if (i % dim == 0) return i + dim - 1;
        else              return i - 1;
    case 1:                                   /* upper neighbor */
        if (i % (dim * dim) < dim) return i + dim * (dim - 1);
        else                       return i - dim;
    case 2:                                   /* neighbor in the next plane */
        if (i >= dim * dim * (dim - 1)) return i - dim * dim * (dim - 1);
        else                            return i + dim * dim;
    case 3:                                   /* right neighbor */
        if ((i + 1) % dim == 0) return i - dim + 1;
        else                    return i + 1;
    case 4:                                   /* lower neighbor */
        if (i % (dim * dim) >= dim * (dim - 1)) return i - dim * (dim - 1);
        else                                    return i + dim;
    case 5:                                   /* neighbor in the previous plane */
        if (i < dim * dim) return i + dim * dim * (dim - 1);
        else               return i - dim * dim;
    }
    return -1;                                /* invalid opt */
}


4 Random field Ising model

In this chapter, we will analyze the Random Field Ising Model (RFIM). We will start by presenting the model and studying temperature-dependent phase transitions; in particular, we will obtain numerical results coherent with the theoretical ones. Then, we will describe the analysis of hysteresis properties, which we will carry out for all models. Finally, we will concentrate on the external-field-driven jumps between equilibrium distributions. Even though the RFIM is nowadays a widely analyzed model to reproduce the Barkhausen noise, most papers focus on zero-temperature simulations in 3D. On the contrary, we will work on the 2D case, and we will study the effect of temperature on some macroscopic properties of magnets. In particular, we will obtain an original relation between the critical disorder and temperature, and we will confirm that our results are coherent with experimental ones.

4.1 Description of the model

Imry and Ma [18] designed the Random Field Ising Model (RFIM) to study the influence of random impurities and disorder on phase transitions. The main idea behind it is that the external field does not act on every spin in the same way: spins are under the influence of small, local magnetic fields. Such a complex system can be described by a multidimensional energy presenting many local minima, which correspond to meta-stable states. In 1995, Sethna et al. [3] proposed this model as a prototype for the study of hysteresis and avalanches. In the RFIM, we add to the classical Ising Hamiltonian a random field on each spin. As a result,
$$H_f = -\sum_i B s_i - \sum_i \tilde{B}_i s_i - J \sum_{<i,j>} s_j s_i,$$

where $s_i = \pm 1$ are spins on a square lattice, $<i,j>$ refers to nearest-neighbor spins, and J is the interaction strength. We consider J = 1, unless otherwise stated. The variables $\tilde{B}_i$ are random external fields, chosen independently from some Gaussian probability distribution. For simplicity, we consider $R = \mathrm{var}(\tilde{B}_i)$ and $\mathrm{mean}(\tilde{B}_i) = 0$. We make this choice because having $\mathrm{mean}(\tilde{B}_i) = \tilde{B}_M$ is the same as having a system with $\mathrm{mean}(\tilde{B}_i) = 0$ and an effective external field $B + \tilde{B}_M$. It is important to notice that we consider the disorder as quenched, since it does not change in time while the material is magnetizing. The variation of the Hamiltonian due to the flip of the spin $s_i$ is given by:

$$\Delta H_f(s_i) = 2 s_i \Big( B + \tilde{B}_i + \sum_{<i,j>} s_j \Big).$$

The Metropolis algorithm that we will use is the same as described in section 3.7, but with this new definition of $\Delta H_f$.
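A minimal sketch of the two RFIM ingredients follows: the quenched Gaussian fields, drawn once with mean 0 and variance R (hence standard deviation sqrt(R), following the convention above), and the Metropolis acceptance test with the modified $\Delta H_f$. The function names and the nn_sum parameter are illustrative choices, not the thesis code.

#include <cmath>
#include <random>
#include <vector>

std::mt19937 gen(42);

// Draw the quenched random fields once, before the dynamics starts.
std::vector<double> make_random_fields(int N, double R) {
    std::normal_distribution<double> gauss(0.0, std::sqrt(R));
    std::vector<double> Bt(N);
    for (double& b : Bt) b = gauss(gen);
    return Bt;
}

// One Metropolis trial on spin i; nn_sum is the sum of its four
// nearest-neighbor spins (e.g. built with Nearest_neighbor above).
void metropolis_trial(std::vector<int>& s, const std::vector<double>& Bt,
                      int i, int nn_sum, double B, double T) {
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double dH = 2.0 * s[i] * (B + Bt[i] + nn_sum);   // Delta H_f
    if (dH < 0.0 || u01(gen) < std::exp(-dH / T))
        s[i] = -s[i];                                // accept the flip
}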


4.2 Temperature-driven phase transitions

We have already studied the magnetization's behavior as a function of temperature in the classical Ising model (see [16]). Unfortunately, there is no exact solution in the case of the RFIM, so we will not be able to compare our computational results with the theoretical ones.

The main theoretical result on the RFIM is that the model undergoes a phase transition from a ferromagnetic phase to a paramagnetic phase for large enough values of the randomness R. It has been proven by Aizenman and Wehr [19] that in the two-dimensional RFIM there is a critical value $R_c = 5.6$ above which discontinuities in the thermodynamic expectation values of the conjugate quantities are eliminated. The mean magnetization, at T = 0, becomes unstable to domain formation at a breakup length scale. Thus, we can check whether this property is satisfied in our results.

4.2.1 Numerical data

Simulations have been performed considering a 100 × 100 square lattice. We assumed no external field: B = 0, $\Delta T = 0.1$ and $MCMiter = 10^4 \times 100^2 = 10^8$.

4.2.2 Results

In Fig. 18 we can see the magnetization’s behavior as a function of temperature in the case of B = 0.

Figure 18: Mean magnetization as a function of T for different values of R and B = 0.

The mean magnetization decreases by increasing R. This is to be expected since, because of the presence of the Gaussian randomness, there will always be some spins for which the contribution of $\tilde{B}_i$ is negative. These spins will be more likely to flip at lower values of T, and this will make the mean magnetization lower.

We also want to check whether our results are compatible with the result by Aizenman and Wehr [19]. In Fig. 19, we see that for R = 5.6 the mean magnetization is not equal to one, even at T = 0.

Figure 19: Mean magnetization as a function of T , and zoom close to T = 0.

It is out of the scope of this thesis to check this property for larger lattice sizes, but we expect $M|_{T=0}$ to decrease with the increase in the lattice's dimension.

4.3 Hysteresis loops

4.3.1 Numerical data

As in the case of temperature-driven phase transitions, the simulations have been performed on a square lattice of side L = 100. The hysteresis loop data have been obtained by averaging over 20 runs. The temperature is kept constant during each loop, while the external field is increased continuously in steps of $\Delta B = 10^{-4}$.

Since the algorithm required a long computational time, we decided to increase the number of iterations when an avalanche starts, in order to let it develop completely. In particular, we did it according to the relation:
$$MCMiter = \begin{cases} starting\_iter, & \text{if no spin flips;} \\ starting\_iter \times iter\_multi, & \text{if at least one spin flips.} \end{cases}$$
We chose $starting\_iter = 10^4$ and $iter\_multi = 100$. This technique seems to improve the computational performance by avoiding a waste of time when no spin can flip. Since our main interest in these simulations is to reproduce the Barkhausen noise, we checked the equivalence of the two methods by comparing the histograms of $\Delta M$ in section 4.4.3.
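A sketch of this adaptive rule is shown below; flip_attempt() is a hypothetical helper performing one Metropolis trial and returning true if a spin flipped.

bool flip_attempt();   // hypothetical: one Metropolis trial, true on a flip

void run_field_step() {
    const long starting_iter = 10000;              // 10^4, as in the text
    const int  iter_multi    = 100;
    long MCMiter = starting_iter;
    bool avalanche = false;
    for (long it = 0; it < MCMiter; ++it) {
        if (flip_attempt() && !avalanche) {
            avalanche = true;                      // an avalanche has started:
            MCMiter = starting_iter * iter_multi;  // extend the run
        }
    }
}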


4.3.2 Hysteresis loops at fixed T

The first question we address is whether the randomness R has an influence on the hysteresis loop. Thus we will consider how the hysteresis loop properties vary for a fixed value of T , as a function of R.

Figure 20: Hysteresis loops at T = 0.5 for various values of R.

From Fig. 20, three quantities seem to be worth studying:

• the hysteresis area;
• the hysteresis slope;
• the midpoint position.

The hysteresis loop area decreases with the increase in R. This is reasonable, as R allows some spins to flip at a lower external field, and these can then be followed by their nearest neighbors. In Fig. 21, this behavior is evident.

