
Impact Monitoring of Near-Earth Objects: review of classical results and new tools for the optimized follow-up of imminent impactors


Academic year: 2021



Department of Physics

Master's Degree Course in Physics

Impact Monitoring of Near-Earth Objects:

review of classical results and new tools for the

optimized follow-up of imminent impactors

Supervisor:

Prof. Giacomo Tommei

Internal supervisor:

Prof. Paolo Paolicchi

Candidate:

Maddalena Mochi


Contents

Introduction

1 Near Earth Objects: detection and orbits
1.1 Definition of NEOs
1.1.1 Small solar system bodies
1.1.2 The NEO population
1.2 Observations
1.2.1 Surveys
1.2.2 The Fly-Eye telescope
1.3 Discovery
1.4 Orbit Determination
1.5 Impact Monitoring

2 Mathematical tools and algorithms
2.1 Öpik’s theory of planetary encounters
2.2 Target planes
2.3 Minimum Orbital Intersection Distance
2.3.1 Minimal distance maps and their singularities
2.3.2 Uncertainty of the MOID
2.4 Sampling of the confidence region
2.4.1 Geometric sampling
2.4.2 Monte Carlo sampling
2.5 The Admissible Region
2.6 Sampling the Admissible Region
2.6.1 Sampling of the AR boundary
2.6.2 Sampling of the AR interior
2.7 Impact monitoring algorithms
2.7.1 Classical Impact Monitoring
2.7.2 Impact Monitoring for imminent impactors
2.8 Software resources: OrbFit

3 Analysis of the Admissible Region
3.1 A new meaning for the priority list
3.2 Tool 1: MOID=0 curve
3.2.1 Test cases
3.3 Tool 2: End of Visibility
3.3.1 Test cases
3.4 Combining the results

4 Conclusions and future of Impact Monitoring
4.1 Future work


Introduction

The solar system is populated, in addition to the planets, by a wide variety of minor bodies, the majority of which are asteroids. Most of their orbits lie between those of Mars and Jupiter, forming a population named the Main Belt. However, some asteroids run on trajectories that come close to, or even intersect, the orbit of the Earth. These objects are known as Near Earth Asteroids (NEAs) or Near Earth Objects (NEOs) and may entail a risk of collision with our planet. Predicting the occurrence of such collisions as early as possible is the task of Impact Monitoring.

Dedicated algorithms are in charge of orbit determination and risk assessment for any detected NEO, but their efficiency is limited when the object has been observed only for a short period of time, as is the case with newly discovered asteroids and, more worryingly, imminent impactors: objects due to hit the Earth, detected only a few days or hours in advance of impact (50). This timespan might be too short to take any effective safety countermeasure. For this reason, an improvement of current observation capabilities is necessary and, in this frame, the NEOSTEL Fly-Eye telescope is being built (8).

The telescope, to be installed at Monte Mufara (Palermo, Italy), is expected to increase the number of NEOs and, consequently, imminent impactors detected per year (41), thus requiring an improvement of the methods and algorithms used to handle such cases.

Moreover, when few observations are available, the uncertainty associated with the orbit is large, and rapid follow-up is necessary to confirm or dismiss the possibility of an impact.

In this scenario, the thesis work focuses on the development and testing of new Impact Monitoring tools dedicated to the observers, aiming to facilitate the planning of follow-up observations of imminent impactors.


The work relies on OrbFit: a multipurpose code able to perform orbit determination and Impact Monitoring computations.

This thesis work consists of four chapters, the contents of which are briefly summarised here:

• Chapter 1 gives background information about the NEO environment, briefly reviewing the dynamical and physical characteristics of the population (2). After a description of NEO observation techniques and of the main surveys, the Fly-Eye telescope is introduced, together with some predictions about its expected contribution to the Impact Monitoring field. Basic concepts of Orbit Determination and Impact Monitoring are briefly presented.

• Chapter 2 describes in detail the existing mathematical tools used in the phases of orbit calculation and risk assessment. In particular, we start with some useful definitions and reference frames, presenting the Minimum Orbital Intersection Distance (MOID) as a measure of the distance between the orbits of a NEO and the Earth, regardless of the positions that the objects occupy on them (49). We then proceed to describe the use of the Confidence Region (CR), the set of all possible orbits statistically compatible with a given set of observations. The sampling of a one-dimensional representative of the CR, the Line Of Variations (LOV), results in the generation of a swarm of Virtual Asteroids (VAs). Each VA is then propagated in time, in search of intersections between its trajectory and the cross section of the Earth. If any are found, the corresponding VA is recorded as a Virtual Impactor (VI), associated with a certain Impact Probability (32). We then show how, in the case of imminent impactors, the LOV sampling method is inadequate (48) and the generation of VAs is better conducted over an Admissible Region (AR), by sampling the Manifold Of Variations (MOV), that is, the two-dimensional manifold in the range/range-rate plane comprising all the orbits compatible with a given set of observations (35). Finally, the chapter reviews the algorithms previously and currently employed for Impact Monitoring (30), highlighting the difference between “classical” Impact Monitoring and the imminent impactors case. The OrbFit software and its features are then presented (1).


• Chapter 3 represents the core of our work, presenting the new tools that this thesis proposes to introduce to facilitate the follow-up of newly discovered NEOs.

The first tool computes the MOID of a given object and gives its graphical representation on the Admissible Region plane, making it possible to identify at first glance the groups of VAs that might lead to a close encounter with the Earth. A fit of the points with near-zero MOID yields a MOID=0 line that, if close enough to the nominal solution, shows whether an impact is possible.

The second tool computes the remaining visibility time for a given object from an observatory of choice.

Combining the outputs of the two tools, we can select the objects for which to solicit immediate follow-up, before they collide with our planet or become unobservable and, possibly, lost with a nonzero probability of impacting at a later close approach. We illustrate our results with some examples of how the tools perform on real-life cases.

• Chapter 4 summarizes the results obtained, highlighting advantages and faults of the new tools. Finally, further development possibilities are proposed, both for our tools and for Impact Monitoring algorithms in general.


Chapter 1

Near Earth Objects: detection and orbits

1.1 Definition of NEOs

1.1.1 Small solar system bodies

According to the definition given by the International Astronomical Union (IAU) in (42), every solar system object orbiting around the Sun that is neither a planet nor a dwarf planet falls under the name of small solar system body. This definition includes (but is not limited to) comets, TNOs (Trans-Neptunian Objects) and asteroids, the last being the most populated group. Asteroids, or Minor Planets, are defined as small bodies with an inner solar system orbit, i.e. with semimajor axis smaller than that of Jupiter: a ≤ 5.203 AU. They are mainly rocky bodies, except for some rare metallic objects, with diameters ranging from 525 km (the diameter of Vesta, the largest known asteroid) to less than 10 m. The shape of the objects with diameters between 10 m and 100 km is fairly irregular, suggesting a collisional evolution that resulted in “rubble pile” structures, held together by the mutual gravitational attraction between their components, rather than monolithic asteroids. Bigger objects are instead self-gravitating and tend to be approximately spherical, while decametre-size objects are monolithic, with some degree of porosity (2).

Asteroids can be classified on the basis of their mineralogical properties, deduced from spectral analysis and comparison with meteorites recovered on the ground. We will follow the Bus-DeMeo classification criterion presented in (13), which identifies three main complexes:


• Complex S: named after its expected silicaceous composition. The spectra of objects belonging to this complex show moderate silicate absorption features and high albedo, making them quite easy to observe. A reddening effect, due to space weathering of the surfaces, is present.

• Complex C: this class is historically named after carbonaceous chondrite meteorites. Objects belonging to this class present low-albedo (of the order of 5%) surfaces, with almost featureless spectra.

• Complex X: compositionally degenerate, since objects in this class show similar, almost featureless spectra, but a wide albedo range, from a few percent to as much as 50%, thus including both the darkest and brightest surfaces of all asteroids (12).

Objects whose spectra do not qualify to belong to any of these complexes are regrouped in other, less populated classes.

Most asteroids belong to the Main Belt (MB), a dynamical structure heavily influenced by the gravitational perturbation produced by Jupiter and, in a less significant way, by Mars. The distribution of the semimajor axes in the MB, as easily seen in Figure 1.1, shows some empty areas, called Kirkwood gaps, that are related to low order mean motion resonances with the orbit of Jupiter. This means that areas corresponding to orbits whose periods stand in an n/m relation with the period of Jupiter’s orbit, n and m being small integers, tend to be depleted by gravitational effects. Resonances between the variation periods of the angular variables are responsible for long term effects and take the name of secular resonances. These, together with the 3/1 mean motion resonance with Jupiter, are held responsible for the transfer of asteroids from the Main Belt to near Earth trajectories. According to (40), this process accounts for around 80-90% of the NEO population.
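As an illustrative aside (not code from the thesis), Kepler's third law locates these resonant semimajor axes: an asteroid completing p orbits while Jupiter completes q has period (q/p) times Jupiter's, hence a = a_J (q/p)^(2/3).

```python
# Sketch: locating mean motion resonances with Jupiter via Kepler's third law.
# An asteroid completing p orbits while Jupiter completes q has period
# T = (q/p) * T_J, hence semimajor axis a = a_J * (q/p)**(2/3).
A_JUPITER = 5.203  # Jupiter's semimajor axis [AU]

def resonance_a(p: int, q: int) -> float:
    """Semimajor axis [AU] of the p/q mean motion resonance with Jupiter."""
    return A_JUPITER * (q / p) ** (2.0 / 3.0)

for p, q in [(3, 1), (5, 2), (7, 3), (2, 1)]:
    print(f"{p}/{q} resonance: a = {resonance_a(p, q):.2f} AU")
```

The 3/1 resonance falls at about 2.50 AU, matching the most prominent Kirkwood gap visible in Figure 1.1.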

A detailed comparison between the spectral characteristics of Main Belt asteroids and of NEOs was conducted in (3), providing evidence that the latter originate in different specific regions of the MB according to their type. In particular, while the MB population is dominated by the abundance of C-type objects, S-type is the most common among NEOs, suggesting that the transfer mechanism is highly efficient in the inner part of the MB, where silicaceous objects are most abundant (although an observational selection effect due to the high albedo of S-types cannot be excluded) (24).

Figure 1.1: Plot of eccentricity versus semimajor axis for all the catalogued asteroids belonging to the Main Belt (with the exclusion of a few objects close to Jupiter’s orbit). Red dots represent the cores of the asteroid families; green dots correspond to objects that have been attributed to a family; blue dots are objects with resonant or highly unstable orbits, whose Lyapunov time is short; black dots correspond to background asteroids (the most numerous), not belonging to any family nor showing resonant behaviour. This plot was generated via the AstDyS graphics tool (https://newton.spacedys.com/astdys2/Plot/).

1.1.2 The NEO population

Near Earth Objects (NEOs) are defined as minor bodies, both asteroids (NEAs) and comets (NECs), whose heliocentric orbit has perihelion distance q ≤ 1.3 AU. Based on their dynamical properties, NEAs can be classified into four categories, each taking the name of the first object discovered belonging to it:


Figure 1.2: Semimajor axis distribution of the NEO population, showing a bimodal behaviour with peaks around 1.4 AU and 2.3 AU.

• Apollo: Earth-crossing asteroids with semimajor axis greater than the Earth’s. It is estimated that 62% of the total number of NEAs are Apollos;

• Aten: Earth-crossing asteroids with semimajor axis smaller than the Earth’s;

• Amor: NEAs with an orbit comprised between those of the Earth and Mars;

• Atira: category introduced after the discovery of its first representative in 2003. These objects have orbits contained entirely within the orbit of the Earth, implying that their aphelion distance is Q < 0.983 AU.

The characteristics of each category are summarized in Table 1.1, while Figures 1.2 and 1.3 show respectively the distribution in semimajor axis and eccentricity of the entire known NEO population.

NEAs that can pose a threat to the Earth are called Potentially Hazardous Asteroids (PHAs). By definition, the minimum distance between the two osculating keplerian orbits of a PHA and the planet (namely the MOID, Minimum Orbital Intersection Distance, that we will cover extensively in the dedicated section 2.3) is less than 0.05 AU, while the absolute magnitude of the object is H ≤ 22.0. Assuming an average albedo of 14%, according to the relation

$$\text{diameter} = \frac{1.329 \cdot 10^6}{\sqrt{0.14}} \cdot 10^{-H/5}\ \text{m}$$


Name   | Properties
Aten   | a < 1.0 AU, Q > 0.983 AU
Apollo | a > 1.0 AU, q < 1.017 AU
Amor   | a > 1.0 AU, 1.017 AU < q < 1.3 AU
Atira  | a < 1.0 AU, Q < 0.983 AU

Table 1.1: Dynamical properties of NEO groups. Images of exemplary orbits are reproduced from https://cneos.jpl.nasa.gov/about/neo_groups.html
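The boundaries of Table 1.1 translate directly into a small decision function. The sketch below (a hypothetical helper, not part of the thesis software) classifies a NEA from its semimajor axis and eccentricity:

```python
def classify_nea(a: float, e: float) -> str:
    """Classify a NEA given semimajor axis a [AU] and eccentricity e,
    using the group boundaries of Table 1.1."""
    q = a * (1.0 - e)  # perihelion distance [AU]
    Q = a * (1.0 + e)  # aphelion distance [AU]
    if q > 1.3:
        return "not a NEO"  # fails the q <= 1.3 AU definition
    if a < 1.0:
        return "Atira" if Q < 0.983 else "Aten"
    return "Apollo" if q < 1.017 else "Amor"

print(classify_nea(1.078, 0.827))  # Earth-crossing with a > 1 AU -> Apollo
print(classify_nea(1.920, 0.360))  # q ~ 1.23 AU -> Amor
```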


Figure 1.3: Upper panel: plot of eccentricity versus semimajor axis. Lower panel: eccentricity distribution for the NEO population.


Figure 1.4: Discovered NEOs by size. From https://cneos.jpl.nasa.gov/stats/

given in (21), the magnitude limit also sets a minimum diameter of about 140 m, meaning that smaller objects are not considered PHAs. Figure 1.4 shows recent statistics on the size distribution of discovered NEOs.
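The relation above is easy to evaluate; this illustrative sketch recovers the ~140 m PHA size threshold from H = 22.0 and the assumed 14% albedo:

```python
import math

def diameter_m(H: float, albedo: float = 0.14) -> float:
    """Diameter [m] from absolute magnitude H, per the relation given in (21)."""
    return 1.329e6 / math.sqrt(albedo) * 10.0 ** (-H / 5.0)

print(f"{diameter_m(22.0):.0f} m")  # PHA magnitude limit -> ~141 m
```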

Since Near Earth Objects can be observed in the vicinity of our planet, they represent the main source of information about small asteroids, which would not be detected at larger distances. However, the NEO detection efficiency is still affected by a size-related selection effect, so the actual size of the population has to be estimated (18).

1.2 Observations

The process by which a NEA is discovered and its orbit determined starts with observation: the recording of the position of an object on the celestial sphere, in order to acquire astrometric information about it. Depending on the kind of telescope used for the detection, the observation can be of optical or radar type.

An optical observation gives as output the position of the object in the sky, usually in terms of right ascension (indicated with RA or α) and declination (DEC or δ). Optical observations also make it possible to measure the apparent magnitude through photometry.


For a radar observation, the signal-to-noise ratio for a target at distance r is proportional to 1/r⁴. The advantage is an increase in astrometric accuracy with respect to the optical case. The information provided by radar observations includes astrometry in terms of range and range rate with respect to the observer’s position, but also the shape, size and rotational state of the object.
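A toy illustration of the 1/r⁴ dependence (numbers are illustrative only):

```python
def relative_snr(r: float, r_ref: float = 1.0) -> float:
    """Radar signal-to-noise ratio at distance r, relative to distance r_ref."""
    return (r_ref / r) ** 4

print(relative_snr(2.0))  # doubling the distance: a 16x weaker echo -> 0.0625
```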

Observations are performed all around the world, both by professional institutions and amateur observers, and are then reported to the Minor Planet Center (MPC), responsible for the designation of minor bodies in the solar system (according to the precepts dictated by the International Astronomical Union) and for the efficient collection, computation, checking and dissemination of astrometric observations and orbits for asteroids and comets. Newly observed objects waiting to be confirmed as NEAs are posted on the NEO Confirmation Page (NEOCP) with a temporary designation until a reasonably reliable orbit becomes available (27). The page can also generate rough ephemerides for the objects in the list to facilitate the observers in their follow-up.

1.2.1 Surveys

A survey is a project aiming at collecting observations for the largest and most representative sample of objects possible. The need for a NEO survey, aimed at the construction of a complete catalogue of the Earth-threatening objects, was addressed for the first time in 1992 by NASA through a workshop establishing the Spaceguard project (37), with the goal of identifying at least 90% of all NEOs with diameter greater than 1 km by the end of 2008. An international foundation with the same name, dedicated to planetary defense from impacting small bodies, was established in 1995, the year in which Spacewatch, the first telescope network dedicated to NEOs, was built and became operational. Since then, first to achieve the Spaceguard goal, then to reduce the completeness diameter as much as possible (46), (47), numerous surveys have been conducted; their results in terms of numbers of objects discovered are shown in Figure 1.5. We only list the main ones and their peculiarities:

• Spacewatch: a project led by the Lunar and Planetary Laboratory (University of Arizona), it was the first CCD NEO survey. It originally used a single CCD, but was upgraded in 2002 with a mosaic camera to extend the field of view. A 1.8-m telescope for follow-up of fainter NEOs was added in 2001 (25).

Figure 1.5: Timeline of NEO discoveries by survey. From https://cneos.jpl.nasa.gov/stats/

• LINEAR: the Lincoln Near-Earth Asteroid Research, conducted by MIT, was the leading NEO survey from 1997 until 2004, when it was surpassed by the more advanced Catalina Sky Survey. LINEAR had a limiting magnitude of 20.5.

(http://www.ll.mit.edu/linear, archived from the original).

• Catalina Sky Survey (CSS): formerly known as the Bigelow Sky Survey, in 1999 it specialized in the search for NEOs up to visual magnitude 21.5. Today the CSS operates two survey telescopes (one in the northern and one in the southern hemisphere) with an 8.3 square degree field of view, and one remotely-operated telescope dedicated to follow-up observations. It is the most prolific NEO survey to date (https://catalina.lpl.arizona.edu/).

• NEOWISE: formerly WISE (Wide-field Infrared Survey Explorer), this NASA infrared-wavelength astronomical space telescope conducted a general purpose survey between 2009 and 2011. In 2013, the spacecraft was reactivated, renamed NEOWISE and tasked with the characterization of the NEO population.


• Pan-STARRS: the Panoramic Survey Telescope and Rapid Response System consists of two 1.8-meter diameter telescopes (the second recently became operational) located on Haleakala, Maui, Hawaii. The cameras have an approximately circular field of view, with a diameter of approximately 3 degrees (53). Pan-STARRS was designed for a general purpose survey starting in 2010, but since 2014 most of its observing time has been dedicated to the search for NEOs (52).

1.2.2 The Fly-Eye telescope

Currently under development, the NEOSTEL Fly-Eye telescope is a new instrument expected to discover NEOs up to apparent magnitude 21.5, thanks to its wide survey observing strategy. The large field of view (6.7° × 6.7°) will make it possible to scan 2/3 of the visible sky three times per night and to detect objects due to hit the Earth a few weeks or days in advance of impact. In other words, once the Fly-Eye starts operations, we expect an increase in the number of imminent impactors we will have to deal with (41). This will require an update of the currently available orbit determination methods and tools to improve the results in those cases in which the time span between discovery and possible impact is very short.
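As a rough, hedged estimate (the actual tiling and overlap strategy are not specified here), the quoted field of view implies on the order of a thousand pointings per night:

```python
# Back-of-the-envelope: pointings needed to cover 2/3 of the visible sky
# three times per night with a 6.7 x 6.7 degree field of view (overlap ignored).
VISIBLE_SKY_DEG2 = 41252.96 / 2      # one hemisphere, in square degrees
FOV_DEG2 = 6.7 * 6.7                 # ~44.9 square degrees

area_per_night = (2.0 / 3.0) * VISIBLE_SKY_DEG2 * 3  # three passes
pointings = area_per_night / FOV_DEG2
print(round(pointings))  # roughly 900-odd exposures per night
```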

The first Fly-Eye telescope will be located in the site of Monte Mufara (Palermo, Italy) and will represent the core of the NEO optical observation network that ESA aims to build in the frame of its Space Situational Awareness programme.

The telescope structure

The Fly-Eye telescope is named after its innovative core optics, consisting of a configuration of 16 distinct telescopes, the outputs of which recombine to form a large field of view image of the sky, similarly to how the eyes of a fly work. The 16 optical channels are equivalent and, although independent of each other, share a common spherical primary mirror. The modularity of the structure allows for the channels to be added progressively, with no need to modify the already operational ones. Thanks to this characteristic, the deployment of the telescope could be split in two phases, the first of which led to the building and factory testing of an already functional instrument capable of observing half of the total field of view (8).


The second phase will see the completion of the structure with the remaining eight optical channels and the installation at the selected final location.

The architecture of the telescope consists of three main structures: the primary mirror, a centre piece and the secondary structure hosting the optical channels.

The secondary structure is composed of the Central Beam Shaper (whose core part consists of a 16-facet beam splitter), the 16 optical tubes and a mechanical system for the alignment of the secondary elements and of the overall telescope structure.

A state-of-the-art CCD image-recording element is placed at the focal plane of each optical tube. To reduce the effect of thermal noise, each CCD sensor is operated in a vacuum chamber and kept at −50 °C by means of a Peltier electric cooler.

Furthermore, NEOSTEL will be provided with a low-noise equatorial mount, allowing fast repositioning and damping of every vibration prior to image acquisition.

The observing site

The location chosen for the first Fly-Eye telescope is atop Monte Mufara (1865 m a.s.l.) in Sicily, where a dedicated seeing monitoring system, provided with a meteorological station, has been installed to collect statistical information about the expected observing conditions. In the building of the dome that will host the telescope, special attention is being paid to the elimination of any temperature gradient that might affect the acquired images.

As estimated in (41) by means of a Monte Carlo simulation, the telescope is expected to detect 2.8 impactors per year (assuming 365 clear nights per year, thus clearly an overestimate). The number is expected to rise to 4.1 detections per year after the implementation of a second telescope located in the southern hemisphere.

1.3 Discovery

The name “asteroid” was first proposed in 1802 by Herschel (23) to indicate an apparently star-like object moving with respect to the background of approximately fixed stars.

Figure 1.6: Internal structure of the Fly-Eye telescope. Light pathways are shown: the blue path represents the light collected by the primary mirror, redirected to the beam splitter through the green path. Yellow paths exiting from the beam splitter go through the optical channels to the CCDs. Figure from (8).

Moving objects are often discovered by comparing several digital images of the same portion of the sky, taken in a time interval of 15 minutes to 2 hours (a method used to detect transients in general, known as blinking), singling out the objects that change their position with respect to the stars in passing from one image to the next. A collection of astrometric measurements obtained from images that could correspond to the same moving object is called a tracklet. Tracklets are then compared and linked, when possible, in order to obtain enough information to compute an orbit. We define a Too Short Arc (TSA) as an ensemble of one or more tracklets that does not contain sufficient information to calculate curvature, which is assumed to be


significant if

$$\chi^2 = \begin{pmatrix} \kappa \\ \dot{\eta} \end{pmatrix}^{T} \Gamma^{-1}(\kappa, \dot{\eta}) \begin{pmatrix} \kappa \\ \dot{\eta} \end{pmatrix} > \chi^2_{\min} \tag{1.1}$$

where $\eta = \|\vec{v}\| = \sqrt{\dot{\alpha}^2 \cos^2\delta + \dot{\delta}^2}$ is the proper motion of the object and

$$\kappa = (\delta'' \alpha' - \alpha'' \delta') \cos\delta + \alpha'(1 + \delta'^2) \sin\delta \tag{1.2}$$

is the geodesic curvature of its path, as a function of the derivatives of α and δ with respect to the arc length s, which is a parameter defined by ds/dt = η. The covariance matrix Γ is defined as

$$\Gamma = \begin{pmatrix} \mathrm{Var}(x) & \mathrm{Cov}(x,y) \\ \mathrm{Cov}(x,y) & \mathrm{Var}(y) \end{pmatrix}$$

for two continuous random variables x, y. Thus this definition of TSA depends upon the choice of the control value $\chi^2_{\min}$.
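As a sketch of how the significance test of Eq. (1.1) can be applied in practice (the 2×2 covariance and the control value below are hypothetical, and this is not the OrbFit implementation):

```python
import numpy as np

def significant_curvature(kappa: float, eta_dot: float,
                          gamma: np.ndarray, chi2_min: float = 9.0) -> bool:
    """Eq. (1.1): chi^2 of (kappa, eta_dot) weighted by their covariance gamma."""
    x = np.array([kappa, eta_dot])
    chi2 = float(x @ np.linalg.inv(gamma) @ x)
    return chi2 > chi2_min

gamma = np.diag([1e-6, 1e-6])  # hypothetical uncorrelated variances
print(significant_curvature(5e-3, 0.0, gamma))   # chi^2 = 25 -> True
print(significant_curvature(1e-3, 1e-3, gamma))  # chi^2 = 2  -> False
```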

N chronologically consecutive TSAs can then be combined to form an arc of type N if each couple of consecutive TSAs, once joined, shows significant curvature. By this definition, observations of NEOs taken on a single night often form an arc of type ≥ 2, sufficient to compute an orbit.

According to the definition given in (33), a new object is discovered when three requirements are simultaneously fulfilled for the first time:

• an arc of type ≥ 3 is available for the object,

• a least squares orbit is computed and it fits the observational data with residuals compatible with the current error model,

• enough photometric information to compute the absolute magnitude is available.

1.4 Orbit Determination

According to (32), any Orbit Determination (OD) problem has three main ingredients: a dynamical system to study, observations and an error model.


1. A dynamical system is a set of differential equations of the type:

$$\frac{d\vec{y}}{dt} = \vec{f}(\vec{y}, t, \vec{\mu}) \tag{1.3}$$

where $\vec{y} \in \mathbb{R}^p$ is the state vector and $\vec{\mu} \in \mathbb{R}^{p_1}$ contains the dynamical parameters. Due to relativistic effects, the time t might not be the same for the observer and the target object, but we assume that a conversion to a common timescale (the ephemeris time) is always possible.

The solutions of the system, i.e. the orbits, for a given set of initial conditions $\vec{y}(t_0) = \vec{y}_0$ at an epoch $t_0$, are described by the integral flow:

$$\vec{y}(t) = \Phi^{t}_{t_0}(\vec{y}_0, \vec{\mu}), \tag{1.4}$$

that maps the general solution from the initial conditions to the state at time t.

2. Observations are introduced in the problem by means of a differentiable observation function $R(\vec{y}, t, \vec{\nu})$, depending on a set of kinematical parameters $\vec{\nu} \in \mathbb{R}^{p_2}$. The composition of the observation function with the general solution of the dynamical system is the prediction function r(t), that returns a prediction for the outcome of an observation at some time $t_i$, $i = 1, \ldots, m$, with m the total number of observations. The difference between the prediction $r_i$ and the actual observation result is the residual

$$\xi_i = r_i - r(t_i). \tag{1.5}$$

3. Since every observation contains an error, residuals are never zero. The error model needs to be appropriate for the specific problem at hand, but in most cases it is assumed to follow a Gaussian distribution.

Starting from the astrometry, the simplest OD approach is to compute, by means of Gauss’ method or an analogous one, a preliminary orbit, that is, a set of six coordinates at the same epoch to be used as a first guess. Then we apply a weighted least squares method, consisting in a minimisation of the residuals, that yields as a result a nominal orbit, i.e. the solution for which the discrepancy between theory and observations is minimum.


Operatively, we can define a target function $Q(\vec{\xi})$ to minimise, proportional to the sum of squares of the residuals:

$$Q = \frac{1}{m}\,\xi^T W \xi \tag{1.6}$$

where W is the weight matrix representing the RMS values (weighted with a sufficiently accurate astrometric catalogue) and the correlations of the observation errors. The elements of W are given by the inverse of the covariance of each measure, and the matrix is diagonal if the errors are uncorrelated.
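For uncorrelated errors, Eq. (1.6) reduces to the mean of the squared normalized residuals; a minimal numerical sketch with illustrative values:

```python
import numpy as np

def target_function(residuals, sigmas):
    """Q = (1/m) xi^T W xi with W = diag(1/sigma_i^2) (uncorrelated errors)."""
    xi = np.asarray(residuals, dtype=float)
    W = np.diag(1.0 / np.asarray(sigmas, dtype=float) ** 2)
    return float(xi @ W @ xi) / len(xi)

# Three residuals with a common standard deviation (illustrative numbers).
Q = target_function([0.3, -0.1, 0.2], [0.5, 0.5, 0.5])
print(Q)
```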

Let X be a subvector of $(\vec{y}_0, \vec{\mu}, \vec{\nu})$. The minima of Q(X) are then calculated by means of an iterative procedure (usually Newton’s method or one of its variants), that reaches convergence in correspondence of a nominal solution $X^*$. In the neighbourhood of the nominal solution, we can expand the target function in linear approximation:

$$Q(X) = Q(X^*) + \Delta Q(X) \tag{1.7}$$

where $\Delta Q(X)$ is called the penalty and contains all the higher orders. We define the Confidence Region (CR) as the region in the orbital elements space that satisfies the condition:

$$m\,\Delta Q(X) \le \sigma^2 \tag{1.8}$$

where σ is an ad hoc chosen parameter. If the observations are numerous and have high precision, the orbit is well constrained: then $\Delta Q(X)$ is negligible and the linear approximation is valid. The probability for a certain portion of the confidence region to contain the actual orbit is normally distributed, with the nominal solution being the mean of the Gaussian.

When both the residuals and the CR are small, the latter can be approximated by the ellipsoid $Z_L$, centred in $X^*$:

$$Z_L(\sigma) = \left\{ \vec{X} \in \mathbb{R}^N \;\middle|\; (\vec{X} - \vec{X}^*)^T C (\vec{X} - \vec{X}^*) \le \sigma^2 \right\} \tag{1.9}$$

where C is the normal matrix, defined as the inverse of the covariance matrix Γ. This approximation is usually applicable only at a local level, because the residuals are generally not small.
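A hedged 2-D sketch of the membership test implied by Eq. (1.9), with C obtained by inverting an assumed covariance Γ (toy numbers):

```python
import numpy as np

def in_confidence_ellipsoid(X, X_star, gamma, sigma=3.0):
    """True if (X - X*)^T C (X - X*) <= sigma^2, with C = gamma^{-1}."""
    C = np.linalg.inv(gamma)  # normal matrix
    d = np.asarray(X, dtype=float) - np.asarray(X_star, dtype=float)
    return float(d @ C @ d) <= sigma ** 2

gamma = np.array([[4.0, 0.0], [0.0, 1.0]])  # assumed covariance matrix
print(in_confidence_ellipsoid([2.0, 1.0], [0.0, 0.0], gamma))  # 2.0 <= 9 -> True
print(in_confidence_ellipsoid([8.0, 0.0], [0.0, 0.0], gamma))  # 16.0 > 9 -> False
```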

A more frequent case requires the use of a nonlinear least squares approach. This is usually conducted by means of a procedure that uses differential corrections to reach convergence at the nominal solution.
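The idea behind differential corrections can be sketched with the plain Gauss-Newton step X → X + (BᵀWB)⁻¹BᵀWξ on a toy linear model; this is a simplification, not the OrbFit procedure, and B here stands for the matrix of partial derivatives of the predictions with respect to X:

```python
import numpy as np

def differential_corrections(B, W, obs, X0, iterations=5):
    """Iterate the Gauss-Newton correction for a linear prediction model B @ X."""
    X = np.asarray(X0, dtype=float)
    for _ in range(iterations):
        xi = obs - B @ X                              # residuals
        X = X + np.linalg.solve(B.T @ W @ B, B.T @ W @ xi)
    return X

# Toy "observations": a straight line y = 1 + 2 t sampled at three epochs.
t = np.array([0.0, 1.0, 2.0])
B = np.column_stack([np.ones_like(t), t])             # partials wrt (a, b)
X = differential_corrections(B, np.eye(3), np.array([1.0, 3.0, 5.0]), [0.0, 0.0])
print(X)  # -> [1. 2.]
```

For a linear model the first correction already lands on the least squares solution; in the real, nonlinear orbit determination problem B changes at every iteration.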

However, when the observational information available only amounts to a TSA, the formalism of the Confidence Region is not applicable, since either Gauss’ method fails to compute a preliminary orbit, or the differential corrections procedure does not converge to a least squares solution. In (35), Milani et al. introduced an alternative structure to the CR to be used in cases in which the orbit is poorly constrained: the so-called Admissible Region (AR). We will go through this formalism in section 2.5.

1.5 Impact Monitoring

Impact Monitoring (IM) is the set of mathematical tools and techniques used to study the possibility of an impact of a NEO with the Earth, with the aim of soliciting the observers to perform follow-up observations, providing sufficient information to confirm or dismiss the announced risk cases. An IM procedure thus needs to provide the impact date, the impact probability and the estimated impact energy (50).

“Classical” IM takes advantage of the fact that the Confidence Region is compact, thus it can be sampled with a set of orbits, namely Virtual Asteroids (VAs), each of which has a certain probability of being the real one. By propagating the VAs in time we can look for intersections with the Earth’s orbit. If any are found, the confidence region is said to contain one or more Virtual Impactors (VIs), by which we mean subsets of initial conditions leading to an impact. Automatically scanning the NEA catalogue in search of VIs in the next 100 years is the purpose of the two classical Impact Monitoring algorithms that will be described in further detail in Section 2.7: CLOMON2, operational at the University of Pisa since 1999 (now University of Pisa, SpaceDyS and ESA; http://newton.spacedys.com/neodys and http://neo.ssa.esa.int), and Sentry, operational at the NASA Jet Propulsion Laboratory (JPL) since 2002.
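The Virtual Asteroid idea can be caricatured in a few lines; everything below (the Gaussian sampling, the stand-in "propagator" and its numbers) is a hypothetical toy, not CLOMON2 or Sentry:

```python
import numpy as np

rng = np.random.default_rng(0)

def miss_distance(orbit):
    """Hypothetical stand-in for orbit propagation: returns a 'miss distance'
    in Earth radii as a simple function of the sampled parameters."""
    return abs(orbit[0] + 0.5 * orbit[1])

x_nominal = np.array([1.2, 0.2])                    # nominal solution
covariance = np.diag([0.09, 0.04])                  # its uncertainty
vas = rng.multivariate_normal(x_nominal, covariance, size=10000)  # VA swarm

impacts = sum(miss_distance(va) < 1.0 for va in vas)  # VAs that are VIs
print(f"impact probability ~ {impacts / len(vas):.2f}")
```

The fraction of VAs that hit the target distance estimates the impact probability; real systems replace the stand-in function with full numerical propagation of each orbit.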

However, in the case of a newly detected object, for which the observations only amount to a TSA, Sentry and CLOMON2 might fail. This is often the case for Imminent Impactors, which are discovered and collide with the Earth during the same apparition. To address this specific problem, a new generation of algorithms has been developed, currently implemented into three automatic systems: SCOUT (JPL), NEORANGER (University of Helsinki), and NEOScan (University of Pisa).

The main differences between classical and new generation IM algorithms will be highlighted in Section 2.7.


Chapter 2

Mathematical tools and algorithms

2.1 Öpik’s theory of planetary encounters

The first theory of planetary encounters was formulated by the Estonian astronomer Ernst Öpik, who hypothesised that the motion of a minor body could be treated as the composition of the solutions of different two-body problems (38). According to Öpik’s theory, as explained in (39), the orbit of the object can be considered elliptical and heliocentric until it enters the region in which the dynamics is dominated by the gravitational attraction of a planet. At that point the trajectory becomes a hyperbola branch with its focus at the planet, until the object leaves the sphere of influence of the planet on a new elliptic heliocentric orbit, whose initial conditions are given by the solution of the hyperbolic two-body problem.

Öpik’s theory, although constituting the basis for any model of planetary encounters, has been modified to include two aspects that were initially not taken into account:

• the original theory was only valid for cases of actual intersection between the orbits of the asteroid and of the planet, leaving out the interesting events of near misses;

• successive encounters with the same planet are not independent of each other.


Figure 2.1: Left: B-plane with respect to the asteroid’s trajectory relative to the Earth. Right: B-plane coordinates and interpretation. Figure adapted from Figure 1 in (15).

2.2 Target planes

The geometry of a planetary encounter can be described in terms of the plane, known as Target Plane (TP) or B-plane, passing through the centre of the planet and orthogonal to the incoming asymptote of the hyperbolic planetocentric orbit of the asteroid at the time of closest approach. The orbit of the NEA is then completely described by a set of elements that includes two coordinates (ξ, ζ) on the TP, two angles (θ, φ) describing the orientation of the plane, the magnitude of the escape velocity, and the time $\bar{t}$ of the encounter (see Figure 2.1).

As explained in (15), the $\vec{\zeta}$ direction is oriented opposite to the projection of the heliocentric velocity of the Earth on the TP, and is thus related to the along-track position of the NEA. This means that the coordinate ζ contains the information about how early or late the asteroid is with respect to the encounter with the planet.

The $\vec{\xi}$ direction is defined as $\vec{\xi} = \vec{u} \times \vec{\zeta}/|\vec{u}|$, and its modulus $|\vec{\xi}|$ approximates the MOID ($\vec{u}$ is the planetocentric velocity, directed along the incoming asymptote). A schematic representation of the reference frame is reported in Figure 2.1.
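As a concrete illustration, the construction of the (ξ, ζ) directions from the incoming asymptote and the Earth's heliocentric velocity can be sketched as follows. This is a pure-Python sketch; the function name and the use of 3-tuples for vectors are our own choices, and it assumes the Earth's velocity is not parallel to $\vec{u}$.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def norm(a): return math.sqrt(dot(a, a))

def bplane_frame(u, v_earth):
    """Unit vectors (xi_hat, zeta_hat) of the B-plane frame.

    u       : planetocentric velocity along the incoming asymptote
    v_earth : heliocentric velocity of the Earth (must not be parallel to u)
    The TP is the plane through the planet's centre orthogonal to u.
    """
    u_hat = tuple(x / norm(u) for x in u)
    # component of the Earth's heliocentric velocity lying on the TP
    v_par = dot(v_earth, u_hat)
    v_proj = tuple(v - v_par * uh for v, uh in zip(v_earth, u_hat))
    # zeta is oriented opposite to that projection (along-track direction)
    zeta_hat = tuple(-v / norm(v_proj) for v in v_proj)
    # xi = u x zeta / |u| completes the orthonormal frame
    xi_hat = cross(u_hat, zeta_hat)
    return xi_hat, zeta_hat
```

By construction the two returned vectors are orthonormal and both orthogonal to $\vec{u}$.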


The cross section of the planet on the TP is represented by a circle of radius

$$R_{cross} = R_p \sqrt{1 + \frac{2GM}{R_p u^2}}, \qquad (2.1)$$

greater than the actual radius of the planet by a factor that accounts for gravitational focusing. The asymptotic velocity u of the NEA for $t \to -\infty$ is computed, as in (16), from the conservation of energy:

$$E = \frac{v^2}{2} - \frac{\mu}{r} = \frac{u^2}{2} = -\frac{\mu}{2a} \implies u = \sqrt{\frac{\mu}{-a}}. \qquad (2.2)$$

In the frame of Öpik’s theory, the encounter can be modelled as a planetocentric two-body scattering event, whose impact parameter B is given by the modulus of the vector that extends from the centre of the planet to the intersection of the incoming asymptote with the TP. A schematic representation of the TP is given in Figure 2.2.

As an alternative to the TP, a planetary encounter can also be described by means of the Modified Target Plane (MTP), passing through the centre of the planet and orthogonal to the velocity v of the NEA at the time of closest approach. However, the TP offers an advantage over the MTP: gravitational focusing introduces a nonlinear deformation in the vicinity of an impact, so the direction of v can vary greatly within a small uncertainty range, and it might not be the same for all of the VAs belonging to the same object. The direction of u, on the other hand, is unaffected by the encounter, since the incoming asymptote does not change, which allows all VAs to be represented on the same TP. The MTP is used for the imminent impactor problem, where gravitational focusing does not significantly affect the VA orbits.

2.3 Minimum Orbital Intersection Distance

Being the minimum Euclidean distance d between any two points on two different orbits, the Minimum Orbital Intersection Distance (MOID) can be used as a warning for possible collisions between NEOs and the Earth. A large MOID indicates that the asteroid is not going to impact the Earth in the near future, while a small one does not necessarily imply a collision, since this quantity does not consider the actual positions occupied by the bodies on their respective orbits: even if the orbits cross, the encounter might not happen if the timing is not right. Still, objects with a small MOID should be carefully followed to confirm or exclude the possibility of a collision in subsequent returns, especially when the orbit is known with large uncertainties.

Figure 2.2: Target Plane for a generic planetary encounter, modelled as a scattering problem in a central field. The distance B between the centre of the planet and the intersection between the TP and the incoming asymptote of the hyperbolic orbit is the impact parameter of the encounter.

The MOID is usually calculated by finding the absolute minimum among the stationary points of d², which is squared so as to be smooth even when d = 0. The maximum number of stationary points has been proven in (19) to be 16 if both orbits are elliptical and 12 if one orbit is circular. In (20) the study has been extended to parabolic and hyperbolic orbits, also providing a large number of numerical tests. The maximum number of critical points found within these simulations is 12, of which at most four are local minima (although no real known asteroid had been observed to have four minimum points until the work of Šegan et al. in 2011, (44)).


Over the years, various methods have been proposed for the calculation of the MOID, falling into two broad categories: analytical and numerical. The most widely accepted analytical method, currently the standard tool for MOID computation, is the one proposed by Gronchi in (19). Given the need for a faster method to be applied in massive computer simulations, the most recent works are instead numerical and tend to sacrifice accuracy in order to achieve faster runtimes, as exemplified in (54) and (22). Still, each newly proposed method is usually tested for reliability against Gronchi’s method, which is the one implemented in the software OrbFit (see Section 2.8 for a description of the software).
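As a toy illustration of the numerical approach, the MOID of two bounded orbits can be estimated by brute force: scan the two true anomalies on a grid, then shrink the grid around the best pair. This sketch is our own and is not Gronchi's method; it trades both accuracy guarantees and speed for simplicity, and can miss the absolute minimum if the coarse grid is too sparse.

```python
import math

def kepler_point(a, e, inc, Omega, omega, f):
    """Heliocentric position on a Keplerian ellipse at true anomaly f (rad)."""
    r = a * (1.0 - e * e) / (1.0 + e * math.cos(f))
    x_p, y_p = r * math.cos(f), r * math.sin(f)   # perifocal coordinates
    cO, sO = math.cos(Omega), math.sin(Omega)
    co, so = math.cos(omega), math.sin(omega)
    ci, si = math.cos(inc), math.sin(inc)
    # rotation Rz(Omega) Rx(inc) Rz(omega) applied to the perifocal position
    x = (cO*co - sO*so*ci) * x_p + (-cO*so - sO*co*ci) * y_p
    y = (sO*co + cO*so*ci) * x_p + (-sO*so + cO*co*ci) * y_p
    z = si*so * x_p + si*co * y_p
    return (x, y, z)

def moid_bruteforce(el1, el2, n=180, refinements=30):
    """Coarse grid on the two true anomalies, then a shrinking local search."""
    def dist2(g1, g2):
        p1, p2 = kepler_point(*el1, g1), kepler_point(*el2, g2)
        return sum((u - v) ** 2 for u, v in zip(p1, p2))

    d2, f1, f2 = min((dist2(2*math.pi*i/n, 2*math.pi*j/n),
                      2*math.pi*i/n, 2*math.pi*j/n)
                     for i in range(n) for j in range(n))
    step = 2 * math.pi / n
    for _ in range(refinements):          # halve the step, keep the best node
        step *= 0.5
        d2, f1, f2 = min((dist2(g1, g2), g1, g2)
                         for g1 in (f1 - step, f1, f1 + step)
                         for g2 in (f2 - step, f2, f2 + step))
    return math.sqrt(d2)
```

For two coplanar circular orbits of radii 1 and 2 AU this returns 1 AU, and for a coplanar e = 0.5, a = 1 ellipse against the 2 AU circle it returns the aphelion gap of 0.5 AU.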

Following (49) and (32), we will now go through the main steps of the computation of the MOID and its uncertainty, starting from the definition of a reference frame and some useful minimal distance maps.

2.3.1 Minimal distance maps and their singularities

Let $\mathcal{E} = (E, E_\oplus)$ be a set of 10 orbital elements, composed of two subsets of 5 elements each, describing the geometric configuration of the orbits of a NEO and of the Earth respectively. In addition, we define the vector $V = (v, v_\oplus)$, consisting of two parameters along the orbits (e.g. the true anomalies).

The Keplerian distance function is then defined as:

$$V \mapsto d(\mathcal{E}, V) = \sqrt{\langle X - X_\oplus,\, X - X_\oplus \rangle} \in \mathbb{R}^+, \qquad (2.3)$$

where X and $X_\oplus$ are two sets of Cartesian coordinates describing the positions of the two bodies in a reference frame centred in the common focus. For each set $\mathcal{E}$, we consider the vector $\vec{v}_h(\mathcal{E}) = (v^{(h)}, v^{(h)}_\oplus)$ of the minimum points of Equation 2.3 and, assuming $E_\oplus$ as fixed parameters, we define the local minimal distance map

$$d_h(\mathcal{E}) = d(\mathcal{E}, \vec{v}_h(\mathcal{E}))$$

and the absolute minimum map

$$d_{min}(\mathcal{E}) = \min_h d_h(\mathcal{E}),$$

returning the orbit distance.

The maps $d_h$ and $d_{min}$ show three types of singularities:

1. they are not differentiable where they vanish;

2. the absolute minimum point can be non-univocally defined: in this case, two local minima corresponding to the same value of $d(\mathcal{E}, \cdot)$ can exchange their role as absolute minimum, and $d_{min}$ can lose its regularity even without vanishing;

3. the definition of $d_h$ may become ambiguous when the Hessian matrix of $d^2(\mathcal{E}, \cdot)$ is degenerate and a bifurcation occurs. The map $d_{min}$ is immune to this kind of singularity because, in case of bifurcation, by definition only the branch corresponding to the lowest value of the distance is chosen and the ambiguity is removed.

Following the procedure described in (49), the maps can be regularized by means of a suitable cut-off of their definition domain, changing the sign of the maps on a subset of the smaller resulting domain and finally extending the result by continuity to a wider domain that includes all the orbit crossings, except for those configurations where the Hessian matrix is degenerate at the crossing point.

2.3.2 Uncertainty of the MOID

The importance of the regularization of $d_h$ and $d_{min}$ becomes evident as soon as we take into consideration the uncertainty that affects our calculations.

The uncertainty in the orbit determination also affects the position of the minima of the Keplerian distance function and, consequently, those of the minimal distance maps. However, due to the presence of the aforementioned singularities, the covariance of these maps cannot be computed directly by linear propagation of the covariance matrix $\Gamma_{E^*}$ associated to the nominal orbit $E^*$:

$$\Gamma_{d_{min}}(E^*) = \left[\frac{\partial d_{min}}{\partial E}(E^*)\right] \Gamma_{E^*} \left[\frac{\partial d_{min}}{\partial E}(E^*)\right]^T, \qquad (2.4)$$


because the partial derivatives $\partial d_{min}/\partial E$ do not exist when $d_{min}(E^*) = 0$. The use of the regularized map $\tilde{d}_{min}$ solves the problem, allowing for the computation of the covariance as

$$\Gamma_{\tilde{d}_{min}}(E^*) = \left[\frac{\partial \tilde{d}_{min}}{\partial E}(E^*)\right] \Gamma_{E^*} \left[\frac{\partial \tilde{d}_{min}}{\partial E}(E^*)\right]^T. \qquad (2.5)$$

This is valid assuming that $\tilde{d}_{min}$ is a Gaussian random variable, which is a good approximation only provided that the uncertainty on E is small.
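Eq. (2.5) is a standard linear covariance propagation for a scalar map; the sketch below illustrates it generically, approximating the gradient with central finite differences. The function name and the idea of passing the (regularized) distance map as a callable `fun` are our own illustrative choices.

```python
def propagate_covariance(fun, E, Gamma, h=1e-6):
    """Linear propagation Gamma_d = (dd/dE) Gamma (dd/dE)^T for a scalar map.

    fun   : map E -> float, assumed differentiable at E (e.g. a regularized
            minimal distance map)
    E     : nominal elements (list of floats)
    Gamma : covariance matrix of E (list of lists)
    Returns the propagated variance of fun(E).
    """
    n = len(E)
    # central finite differences for the gradient of fun at the nominal point
    grad = []
    for i in range(n):
        Ep, Em = list(E), list(E)
        Ep[i] += h
        Em[i] -= h
        grad.append((fun(Ep) - fun(Em)) / (2.0 * h))
    # scalar output: Gamma_d reduces to g^T Gamma g
    return sum(grad[i] * Gamma[i][j] * grad[j]
               for i in range(n) for j in range(n))
```

For a linear map the result is exact up to rounding, which makes the formula easy to check by hand.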

2.4 Sampling of the confidence region

As previously said, for a newly discovered asteroid only a few observations are available, thus the exact orbit of the object is not determined: there is a range of possible orbits compatible with the available data, forming the Confidence Region (CR) that surrounds the nominal orbit in the orbital elements space. We will now go through the most used methods for sampling the CR, generating a set of VAs.

2.4.1 Geometric sampling

Geometric sampling methods have been developed to keep computational complexity to a minimum, so that the algorithms can run in a short time and analyse a large number of objects within the same day. They provide a set of VAs that, being obtained by sampling the intersection of the CR with a differentiable manifold, has a geometric structure (32).

Line Of Variations

The first geometric method to be introduced was the Line Of Variations (LOV) method, conceived in 1999 (31). The idea behind it is that the VAs used to sample the CR are not independent, but uniformly spaced points of a geometrical object in the space of the orbital elements, over which interpolation is meaningful. If the observation uncertainty is small, then the uncertainty of the orbit is dominated by the error on the along-track coordinate (usually the mean anomaly) and is stretched along the direction corresponding to the greatest eigenvalue of the covariance matrix Γ. Thus the CR can be approximated by the Line Of Variations: a differentiable curve, parametrized by σ, in the six-dimensional space, constituting a sort of “spine” of the CR.

The LOV allows for a one-dimensional sampling with points that are either equally spaced in the parameter σ, or separated by intervals of constant probability

$$P([\sigma_i, \sigma_{i+1}]) = \int_{\sigma_i}^{\sigma_{i+1}} p(\sigma)\, d\sigma,$$

where $p(\sigma) = \frac{1}{\sqrt{2\pi}} e^{-\sigma^2/2}$ is the probability density on the LOV. The second choice allows for a denser sampling around the nominal solution, corresponding to σ = 0; to avoid excessively long intervals near the LOV endpoints, a threshold value for the interval length is chosen, beyond which the sampling switches to a uniform step size (11).
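The constant-probability spacing can be sketched with the inverse normal CDF available in the Python standard library. The function below is our own illustration; it omits the endpoint cap on the interval length mentioned above, so the outermost intervals come out long.

```python
from statistics import NormalDist

def lov_sigma_samples(n_points, sigma_max=3.0):
    """Sample the LOV parameter sigma with intervals of constant probability.

    The n_points values split the Gaussian probability mass contained in
    [-sigma_max, sigma_max] into equal parts, so the sampling is denser
    near the nominal solution sigma = 0.
    """
    nd = NormalDist()  # standard normal, the density p(sigma) on the LOV
    p_lo, p_hi = nd.cdf(-sigma_max), nd.cdf(sigma_max)
    return [nd.inv_cdf(p_lo + (p_hi - p_lo) * k / (n_points - 1))
            for k in range(n_points)]
```

With 21 points the central steps are about 0.13σ wide while the first one is over 1σ, which is exactly the tail behaviour that motivates the switch to a uniform step size near the endpoints.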

The LOV can be defined (among other possibilities) as the set of points to which a constrained differential corrections procedure converges. This method has higher chances of convergence with respect to non-constrained differential corrections, since each step is bound to lie on the hyperplane orthogonal to the direction of maximum uncertainty. Given a vector field $\vec{v}_1(\vec{x})$, the orthogonal hyperplane $H(\vec{x})$ is defined as

$$H(\vec{x}) = \{\vec{y} \;|\; (\vec{y} - \vec{x}) \cdot \vec{v}_1(\vec{x}) = 0\}. \qquad (2.6)$$

Given a vector of initial conditions $\vec{x}$ (a preliminary orbit), we define the matrix $B_H(\vec{x})$, containing the partial derivatives of the residuals with respect to the coordinates of a vector $\vec{h}$ belonging to $H(\vec{x})$. The constrained normal matrix $C_H$ gives the restriction to $H(\vec{x})$ of the linear map associated to $C(\vec{x})$. Then we can write the normal equations:

$$C_H = B_H^T W B_H,$$
$$D_H = -B_H^T W \Xi,$$
$$C_H \Delta_H = D_H \quad \text{with solution} \quad \Delta_H = C_H^{-1} D_H,$$

where $D_H$ is the projection of $D = -B^T W \Xi$ onto the hyperplane.

The first step of the constrained differential corrections procedure gives $\vec{x}_1 = \vec{x} + \vec{\Delta} x$, where $\Delta(\vec{x})$ coincides with $\Delta_H$ along $H(\vec{x})$ and has zero component along the weak direction. At each step, the weak direction and the orthogonal hyperplane are recomputed, and the procedure is iterated until $\Delta_H = 0$. The LOV is then the set of points where the gradient of the target function is in the weak direction (1).

However, the eigenvalues of the covariance matrix are not invariant under a generic coordinate change, so the LOV is not independent of the coordinate system used for initial orbit determination. If the CR is wide in two dimensions, then different coordinate choices can lead to dramatically different results (48).

To better understand this behaviour, let us take as an example a set of orbital elements X, for which a LOV and weak direction have been computed in a certain coordinate system. These would be different if obtained using another set of coordinates Y = Y(X). This is true even in the case of a linear coordinate change Y = SX, under which the covariance and normal matrices are transformed according to

$$\Gamma_Y = S \Gamma_X S^T \qquad (2.8)$$

$$C_Y = [S^{-1}]^T C_X S^{-1} \qquad (2.9)$$

and the eigenvalues are the same only if the coordinate change is isometric, that is $S^{-1} = S^T$. Thus the weak direction and the LOV in the Y space do not correspond by $S^{-1}$ to the weak direction and LOV in the X space. A special case is scaling, that is, a transformation to natural units for each coordinate, represented by a diagonal matrix S. Examples of possible scalings for five coordinate systems are listed in Table 2.1, where r and v are the heliocentric distance and velocity, $t_p$ is the time of passage at perihelion, M is the mean anomaly, $n_\oplus \approx k$ is the Earth mean motion, $Z = 2\pi q^{3/2} n_\oplus^{-1}(1-e)^{-1/2}$ is a characteristic time for an orbit with large e, and the following definitions hold:

$$h = e \sin(\Omega + \omega), \qquad k = e \cos(\Omega + \omega),$$
$$p = \tan(I/2) \sin\Omega, \qquad q = \tan(I/2) \cos\Omega,$$
$$\lambda = M + \omega + \Omega.$$

The coordinate change between any two of the listed types of orbital elements is nonlinear. In this case, the covariance is transformed by the same formula as Eq. 2.8, with Y = Φ(X) and


Cartesian (CAR)     x     y     z     vx      vy      vz
Units               AU    AU    AU    AU/d    AU/d    AU/d
Scaling             r     r     r     v       v       v

Cometary (COM)      e     q     tp    Ω       ω       I
Units               –     AU    d     rad     rad     rad
Scaling             1     q     Z     2π      2π      π

Keplerian (KEP)     e     a     M     Ω       ω       I
Units               –     AU    rad   rad     rad     rad
Scaling             1     a     2π    2π      2π      π

Equinoctial (EQU)   a     h     k     p       q       λ
Units               AU    –     –     –       –       rad
Scaling             a     1     1     1       1       2π

Attributable (ATT)  α     δ     α̇     δ̇      ρ       ρ̇
Units               rad   rad   rad/d rad/d   AU      AU/d
Scaling             2π    π     n⊕    n⊕      1       n⊕

Table 2.1: Most common coordinate sets in Orbit Determination, with their respective natural units and LOV scaling parameters (36).


the Jacobian matrix

$$S(X) = \frac{\partial \Phi}{\partial X}(X).$$

Once the constrained differential correction ΔY has been computed, we need to pull it back to the X coordinates, in which the actual computations are carried out. This can be done linearly by

$$X' = X + \left[\frac{\partial \Phi}{\partial X}(X)\right]^{-1} \Delta Y$$

if the correction is small (as usually happens when taking small steps along the LOV). If ΔY is large, as might be the case when the initial point is not near the LOV, the transformation is nonlinear:

$$X' = \Phi^{-1}(Y + \Delta Y).$$

Some coordinate systems are better than others depending on the problem at hand. To help the choice, Milani et al. in (36) state two general rules:

1. if the arc drawn on the celestial sphere by the apparent asteroid position is small (e.g. 1 degree or less), then there is less nonlinearity in the coordinate systems representing instantaneous initial conditions (e.g. ATT or CAR);

2. when the observed arc is wide, orbital elements solving exactly the two-body problem are preferable. COM elements avoid the discontinuity at the e = 1 boundary and are suitable for high-eccentricity orbits, while EQU elements avoid the singularities at e = 0 and I = 0. KEP elements are not always suitable for Orbit Determination, due to their high nonlinearity.

Figure 2.3 shows an example of LOVs in different coordinates for a short arc of observations: the “first LOV”, corresponding to the largest eigenvalue of Γ, is indicated with 1, while 2 denotes the “second LOV”. In this case, the CR has a two-dimensional structure and the selection of a LOV is quite arbitrary.

In cases like this, the CR could in principle be sampled with a number of LOVs, but better results are achieved by a fully two-dimensional sampling, obtained through the bidimensional analogue of the LOV: the Manifold Of Variations (MOV).


Figure 2.3: LOVs in different coordinates for the first 17 observations of asteroid 2004 FU4, without scaling on the left and with scaling on the right. Figure from (36).

2.4.2 Monte Carlo sampling

When computational power is not an issue, a full probability sampling method can give more precise results than a geometric one. A Monte Carlo method samples the CR with a set of equally probable VAs, which turn out to be non-equally spaced in the orbital elements space, with a higher density in the neighbourhood of the nominal solution (28). This is possible as long as the probability density distribution of the observational residuals is known. In the nonlinear case, the probability density does not have an explicit analytical expression, thus it cannot be directly computed. To avoid the problem, the sampling has to be carried out in the space of observations, where the probability density is usually assumed to be Gaussian (4). As recently as 2019, in (43) the authors addressed the issue of reducing the high computational cost of the Monte Carlo class of methods, proposing a multilayer technique. This consists of a preliminary low-resolution Monte Carlo sampling, followed by the selection of potentially IM-relevant regions, by means of a ranking system that prioritizes those VAs that, when propagated to the time of a given encounter with the Earth, reach a close approach distance smaller than a chosen threshold value. Each selected region is then sampled again with a higher resolution and the process is repeated. At each step (or layer), the size of the regions of interest is reduced, thus limiting the total number of VAs needed to reach the desired precision and, consequently, the computation time.
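The multilayer idea can be caricatured in one dimension: sample the current region, keep the samples whose propagated close approach is below the threshold, shrink the region to the interval they span, and resample at full resolution. Everything in this sketch (the 1-D initial-condition space, the toy `propagate` map, the interval-shrinking rule) is an illustrative assumption of ours, not the actual scheme of (43).

```python
import random

def multilayer_mc(propagate, threshold, center, width, n=2000, layers=3, seed=1):
    """Toy multilayer Monte Carlo over a 1-D initial-condition space.

    propagate : maps an initial condition to a close-approach distance
    threshold : close-approach distance below which a VA is IM-relevant
    Each layer keeps only the IM-relevant samples and shrinks the sampled
    region to the interval they span before resampling at full resolution.
    """
    random.seed(seed)
    lo, hi = center - width, center + width
    for _ in range(layers):
        xs = [random.uniform(lo, hi) for _ in range(n)]
        kept = [x for x in xs if propagate(x) < threshold]
        if not kept:
            return None          # no IM-relevant region found at this layer
        lo, hi = min(kept), max(kept)
    return lo, hi
```

With a toy propagation `abs(x - 0.3)` and threshold 0.05, successive layers concentrate the full sample budget onto the interval (0.25, 0.35).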


2.5 Attributables and Admissible Region

As previously stated, an optical observation produces information about the position and velocity of the observed object on the celestial sphere, expressed in terms of right ascension (RA, α), declination (DEC, δ) and their derivatives, while leaving undetermined the range ρ and range rate ρ̇ with respect to the observer's position. It is thus useful in practice to define, following (34), a set of orbital elements that summarises all the information obtained from the observations: the attributable orbital elements

$$X = \left[\alpha, \delta, \dot{\alpha}, \dot{\delta}, \rho, \dot{\rho}\right]. \qquad (2.10)$$

The vector that synthesises all of the information obtainable from a single observation is called attributable and is defined as

$$A = (\alpha, \delta, \dot{\alpha}, \dot{\delta}) \in [0, 2\pi) \times \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \times \mathbb{R}^2. \qquad (2.11)$$

Given an attributable, to obtain an orbit we need to choose a pair (ρ, ρ̇) among an infinite range of possibilities. By restricting the choice, following some exclusion criteria, to a subset of the (ρ, ρ̇) plane, Milani et al. in (35) introduced the concept of Admissible Region (AR). Provided that it is compact, the AR can be sampled with different techniques to generate a swarm of VAs, similarly to what is done with the confidence region. The AR can thus be used in those cases in which few observations are available and the confidence region, being wide in two dimensions, is not well approximated by the LOV.

The boundaries of the AR can be defined as follows:

• the outer boundary corresponds to the limit for which an object belongs to the Solar System, i.e. the two-body energy relative to the Sun is negative:

$$\mathcal{E}_\odot(\rho, \dot{\rho}) = \frac{1}{2} \|\dot{\vec{r}}(\rho, \dot{\rho})\|^2 - \frac{k^2}{r(\rho)} < 0; \qquad (2.12)$$


• the inner boundary is the limit for which an object is not a satellite of the Earth, i.e. the two-body energy relative to the Earth is positive:

$$\mathcal{E}_\oplus(\rho, \dot{\rho}) = \frac{1}{2} \|\dot{\vec{r}}(\rho, \dot{\rho})\|^2 - \frac{\mu}{\|\vec{r}(\rho)\|} > 0. \qquad (2.13)$$

2.6 Sampling the Admissible Region

The Admissible Region can be sampled in different ways to generate a set of VAs with the characteristics suitable for each case. An efficient method usually starts from the sampling of the boundary of the AR and then uses the resulting set of points as initial conditions for the sampling of the whole region with an adequate method.

We will now go through the sampling techniques that our software is able to perform, highlighting their respective advantages.

2.6.1 Sampling of the AR boundary

The boundary of the Admissible Region is a composition of different curves, which we wish to sample with a set of equally spaced points. Thus, if s is the arc length that parametrizes the boundary, the spacing between two consecutive points of the set has to be a fixed increment of s.

A relatively simple way to conduct the sampling, avoiding the explicit calculation of s, is the elimination method. The procedure starts by selecting a great number of points equally spaced along the ρ axis and then selectively eliminating them according to an iterative algorithm until the desired number of points is reached. The method presented in (35) and illustrated here is applicable to any rectifiable curve γ.

Elimination method. Given a set of n points on the curve γ, we need a criterion to choose a subset of m points, such that the distance along the curve between two consecutive points is as close as possible to the ideal step h = 1/(m − 1). We take γ to be the interval [0, 1] ⊂ ℝ, but the algorithm can be generalised.


Given a set $(P_k)_{k=1,...,n}$ of ordered points on γ, such that $P_1 = 0$ and $P_n = 1$, we define an ideal set of equispaced points $(Q_j)_{j=0,...,m}$ such that

$$Q_{j+1} - Q_j = \frac{1}{m-1} = h. \qquad (2.14)$$

For the sake of simplicity we rename $d_k = P_k - P_{k-1}$ and $\delta_{k,j} = |Q_j - P_k|$, noticing that for each $P_k$ there exists a $Q_j$ for which $\delta_{k,j} \le h/2$.

The elimination rule according to which we can remove some points from the set $(P_k)$ is the following: the points $P_{\bar{k}}$ for which $\bar{k}$ minimises the function

$$f(k) = \frac{\min\{d_k, d_{k+1}\}}{1 + \min_{j=1,...,m} \delta_{k,j}}, \qquad k = 2, ..., n-1, \qquad (2.15)$$

are eliminated.

Applying the rule (n − m) times allows the selection of the ideal points if they coincide with some of the $P_k$. It can be demonstrated that if $P_{\bar{k}}$ is one of the ideal points, then a k exists such that $f(k) \le f(\bar{k})$; thus the procedure never removes the points belonging to the ideal set.
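The elimination rule of Eq. (2.15) can be implemented directly; a sketch, with the curve taken as [0, 1] as in the text and the endpoints never eligible for removal:

```python
def eliminate(points, m):
    """Reduce an ordered sampling of [0, 1] to m near-equispaced points.

    points : sorted list with points[0] == 0.0 and points[-1] == 1.0
    At each pass the interior point minimising f(k) of Eq. (2.15) is removed,
    until only m points remain.
    """
    pts = list(points)
    h = 1.0 / (m - 1)
    ideal = [j * h for j in range(m)]      # the ideal equispaced set Q_j
    while len(pts) > m:
        def f(k):
            # small neighbouring gap and closeness to an ideal point
            # both make a point more expendable
            d_k = pts[k] - pts[k - 1]
            d_k1 = pts[k + 1] - pts[k]
            delta = min(abs(q - pts[k]) for q in ideal)
            return min(d_k, d_k1) / (1.0 + delta)
        k_bar = min(range(1, len(pts) - 1), key=f)
        del pts[k_bar]
    return pts
```

Starting from 11 equispaced points and asking for 6, the survivors are exactly the ideal set {0, 0.2, 0.4, 0.6, 0.8, 1}, consistently with the property stated above.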

We indicate with D̃ the domain defined by connecting with straight-line segments the consecutive points obtained from the sampling of the AR boundary.

2.6.2 Sampling of the AR interior

Triangulation

A triangulation is a pair (Π, τ), in which Π = {P₁, ..., P_N} is a set of points (the nodes of the triangulation) and τ = {T₁, ..., T_k} is a set of triangles having the Π_i as vertices. The smallest among all the angles of the T_i is the minimum angle of the triangulation.

A triangulation can be chosen in various ways: in the framework of the AR sampling, we want one such that the union of all the T_i coincides with D̃ and the intersection between any two of the T_i is either empty or an edge of a T_i.

A particularly useful triangulation, in the case of a convex domain, is the so-called Delaunay triangulation, which has some interesting properties:

1. it maximizes the minimum angle,

2. it minimizes the maximum circumcircle,

3. no circumcircle encloses any node of the triangulation.

The triangulation is said to be constrained if the input information comprises, besides Π, also some edges $P_i P_j$ (e.g. the segments obtained from the sampling of the boundary). Over a non-convex domain, a constrained Delaunay triangulation is still possible, but property 3 is not guaranteed.

Any generic triangulation of D̃ can be transformed into a constrained Delaunay triangulation by means of an iterative algorithm, which we shall now explain.

In a generic triangulation, the union of any couple of triangles sharing an edge constitutes a quadrangular domain, which can be triangulated in two different ways: by choosing either of the two diagonals as the common edge of the triangles. Only one of the two is a Delaunay triangulation, and the corresponding diagonal is then called a Delaunay edge. The so-called edge flipping technique consists in choosing the Delaunay edge for each couple of triangles sharing a non-Delaunay edge, and then repeating the procedure over the next adjacent triangle, until all edges are Delaunay. Each step increases the minimum angle and reduces the maximum circumcircle.
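The decision of whether a shared edge must be flipped reduces to the classical in-circumcircle test; a minimal sketch follows (the determinant predicate assumes the first triangle is given in counter-clockwise order, and the function names are our own):

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle (a, b, c).

    Standard 3x3 determinant predicate for 2-D points given as (x, y) pairs;
    (a, b, c) must be in counter-clockwise order.
    """
    rows = [(px - d[0], py - d[1], (px - d[0])**2 + (py - d[1])**2)
            for px, py in (a, b, c)]
    det = (rows[0][0] * (rows[1][1]*rows[2][2] - rows[1][2]*rows[2][1])
         - rows[0][1] * (rows[1][0]*rows[2][2] - rows[1][2]*rows[2][0])
         + rows[0][2] * (rows[1][0]*rows[2][1] - rows[1][1]*rows[2][0]))
    return det > 0.0

def should_flip(a, b, c, d):
    """For triangles (a, b, c) and (a, b, d) sharing the edge ab: the edge ab
    is not a Delaunay edge, and must be flipped to cd, exactly when d falls
    inside the circumcircle of (a, b, c)."""
    return in_circumcircle(a, b, c, d)
```

Repeatedly flipping every edge for which `should_flip` is true is precisely the edge flipping technique described above.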

Having defined the density of the boundary points as

$$\rho(P_j) = \min_{l \ne j} d(P_l, P_j), \qquad (2.16)$$

where d is some distance, the algorithm used in (35) refines the obtained triangulation by adding new points according to the following criteria:

• among all the barycentres $G_i$ of the $T_i$, the one that maximises the minimum distance from the nodes of the triangulation, weighted with its density

$$\tilde{\rho}(G_i) = \frac{1}{3} \sum_{m=1}^{3} \rho(P_{i_m}) \qquad (2.17)$$

($m$ being the index assigned to the three vertices of the same triangle), is chosen;

• the corresponding triangle is replaced with the triangles obtained by connecting its vertices with the new point;


• each new edge, if not already a Delaunay edge, undergoes the flipping technique.

The procedure is repeated until the required number of points is reached, or as long as

$$\max_{G_i} \left[\min_j \left(\frac{d(G_i, P_j)}{\tilde{\rho}(G_i)}\right)\right] > \sigma \qquad (2.18)$$

holds, where σ is a chosen small parameter. At convergence, the result is a triangulation that avoids “flattened” triangles; thus the sampling points are distributed as uniformly as possible and the computation time is kept low.

Grid

At the time when the triangulation method was first proposed, computation time was a major limiting factor for the density of the sampling. Following the improvement of the capabilities of modern computers, new, computationally heavier sampling techniques have been introduced. A simple but handy choice is to sample the AR with the nodes of a rectangular grid. An evaluation of the two-body energy $\mathcal{E}_\odot$ at each node then allows the exclusion of the points falling outside the AR. The preliminary grid thus obtained can be densified at a later step by increasing the number of nodes, in order to achieve a higher resolution.
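The node-by-node energy test of Eqs. (2.12)-(2.13) can be sketched as follows. Several simplifying assumptions are ours: the observer is placed at the geocentre (so the geocentric distance equals ρ), the units are AU and days, and the numerical constants (Gauss constant, Earth/Sun mass ratio) are assumed illustrative values.

```python
import math

K2 = 0.01720209895 ** 2        # Gauss constant squared, AU^3/day^2
MU_EARTH = K2 * 3.0034e-6      # assumed Earth GM in the same units

def rho_hat_and_derivs(alpha, delta):
    """Line-of-sight unit vector and its partials w.r.t. alpha and delta."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cd, sd = math.cos(delta), math.sin(delta)
    rho_hat = (cd * ca, cd * sa, sd)
    d_alpha = (-cd * sa, cd * ca, 0.0)
    d_delta = (-sd * ca, -sd * sa, cd)
    return rho_hat, d_alpha, d_delta

def in_admissible_region(att, rho, rho_dot, r_obs, v_obs):
    """Check the two energy conditions for one (rho, rho_dot) grid node.

    att = (alpha, delta, alpha_dot, delta_dot); r_obs, v_obs are the
    heliocentric position and velocity of the observer (taken at the
    geocentre here), in AU and AU/day.
    """
    alpha, delta, alpha_dot, delta_dot = att
    rho_hat, d_a, d_d = rho_hat_and_derivs(alpha, delta)
    # geocentric velocity of the object along the line of sight
    v_geo = tuple(rho_dot * h + rho * (alpha_dot * da + delta_dot * dd)
                  for h, da, dd in zip(rho_hat, d_a, d_d))
    r = tuple(ro + rho * h for ro, h in zip(r_obs, rho_hat))
    v = tuple(vo + vg for vo, vg in zip(v_obs, v_geo))
    r_norm = math.sqrt(sum(x * x for x in r))
    e_sun = 0.5 * sum(x * x for x in v) - K2 / r_norm        # Eq. (2.12)
    e_earth = 0.5 * sum(x * x for x in v_geo) - MU_EARTH / rho  # Eq. (2.13)
    return e_sun < 0.0 and e_earth > 0.0

def grid_sample(att, r_obs, v_obs, rho_nodes, rho_dot_nodes):
    """Rectangular grid in (rho, rho_dot), filtered by the AR conditions."""
    return [(r, rd) for r in rho_nodes for rd in rho_dot_nodes
            if in_admissible_region(att, r, rd, r_obs, v_obs)]
```

Densifying the grid then only requires calling `grid_sample` again with more nodes, as described above.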

Cobweb

If a preliminary orbit is available, it is possible to sample the AR with a faster method than the one described above. The “cobweb” sampling was introduced in (49) and allows the AR to be scanned with a set of VAs that is centred on the nominal solution and denser in its neighbourhood. The sampling points are selected along the level curves of the target function used to minimize the RMS of the observational residuals. These curves are the restriction to the (ρ, ρ̇) plane of the five-dimensional concentric ellipsoids defined in the orbital elements space by the quadratic form associated to the normal matrix C = Γ⁻¹.

From the operative point of view, the points are the nodes, corresponding to specific directions, of a rectangular grid in the space of polar elliptic coordinates (R, θ), with 0 ≤ θ ≤ 2π and R comprised between zero and the maximum RMS value accepted as reliable, $M_{RMS}$.


Figure 2.4: Example of cobweb around the nominal solution (ρ*, ρ̇*). Figure from (45).

Each point is then mapped to the (ρ, ρ̇) space by means of the transformation

$$\begin{pmatrix} \rho \\ \dot{\rho} \end{pmatrix} = R \begin{pmatrix} \sqrt{\lambda_1}\cos\theta & -\sqrt{\lambda_2}\sin\theta \\ \sqrt{\lambda_2}\sin\theta & \sqrt{\lambda_1}\cos\theta \end{pmatrix} \vec{v}_1 + \begin{pmatrix} \rho^* \\ \dot{\rho}^* \end{pmatrix} \qquad (2.19)$$

where λ₁ and λ₂ are respectively the first and second largest eigenvalues of the restriction of the covariance matrix to the (ρ, ρ̇) plane, while $\vec{v}_1$ is the eigenvector corresponding to λ₁. The last term shifts each point of the grid so that the nominal solution lies at the centre. The resulting sampling configuration, as shown in Figure 2.4, resembles a spider web, hence the name of the method.
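Eq. (2.19) translates directly into a node generator; a sketch, in which the uniform spacing in R and θ and the function name are our own arbitrary choices:

```python
import math

def cobweb_nodes(rho_star, rho_dot_star, lam1, lam2, v1,
                 n_r=5, n_theta=12, r_max=3.0):
    """Generate cobweb sampling nodes in the (rho, rho_dot) plane, Eq. (2.19).

    lam1 >= lam2 : eigenvalues of the 2x2 restriction of the covariance matrix
    v1           : unit eigenvector (2-tuple) corresponding to lam1
    The nodes lie on n_r concentric level curves (R up to r_max) sampled at
    n_theta directions, all centred on the nominal solution.
    """
    s1, s2 = math.sqrt(lam1), math.sqrt(lam2)
    nodes = []
    for i in range(1, n_r + 1):
        R = r_max * i / n_r
        for j in range(n_theta):
            th = 2.0 * math.pi * j / n_theta
            # 2x2 matrix of Eq. (2.19) applied to v1
            rho = R * (s1 * math.cos(th) * v1[0] - s2 * math.sin(th) * v1[1])
            rho_dot = R * (s2 * math.sin(th) * v1[0] + s1 * math.cos(th) * v1[1])
            nodes.append((rho + rho_star, rho_dot + rho_dot_star))
    return nodes
```

Plotting the returned nodes for any anisotropic covariance (λ₁ > λ₂) reproduces the spider-web pattern of Figure 2.4.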

2.7 Impact monitoring algorithms

To generate reliable predictions for the orbit of a NEO, an accurate model of the forces acting on the object is necessary. The predominant contribution is given by the Sun's gravitational attraction, but the effects of the planets, Pluto, the Moon and the largest objects of the Main Belt (usually the 16 most massive asteroids are taken into account) are also perceivable, in the form of perturbations. Another contribution to the model is given by relativistic effects, mainly due to the Sun but, in the case of close encounters, also to the planets; in this case a many-body relativistic model is to be applied. Furthermore, perturbations of non-gravitational nature are present, such as the ones introduced by the Yarkovsky effect or by radiation pressure. Non-gravitational perturbations are not included in the default force model, and have been calculated only for some selected objects.


2.7.1 Classical Impact Monitoring

The main goal of Impact Monitoring is to solicit the observation of objects that have a nonzero probability of collision with the Earth in the future. The first efficient automatic Impact Monitoring algorithm, CLOMON, developed at the Department of Mathematics of the University of Pisa, became operational in 1999. What made it different from previous impact computations was the implementation of the LOV method. CLOMON was thus able to compute the MOID for the VAs that could have close encounters with the Earth and for which a small variation of the initial conditions could lead to a notable decrease of the minimal approach distance. The results were then propagated over the following 50 years.

However, CLOMON presented some deficiencies:

1. being the only system of its kind during its first years of operation, there was no possibility to compare results;

2. when the available observed arc was very short, the results were unreliable;

3. VAs were propagated without variational equations, thus the uncertainty on future encounters was not known (unless further analysis was conducted through Newton's method, but this was only possible for a limited number of cases);

4. convergence of Newton's method is not guaranteed, so in many cases it was not possible to establish the existence of VIs;

5. extrapolation of the along-LOV results to the orthogonal direction could result in spurious VIs.

To address these issues, two second-generation IM algorithms, with the same core structure as CLOMON, were introduced: CLOMON2 (University of Pisa) and Sentry (JPL/NASA). Having been developed by two collaborating teams, the two systems are similar and comparable, but they adopt slightly different solutions to the aforementioned problems, which we will now cover in further detail.


Priority list

At the time when the classical algorithms came to be, their computational efficiency was strongly limited by hardware constraints: in particular, it was not possible to perform calculations fast enough to analyse all of the objects for which new observations became available in a single day. This led to the establishment of a priority list to signal to the observers which objects to follow up first.

The IM algorithms list the objects with a minimal encounter distance of less than 0.1 AU and such that the uncertainty region is large enough that the linear method is not sufficient. Entries are then ordered on the basis of a score given by the combined evaluation of the nominal MOID and of a parameter called RUNOFF, which represents the along-track uncertainty resulting from a 100-year propagation.

The search for VIs is then conducted starting from the objects with the highest score, while entries at the bottom of the list might never be analysed. For this reason, the attribution of the score has to be reliable.

The score attribution method used by CLOMON suffered from a selection effect: the uncertainty associated with the MOID is large for objects that are discovered far away from the Earth, which is often the case for large asteroids. These should be analysed even when their collision probability is low, since an impact would release a great amount of energy and potentially cause great damage. Some of these objects might not have been rated as hazardous if their MOID exceeded 0.05 AU, even if initial conditions with vanishing MOID could have been present within the large CR (30).

Second-generation algorithms use an improved score attribution rule: multiple solutions are computed for each NEO that shows a high RUNOFF (greater than 0.7 AU for CLOMON2, while for Sentry the threshold is 1 AU), from which the uncertainty of the MOID is calculated. The score is assigned on the basis of the minimum MOID resulting from the multiple solutions, instead of using the nominal MOID. This method gives a high priority to some objects that CLOMON would not analyse, allowing numerous VIs to be found.
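The improved rule can be paraphrased in a few lines. This is a hypothetical rendering of the decision logic, with the 0.7 AU threshold taken from the text; it is not code from CLOMON2 or Sentry.

```python
RUNOFF_THRESHOLD_AU = 0.7  # CLOMON2 threshold; Sentry uses 1 AU instead

def scoring_moid(nominal_moid: float, runoff: float,
                 multiple_solution_moids: list[float]) -> float:
    """Return the MOID value used for score attribution.

    When RUNOFF exceeds the threshold, the object is rated by the minimum
    MOID over the multiple solutions instead of by the nominal MOID alone.
    """
    if runoff > RUNOFF_THRESHOLD_AU and multiple_solution_moids:
        return min(multiple_solution_moids)
    return nominal_moid
```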

Unlike in CLOMON, the multiple solutions are then propagated 100 years into the future together with their uncertainty.


Convergence

As already mentioned, the iterative procedure that CLOMON used to search for VIs was not guaranteed to reach convergence. When a close encounter with the Earth was found, CLOMON performed a two-dimensional analysis on the MTP, which easily led to divergence in the presence of nonlinearities.

Second-generation algorithms behave differently: they search for the minimum of the approach distance without leaving the LOV, thus reducing the problem to one dimension. If an interval [σ1, σ2] exists such that for each set of initial conditions corresponding to σ ∈ [σ1, σ2] a close encounter occurs, and the derivative of d² with respect to σ has opposite signs at the interval extrema, then convergence is guaranteed: a minimum of d² necessarily exists and can be found by means of the regula falsi.
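The one-dimensional search can be illustrated with a textbook regula falsi applied to the derivative of d² along the LOV. This is a generic sketch of the technique, not the CLOMON2 or Sentry implementation, and the toy distance function is a made-up example.

```python
def regula_falsi(f, s1: float, s2: float, tol: float = 1e-12, max_iter: int = 200) -> float:
    """Find a root of f in [s1, s2], assuming f(s1) and f(s2) have opposite signs."""
    f1, f2 = f(s1), f(s2)
    if f1 * f2 > 0:
        raise ValueError("f must change sign on [s1, s2]")
    s = s1
    for _ in range(max_iter):
        # intersection of the secant through the bracket endpoints with zero
        s = s2 - f2 * (s2 - s1) / (f2 - f1)
        fs = f(s)
        if abs(fs) < tol:
            break
        if f1 * fs < 0:
            s2, f2 = s, fs
        else:
            s1, f1 = s, fs
    return s

# Toy squared approach distance along the LOV (hypothetical): its minimum
# sits where the derivative with respect to sigma vanishes.
d2 = lambda sigma: (sigma - 0.3) ** 2 + 1.0
d2_prime = lambda sigma: 2.0 * (sigma - 0.3)

sigma_min = regula_falsi(d2_prime, -1.0, 1.0)  # close to 0.3
```

The sign change of d2_prime at the bracket extrema plays the role of the condition stated above, which is what guarantees convergence.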

The main disadvantage of this method is the loss of information about the width of the Confidence Region, which is not one-dimensional. Moreover, the regula falsi is only applicable starting from two initial points, meaning that at least two VAs with encounters at close dates, described by two consecutive points on a TP, have to be known. In case only one of the points is available (an eventuality known as a singleton), CLOMON2, like its predecessor, applies Newton's method, but modified with bounded steps so as to reduce the chance of divergence (30). Sentry, on the other hand, does not analyse singletons, on the assumption that, even if a VI were found, its collision probability would be very low.

Completeness level

The maximum collision probability of a VI that, assuming a linear trace on the TP, can elude the algorithm is known as the completeness level CL of the algorithm (11). For CLOMON2, CL = 4.3 × 10⁻⁷, while for Sentry CL = 8.6 × 10⁻⁸. The discrepancy is a direct expression of a conceptual difference between the two systems: CLOMON2 propagates a smaller number of VAs while performing a more detailed search for VIs, whereas Sentry aims to reduce the effect of nonlinearities by means of a denser sampling along the LOV, paying a price in computational efficiency. This is also a consequence of the fact that Sentry ignores singletons, thus needing a larger number of VAs to increase the probability of finding two suitable initial points for the regula falsi.

Since CLOMON2 creates a smaller set of VAs, its completeness level is higher than Sentry's and the algorithm is more vulnerable to errors deriving from nonlinearities. However, CLOMON2 is faster than Sentry and has the capability to detect VIs with lower impact probabilities (e.g. those associated with singletons).

However, since CL is a quantity defined under some simplifying assumptions (e.g. that a single point on the TP is enough to find a VI if it exists), there is no a priori guarantee that all VIs with IP > CL are found, and the actual completeness level achieved has to be measured after each run. To fill the gap between the actual and the theoretical CL, in their latest article (10), Del Vigna et al. proposed a LOV densification method that could in principle solve the problem. Densification makes it possible to add points on the TP when too few (3 or fewer) are available, thus revealing the actual geometry of the TP trace of the LOV. The procedure ensures that each return (a LOV segment with consecutive indices) includes at least 5 indices, corresponding to an equal number of TP points.
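A minimal sketch of such a densification step, assumed for illustration and not taken from the cited paper, could repeatedly bisect the widest gap between consecutive points of a return until the minimum count is reached; in the real system, each added point corresponds to a new VA that has to be propagated to the TP.

```python
def densify_return(sigmas: list[float], min_points: int = 5) -> list[float]:
    """Densify one return: insert midpoints until at least min_points are present.

    `sigmas` are the LOV parameter values whose VAs trace this return on the
    TP; the function assumes the return already contains at least two points.
    """
    pts = sorted(sigmas)
    if len(pts) < 2:
        return pts  # a singleton cannot be densified this way
    while len(pts) < min_points:
        # locate the widest gap between consecutive points and add its midpoint
        widths = [(pts[i + 1] - pts[i], i) for i in range(len(pts) - 1)]
        _, i = max(widths)
        pts.insert(i + 1, 0.5 * (pts[i] + pts[i + 1]))
    return pts
```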

Spurious VIs

When a NEO has just been discovered, its CR is large, and the farther from the Earth it has been observed, the greater the uncertainty of the MOID. This can cause the generation of fictitious VIs, the Spurious Virtual Impactors (SVIs). This is not problematic in itself, since the goal of the systems is to remove an object from the priority list if and only if it does not have VIs with nonzero impact probability. If SVIs are present, the object stays in the list until enough observations are available to exclude a collision.

Still, SVIs can alter the priority order; thus both CLOMON2 and Sentry are equipped with a filtering system able to remove some of the SVIs (30).

CLOMON2’s method consists of a reverse Newton’s method, that allows to retrieve the initial conditions for a VI, verify their pertinence to the CR and compute the residuals. If the proce-dure does not reach convergence or the value of the residuals is not acceptable, the related VI is considered spurious. This method however is prone to the risk of excluding real VIs, since it relies on the convergence of Newton’s method.
