

Politecnico di Milano

SCHOOL OF INDUSTRIAL AND INFORMATION ENGINEERING
Master of Science – Engineering Physics

Design and development of a combined Raman and Laser-induced Fluorescence optical microscope

Supervisor

Prof. Daniela COMELLI

Co-Supervisor

Prof. Gianluca VALENTINI

PoliTo Co-Supervisor

Prof. Fabrizio PIRRI

Candidate


Sommario

The aim of this thesis is to present the design, development and characterization of an innovative optical microscope that combines Raman and laser-induced fluorescence spectroscopy. The main strength of combining these two techniques lies in the fact that they are both non-invasive and non-destructive optical techniques. Each of them has advantages and drawbacks, which are analysed in depth in Chapter 1; however, combining them in a single setup makes it possible to exploit their complementarity and perform a more detailed analysis of a sample of interest than would be possible using only one of the two techniques. This aspect is further highlighted in Chapter 5, where the results of a measurement performed on a paint stratigraphy are shown, underlining the contribution of both spectroscopy techniques in providing a complete overall picture of the sample and of the materials it contains. The system has been designed to examine samples mainly in the field of material science but also in biology, and it allows the sample to be mapped with a raster-scan acquisition scheme.

Chapter 1 presents the theoretical physics concepts at the basis of the thesis work. The first part recalls the phenomenon of fluorescence, describing its characteristics and fundamental parameters with the help of the Jablonski diagram. The second part treats the theory of Raman scattering using a classical approach. An entire section is then dedicated to a critical discussion of the advantages that a hybrid system such as ours makes it possible to explore; finally, for completeness, other fluorescence and Raman spectroscopy techniques reported in the literature are briefly mentioned.

Chapter 2 is a further introductory chapter that addresses the principles of spectroscopy and microscopy. The setup analysed in this thesis is an optical microscope in which the signal collected from the sample is analysed by two spectrometers, one for fluorescence and one for Raman. In particular, the setup allows the sample to be mapped with a raster-scan acquisition scheme; the collected data therefore have the structure of a three-dimensional cube containing information on the intensity of the detected radiation for every wavelength and spatial coordinate on the sample (x, y, λ). To obtain the spectrum of the detected signal, a dispersive spectrometer in Czerny-Turner configuration is used both for the fluorescence and for the Raman signal; this chapter therefore treats in detail the physics and engineering of a dispersive spectrometer with a diffraction grating, highlighting its main design parameters. Finally, there is an introduction to optical microscopy, with particular attention to 4f systems, since this configuration is at the basis of the optical paths of our setup. The system has been analysed through the laws of Fourier optics in order to derive the expressions of the impulse response, of the transfer function, and of the lateral and axial resolutions.

After the first two introductory chapters on the physics and engineering fundamentals of the setup, the setup itself is presented in Chapter 3, with a description of all the optical paths and a first estimate of the size of the excitation spots on the sample. There were two main requirements for the choice of the optical elements used in the setup: obtaining an excitation spot a few micrometres in size, so as to allow the study of material-science samples and of some biological ones, and collecting the maximum signal at the detectors. This chapter motivates in depth the design choices that were made, considering the possible trade-offs. After a first design phase, the behaviour of the setup was simulated in detail, especially as regards the size of the excitation spots on the sample and the spectral resolution. A noteworthy point concerns the path for the excitation of the Raman scattering: since a single-mode fiber is used, the behaviour of the light was simulated using the laws of Gaussian beams, whose theory is recalled in Appendix A. As for the spectral resolution, the behaviour of the system was analysed both in the non-diffractive and in the diffractive regime, as a function of the size of the entrance fiber of the spectrometer. After the description of the setup and the theoretical estimate of its parameters, the setup was characterized to assess how far its real performance deviates from the theoretical one. Chapter 4 presents the characterization of the setup with regard to the alignment of the UV and IR spots, the Field of View of the camera, the size of the excitation spots and the corresponding power on the sample. The size of the sample area from which the detectors collect the signal and the background noise of the detectors themselves were also estimated. In addition, spectra of well-known materials were acquired to verify the quality of the measurements from a quantitative point of view. The chapter shows that the results obtained are satisfactory and that the behaviour of the system is similar to that simulated in Chapter 3, in particular regarding the size of the excitation spots and the Field of View of the camera, bearing in mind the non-idealities present in the optical elements.

Finally, the setup was used to map (Chapter 5) a paint stratigraphy from a work by the artist Larionov. In the cultural heritage field, the identification of painting materials is fundamental for the study of artists' palettes, for dating and for understanding possible degradation phenomena. Optical spectroscopy techniques play a leading role in this respect; in particular, fluorescence spectroscopy makes it easy to verify the presence of heterogeneities on surfaces or on micro-samples, while Raman spectra are used as fingerprints for pigment identification. The chapter presents the protocols followed for the measurement and for the data analysis, highlighting the parameters set in the instrumentation. The results are then discussed and related to what is reported in the literature [1], showing how they confirm the strengths of the setup.


Abstract

The aim of this thesis work is to present the design, development and characterization of a novel hybrid optical setup which combines fluorescence and Raman spectroscopy. The main strength of combining these two techniques comes from the fact that they are both optical, non-invasive and non-destructive. Each of them has advantages and drawbacks, which are analysed in depth in Chapter 1; however, combining them in a single setup means taking advantage of their complementarity in order to perform a more detailed and deeper investigation of the sample than using a single technique. This aspect will be further analysed in Chapter 5, where we will show the results of a measurement performed with the setup, highlighting the importance of having both techniques in order to provide a complete picture of the sample and identify all the materials that are present. The system has been designed to work with samples belonging mainly to the material science field but also to the biological one, and it allows a mapping of the sample to be performed by exploiting a raster-scanning approach.

In Chapter 1 the theoretical physics concepts at the basis of the thesis work are presented. In the first part we recall the phenomenon of fluorescence emission, describing its representation through the Jablonski diagram and discussing its characteristics and its main parameters. In the second part we introduce the Raman scattering theory with a classical treatment and discuss its main characteristics. An entire section is dedicated to a critical analysis of the hybrid setup and of the advantages it allows us to explore; finally, a brief mention of further fluorescence and Raman spectroscopic techniques is presented for the sake of completeness.

Chapter 2 is a further introductory chapter which deals with the principles of spectroscopy and microscopy. The setup is an optical microscope, and the signal collected from the sample is sent into two spectrometers, one for fluorescence and one for Raman, in order to analyse its spectrum. Specifically, the setup allows a mapping of the sample to be performed by exploiting a raster-scanning approach, therefore the acquired dataset is a three-dimensional data-cube which contains information about the intensity of the collected radiation for each wavelength and position over the scanned area (x, y, λ). In order to retrieve the spectrum of the signal coming from the sample, a dispersive spectrometer in Czerny-Turner configuration is used for both the fluorescence and the Raman analysis; therefore, in this chapter the dispersive spectrometer and its main figures of merit are discussed in depth. Finally, there is an introduction to optical microscopy, with a focus on 4f optical systems, since this is the configuration used in all the paths of the setup. The system has been analysed with a Fourier optics approach in order to retrieve the expressions of the optical functions of such a system and the expressions of the lateral and axial resolution.


After the first two theoretical and introductory chapters, the setup is finally presented in Chapter 3, with a description of all the optical paths and a first theoretical estimate of the excitation spot sizes. The main requirements for the choice of the optical elements during the design phase were to obtain a spot size on the sample that allows the analysis of mainly solid-state samples but also some biological ones, and to collect the maximum signal at the detectors. This chapter goes in depth in critically motivating the design choices that have been made, considering the possible trade-offs. After the design phase, the behaviour of the setup was simulated in detail, especially regarding the size of the excitation spots on the sample and the achievable spectral resolution. The Raman excitation path deserves a particular mention because of the use of a single-mode fiber; its simulation has therefore been carried out using the Gaussian beam laws, which are recalled in Appendix A. Regarding the spectral resolution, we explored both the non-diffractive and the diffractive regime, analysing the behaviour of the system as a function of the size of the entrance fiber of the spectrometer, which is the main critical parameter to take into account.

After the description and the theoretical estimation of its main parameters, the setup was characterized to evaluate how far its performance departs from ideality. Chapter 4 presents the characterization of the setup regarding the alignment of the UV and IR spots, the Field of View of the camera, the excitation spot size and power on the sample, the size of the detected area for both the fluorescence and the Raman path, and the readout and dark noise of the detectors. Moreover, spectra of well-known materials were acquired in order to verify the quality of the measurements from a quantitative point of view. The chapter shows that the results are satisfactory and that the actual behaviour of the setup is very close to the one simulated in Chapter 3, especially regarding the size of the excitation spots and the Field of View of the camera, bearing in mind the non-idealities introduced by the optical elements.

Finally, the setup was used to perform a real measurement (Chapter 5), using a raster-scanning approach on a stratigraphy sample coming from a Larionov painting. In conservation science, the identification of painting materials is fundamental for the study of artists' palettes, for dating and for understanding ongoing degradation phenomena. Optical spectroscopy techniques are regarded as standard non-destructive techniques for this kind of investigation; specifically, fluorescence spectroscopy makes it simple to evaluate the distribution of heterogeneities on surfaces or on micro-samples, while Raman spectra are successfully used as fingerprints for the identification of pigments. The chapter presents the protocol followed to perform both the measurement and the analysis of the dataset, clearly identifying the parameters set for all the instrumentation. Results are presented and further discussed by comparing them with the literature [1], highlighting how they confirm the strengths of the setup.


Contents

Sommario
Abstract
Contents
List of Figures
List of Tables

1 Fluorescence and Raman scattering
   1.1 Fluorescence
      1.1.1 Jablonski diagram
      1.1.2 Characteristics of fluorescence emission
      1.1.3 Lifetime and quantum yields
      1.1.4 Further fluorescence spectroscopy techniques
   1.2 Raman scattering
      1.2.1 Classical treatment
      1.2.2 Selection rules
      1.2.3 Intensity and cross section
      1.2.4 Stokes and Anti-Stokes Raman scattering
      1.2.5 Further Raman spectroscopy techniques
   1.3 Hybrid setup

2 Spectroscopy and Microscopy
   2.1 Hyperspectral Imaging
   2.2 Introduction to spectroscopy
      2.2.1 Spectrometer
      2.2.2 The diffraction grating
      2.2.3 Dispersion
      2.2.4 Parameters of interest in diffraction gratings
      2.2.5 Czerny-Turner configuration
   2.3 Introduction to Microscopy
      2.3.1 Microscope
      2.3.2 Field propagation through a lens
      2.3.3 Intensity propagation
      2.3.4 Axial resolution

   3.1 Setup
      3.1.1 Fluorescence sub-unit
      3.1.2 Raman sub-unit
      3.1.3 Illumination and imaging sub-unit
      3.1.4 Scanning system (x, y, z)
      3.1.5 Combination of paths
      3.1.6 Design criteria
   3.2 Simulations
      3.2.1 Lateral resolution - UV
      3.2.2 Lateral resolution - IR
      3.2.3 Spectral resolution - Fluorescence
      3.2.4 Spectral resolution - Raman

4 Characterization of the Setup
   4.1 Spatial characterization of the Field of View
   4.2 Alignment characterization
   4.3 Excitation paths characterization
   4.4 Detection paths characterization
   4.5 Spectra characterization

5 Real Measurement
   5.1 Introduction
   5.2 Protocols
      5.2.1 Measurement protocol
      5.2.2 Analysis protocol
      5.2.3 Data pre-processing
      5.2.4 Scanned area reconstruction protocol
   5.3 Materials and Methods
   5.4 Raman spectroscopy results
   5.5 Fluorescence spectroscopy results
   5.6 Discussion

Conclusions

A Gaussian beam theory
B Stages remote control

Bibliography


List of Figures

Figure 1.1 Fluorescence spectrum of SYBR Green.

Figure 1.2 Scheme of Jablonski diagram.

Figure 1.3 Stokes shift of a fluorescence emission spectrum.

Figure 1.4 Two-level system.

Figure 1.5 Exponential decay of fluorescence signal.

Figure 1.6 FLIM: Sampling of fluorescence decay in a single pixel at different delays of the window with respect to the excitation.

Figure 1.7 Energy transfer model of elastic (Rayleigh) and inelastic (Raman) scattering based on the photon description of electromagnetic radiation.

Figure 1.8 Electric dipole.

Figure 2.1 Data-cube obtained by Raman imaging spectroscopy.

Figure 2.2 (a) Point mapping, (b) line-scanning mapping and (c) spectral scanning.

Figure 2.3 General scheme of a spectrometer.

Figure 2.4 Scheme of a reflection diffraction grating.

Figure 2.5 Scheme of Czerny-Turner mount.

Figure 2.6 Imaging with a lens of focal length f.

Figure 2.7 4f system with a circular aperture stop.

Figure 2.8 Basic imaging functions of an optical system with a circular aperture stop: (a) Amplitude spread function; (b) Amplitude transfer function; (c) Point Spread Function; (d) Optical Transfer Function.

Figure 2.9 Representation of the light intensity profile of two nearby points imaged by a microscope.

Figure 3.1 Scheme of the optical microscope designed to combine Laser-Induced Fluorescence and Raman spectroscopy.

Figure 3.2 Scheme of the optical paths present in the microscopy setup.

Figure 3.3 (a) 3D CAD of the setup designed on SolidWorks: (b) frontal and (c) lateral view.

Figure 3.4 (a) Frontal and (b) lateral view of the full setup.

Figure 3.5 Behaviour of the radius of curvature of the beam on the output plane as a function of the distance between the lenses. The dashed blue line indicates the sum of the focal lengths of the two lenses.

Figure 3.6 Behaviour of the z position of the waist beyond the second lens as a function of the distance between the lenses. The dashed blue line indicates the sum of the focal lengths of the two lenses.

Figure 3.7 Behaviour of the size of the waist on the output plane as a function of the distance between the lenses. The dashed blue line indicates the sum of the focal lengths of the two lenses.

Figure 3.8 Gaussian beam radius profile in the region between the two lenses. The dashed blue line indicates the focal plane of the first lens.

Figure 3.9 Radius of curvature profile of the beam in the region between the two lenses. The dashed blue line indicates the focal plane of the first lens.

Figure 3.10 (a) Pupil profile, (b) Amplitude transfer function and (c) Point Spread Function of the system along the x geometry.

Figure 3.11 (a) Illumination profile in the object plane and (b) its Fourier transform for the Fluorescence spectrometer. (c) The Optical Transfer Function of the system is the autocorrelation of the Amplitude Transfer Function.

Figure 3.12 (a) The OTF of the system and the Fourier transform of the Object profile are represented on the same scale to show the weight of the OTF in modulating the spatial frequency of the Object illumination. (b) Fourier Transform of the illumination profile on the image plane.

Figure 3.13 (a) The OTF of the system considering a fiber size of 10 µm and the Fourier transform of the Object profile are represented on the same scale to show the weight of the OTF in modulating the spatial frequency of the Object illumination. (b) Fourier Transform of the illumination profile on the image plane.

Figure 3.14 (a) x profile of the pupil of the optical system considering a dimension of the grating of 32 mm. (b) Point Spread Function of the system.

Figure 3.15 (a) Illumination profile in the object plane and (b) its Fourier transform for the Raman spectrometer.

Figure 3.16 (a) The OTF of the system and the Fourier transform of the Object profile are represented on the same scale to show the weight of the OTF in modulating the spatial frequency of the Object illumination. (b) Fourier Transform of the illumination profile on the image plane.

Figure 3.17 (a) The OTF of the system considering a slit width of 10 µm and the Fourier transform of the Object profile are represented on the same scale to show the weight of the OTF in modulating the spatial frequency of the Object illumination. (b) Fourier Transform of the illumination profile on the image plane.

Figure 4.1 (a) Image of the ruler acquired by using the 20x objective, and (b) its profile after being processed in Matlab.

Figure 4.2 (a) Image of the ruler acquired by using the 40x objective, and (b) its profile after being processed in Matlab.

Figure 4.3 (a) UV and (c) IR excitation spots on the sample. (b) Fluorescence and (d) Raman detected area represented by a He-Ne spot on the sample. Pictures captured with 20x objective.

Figure 4.4 (a) x and (b) y profile of the UV (blue) and IR (red) excitation spots on the sample, with 20x objective.

Figure 4.5 Pictures of the UV spot on the sample using the setup with (a) 20x and (c) 40x objective, and (b), (d) the corresponding profiles obtained by processing the images in Matlab. Dimension of picture (a): 260 × 195 µm. Dimension of picture (c): 130 × 98 µm.

Figure 4.6 Pictures of the IR spot on the sample using the setup with (a) 20x and (c) 40x objective, and (b), (d) the corresponding profiles obtained by processing the images in Matlab. Dimension of picture (a): 260 × 195 µm. Dimension of picture (c): 130 × 98 µm.

Figure 4.7 Output optical power – Input current characteristic of the IR laser.

Figure 4.8 Detector noise – Exposure time characteristic of the CCD used for Fluorescence spectra acquisition.

Figure 4.9 Detector noise – Exposure time characteristic of the CCD used for Raman spectra acquisition.

Figure 4.10 Fluorescence emission profile of blue, green, yellow and red fluorescent reference slide. Spectra have been acquired using the 40x objective and an OD2 optical attenuator. Spectra have been normalized.

Figure 4.11 Raman spectra of well-known materials used for the characterization of the setup, acquired with the 40x objective. (a) Calcite spectrum has been acquired setting the exposure time of the CCD to 60 s, averaging 2 times. The power density on the sample was 42442 W/cm2 (laser input current 300 mA). (b) Cinnabar, (c) White lead, (d) Litharge and (e) Phthalo Blue spectra have been acquired setting the exposure time of the CCD to 10 s, averaging 4 times. The power density on the sample was 21221 W/cm2 (laser input current 200 mA). (f) Polystyrene beads spectrum has been acquired setting the exposure time of the CCD to 30 s, averaging 4 times. The power density on the sample was 42442 W/cm2 (laser input current 300 mA).

Figure 5.1 (a) Block diagram describing the main features of the pre-processing Matlab software. (b) A picture of the Graphical User Interface of the software.

Figure 5.2 Picture of the whole stratigraphy where the three different layers are highlighted: yellow, white mixed with brownish, and tiny blue layer.

Figure 5.3 Picture of the Larionov L5 sample. The dashed blue line represents the UV scanned area, while the dashed red line represents the IR scanned area.

Figure 5.4 Pre-processing step of the Raman dataset of sample L5. (a) Plot of the filtered mean spectrum with the background spline fitting. (b) Plot of the background-subtracted filtered mean spectrum.

Figure 5.5 Raman spectra of the (a) Chrome yellow, (b) Phthalocyanine blue and (c) Ultramarine blue pigments of the sample L5.

Figure 5.6 False colour representation of the sample L5 made with PyMca in order to show the reconstructed distribution of Chrome yellow (red), Phthalocyanine blue (blue) and Ultramarine blue (green) over the sample.

Figure 5.7 Fluorescence spectra of the (a) Cadmium yellow, (b) Zinc white and (c) Cadmium red pigments of the sample L5.

Figure 5.8 False colour representation of the sample L5 made with PyMca in order to show the reconstructed distribution of Cadmium yellow (red), Zinc white (green) and Cadmium red (blue) over the sample.

Figure 5.9 Comparison between the (a) IR and (b) UV scanned area and the reconstruction of the spatial distribution of the pigments identified with (c) Raman spectroscopy and (d) fluorescence spectroscopy.

Figure A.1 Gaussian beam width as a function of the propagation distance along the beam.

Figure B.1 A scheme of all the hardware systems controlled by the PC. Red lines indicate the instrumentation controlled with the custom-made software written in LabWindows, while blue lines indicate the instrumentation controlled using dedicated software.

Figure B.2 A picture of the user interface for the control of the stages.


List of Tables

Table 3.1 Specifications of the optical elements chosen in the UV excitation path and resulting spot size on the sample.

Table 3.2 Specifications of the optical elements chosen in the IR excitation path and resulting spot size on the sample.

Table 3.3 Dimension of the Field of View of the camera for the different objectives considered in the setup.

Table 3.4 Values of the size of the image of the excitation spot on the head of the collecting fiber and the numerical aperture with which light is injected in the fiber, in case of fluorescence signal.

Table 3.5 Values of the size of the image of the excitation spot on the head of the collecting fiber and the numerical aperture with which light is injected in the fiber, in case of Raman signal.

Table 3.6 Specifications of the optical elements chosen in the UV excitation path and resulting spot size on the sample.

Table 3.7 Specifications of the optical elements chosen in the IR excitation path and resulting spot size on the sample.

Table 4.1 Dimension of the measured value of the Field of View, for each objective.

Table 4.2 Comparison between the size of the UV measured spot on the sample and the one of the theoretical spot, for each objective.

Table 4.3 Values of the UV optical power on the sample without attenuator in the excitation path, for the different objectives.

Table 4.4 Comparison between the size of the IR measured spot on the sample and the one of the theoretical spot, for each objective.

Table 4.5 Values of the IR optical power on the sample with the different objectives.

Table 4.6 Comparison between the dimension of the UV illuminated area on the sample and the detected one for fluorescence analysis, for each objective.

Table 4.7 Comparison between the dimension of the IR illuminated area on the sample and the detected one for Raman analysis, for each objective.

Table 4.8 Comparison between the position of the aforementioned well-known materials peaks detected with the setup and the ones found in literature.

Table 5.1 Pigments identified on sample L5 on the basis of Raman spectroscopy.

Table 5.2 Pigments identified on sample L5 on the basis of fluorescence spectroscopy.


Chapter 1

Fluorescence and Raman scattering

In this first chapter, the theoretical physics concepts at the basis of the thesis work are introduced. The chapter is divided into three main sections.

The first section treats the subject of fluorescence, explaining the physics involved in the emission of light from an excited material. The Jablonski formalism is presented to illustrate the phenomenon; then the main characteristics and parameters of fluorescence are described.

The second section presents the theory of Raman scattering. The light-matter interaction is treated within the classical framework, obtaining the frequency components of the scattered light and the selection rule that governs the phenomenon. Then, the factors which mainly influence the intensity of the Raman signal are discussed, considering both the Stokes and the Anti-Stokes components of the scattered light.

The third section introduces the aim of the thesis activity: the design of a novel microscopy setup that combines fluorescence and Raman spectroscopy. We discuss the strengths of these two optical techniques, with a main focus on the advantages of having a hybrid setup.

1.1 Fluorescence

Luminescence is defined as the spontaneous emission of light by a substance and occurs from electronically excited states. Electrons, if brought up to an excited energy level, can relax to the ground state through different decay mechanisms, which can be radiative or non-radiative. Luminescence consists of a radiative decay of the material, thus with emission of light. An interesting form of luminescence is photoluminescence, in which the excitation of the electronic states occurs by absorption of photons.

It is possible to divide luminescence into two broad categories, fluorescence and phosphorescence, depending on the nature of the excited state. In a singlet excited state, the electron in the excited orbital has opposite spin to that in the coupled ground-state orbital, therefore a fast transition is allowed (the selection rule ∆s = 0 is satisfied). The relaxation of the electron to the ground-state orbital results in the emission of a photon due to conservation of energy. This phenomenon is called fluorescence and the emitting substance is called a fluorophore. Typical fluorescence lifetimes are on the order of nanoseconds. On the contrary, in a triplet state the electron in the excited orbital has the same spin as the electron in the coupled ground-state orbital, so the selection rule is not satisfied and the transition is forbidden. Emission of light from a triplet state is called phosphorescence. It is a less probable and slower process compared with fluorescence, with typical lifetimes of the order of milliseconds up to seconds.

Fluorescence spectral data are generally presented as emission spectra, plots that represent the intensity of the fluorescence signal as a function of the wavelength [nm] or wavenumber [cm⁻¹] (Figure 1.1). These spectra strongly depend upon the chemical structure of the fluorophore and the solvent in which it is dissolved, giving important information on the sample [2].

Figure 1.1. Fluorescence spectrum of SYBR Green.

1.1.1 Jablonski diagram

The processes that occur between the absorption and the emission of light can be illustrated in the Jablonski diagram (Figure 1.2), which is the best tool to show the possible configurations of the electron in an excited state.

The levels S0, S1 and S2 represent the singlet states and are, respectively, the electronic ground state and the first and second electronic excited states, while T1 represents the first excited triplet state. Many different vibrational levels, to which the molecule can be excited, are associated with each electronic state. Transitions can occur from each vibrational level of a certain electronic state to each vibrational level of a second electronic state. Transitions due to absorption are very fast, on the order of fs, and the electron is excited to a vibronic level of S1 or S2. Then a process called internal conversion occurs, and the electron relaxes non-radiatively to the lowest vibrational level of S1. This process is on the order of ps, which is very fast compared to the fluorescence dynamics, so it is possible to assume that all the electrons undergo this process before fluorescence occurs. After that, the electrons relax radiatively by emitting fluorescence light, and the decay can occur from S1 to any vibrational level of the electronic ground state, just as in absorption the transitions occur from S0 to any vibrational level of S1. This explains why absorption and emission spectra are generally mirrored.


Figure 1.2. Scheme of Jablonski diagram.

The electron in S1 can alternatively make a transition to the triplet state T1 with a spin conversion, by a mechanism called intersystem crossing. Relaxation from T1 to S0 is forbidden by the selection rule, since the electrons of the coupled orbitals have the same spin; this results in a less probable and slower event, which takes the name of phosphorescence [3].

1.1.2 Characteristics of fluorescence emission

Fluorescence emission displays two main general characteristics, which can be found in almost all fluorophores with few exceptions. The first one is the Stokes shift: the light emitted by the material has lower energy, or longer wavelength, with respect to the light used to stimulate absorption, as shown in Figure 1.3 [3]. This is due to the presence of non-radiative thermalization processes that occur just before and after the optical emission, as detailed in the following. When an electron is excited, it first relaxes non-radiatively to the lowest vibronic level by losing energy, then it decays to a vibrational level of the ground state by emitting fluorescence, and finally it undergoes a further thermalization to go back to the lowest level of the ground state. Therefore, the two thermalization processes, which occur in a non-radiative way, determine the lower energy of the emitted light.

The second general characteristic is that fluorescence emission spectra are typically independent of the excitation wavelength of the light [2]. The explanation again involves the vibrational levels. Whatever the energy (and so the wavelength) of the exciting photons, the first step after excitation is the non-radiative thermalization to the lowest vibronic level of the excited state. Since the radiative decay always starts from this same level, the emission spectrum is independent of the excitation wavelength.

1.1.3 Lifetime and quantum yields

The most important parameters of a fluorophore are the fluorescence quantum yield and the lifetime. The former is defined as the efficiency of fluorescence over all possible decay pathways, while the latter is the average time the molecule spends in the excited state before returning to the ground state [2].


Figure 1.3. Stokes shift of a fluorescence emission spectrum.

We define kr as the radiative decay rate, which represents the probability per unit time that an excited electron relaxes by emitting light, and knr as the non-radiative decay rate, which is instead the probability per unit time of a non-radiative decay of an excited electron (including all possible non-radiative decay processes). The total probability per unit time of a decay, radiative or non-radiative, is the sum of the two coefficients:

\[ k_{tot} = k_r + k_{nr} \qquad (1.1) \]

The fluorescence quantum efficiency is defined as the ratio between the radiative decay rate and the total decay rate:

\[ \phi = \frac{k_r}{k_r + k_{nr}} \qquad (1.2) \]

This quantity is always less than 1 unless the non-radiative decay rate is negligible with respect to the radiative decay rate. Assuming for the sake of simplicity that one absorbed photon promotes one electron, this quantity represents the number of emitted photons relative to the number of absorbed photons.

The fluorescence lifetime is the average time the molecule spends in the excited state before relaxing, and it is therefore the inverse of the total decay rate, which is the probability per unit time of a relaxation:

\[ \tau = \frac{1}{k_r + k_{nr}} \qquad (1.3) \]

Generally, fluorescence lifetimes are on the order of ns. If the non-radiative decay rate is negligible with respect to the radiative one, the lifetime of the fluorophore is called intrinsic or natural lifetime and is given by:

\[ \tau_n = \frac{1}{k_r} \qquad (1.4) \]

After defining these quantities, it is possible to retrieve the rate equation that describes the depopulation of the excited state, and so the intensity of the fluorescence signal as a function of time.


Figure 1.4. Two-level system.

Let us consider a two-level system as in Figure 1.4. Since the absorption process is on the order of fs, very fast compared with the fluorescence lifetime, it is possible to assume that the population of the excited level occurs instantaneously. The rate equation describes the variation in time of the population N1 which, at first order, is a function of the instantaneous population of the state:

\[ \frac{dN_1}{dt} = -\,(k_r + k_{nr})\, N_1 \qquad (1.5) \]

Therefore, the depopulation follows an exponential decay, with the fluorescence lifetime as decay constant:

\[ N_1(t) = N_1(0)\, e^{-t/\tau} \qquad (1.6) \]

The fluorescence signal is directly proportional to the population N1, so it follows the same exponential trend as a function of time [4]:

\[ I(t) = I_0\, e^{-t/\tau} \qquad (1.7) \]

In Figure 1.5 it is possible to notice how fluorescence emission is a random process, and the lifetime is an average value of the time spent in the excited state. For a single exponential decay, 63% of the molecules have decayed at t < τ and 37% decay at t > τ .
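To make the relations above concrete, the following short Python sketch (added as an illustration; the rate values are arbitrary and not taken from any measurement in this work) computes the quantum yield and lifetime of Eqs. 1.2 and 1.3 and the corresponding exponential decay of Eq. 1.7.

```python
# Illustrative sketch (rate values are made up, not taken from the thesis):
# quantum yield, lifetime and exponential decay from Eqs. 1.2, 1.3 and 1.7.
import numpy as np

k_r = 5.0e7     # radiative decay rate [1/s] (assumed)
k_nr = 2.0e8    # non-radiative decay rate [1/s] (assumed)

phi = k_r / (k_r + k_nr)        # fluorescence quantum yield, Eq. 1.2
tau = 1.0 / (k_r + k_nr)        # fluorescence lifetime, Eq. 1.3

t = np.linspace(0.0, 5 * tau, 500)
intensity = np.exp(-t / tau)    # normalized decay curve I(t)/I0, Eq. 1.7

print(f"phi = {phi:.2f}, tau = {tau * 1e9:.1f} ns")
# Fraction of molecules that have already decayed at t = tau (about 63%):
print(f"decayed at t = tau: {1 - np.exp(-1.0):.2f}")
```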

1.1.4 Further fluorescence spectroscopy techniques

Fluorescence spectroscopic techniques are divided into two broad categories: steady-state and time-resolved techniques.

Steady-state techniques are the ones we will deal with in this thesis work; however, in this paragraph we present a brief overview of time-resolved techniques, because the setup will be improved in the future in order to acquire time-gated fluorescence spectra.

In time-resolved techniques the source has a finite pulse duration, such as a pulsed laser; measurements are therefore performed in the time domain, and it is possible to extract information also about the characteristic time of the fluorescence decay.


Figure 1.5. Exponential decay of fluorescence signal.

In this case it is possible to use either time-correlated techniques or time-gated detectors [2], which collect the fluorescence light in multiple time gates opened at different delays with respect to the excitation, as shown in Figure 1.6. This approach allows the reconstruction of the fluorescence decay lifetimes both at the ns scale and at the µs scale, and it allows discrimination between the emission spectra of species emitting different signals at different timescales. Time-resolved techniques are not single-shot: they require more than one measurement in order to sample the signal at different time gates, therefore the sample must remain in a stable condition for the whole acquisition and must not be perturbed by the measurement process.

Figure 1.6. FLIM: Sampling of the fluorescence decay in a single pixel at different delays of the window with respect to the excitation.

The most important and widespread time-resolved technique for the analysis of fluorescence spectra is Time-Correlated Single Photon Counting (TCSPC) [5], which we mention for the sake of completeness.
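As a minimal numerical illustration of the time-gated idea sketched above (not a description of the actual acquisition software), a single-exponential lifetime can be estimated from just two gated intensities separated by a known delay; the lifetime and gate values below are synthetic.

```python
# Sketch of rapid lifetime estimation from two time gates (synthetic data,
# assuming a single-exponential decay as in Eq. 1.7).
import numpy as np

tau_true = 4e-9                    # "unknown" lifetime used to synthesize the data [s]
gate_delay = 2e-9                  # delay between the openings of the two gates [s]

I1 = np.exp(-1e-9 / tau_true)      # signal sampled in the first gate (arbitrary units)
I2 = np.exp(-(1e-9 + gate_delay) / tau_true)   # signal sampled in the delayed gate

tau_est = gate_delay / np.log(I1 / I2)         # rapid lifetime determination
print(f"estimated lifetime: {tau_est * 1e9:.2f} ns")   # recovers ~4 ns
```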


1.2 Raman scattering

When a monochromatic light beam impinges on a medium, photons can undergo different processes. Among the scattering processes, the elastic one, also called Rayleigh scattering, is the most probable: photons are scattered by the medium maintaining the same energy, and so the same frequency and wavelength, as the incident beam. However, a small fraction of the light can be scattered inelastically, meaning that the scattered radiation has a wavelength different from the incident one. The difference in energy lies in the ranges associated with transitions between rotational and vibrational levels of the molecular system. This phenomenon is called Raman scattering, and it is a form of light-matter interaction. Raman scattering plays a fundamental role in spectroscopy in many fields of science because of its application to understanding the molecular structure of a material [6].

Figure 1.7 shows a schematic model of Rayleigh and Raman scattering which is based on the photon description of the electromagnetic radiation.

Figure 1.7. Energy transfer model of elastic (Rayleigh) and inelastic (Raman) scattering based on photon description of electromagnetic radiation.

In the diagram, an arrow pointing upward means a photon annihilation, while an arrow pointing downward means a photon creation. When monochromatic light impinges on the medium, the molecular system is excited to a virtual level by absorbing a photon, and then it relaxes by emitting a secondary photon. The emitted photon can have the same energy as the absorbed one, if the system returns to its initial level, or a different energy: higher if the molecular system relaxes to a lower vibrational energy state, lower if the system relaxes to a higher energy state.

The theoretical treatment of Raman scattering may be approached in three different ways. The first is the purely classical way: radiation is treated as an electromagnetic wave, and the material system as an assembly of independent classical rotors and vibrators. The second one is a semi-classical approach, in which we still retain the classical treatment of radiation but adopt a quantum mechanical description of the medium. Finally, the third option is to quantize both the radiation and the material system; this procedure is the basis of quantum electrodynamics [7]. For the sake of simplicity, in the following section we will treat in detail only the classical formalism, and then briefly mention its limitations with respect to the other two approaches.

1.2.1 Classical treatment

Let us start by considering a molecule and a monochromatic electromagnetic field impinging on it. The radiation will polarize the molecule according to two different phenomena, depending on the polarization state of the molecule. If the molecule has a null dipole moment, the radiation causes a charge displacement in the molecule, therefore the electromagnetic field polarizes the matter and we refer to this situation as polarization by induction. If the molecule already has a non-zero dipole moment, the molecule will orient along the direction of the electric field, and we refer to this situation as polarization by orientation.

Figure 1.8. Electric dipole.

Considering Figure 1.8, where d is the charge displacement and +q and −q are the centres of mass of the positive and negative charges of the molecule, the oscillating electric dipole moment of the molecule will be:

\[ \mu(t) = q\, d(t) \qquad (1.8) \]

The total time-dependent dipole moment vector of the molecule may be written as a sum of powers of the electric field. Since we are considering low-intensity signals, non-linear effects are not involved in the discussion, so we can consider a linear relation as a first-order approximation:

\[ \mu = \alpha E \qquad (1.9) \]

where α is the polarizability. If the system is isotropic, α is a constant; otherwise it is a tensor.

In general, the i-th component of the dipole moment depends on all the components of the field:

\[ \mu_i = \sum_j \alpha_{ij} E_j \qquad (1.10) \]

The electric dipole moment is the source of the oscillating electric field emitted by the molecule; therefore, our objective is to retrieve the frequency-dependent linear induced electric dipole in order to point out its frequency components, which will be the same as those of the emitted radiation.

The polarizability tensor will, in general, be a function of the nuclear coordinates and hence of the molecular vibrational frequencies. Therefore, we may obtain the frequency-dependent dipole moment µ by introducing the frequency dependence of E and α.

Considering a laser beam as incident light, we can assume that the electric field is a monochromatic plane wave oscillating at frequency ω0:

\[ E = E_0 \cos(\omega_0 t) \qquad (1.11) \]

Considering a system which is free to vibrate but not to rotate, each vibration of the molecular system can be expressed as a superposition of independent normal modes Qk which, for a harmonic vibration, can be written as:

\[ Q_k = Q_{k0} \cos(\omega_k t + \delta_k) \qquad (1.12) \]

where Qk0 is the normal mode amplitude, ωk is the oscillation frequency and δk is a phase factor.

The variation of the polarizability with the vibrations of the molecule can be expressed by expanding each component αij of the polarizability tensor α in a Taylor series with respect to the normal coordinates of vibration, as follows:

\[ \alpha_{ij} = (\alpha_{ij})_0 + \sum_k \left( \frac{\partial \alpha_{ij}}{\partial Q_k} \right)_0 Q_k + \frac{1}{2} \sum_{k,l} \left( \frac{\partial^2 \alpha_{ij}}{\partial Q_k \partial Q_l} \right)_0 Q_k Q_l + \dots \qquad (1.13) \]

where the derivatives are taken at the equilibrium configuration. At first order, we can neglect the terms involving higher powers of Q:

\[ \alpha_{ij} = (\alpha_{ij})_0 + \sum_k \left( \frac{\partial \alpha_{ij}}{\partial Q_k} \right)_0 Q_k \qquad (1.14) \]

Substituting the expressions of αij, Qk and Ej into Eq. 1.10, we obtain for the generic i-th component of µ:

\[ \mu_i = \sum_j (\alpha_{ij})_0 E_{j0} \cos(\omega_0 t) + \sum_{j,k} \left( \frac{\partial \alpha_{ij}}{\partial Q_k} \right)_0 E_{j0} Q_{k0} \cos(\omega_0 t) \cos(\omega_k t + \delta_k) \qquad (1.15) \]

Using the trigonometric identity

\[ \cos\alpha \cos\beta = \frac{1}{2} \left[ \cos(\alpha + \beta) + \cos(\alpha - \beta) \right] \qquad (1.16) \]

we can rearrange the expression as:

\[ \mu_i = \mu_{i1}(\omega_0) + \mu_{i2}(\omega_0 + \omega_k) + \mu_{i3}(\omega_0 - \omega_k) \qquad (1.17) \]

where:

\[ \mu_{i1} = \sum_j (\alpha_{ij})_0 E_{j0} \cos(\omega_0 t) \qquad (1.18) \]

\[ \mu_{i2} = \frac{1}{2} \sum_{j,k} \left( \frac{\partial \alpha_{ij}}{\partial Q_k} \right)_0 E_{j0} Q_{k0} \cos\left[ (\omega_0 + \omega_k)\, t + \delta_k \right] \qquad (1.19) \]

\[ \mu_{i3} = \frac{1}{2} \sum_{j,k} \left( \frac{\partial \alpha_{ij}}{\partial Q_k} \right)_0 E_{j0} Q_{k0} \cos\left[ (\omega_0 - \omega_k)\, t - \delta_k \right] \qquad (1.20) \]

The expression of the generic i-th component of the dipole moment highlights all the scattering components at different oscillating frequencies:


• Rayleigh scattering: the emitted radiation oscillates at the same frequency as the incoming one;

• Stokes Raman scattering: the emitted radiation oscillates at a lower frequency, ω0 − ωk, with respect to the incoming radiation;

• Anti-Stokes Raman scattering: the emitted radiation oscillates at a higher frequency, ω0 + ωk, with respect to the incoming radiation.

None of the components of the emitted radiation carries information on the temporal and spatial coherence of the incident laser radiation, so they are emitted in all directions.

The classical treatment can be considered adequate for a qualitative description of the phenomenon. It correctly predicts the frequency dependence for Rayleigh and Raman scattering; however, it has many limitations. It can be applied only to molecular vibrations and not to rotations; moreover, the classical theory does not provide information on how the variation of the polarizability tensor is related to the properties of the scattering molecule. Finally, the classical treatment does not point out the efficiency of the three processes (Rayleigh scattering, Stokes and anti-Stokes Raman scattering), therefore it is not possible to highlight the probability that a particular phenomenon occurs. For these reasons, semi-classical and full quantum mechanical approaches are introduced to completely explain all the aspects of Raman scattering [7].
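As a numerical check of Eq. 1.17 (an illustration added here, not part of the original derivation), one can synthesize a dipole moment with a sinusoidally modulated polarizability and inspect its spectrum: the Rayleigh component at ω0 and the two Raman sidebands at ω0 ± ωk appear directly in the Fourier transform. All frequencies and amplitudes below are arbitrary.

```python
# Sketch: frequency content of mu(t) = alpha(t) * E(t) with a modulated
# polarizability, showing the Rayleigh and Stokes/anti-Stokes components
# (arbitrary units and frequencies, not parameters of the real setup).
import numpy as np

w0 = 2 * np.pi * 100.0      # excitation frequency (arbitrary units)
wk = 2 * np.pi * 10.0       # vibrational frequency (arbitrary units)
alpha0, dalpha = 1.0, 0.2   # static polarizability and its modulation depth

t = np.linspace(0.0, 20.0, 2**14, endpoint=False)
E = np.cos(w0 * t)                        # Eq. 1.11
alpha = alpha0 + dalpha * np.cos(wk * t)  # Eq. 1.14 with a single normal mode
mu = alpha * E                            # Eqs. 1.9 / 1.15

spectrum = np.abs(np.fft.rfft(mu))
freqs = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])

# The three strongest components sit at w0 and w0 +/- wk (Eq. 1.17):
peaks = freqs[np.argsort(spectrum)[-3:]]
print(np.sort(peaks) / (2 * np.pi))       # -> approximately [90, 100, 110]
```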

1.2.2 Selection rules

Looking at Eqs. 1.19 and 1.20, the terms of the electric dipole moment which describe the Raman scattering depend on the derivatives of the components of the polarizability tensor with respect to the molecular vibrational normal modes, evaluated at the equilibrium configuration:

\[ \left( \frac{\partial \alpha_{ij}}{\partial Q_k} \right)_0 \qquad (1.21) \]

Therefore, in order to have a Raman-active vibration in a molecule, at least one component of these derivatives must be non-zero. This condition defines a selection rule for vibrations. Indeed, only vibrations which change the electronic polarizability of the molecule can induce Raman scattering. If the polarizability varies as a function of the vibrational normal modes, the incoming light is inelastically scattered and the scattered light has the three frequency components of Eq. 1.17, which give rise to beats.

1.2.3 Intensity and cross section

Raman scattering is inherently incoherent, therefore the intensity of scattering from a system of N non-interacting molecules is N times that from one molecule. Illuminating a system with monochromatic light, the intensity I of the Raman scattering is directly proportional to the irradiance \(\mathcal{I}\) of the incident light and to the number of molecules [7]:

\[ I \propto \sigma\, \mathcal{I}\, N \qquad (1.22) \]


σ is the scattering cross section, which expresses the efficiency of the Raman process. It depends on the excitation wavelength as the inverse of the fourth power:

\[ \sigma \propto \frac{1}{\lambda^4} \qquad (1.23) \]

Typically, the scattering cross section is very small, therefore the Raman signal is very weak compared to the elastic scattering signal and difficult to detect. In practice, only about one photon in \(10^6\)–\(10^{10}\) is inelastically scattered [8].
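As a quick worked example of this λ⁻⁴ scaling (an illustration added here; these particular wavelengths are not discussed at this point of the text), moving the excitation from 532 nm to 785 nm reduces the scattering efficiency by roughly

\[ \frac{\sigma_{532}}{\sigma_{785}} = \left( \frac{785}{532} \right)^4 \approx 4.7, \]

a factor that enters the trade-off between Raman signal level and fluorescence background discussed in Section 1.2.5.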

1.2.4 Stokes and Anti-Stokes Raman scattering

As we can see from Eq. 1.17, the electric dipole moment is formed by three terms oscillating at three different frequencies, which describe respectively the elastic scattering and the two inelastic ones. In a quantum electrodynamic framework, these three frequency components represent the sources of three different photons emitted by the molecule. In the case of Stokes Raman scattering, the generated photon has a lower energy (lower frequency) than the incoming one, while in the case of Anti-Stokes Raman scattering it has a higher energy (higher frequency) than the incoming one. The former case means that the molecule has been promoted to a higher-energy vibrational state by interacting with a photon, which loses part of its energy, while the latter case means that the molecule was already in an excited vibrational state and then relaxes to a lower-energy state by interacting with a photon, which gains energy. While Stokes scattering is always possible, Anti-Stokes scattering requires that the molecule be in an initially excited state when the incident photon arrives, which is an unlikely condition; this is why the intensity of the Anti-Stokes signal is very small compared to the Stokes one.

Recalling the expression of the Bose–Einstein distribution, which describes the thermal population of a vibrational state with energy ħωk at temperature T:

\[ n(\omega_k, T) = \frac{1}{e^{\hbar \omega_k / k_B T} - 1} \qquad (1.24) \]

it is possible to show that the probability of an Anti-Stokes scattering process is proportional to n, while that of a Stokes process goes with n + 1. For this reason, the Stokes signal is always more intense than the Anti-Stokes one. Moreover, at room temperature n ≪ 1 for typical vibrational frequencies, so Anti-Stokes processes are rare. The ratio between the Anti-Stokes and the Stokes intensity is:

\[ \frac{I_{AS}}{I_S} = e^{-\hbar \omega_k / k_B T} \qquad (1.25) \]

At room temperature, the Stokes scattered light is typically one or more orders of magnitude more intense than the Anti-Stokes one, depending on the vibrational frequency. By measuring the intensity of the two peaks, it is possible to estimate the local temperature of the sample.
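As an illustration of this last point (a sketch with hypothetical numbers, not data from the setup), the temperature can be obtained by inverting Eq. 1.25 once the two peak intensities and the Raman shift of the mode are known; the prefactor in the exact anti-Stokes/Stokes ratio involving (ω0 ± ωk)⁴ is neglected here, as it is in Eq. 1.25.

```python
# Rough sketch (values are made-up examples, not thesis measurements):
# estimating the local sample temperature from the Stokes / anti-Stokes ratio.
import numpy as np
from scipy.constants import h, c, k as k_B

def temperature_from_ratio(wavenumber_cm, I_stokes, I_antistokes):
    """Invert Eq. 1.25, I_AS/I_S = exp(-hbar*omega_k/(k_B*T)), for T.

    wavenumber_cm : Raman shift of the vibrational mode in cm^-1
    """
    energy = h * c * wavenumber_cm * 100.0   # hbar*omega_k in joules (cm^-1 -> m^-1)
    return energy / (k_B * np.log(I_stokes / I_antistokes))

# Hypothetical peak intensities for a 1086 cm^-1 mode (calcite-like):
print(temperature_from_ratio(1086.0, I_stokes=1.0, I_antistokes=0.0054))  # ~300 K
```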

1.2.5 Further Raman spectroscopy techniques

In our experimental setup we will exploit the spontaneous Raman emission, after having excited the sample with a laser, and we will collect the signal in a dispersive spectrometer. However, the intensity of the Raman signal is very weak compared to the high level of elastic scattering, therefore it is very difficult to detect, and this issue leads to the need for good spectral filtering and a low-noise detector. The intensity of the Raman signal is proportional to the inverse of the fourth power of the excitation wavelength, so one may be tempted to use a more energetic laser to obtain a better signal. Nevertheless, high-energy incoming light will stimulate electronic transitions, generating a fluorescence emission background that can completely obscure the Raman scattering signal. This is why Raman spectroscopy is often performed with NIR excitation light, to reduce the amount of background fluorescence emission. Moreover, aiming at improving Raman spectroscopy techniques, several technologies based on the Raman scattering phenomenon have been developed [9]. We will briefly mention some of them.

To enhance the Raman scattering process, Surface Enhanced Raman Scattering (SERS) uses a roughened noble-metal substrate in proximity of the analyte material. This enhancement mechanism is due to the resonance effect between the incident light and localized surface plasmons, which amplifies both the incident and the scattered light in proximity of the surface, and it can increase the signal by up to six orders of magnitude [10]. Other techniques are based on coherent effects. The development of lasers with high irradiance and extremely short pulses opens the opportunity to observe non-linear effects. Coherent Anti-Stokes Raman Spectroscopy (CARS) and Stimulated Raman Spectroscopy (SRS) are two techniques which exploit non-linear effects involving the third-order nonlinear susceptibility of the medium. CARS is a particular case of four-wave mixing in which coherent anti-Stokes radiation is generated by the interaction of two input beams (pump and Stokes) with the vibrational state, while SRS is a non-parametric process in which the interaction of the two input beams with the sample induces stimulated emission to the vibrational state, resulting in a simultaneous pump beam attenuation and Stokes field amplification [8].

1.3 Hybrid setup

In the previous sections we have treated the theory of fluorescence emission and Raman scattering. The aim of this thesis work is to design and construct a setup able to perform combined fluorescence and spontaneous Raman spectroscopy. It is important to highlight the idea of combining these two techniques in a single setup: we chose to do this because fluorescence and Raman spectroscopy are both optical techniques, and moreover they are non-invasive and contactless, so they do not damage the sample as long as safe operating optical power densities are used. Finally, they make it possible to perform in-situ and real-time analyses.

Both fluorescence and Raman analysis techniques have advantages and drawbacks. Fluorescence is an optical technique with high sensitivity and selectivity due to the presence of the fluorophore, and there is a simple relation between the concentration of excited molecules and the intensity of the emitted signal. However, it is not an analytical method, since it is strictly related to the presence of the fluorophore and it depends on the environmental conditions.

On the other hand, Raman spectroscopy is an analytical technique which gives information directly on the sample, and it is highly specific, like a chemical fingerprint of a material. Raman scattering is specific to the nuclear masses of the atoms forming the chemical bonds in the molecule, to the shape (symmetry) of the molecule and to the electronic structure, making an unambiguous identification of the molecule possible. Many organic and inorganic materials are suitable for Raman analysis, no sample preparation is needed, and samples can be analysed also through glass or polymer packaging. Nevertheless, the spontaneous Raman effect is very weak, so it is a technique with low sensitivity: it has a very low cross section with respect to the elastic scattering one, and moreover the fluorescence of the sample can hide the Raman spectrum. As said, these two techniques are both optical and non-invasive, therefore combining them in a single setup allows us to perform a more detailed and deeper investigation on the sample than using a single technique. Moreover, it is quite possible that a certain sample gives a good fluorescence signal and a bad Raman one, or vice versa; with a combined setup it is still possible to extract some information also for these kinds of samples. In this latter case we take advantage of the complementarity of the two techniques.

In the literature there are different examples of groups that implemented a hybrid setup exploiting its advantages [11], [12], [13], and they reported that multi-modal spectroscopy can give extensive information about the material under examination. Fluorescence and Raman spectroscopy are highly complementary techniques which permit molecular analysis on the same location in a sample. An integration of different techniques can provide complete information and can improve the sensitivity and capability of the instrument to meet the multi-faceted demands of real-life applications. Finally, some of the disadvantages of using a single technique can be reduced by a bi-modal technique, as pointed out before.


Chapter 2

Spectroscopy and Microscopy

This chapter treats theoretical engineering concepts useful for the understanding of the thesis work. We briefly recall that the aim of this thesis work is to design and develop an optical microscope for the detection of the fluorescence and Raman spectra at all points of a sample surface. The chapter is divided into three sections.

The first and second sections deal with the principles of imaging spectroscopy. In the former, the Hyperspectral Imaging technique is introduced to explain the basic principle of the spectral mapping of a sample. Since we will collect a hyperspectral imaging dataset by raster-scanning the sample surface and resolving the spectrum of the scattered/emitted light with a dispersive spectrometer, this latter element is discussed in detail, focusing on its characteristics and its main parameters.

The third section deals with the basics of optical microscopy. The 4f system, which is the cornerstone of all modern microscopes, is discussed in depth, retrieving the expressions of the main operational optical functions for both coherent and incoherent sources, and explaining their physical meaning and properties.

2.1 Hyperspectral Imaging

In our thesis work we will exploit dispersive spectroscopy by using a spectrometer whose scheme will be fully described in the next sections. The main goal is to collect a spectrum at each point of a sample surface by employing a raster-scanning approach. Spectral imaging, also known as imaging spectroscopy [14], refers to a set of methods and devices for acquiring the light spectrum for each point of the image scene. Among them, Hyperspectral Imaging (HSI) is the one we will focus on.

Hyperspectral imaging acquires the full continuous spectrum of light for each point (x, y) of the scene; therefore, the resulting data (Figure 2.1) consist of a three-dimensional data-cube (hypermatrix) which contains information about the intensity of the collected radiation for each position and wavelength (x, y, λ).
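As a concrete illustration of this structure, the sketch below represents the data-cube as a three-dimensional array; the dimensions are arbitrary illustrative values and do not correspond to our setup.

```python
import numpy as np

# Hyperspectral data-cube: intensity as a function of the two spatial
# coordinates (x, y) and of the spectral axis (lambda).
n_x, n_y, n_lambda = 50, 50, 1024        # illustrative scan points and spectral channels
cube = np.zeros((n_x, n_y, n_lambda))    # cube[i, j, :] is the spectrum at point (i, j)

spectrum_at_point = cube[10, 20, :]      # full spectrum at one scan position
image_at_channel = cube[:, :, 512]       # spatial map at a single spectral channel
```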

Two main techniques for acquiring the data-cube are available: spatial scanning and spectral scanning. In the former, the sample is scanned by moving it relative to the excitation source and spectra are acquired by a spectrometer; in the latter, the sample is scanned spectrally, and an image is acquired for each wavelength by using a 2D sensor.


Figure 2.1. Data-cube obtained by Raman imaging spectroscopy.

Spatial scanning can be implemented in two ways. The first is point-to-point mapping (whisk-broom), in which the spectrum is collected point by point by performing a raster scan; the second is line mapping (push-broom), in which the spectrum is collected from an entire row of pixels at once, by defocusing the excitation source to cover a larger area and performing a line scan. With regards to spectral scanning, the laser is defocused to illuminate the whole sample surface, and an image sensor is employed to collect the emitted/reflected light. Spectral information is retrieved by using a tunable spectral filter, such as a liquid crystal tunable filter (LCTF) or an acousto-optic tunable filter (AOTF), or a set of bandpass filters centred at different wavelengths and mounted on a motorized wheel. The hypercube is constructed by acquiring an image of the entire sample for each wavelength available in the set; the main limit consists in the discrete number of filters [9]. Alternatively, interferometric techniques have also been proposed [15]. The three acquisition techniques are shown in Figure 2.2.


Figure 2.2. (a) Point mapping, (b) line-scanning mapping and (c) spectral scanning.

The spectral scanning approach is faster and does not require mechanical handling of the sample; however, the acquired signal is dominated by the points with the highest intensity. This drawback is not present in the spatial scanning approach, in particular in point mapping, which is the one we have implemented.
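To make the point-mapping scheme explicit, the following sketch outlines the logic of a whisk-broom acquisition loop; the stage and spectrometer objects and their move_to/acquire methods are hypothetical placeholders, not the actual drivers of our setup.

```python
import numpy as np

def raster_scan(stage, spectrometer, n_x, n_y, step_um, n_lambda):
    """Point-by-point (whisk-broom) mapping: move the sample, record one spectrum
    per position, and stack the spectra into the (x, y, lambda) data-cube.
    `stage` and `spectrometer` are hypothetical driver objects; real hardware
    APIs will differ."""
    cube = np.zeros((n_x, n_y, n_lambda))
    for i in range(n_x):
        for j in range(n_y):
            stage.move_to(i * step_um, j * step_um)   # position the sample under the laser spot
            cube[i, j, :] = spectrometer.acquire()    # one spectrum per (x, y) point
    return cube
```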


2.2 Introduction to spectroscopy

Spectroscopy refers to the measurement of radiation intensity as a function of wavelength; it is the study of the interaction between matter and electromagnetic radiation as a function of the wavelength or frequency of the radiation. It is a fundamental tool in physics, chemistry and astronomy to investigate the composition and properties of matter at the molecular scale. A great deal of physico-chemical information on objects can be obtained by measuring the spectrum of the light they emit, scatter or reflect [15].

In the following sections we will focus on dispersive spectroscopy, which is characterized by the use of a dispersive element to separate the spectral components of the light, and we will describe in detail the configuration of a dispersive spectrometer, focusing on the main components and parameters which are fundamental for its design.

2.2.1 Spectrometer

A spectrometer is an instrument used to study the spectrum of light. In its simplest form, a spectrometer has an entrance slit through which it collects the light to be analysed; an optical element (lens or mirror) then produces a collimated beam, which is dispersed by a grating or a prism. The dispersed spectrum is finally focused and recorded with an array detector. Figure 2.3 shows the general scheme of a spectrometer.

A spectrometer can be considered as a 4f imaging system which forms the image of the entrance slit (placed in the object plane) on the detector (placed in the image plane); the entrance slit and the detector therefore lie on conjugate planes. The image of the entrance slit, ignoring aberrations and diffraction, will in general not have the same dimensions as the slit itself. It is therefore possible to define a tangential and a sagittal magnification, which account for the ratios of their widths and heights, respectively. Most spectrometers have these magnifications equal to 1, so, under the conditions stated above, a perfect replica of the entrance slit is formed on the detector.


2.2.2 The diffraction grating

The main element of a spectrometer is the dispersive element, which separates in space the spectral components of the light. A diffraction grating (Figure 2.4) is a collection of transmitting or reflecting elements separated by a distance comparable to the wavelength of the light under study; the electromagnetic field that interacts with them has its amplitude or phase modified in a predictable manner. In the reflection configuration, which is the most common, the individual dispersive elements are called “grooves”. Since the distance between grooves must be comparable with the wavelength of light, high precision in the groove spacing is required.

Figure 2.4. Scheme of a reflection diffraction grating.

To explain the main working principles of a diffraction grating, let us consider, for the sake of simplicity, monochromatic light. When it is incident on the grating surface, it is diffracted into discrete directions. Each groove can be thought of as an independent source of diffracted light, and the light diffracted by all the grooves combines to form a diffracted wavefront. The usefulness of this phenomenon is that, for a given spacing d between grooves, there exists a discrete set of angles along which the light diffracted by each groove is in phase with that diffracted by any other groove, so they combine constructively.

The geometrical path difference between light reflected by two adjacent grooves is:

d sin α + d sin β (2.1)

where α and β are the angles of incidence and diffraction of the light (β < 0).

The principle of interference states that only when this path difference is equal to the wavelength of light, or to an integer multiple of it, is the light from adjacent grooves in phase, giving constructive interference; in all other cases there is some degree of destructive interference. This condition is expressed by the grating equation:

mλ = d (sin α + sin β) (2.2)


Here m is the diffraction order, an integer. Since |sin x| cannot exceed 1, only diffraction orders corresponding to

|mλ/d| < 2 (2.3)

are physically meaningful.

It is worth noticing that, since m can assume only discrete values, there is only a discrete set of angles β_m, for a given incidence angle α, at which the light interferes constructively.

Instead of d, the distance between two consecutive grooves, it is often convenient to use the parameter G = 1/d, the groove frequency, usually expressed in grooves per millimetre. Considering now polychromatic light, in each spectral order m the different components of the spectrum of the incident light are dispersed at different angles, as a function of the specific wavelength:

β(λ) = arcsin(mλ/d − sin α) (2.4)

Therefore, considering a specific diffraction order m, this formula states that the various components of the spectrum of the signal under study are separated in angle when they are diffracted, implying that they will also be separated in space on the image plane (the detector).

It is important to point out that successive spectra can overlap. Indeed, light of wavelength λ diffracted at the first order overlaps in space with light at λ/2 diffracted at the second order (m = 2), and so on. This superposition is an intrinsic characteristic of diffraction gratings: a detector sensitive to both wavelengths will record both signals without being able to distinguish them [16]. It can be prevented by suitable filtering, called order sorting.
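As a numerical illustration of the grating equation and of order overlap, the following sketch computes the diffraction angle for assumed values of the incidence angle and groove density (not those of our instrument).

```python
import numpy as np

def diffraction_angle(wavelength_nm, alpha_deg, grooves_per_mm, m):
    """Diffraction angle beta (degrees) from the grating equation
    m*lambda = d*(sin(alpha) + sin(beta)). Returns NaN when the order
    does not exist (|sin(beta)| > 1). Illustrative values only."""
    d_nm = 1e6 / grooves_per_mm                                 # groove spacing in nm
    sin_beta = m * wavelength_nm / d_nm - np.sin(np.radians(alpha_deg))
    return np.degrees(np.arcsin(sin_beta)) if abs(sin_beta) <= 1 else float("nan")

# Order overlap: 800 nm in first order and 400 nm in second order are diffracted
# at the same angle, hence the need for order-sorting filters.
print(diffraction_angle(800, 30, 600, 1))   # about -1.1 degrees
print(diffraction_angle(400, 30, 600, 2))   # same angle as above
```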

2.2.3 Dispersion

The main purpose of a diffraction grating is to disperse light spatially by wavelength. A beam of white light incident on a grating will be separated into its spectral components and each wavelength will be diffracted in a different direction. Dispersion is a measure of the separation among different wavelengths. It is possible to define an angular dispersion and a linear dispersion [17].

The angular dispersion expresses the variation of the diffraction angle dβ between two wavelengths of light differing by dλ, considering the diffraction order fixed.

Therefore, it is possible to write the angular dispersion by differentiating Eq. 2.4:

D = dβ/dλ = Gm sec β (2.5)

The linear dispersion expresses the variation of the position along the spectrum between two wavelengths of light differing by dλ, considering the diffraction order fixed. It is worth noticing that the physical quantity that allows the calculation of the variation in space from the variation in angle is the focal length of the optical element which focuses the collimated beam of light coming from the grating. Since the variation of the position in the image plane can be written as:

dl = f dβ (2.6)

the linear dispersion is the product of the angular dispersion and the effective focal length of the system: a higher effective focal length corresponds to a higher linear dispersion. Instead of the linear dispersion, its inverse is frequently used; it is called the plate factor and expresses the change in wavelength (in nm) corresponding to a change in position in the image plane (in mm):

P = 1/(fD) = cos β/(Gmf) (2.7)

A system with a smaller plate factor has a higher dispersive capability than one with a larger plate factor [16]. It is worth noticing that the plate factor is inversely proportional to the groove density, so a higher groove density means higher dispersion. In practice, this is the parameter used to describe the dispersion of a diffraction grating.
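A short numerical example may help fix the orders of magnitude; the groove density, diffraction angle and focal length below are assumed values, not those of our spectrometer.

```python
import numpy as np

# Worked example for the angular dispersion (Eq. 2.5) and the plate factor (Eq. 2.7).
G = 600.0                  # groove density [grooves/mm], assumed value
m = 1                      # diffraction order
beta = np.radians(10.0)    # diffraction angle, assumed value
f = 100.0                  # effective focal length [mm], assumed value

d_nm = 1e6 / G                        # groove spacing [nm]
D = m / (d_nm * np.cos(beta))         # angular dispersion [rad/nm]
P = d_nm * np.cos(beta) / (m * f)     # plate factor [nm/mm], equal to 1/(f*D)

print(f"angular dispersion: {D * 1e3:.2f} mrad/nm")   # about 0.61 mrad/nm
print(f"plate factor: {P:.1f} nm/mm")                 # about 16.4 nm/mm
```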

2.2.4 Parameters of interest in diffraction gratings

The angular dispersion, and especially the plate factor, together with the groove density, are important parameters for the design of a spectroscopic system and for the determination of the ultimate spectral resolution of the signal of interest. In addition, there are other parameters worth pointing out.

The first one is the number of independent wavelengths that it is possible to resolve. In the diffraction limit this parameter is simply the ratio between the width of the detector, w_p, and the width of the image of the entrance slit on the detector, w_s:

n_λ = w_p / w_s (2.8)

If the width of the entrance slit decreases, the number of independent wavelengths which can be detected increases, and this enhances the spectral resolution of the system. This is true in the diffraction limit; however, if the slit is narrowed further, diffraction phenomena become more and more relevant and must be taken into account, so the number of independent wavelengths that can be detected is no longer limited by the width of the entrance slit but by the resolution of its diffracted image on the detector. Moreover, the intensity of the light entering the spectrometer decreases as the slit narrows, and its magnitude can become comparable with the noise. Therefore, the choice of the entrance slit width is a trade-off between the intensity of the signal and the spectral resolution to be achieved.

The resolving power R of a grating measures its ability to separate adjacent spectral lines of average wavelength λ:

R = λ / δλ (2.9)

where δλ is the limit of resolution, i.e. the smallest difference between two wavelengths that can still be distinguished.


The Bandpass B (or spectral bandwidth) of a spectrometer is the wavelength interval of diffracted light which impinges on the detector. It can be calculated by multiplying the plate factor by the width of the detector:

B ≃ w_p P (2.10)

For a fixed detector size, an instrument with a smaller bandpass can resolve wavelengths that are closer together than an instrument with a larger bandpass, since the bandpass is proportional to the plate factor. Therefore, a higher groove density means higher dispersion but a smaller bandpass.

The most important parameter is the spectral resolution, which is the minimum wavelength variation detectable by the whole system, i.e. the minimum wavelength difference that can be resolved unambiguously. It depends on many factors: not only on the grating, but also on the width of the entrance slit, the size of the detector elements, the aberrations in the image, the magnification of the image and the diffraction phenomena induced by the entrance slit [16]. In the diffraction-limited regime the spectral resolution can be written as:

δλ = (w_s / w_p) B (2.11)
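Continuing with the assumed values of the previous example, the following sketch estimates the bandpass, the number of resolvable wavelengths and the diffraction-limited spectral resolution for hypothetical detector and slit widths.

```python
# Eqs. 2.8, 2.10 and 2.11 with assumed values (not those of our instrument).
P = 16.4       # plate factor [nm/mm], from the previous sketch
w_p = 25.0     # detector width [mm], assumed value
w_s = 0.050    # width of the entrance-slit image on the detector [mm], assumed value

B = w_p * P                        # bandpass, about 410 nm
n_lambda = w_p / w_s               # about 500 resolvable wavelengths (diffraction limit)
delta_lambda = (w_s / w_p) * B     # spectral resolution, about 0.8 nm (equal to w_s * P)

print(B, n_lambda, delta_lambda)
```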

The last parameter of interest is the Free Spectral Range, defined as the range of wavelengths in a given spectral order for which superposition of light from adjacent orders does not occur. It can be calculated directly from its definition: let us find the maximum ∆λ for which the light of order m starts to be superimposed on the light of order m + 1:

m (λ + ∆λ) = (m + 1) λ (2.12)

It is possible to define the free spectral range as the ∆λ which satisfies this condition:

F_λ = ∆λ = λ / m (2.13)

This is the maximum wavelength shift before the signal at order m overlaps with that at order m + 1 [17]. The concept of free spectral range applies to all gratings capable of operating in more than one diffraction order. Moreover, this parameter is related to order sorting: systems with a greater free spectral range have less need for order-sorting filters. The free spectral range reflects the periodicity of the system in the spectral domain.
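As a simple illustration, the sketch below evaluates Eq. 2.13 for assumed wavelengths and orders.

```python
def free_spectral_range(wavelength_nm, m):
    """Free spectral range F = lambda/m (Eq. 2.13): the wavelength interval in
    order m before light from order m+1 starts to overlap."""
    return wavelength_nm / m

# Illustrative values: at 600 nm the first order is overlap-free over a 600 nm
# interval, while in second order the interval shrinks to 300 nm, which is why
# higher orders typically require order-sorting filters.
print(free_spectral_range(600, 1))   # 600.0
print(free_spectral_range(600, 2))   # 300.0
```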

2.2.5 Czerny-Turner configuration

As said previously, there are two types of diffraction gratings: reflecting and transmitting ones. In addition, it is possible to distinguish between two configurations regarding the geometry of the grating: plane grating systems and concave grating systems. A plane grating has a flat surface, so it is used with collimated incident light. A collimated beam impinges on the grating, which diffracts it, but the diffracted beam remains collimated. Therefore, a plane grating requires auxiliary optical elements which modify the wavefronts of the light incident on and diffracted by the
