POLITECNICO DI MILANO
School of Industrial and Information Engineering
Dipartimento di Fisica
Master Degree in Engineering Physics
Design and development of a combined Raman and
Laser-induced Fluorescence optical microscope
Advisor: Prof. Daniela Comelli
Co-Advisor: Prof. Gianluca Valentini
Candidate: Luca Parmigiani, Matr. 920280
Contents
List of Figures
List of Tables
Abstract
Sommario
1 Introduction
1.1 Fluorescence spectroscopy
1.1.1 Phenomena of fluorescence
1.1.2 Jablonski diagram
1.1.3 Fluorescence lifetimes and quantum yields
1.1.4 Fluorescence spectroscopy as a tool
1.2 Raman spectroscopy
1.2.1 Classical treatment
1.2.2 Semi-classical treatment
1.2.3 Selection rules
1.2.4 Intensity and cross section
1.2.5 Raman spectroscopy as a tool
1.3 Combined Fluorescence and Raman devices
2 Scanning optical microscopy
2.1 Microscope
2.2 The 4f system
2.2.1 Lateral resolution
2.2.2 Intensity propagation
2.2.3 Axial resolution
2.3 Study of the propagation of a Gaussian beam through a non-ideal 4f system
3 Hyperspectral imaging by spatial scanning of a sample surface
3.1 Hyperspectral imaging
3.2 Dispersive spectrometer
3.2.1 The diffraction grating
3.2.2 Dispersion
3.2.3 Parameters of interest
3.2.4 Grating mount configuration
4 Design of the device
4.1 Objectives
4.2 The fluorescence section
4.2.1 Excitation laser and the multimode excitation fiber
4.2.2 Detection camera and spectrometer
4.2.3 The optical system – choice of the optics
4.2.4 Expected spatial resolution
4.2.5 Expected spectral resolution
4.3 The Raman section
4.3.1 Excitation laser and the monomode excitation fiber
4.3.2 Detection camera and spectrometer
4.3.4 Expected spatial resolution
4.3.5 Expected spectral resolution
4.4 The section of the widefield observation of the sample
4.4.1 Illumination lamp, illumination optical fiber and the color camera
4.4.2 The optical system – choice of the optics
4.4.3 Theoretical field of view
5 Characterization of the device
5.1 The fluorescence section
5.2 The Raman section
5.3 The section of the widefield observation of the sample
5.4 CAD simulation
5.5 Characterization
5.5.1 Spatial calibration of the field of view
5.5.2 Characterization of the laser beam
5.5.3 Measurement of the detectors noise
5.5.4 Preliminary spectral measurements
5.6 Sample spatial scanning
5.7 Description of the remote control
5.7.1 Motor remote control
5.7.2 A bigger remote control program
6 Measurement of a paint micro-stratigraphy
6.1 Larionov oil painting microsample
6.2 Analysis
6.2.1 Fluorescence dataset
6.2.2 Raman dataset
Bibliography
Ringraziamenti
A Wave propagation
A.1 Monochromatic Wave Propagation
A.2 Monochromatic Field Propagation through a Lens
A.3 Intensity Propagation
A.4 3D imaging
B Study of the dependence of the spectral resolution on the entrance slit width
List of Figures
1.1 Absorption and emission spectra of anthracene molecules
1.2 Jablonski diagram
1.3 Epifluorescence microscope
1.4 Example of epifluorescence microscope use
1.5 Energy transfer model of elastic and inelastic scattering
1.6 Example of Raman microscope use
2.1 2f configuration
2.2 4f configuration
2.3 4f system
3.1 Spectral imaging, Line scanning, Point scanning
3.2 General scheme of a spectrometer
3.3 Scheme of a reflection diffraction grating
3.4 Czerny-Turner mount
4.1 4f system path
4.2 Position and size of the waist vs. d
5.1 Numeration
5.2 Real device
5.3 UV excitation and fluorescence detection paths
5.5 Lamp excitation and camera detection paths
5.6 Microscope CAD simulation
5.7 Microscope 3D model: frontal and lateral sides
5.8 Field of view for 20x and for 40x
5.9 Spot of UV for 20x and for 40x
5.10 Spot of IR for 20x and for 40x
5.11 Excitation spots alignment
5.12 Correct alignment of the collection and the excitation points
5.13 Noise of CCDs
5.14 Spectra of fluorescent reference slides
5.15 Raman spectra of pigments and polymer
5.16 Sample holder
5.17 LabWindows software for stepper control
5.18 Instruments controlled by PC
6.1 Larionov painting stratigraphy - image taken from [24]
6.2 Part of the microsample from Larionov's oil painting and scanned area
6.3 ROI Imaging To
6.4 Fluorescence - Larionov: scanned area, intensity map and superimposition
6.5 Fluorescence spectra of Larionov pigments: zinc white, cadmium yellow and cadmium red
6.6 Raman - Larionov: scanned area, intensity map and superimposition
6.7 Raman spectra of Larionov pigments: phthalocyanine blue, ultramarine blue and chromium yellow
A.1 Free space propagation with Kirchhoff
A.2 2f and 4f configurations
B.2 Graphs with ws = 100 µm
B.3 Graphs with ws = 10 µm
List of Tables
1.1 List of existing hybrid Raman-LIF devices
4.1 Expected spots for fluorescence section
4.2 Expected spots for Raman section
5.1 Field of view dimensions
5.2 Powers
5.3 Estimated spot size dimensions
5.4 Noise of spectrometers
5.5 Raman peaks of pigments and polymer (s = strong; m = medium; w = weak; v = very; sh = shoulder)
6.1 Pigments fluorescence peaks of the painting (* depending on Zn molar fraction; ** depending on Se molar fraction)
6.2 Pigments Raman peaks of the painting (s = strong; m = medium; v = very)
Abstract
This thesis work proposes the design, development and characterization of a combined Raman and laser-induced fluorescence optical microscope with which to perform raster-scanning spectroscopy on samples for laboratory analysis. Laser-induced fluorescence (LIF) and Raman spectroscopy are two powerful tools for the chemical analysis of both organic and inorganic materials. Neither requires any particular preparation of the sample, and both are non-destructive and non-invasive techniques. They are also both optical techniques, which makes their combination easier; the big advantage of the combination is their complementarity: LIF gives information on electronic energy levels and Raman on vibrational levels.
In the first chapter the principles of LIF and Raman spectroscopy are briefly presented. We focus on the theory behind the two techniques, but also on the use of LIF and Raman spectroscopy as analysis tools and, in particular, on the state of the art of their combined use.
In the second chapter the principles of scanning optical microscopy are presented. A study of the propagation of a Gaussian beam through a non-ideal 4f system is also reported.
The third chapter describes hyperspectral imaging and the theory of the dispersive spectrometer. Hyperspectral imaging is explained because, at the end of the scanning, the implemented device automatically records the acquired measurements in a data cube. It is an imaging method with which to acquire the full continuous spectrum of light for each point of the scan, resulting in a three-dimensional data cube. There are two main techniques for acquiring the hypercube: spectral scanning and spatial scanning. The device presented in this thesis is designed to perform the second one, in particular a version called point scanning.
The fourth chapter describes the design of the device, explaining the optical study performed. The device is divided into three sections: fluorescence, Raman and widefield observation of the sample. For each of the sections, the most important features are reported: the excitation sources, the detection system, the optical system with the choice of the optics, the expected spatial and spectral resolutions for the fluorescence and Raman sections, and the theoretical field of view for the widefield observation section.
The fifth chapter describes the device and its characterization. A description is presented for each of the previous sections. The advantage of creating a CAD model of the device, using the software SOLIDWORKS®, is also explained. The characterization includes the spatial calibration of the field of view and the estimate of the laser spot sizes, which affect the real spatial resolution of the device. We have also characterized the typical power on the sample. Finally, we have characterized the detectors in terms of noise and performed spectral measurements of some known materials which show fluorescent behaviour or are Raman-active specimens. At the end of this chapter, the sample handling and the remote control used to manage it and the other instruments linked to the device operation are described.
The sixth chapter describes a real measurement performed by raster scanning on a paint stratigraphy taken from a Russian oil painting, with the aim of identifying and mapping the spatial distribution of photoactive pigments. The analysis of the two datasets is based on a physical model, implemented in Python software, that allows us to create intensity maps related to the peaks of the spectra of each scan point. As will be explained in the chapter, the results of the analysis allow us to conclude that the painting pigments have been mapped quite correctly.
The application of the combined Raman and LIF optical microscope has highlighted its strengths and its weaknesses, which can be addressed in future work.
Sommario
This thesis proposes the design, development and characterization of an optical microscope that combines the Raman and laser-induced fluorescence techniques, with which spectroscopic measurements are performed by raster scanning a sample for laboratory analysis. Laser-induced fluorescence (LIF) and Raman spectroscopy are two powerful diagnostic methods for the analysis of the chemical composition of organic and inorganic materials. Neither requires any preparation of the sample, and both are non-destructive and non-invasive techniques. Both are optical techniques, and this aspect makes their combination easier; the great advantage of combining them lies in their complementarity: LIF gives information on electronic energy levels and Raman on vibrational levels.

In the first chapter, the principles of fluorescence and Raman spectroscopy are briefly presented. We focus on the theory behind the two techniques, but also on the use of the two spectroscopies as analysis methods and, in particular, on the state of the art of their combined use.

In the second chapter, the principles of scanning optical microscopy are explained. A theoretical study of the propagation of a Gaussian beam through a non-ideal 4f optical system is also reported.

The third chapter describes hyperspectral imaging and the theory of the dispersive spectrometer. Hyperspectral imaging is discussed because, after the scan, the implemented device automatically saves the acquired measurements in a cubic dataset. It is an imaging method with which the full continuous spectrum of light is acquired for each point of the scan, yielding a three-dimensional data cube. There are two main techniques for acquiring a hypercube: spectral scanning and spatial scanning. The device presented in this thesis is designed to perform the latter, in particular a version of it called point-by-point scanning.

The fourth chapter describes the design of the device, explaining the optical study carried out. The device is divided into three sections: fluorescence, Raman and sample observation. For each of them the main characteristics are reported: the excitation sources, the detection system, the optical system with the choice of the optics, the expected spatial and spectral resolutions for the fluorescence and Raman sections, and the field of view for the sample observation section.

The fifth chapter describes the device and its characterization. Each section is described in detail. The advantage of making a CAD simulation of the device, using the SOLIDWORKS® software, is also explained. The characterization includes the spatial calibration of the field of view and the estimate of the size of the laser spots on the sample, which affects the real spatial resolution of the device. We also characterized the typical power on the sample. Finally, we characterized the noise of the detectors and performed spectral measurements of known materials, either fluorophores or Raman-active species. At the end of the chapter, the sample handling and the remote control used to move the sample and to operate the other instruments required by the device are described.

The sixth chapter describes a real measurement performed by raster scanning on a paint stratigraphy taken from a Russian oil painting, with the aim of identifying and mapping the spatial distribution of the photoactive pigments. The analysis of the two datasets is based on a physical model, using a program written in Python, that allows intensity maps related to the peaks of the spectra of each scanned point to be created. As will be explained in the chapter, the results of the analysis allow us to say that the pigments of the painting have been mapped quite correctly.

The application of the optical microscope combining the Raman and laser-induced fluorescence techniques has highlighted the strengths and the weaknesses that can be addressed in future work.
Chapter 1
Introduction
1.1 Fluorescence spectroscopy
Luminescence is the emission of light from any substance and occurs from electronically excited states. Luminescence is formally divided into two categories, fluorescence and phosphorescence, depending on the nature of the excited state.
The first observation of fluorescence from a substance was reported by Sir John Frederick William Herschel in 1845, precisely from a quinine solution in sunlight. Looking at his report, it is evident that he recognized the presence of an unusual phenomenon that could not be explained by the scientific knowledge of the time. After this observation, the study of these phenomena began. Nowadays luminescence is widely used as a research tool, very often in the fields of biochemistry and biophysics, but also in many other branches [1]. This is due to the advantages that characterize the use of this technique, the most important being its sensitivity: sensitivity to trace luminescent compounds, sensitivity to the microenvironment, and sensitivity to molecular distances (FRET).
1.1.1 Phenomena of fluorescence
Entering more into detail, luminescence phenomena consist in a radiative decay from an excited state. In particular, when excitation occurs through light, the phenomenon is called photoluminescence. Electrons in atoms, molecules or solids, if brought up to excited energy levels, can come back to the fundamental state through different decay mechanisms, which can be radiative or non-radiative. The former, the so-called luminescence, consists in the emission of photons.
The two radiative mechanisms are fluorescence and phosphorescence.
In singlet excited states, the electron in the excited orbital, paired (by opposite spin) to the electron in the fundamental state, can make a transition to the ground state with the emission of a photon. The transition is spin-allowed (it implies $\Delta S = 0$, allowed by the spin selection rules). The result is a very fast transition, with a lifetime typically of the order of pico- to nanoseconds; the process is called fluorescence and the emitting substance a fluorophore.
Instead, in triplet excited states, the excited electron has spin parallel to that of the electron in the ground state, so the transition is not allowed by the spin selection rules (since it would imply $\Delta S = 1$): it is less probable and slower, with lifetimes of the order of microseconds up to seconds, or even longer. Such a mechanism is called phosphorescence.
Figure 1.1: Absorption and emission spectra of anthracene molecules
Emission spectra generally reflect the transitions down to the vibrational levels of the fundamental electronic state, as will be explained in the next section through the discussion of the Jablonski diagram.
These spectral data are generally presented as emission spectra. A fluorescence emission spectrum is a plot of the fluorescence intensity versus wavelength [nm], wavenumber [cm$^{-1}$] or energy [eV].
An example of a fluorescence spectrum (along with the absorption one) is shown in Figure 1.1. Emission spectra vary widely and strongly depend on the chemical structure of the fluorophore and on the solvent in which it is dissolved, giving important information on the sample. The spectra of some compounds show significant structure due to the individual vibrational energy levels of the ground and excited states; other compounds show spectra devoid of vibrational structure [1].
1.1.2 Jablonski diagram
The processes that occur between the absorption and emission of light are usually illustrated by the Jablonski diagram. This diagram is named after Professor Alexander Jablonski, who is regarded as the father of fluorescence spectroscopy. Jablonski diagrams are often used as the starting point for discussing light absorption and emission and to illustrate the various molecular processes that can occur in excited states.
Figure 1.2: Jablonski diagram
A typical Jablonski diagram is shown in Figure 1.2. The singlet ground, first and second electronic states are depicted by $S_0$, $S_1$ and $S_2$. The first triplet excited state is named $T_1$. Transitions among singlet states are represented by vertical arrows, to indicate the instantaneous nature of light absorption: absorption occurs in about $10^{-15}$ s, a time too short for significant displacement of nuclei. This is the Franck-Condon principle.
To each electronic energy level there can correspond many different vibrational levels, to which the molecule can be excited. Usually the energy difference between $S_0$ and $S_1$ is too large for thermal promotion; for this reason, light and not heat is used to induce fluorescence. Following light absorption, several processes occur. A fluorophore can be excited to a vibrational level of $S_1$ or $S_2$. In the first case, the molecule relaxes non-radiatively to the least energetic vibrational level of $S_1$, through a process of internal conversion, before the emission of fluorescence. Internal conversion generally occurs within $10^{-12}$ s or less. Since fluorescence lifetimes are typically near $10^{-8}$ s, internal conversion is complete prior to emission. Hence, fluorescence emission generally results from a thermally equilibrated excited state. The return to the ground state usually occurs to a high vibrational level, which then quickly reaches thermal equilibrium. This explains why the emission spectrum is typically the specular image of the absorption spectrum, which, however, is located at higher energies (lower wavelengths) with respect to the emission one. This also explains why emission always occurs at energies (wavelengths) lower (higher) than excitation. The shift in energy (or in wavelength) between the main peaks of the absorption and emission spectra is called the Stokes shift. Electrons excited to the level $S_1$ can, alternatively, make a transition, with a spin conversion, to the triplet state $T_1$; this process is called intersystem crossing. Due to spin selection rules, the probability that the triplet state decays to the fundamental state is very low, so it generates a phosphorescence emission.
Generally speaking, absorption and emission spectra are independent of the excitation wavelength [1].
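As a small numerical illustration of the Stokes shift (the peak values below are generic textbook figures, roughly those of quinine, not measurements from this work), the shift can be computed in both wavelength and wavenumber units:

# Hypothetical absorption/emission peaks, roughly those of quinine
lambda_abs = 350.0   # absorption peak [nm]
lambda_em = 450.0    # emission peak [nm]

shift_nm = lambda_em - lambda_abs
shift_cm1 = (1.0 / lambda_abs - 1.0 / lambda_em) * 1e7   # nm^-1 -> cm^-1

print(f"Stokes shift: {shift_nm:.0f} nm, i.e. {shift_cm1:.0f} cm^-1")
# -> Stokes shift: 100 nm, i.e. 6349 cm^-1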
1.1.3 Fluorescence lifetimes and quantum yields
Two characteristic parameters of fluorescence are the lifetime $\tau$ and the quantum efficiency $\psi$ (quantum yield). The former determines the time available for the fluorophore to interact with or diffuse in its environment, i.e. the average time during which it remains in the excited state before returning to the ground state. The latter is the ratio between emitted photons and absorbed photons.
In the Jablonski diagram in Figure 1.2, the main decay processes to the fundamental state are represented. Usually, the radiative decay rate is denoted $k_r$ and the non-radiative one (including the spin conversion $S_1 \to T_1$ and other non-radiative processes, such as collisions with molecules) $k_{nr}$.

Using this nomenclature, the quantum yield can be defined as
$$\psi = \frac{k_r}{k_r + k_{nr}} \qquad (1.1)$$
i.e. the fraction of emitted photons with respect to the total number of absorbed photons. The higher the quantum yield of a substance, the higher the intensity of its emission. $\psi$ is always smaller than 1: it can be approximated to 1 when the non-radiative decay is much slower than emission ($k_{nr} \ll k_r$).
The lifetime is defined as
$$\tau = \frac{1}{k_{tot}} = \frac{1}{k_r + k_{nr}} = \frac{\psi}{k_r} \qquad (1.2)$$
In the absence of non-radiative decay processes, this simplifies to
$$\tau = \frac{1}{k_r} \qquad (1.3)$$
which is the natural lifetime of the substance. Lifetime and quantum yield are both parameters that depend on the non-radiative decay rate, so they can give information about the environment around the fluorophore [1].
Knowing the lifetime $\tau$, it is possible to write an equation that, at first order, represents the depopulation of the excited level [2]. It is linear in the population $N$ of the level itself:
$$\frac{dN}{dt} = -(k_r + k_{nr})\,N = -\frac{N}{\tau} \qquad (1.4)$$
Integrating, one obtains
$$N(t) = N_0\, e^{-t/\tau} \qquad (1.5)$$
where $N_0$ is the initial population of the excited level and $\tau$ is the corresponding lifetime.
The intensity $I$ of the radiative emission from the excited state is directly proportional to the population of the excited level, so it follows the same exponential law:
$$I(t) = I_0\, e^{-t/\tau} \qquad (1.6)$$
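A minimal numerical sketch of eqns. 1.1, 1.2 and 1.6 follows; the decay rates are hypothetical values, chosen only to give a nanosecond-scale lifetime, not parameters of any fluorophore studied in this work:

import numpy as np

k_r = 1.0e8    # hypothetical radiative decay rate [s^-1]
k_nr = 4.0e7   # hypothetical non-radiative decay rate [s^-1]

psi = k_r / (k_r + k_nr)    # quantum yield, eqn 1.1
tau = 1.0 / (k_r + k_nr)    # lifetime, eqn 1.2 (equivalently psi / k_r)

t = np.linspace(0.0, 5.0 * tau, 200)
I = np.exp(-t / tau)        # normalized emission decay, eqn 1.6

print(f"quantum yield = {psi:.2f}, lifetime = {tau * 1e9:.1f} ns")
# -> quantum yield = 0.71, lifetime = 7.1 ns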
1.1.4 Fluorescence spectroscopy as a tool
Fluorescence is nowadays a dominant methodology used extensively in many different fields, such as biotechnology, medical diagnostics and cultural heritage. In particular, fluorescence spectroscopy plays a dominant role. The measurements that can be done can provide information on a wide range of molecular processes, and the usefulness of fluorescence is being expanded by advances in technology for cellular imaging and single-molecule detection. These advances have decreased the cost and the complexity of previously complex instruments. In the cultural heritage field, fluorescence is widespread because it is a non-invasive technique and so it guarantees the conservation, reconstruction and authentication of artworks [1]. Alongside spectroscopy, fluorescence microscopy must also be mentioned. This technique involves the use of a fluorescence microscope, an optical microscope used to study the properties of organic or inorganic substances and to generate an image of the optical emission from the sample.
The scheme of the epifluorescence microscope is shown in Figure 1.3. Typical components of a fluorescence microscope are a light source, the excitation filter, the dichroic mirror and the emission filter. Sample observation is achieved by the naked eye or through the use of a digital color camera. The filters and the dichroic beamsplitter are chosen to match the spectral excitation and emission characteristics of the fluorophore.
Figure 1.4 shows an example of the use of epifluorescence microscopy in the heritage science field. The two images represent a stratigraphic microsample taken from the painting "Still Life with Eggplant" by Henri Matisse, observed through an optical microscope and an epifluorescence optical microscope [3].
Figure 1.4: Images of the stratigraphic microsample taken with the benchtop optical microscope in dark-field illumination (a) and epifluorescence configuration (b).
1.2 Raman spectroscopy
The Raman effect is a physical phenomenon of light scattering, postulated by Smekal [4] and experimentally observed for the first time in 1928 by Chandrasekhara Venkata Raman and Kariamanickam Srinivasa Krishnan [5]. The Raman effect is one consequence of light-matter interaction. When a light beam impinges onto a medium, photons can undergo different scattering processes or can be either transmitted or absorbed, depending on the light wavelength and the nature of the medium. Among the scattering processes, the elastic one, called Rayleigh scattering, in which the photons maintain the same energy, is the most probable. A small fraction of light instead is inelastically scattered (Raman scattering), resulting in photons with a higher or lower energy with respect to the incoming ones [6]. The difference in energy between the inelastically scattered photons and the incident photons is due to transitions between vibrational energy levels in the molecular system. Hence, the study of this phenomenon is useful for studying the vibrational levels of a molecule.
Figure 1.5: Energy transfer model of elastic (Rayleigh) and inelastic (Raman) scattering based on photon description of electromagnetic radiation.
A schematic model of the Rayleigh and Raman scattering processes is shown in Figure 1.5. The theoretical treatment of Raman scattering can be approached in different ways. The classical theory treats the radiation as an electromagnetic wave and the material system as an assembly of free, non-interacting classical rotors and vibrators. The semi-classical method instead uses a quantum mechanical treatment of the material system but retains a classical treatment for the radiation.
1.2.1 Classical treatment
When light interacts with a molecule, an oscillating dipole moment $\vec{\mu}$ is induced by the electric field $\vec{E}$ of the incoming radiation. This dipole moment can be expressed, at first order, as
$$\vec{\mu} = \bar{\alpha}\,\vec{E} \qquad (1.7)$$
where $\bar{\alpha}$ is the polarizability tensor of the system. The polarizability tensor is a function of the nuclear coordinates and consequently depends on the molecular vibrational frequencies. Each vibration of the molecular system can be expressed as a superposition of independent normal modes $Q_k$ that, for a harmonic vibration, can be written as
$$Q_k = Q_k^0 \cos(\omega_k t + \delta_k) \qquad (1.8)$$
The different components of the polarizability tensor are functions of the nuclear coordinates and, in the hypothesis of small displacements around the equilibrium position, can be expanded to first approximation in a Taylor series:
$$\bar{\alpha} = \bar{\alpha}_0 + \sum_k \left.\frac{\partial \bar{\alpha}}{\partial Q_k}\right|_0 Q_k \qquad (1.9)$$
Now, the induced electric dipole moment $\vec{\mu}$ can be obtained by combining eqns. 1.7 and 1.9 with an incident electromagnetic radiation $\vec{E} = \vec{E}_0 \cos(\omega_0 t)$ and using the prosthaphaeresis formulae:
$$\vec{\mu}_{ind} = \bar{\alpha}_0 \vec{E}_0 \cos(\omega_0 t) + \frac{1}{2}\,\vec{E}_0 \left.\frac{\partial \bar{\alpha}}{\partial Q_k}\right|_0 Q_k^0 \big\{ \cos[(\omega_0 - \omega_k)t + \delta_k] + \cos[(\omega_0 + \omega_k)t - \delta_k] \big\} \qquad (1.10)$$
The expression of the dipole moment makes explicit all the scattering components at different frequencies:
• Rayleigh scattering: oscillates at the same frequency $\omega_0$ of the incoming radiation
• Stokes Raman scattering: refers to photons generated at a lower frequency $\omega_0 - \omega_k$ with respect to the incoming radiation
• Anti-Stokes Raman scattering: refers to photons generated at a higher frequency $\omega_0 + \omega_k$ with respect to the incoming radiation
This expression also contains the necessary condition for the Raman effect: to generate a photon at a frequency different from that of the incoming radiation, the polarizability must change during the vibration of the molecular system,
$$\left.\frac{\partial \bar{\alpha}}{\partial Q_k}\right|_0 \neq 0 \qquad (1.11)$$
The radiant intensity from a dipole is
$$I(t) = \frac{2 n^3}{3 c^3} \left| \frac{\partial^2 \vec{\mu}_{ind}(t)}{\partial t^2} \right|^2 \qquad (1.12)$$
where $c$ is the speed of light and $n$ is the refractive index of the medium. Developing the square modulus of the second derivative of $\vec{\mu}_{ind}$ using eqn. 1.10 and collecting the constants into terms $C_R$, $C_{k,S}$, $C_{k,AS}$, the expression of the light intensity scattered by a molecular system interacting with an oscillating electric field at frequency $\omega$ becomes
$$I(t) = C_R\,\omega^4 \cos^2(\omega t - \vec{k}\cdot\vec{r}) + \sum_k C_{k,S}\,(\omega - \omega_k)^4 \cos^2[(\omega - \omega_k)t - \vec{k}\cdot\vec{r}] + \sum_k C_{k,AS}\,(\omega + \omega_k)^4 \cos^2[(\omega + \omega_k)t - \vec{k}\cdot\vec{r}] \qquad (1.13)$$
This equation contains the three contributions to the scattered light. As can be seen, Raman spectroscopy depends strongly on the molecular vibration frequency $\omega_k$. The difference between the laser frequency and the Raman frequency is called the Raman shift and is often expressed in cm$^{-1}$:
$$\delta\omega_k(\mathrm{cm}^{-1}) = \left(\frac{1}{\lambda(\mathrm{nm})} - \frac{1}{\lambda_k(\mathrm{nm})}\right) \times 10^7\, \frac{\mathrm{nm}}{\mathrm{cm}} \qquad (1.14)$$
where $\lambda$ is the wavelength of the excitation source and $\lambda_k$ is the wavelength emitted by the material after Raman scattering. The intensity of each oscillation term is proportional to the fourth power of $(\omega \pm \omega_k)$.
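Eqn. 1.14 translates directly into a small conversion routine; the example wavelengths below are hypothetical, not those of the device described later:

def raman_shift_cm1(lambda_exc_nm, lambda_scat_nm):
    """Raman shift in cm^-1 from excitation and scattered wavelengths in nm (eqn 1.14)."""
    return (1.0 / lambda_exc_nm - 1.0 / lambda_scat_nm) * 1e7

# e.g. a band scattered at 817 nm under hypothetical 785 nm excitation
print(f"{raman_shift_cm1(785.0, 817.0):.0f} cm^-1")   # -> 499 cm^-1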
Taking the ratio between the intensities of the Stokes and anti-Stokes peaks, under the hypothesis that $C_{k,S} = C_{k,AS}$, one finds
$$\frac{I_{\mathrm{Stokes}}}{I_{\mathrm{Anti\text{-}Stokes}}} = \frac{(\omega - \omega_k)^4}{(\omega + \omega_k)^4} \qquad (1.15)$$
For $\omega_k$ much lower than $\omega$ (as in Raman spectroscopy) this ratio is close to one. Experimentally, however, this is not the case: under standard conditions the Stokes scattering is stronger than the anti-Stokes one.
So, the classical theory is not sufficient to explain the Raman scattering phenomenon. It may be considered adequate for a qualitative description of the vibrational frequencies, but it has limitations in representing the intensities of the scattering components [7]. Moreover, a fully classical approach cannot provide information on how the variation of the polarizability tensor is related to the properties of the scattering molecule [8]. It is necessary to introduce a quantum treatment.
1.2.2 Semi-classical treatment
The semi-classical treatment of Raman scattering is based on the time-dependent perturbation description of the energy levels of the molecule. The perturbed wave functions can be expressed as linear combinations of the unperturbed wave functions [9]. The general expression of an unperturbed wave function is
$$\Psi_k^{(0)} = \psi_k\, e^{-i t (\omega_k - i \Gamma_k)} \qquad (1.16)$$
where $\Gamma_k$ is related to the lifetime $\tau$ of the state.
The perturbed wave functions of the initial and final states can be expressed as
$$\Psi'_i = \Psi_i^{(0)} + \Psi_i^{(1)} + \Psi_i^{(2)} + \dots + \Psi_i^{(n)}$$
$$\Psi'_f = \Psi_f^{(0)} + \Psi_f^{(1)} + \Psi_f^{(2)} + \dots + \Psi_f^{(n)} \qquad (1.17)$$
where $\Psi^{(1)}, \dots, \Psi^{(n)}$ are the first-, second- and higher-order modifications resulting from the perturbation.
The electromagnetic field of the incident radiation perturbs the states of the molecular system, producing a transition $i \to f$ from an initial state $i$ to a final state $f$ through intermediate virtual states $v \neq i, f$. To this transition is associated a transition electric dipole $\mathbf{P}_{fi}$ defined as
$$\mathbf{P}_{fi} = \langle \Psi'_f \,|\, \mu \,|\, \Psi'_i \rangle \qquad (1.18)$$
Using eqn. 1.17 in eqn. 1.18 and collecting the terms according to their dependence on the perturbing field, the total transition electric dipole can be written as
$$\mathbf{P}_{fi} = \mathbf{P}^{(0)}_{fi} + \mathbf{P}^{(1)}_{fi} + \dots \qquad (1.19)$$
where
$$\mathbf{P}^{(0)}_{fi} = \langle \Psi_f^{(0)} \,|\, \mu \,|\, \Psi_i^{(0)} \rangle \qquad (1.20)$$
$$\mathbf{P}^{(1)}_{fi} = \langle \Psi_f^{(0)} \,|\, \mu \,|\, \Psi_i^{(1)} \rangle + \langle \Psi_f^{(1)} \,|\, \mu \,|\, \Psi_i^{(0)} \rangle \qquad (1.21)$$
$\mathbf{P}^{(1)}_{fi}$ is the first-order induced transition electric dipole that generates both Rayleigh and Raman scattering. By considering monochromatic radiation at $\omega_0$ and by assuming an infinite lifetime for the initial and final states ($\Gamma_i = \Gamma_f = 0$ in eqn. 1.16), the expression of the real induced transition electric dipole moment can be written as
$$\mathbf{P}^{(1)}_{fi} = \frac{1}{2\hbar} \sum_{v \neq i,f} \left[ \frac{\langle f | \mu | v \rangle \langle v | \mu | i \rangle}{\omega_{vi} - \omega_0} + \frac{\langle f | \mu | v \rangle \langle v | \mu | i \rangle}{\omega_{vi} + \omega_0} \right] E_0\, e^{-i \omega_s t} + \text{complex conjugate} \qquad (1.22)$$
To simplify the notation, let us introduce:
• the frequency difference between states: $\omega_{fi} = \omega_f - \omega_i$
• the absolute frequency of the scattered radiation: $\omega_s = \omega_0 + \omega_i - \omega_f$
• a shorthand for the wavefunctions of the different states ($i$, $f$, $v$, and so on): $\langle \Psi_f | \to \langle f |$
Introducing the general transition polarizability $\alpha_{fi}$, eqn. 1.22 can be written in the form
$$\mathbf{P}^{(1)}_{fi} = \alpha_{fi}\, E_0 \cos(\omega_s t) \qquad (1.23)$$
So, the semi-classical and classical treatments give similar results, with the proper substitution of the oscillating electric dipole with a transition electric dipole that can be related quantitatively to fundamental molecular properties [10].
1.2.3 Selection rules
In order to have a Raman-active vibration in a molecular system, at least one component of the transition polarizability tensor $\bar{\alpha}$ must be non-zero for a general vibrational transition $v_i \to v_f$. The symmetry properties of the vibrational wave functions determine whether the integral in the first-order term of the polarizability,
$$\sum_k \left( \frac{\partial \alpha}{\partial Q_k} \right)_0 \langle v_f \,|\, Q_k \,|\, v_i \rangle \qquad (1.24)$$
is non-zero: for the $k$-th mode, the integral does not vanish if the vibrational quantum number changes by one unit, $v_f^k = v_i^k \pm 1$. These conditions follow from the properties of harmonic wave functions. Within this approximation, only fundamental vibrational transitions are predicted in Raman scattering ($\Delta v_k = +1$ for Stokes and $\Delta v_k = -1$ for anti-Stokes).
Considering anharmonic wave functions, transitions with $\Delta v_k = \pm 2, \pm 3, \dots$ are also allowed. These transitions give rise to overtone bands, which are usually characterized by a small Raman intensity inasmuch as they are related to the second-derivative terms of the polarizability.
1.2.4 Intensity and cross section
The intensity of the Raman scattering is directly proportional to the irradiance $I$ of the incident light:
$$I_R = \sigma\, I\, N \qquad (1.25)$$
where $N$ is the number of molecules in the sample and $\sigma$ is the scattering cross-section of the Raman process, which is related to the molecular properties and to the excitation wavelength $\lambda$. The cross-section, which expresses the scattering efficiency, is proportional to the inverse fourth power of the excitation wavelength:
$$\sigma \propto \frac{1}{\lambda^4} \qquad (1.26)$$
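A generic order-of-magnitude consequence of eqn. 1.26 (the wavelengths below are common Raman excitation lines, not necessarily those of this setup): moving the excitation from 785 nm to 532 nm increases the scattering efficiency roughly fivefold, at the price, for many samples, of a stronger fluorescence background.

# Relative Raman efficiency of two common excitation wavelengths (illustrative only)
print((785.0 / 532.0) ** 4)   # -> ~4.74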
Typically, $\sigma$ is very small, and for this reason Raman scattering is much weaker than the Rayleigh one. As said before, Raman scattering can be of two types: Stokes and anti-Stokes. The latter can occur only if there are molecules already in an excited vibrational state, a condition necessary to obtain a scattered photon at higher energy than the incident one. So, the occurrence of anti-Stokes scattering is proportional to the number of molecules in an excited vibrational state when the incident photons arrive. In eqn. 1.15, a term representing the probability of finding the molecular system in an excited vibrational state has to be added:
$$\frac{I_{\mathrm{Stokes}}}{I_{\mathrm{Anti\text{-}Stokes}}} = e^{\frac{\hbar \omega_k}{k_B T}}\, \frac{(\omega - \omega_k)^4}{(\omega + \omega_k)^4} \qquad (1.27)$$
where $T$ is the temperature of the sample, $k_B$ is the Boltzmann constant and $\omega_k$ is the vibration frequency. So, the Stokes "branch" is always more intense than the anti-Stokes one.
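A hedged numerical check of eqn. 1.27; the 1000 cm^-1 shift and 532 nm excitation are illustrative choices, not parameters of this device:

import numpy as np

hbar = 1.0546e-34   # reduced Planck constant [J s]
k_B = 1.3807e-23    # Boltzmann constant [J/K]
c = 2.9979e10       # speed of light [cm/s], so shifts in cm^-1 convert directly

def stokes_to_antistokes(shift_cm1, T=300.0, exc_cm1=18797.0):
    """Eqn 1.27; 18797 cm^-1 corresponds to 532 nm excitation."""
    wk = 2.0 * np.pi * c * shift_cm1    # vibrational angular frequency [rad/s]
    w = 2.0 * np.pi * c * exc_cm1       # excitation angular frequency [rad/s]
    return np.exp(hbar * wk / (k_B * T)) * ((w - wk) / (w + wk)) ** 4

print(f"{stokes_to_antistokes(1000.0):.0f}")
# -> 79: at room temperature the Stokes line of a 1000 cm^-1 mode is ~80x stronger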
1.2.5 Raman spectroscopy as a tool
Raman spectroscopy is a cost-effective, and often preferable, approach since it requires virtually no sample preparation and relatively simple instrumentation, and it is noninvasive and nondestructive [11]. The development of efficient Raman instruments has always been a difficult task: the main problem is the high level of elastic scattering with respect to the low Raman signal, which requires good spectral filtering and low-noise detectors.
A turning point came in the 1960s, when laser sources and double monochromators were introduced. Thanks to these, the strong fluorescence background, which represents one of the major hurdles associated with Raman analysis, can be circumvented by selecting an appropriate laser wavelength or by gating the detection system.
The main experimental approaches are FT-Raman, dispersive Raman, plasmon-enhanced Raman and coherent Raman, each based on different excitation sources and detection methods.
Raman spectroscopy is a tool used in a variety of application fields thanks to its non-invasive nature and to its flexibility and versatility. Fields in which Raman spectroscopy is used are the following:
• the cultural heritage and conservation science fields
• the biological and medical fields
• the identification of chemicals, with implications also in industrial applications
• the material science field
This is proof of the potential of Raman spectroscopy.
As done in the previous section for fluorescence, let us now discuss microscopy. Raman microscopy is a technique through which it is possible to reproduce an image of the sample by exploiting the Raman effect. For this type of microscopy the lasers used are generally NIR ones, particularly for biological and medical specimens. For inorganic specimens, such as rocks, ceramics and polymers, Raman microscopy can use a broader range of excitation wavelengths.
Typically, Raman microscopy is performed with a point-scanning approach (as done in this thesis work), while the use of widefield excitation and detection is much less common. This is because widefield excitation would lead to a low power density on the sample, which would not allow one to detect the weak Raman signal.
An example of what can be achieved through Raman microscopy is shown in Figure 1.6, where the Raman microscope is used in the biological field [12].
Figure 1.6: (A) An optical image of the sulfur-oxidizing bacterium Beggiatoa, cultured from a microbial mat. (B) Corresponding Raman map of the bacterium showing the sulfur peak (yellow/red) and autofluorescence of the cell (green).
1.3 Combined Fluorescence and Raman devices
Raman spectroscopy and LIF are highly complementary to each other since both give information on the molecular species in the sample, Raman on vibrational and fluorescence on electronic energy levels. Since LIF is very sensitive and orders of magnitude stronger than Raman, it can provide valuable information about fluorescing atomic and molecular species like rare earths, transition metal ions and biological molecules. It can detect biomarkers, pathogens, many organic pollutants, and samples of biomedical and clinical relevance. However, the fluorescence spectra of complex samples like tissues, containing many closely related fluorophores, overlap considerably, and it is not easy to derive individual molecular information from the total spectra. Raman spectra, on the other hand, are highly specific for each of the molecular species concerned and enable the derivation of detailed information on individual species even in complex mixtures [11].
Another aspect that pushes toward combining these two techniques, in addition to their complementarity, is that they are both optical techniques. So, a combination of the two requires the use of optical elements only.
The following table reports some applications in cultural heritage; after it, these works are briefly explained.
Table 1.1: List of existing hybrid Raman-LIF devices

Year | Group | Work done
2001 | M. Castillejo et al. | Polychromy study
2009 | Osticioli et al. | Cultural heritage analysis
2017 | Vincent Detalle et al. | Conservation of heritage materials
2017 | Syvilay et al. | Cultural heritage analysis
2018 | A. Martinez-Hernandez et al. | Analysis of heritage stones and model wall paintings
2019 | X. Bai et al. | Cultural heritage analysis
Most of the reported works present a combination of three spectroscopic techniques (laser-induced breakdown spectroscopy, Raman spectroscopy and LIF spectroscopy). In this study of the state of the art we focus our attention on the combination of the Raman and LIF techniques. All of the devices are portable and are designed to carry out non-contact spectroscopic measurements (LIBS-LIF-Raman) on a point of interest of an artwork.
Castillejo et al. employed four spectroscopic techniques (FTR, FTIR, LIBS, LIF) to characterize the pigment and binding media composition of polychromes.[18]
Osticioli et al. developed a compact instrument for LIBS-Raman-LIF analysis of cultural heritage objects. A Q-switched Nd:YAG laser at 532 nm was used for both LIBS and Raman spectroscopy, while 266 nm excitation was used for LIF.[17]
Vincent Detalle et al. reported the design and development of a multi-analytical prototype instrument capable of combining three laser-based spectroscopic techniques for the future E-RIHS (European Research Infrastructure for Heritage Science), which addresses the problems of knowledge and conservation of heritage materials. They used four different wavelengths: 1064, 532, 355 and 266 nm. [15]
Syvilay et al. described the use of LIBS, Raman and LIF analysis for the identification of cultural heritage materials. The aim of combining these three spectroscopies was to identify a material (molecular and elemental analysis) without any preliminary preparation, regardless of its organic or inorganic nature, on the surface and in depth.[16]
Martinez-Hernandez et al. analysed model wall paintings and heritage stone samples of marble, gypsum and alabaster with the LIBS, Raman and LIF techniques, employing a Q-switched Nd:YAG laser with 266, 355 and 532 nm excitations and ICCD-coupled spectrograph detection for temporal resolution.[14]
Bai et al. proposed a hybrid system for the characterization of cultural heritage artworks, because analysis in this field implies the necessity of getting information from both organic and inorganic materials; hence, a combination of different laser-based spectroscopic techniques is required. They tested different system designs and excitation sources, which eventually led to a mobile, compact, multi-spectroscopy system suitable for in-situ cultural heritage applications.[13]
In addition to cultural heritage, the combined techniques are used in other fields. We report two examples.
Ichimura et al. employed an imaging method using hybrid fluorescence-Raman microscopy that measures the chemical environment associated with protein expression patterns in a living cell. In this case, the scientists exploit anti-Stokes fluorescence emission to realise simultaneous imaging by fluorescence and Raman scattering.[19] Greszik et al. presented LIF and Raman for the measurement of water fluid film thickness. In this study, a single excitation wavelength (266 nm) is used for both detection techniques. The diagnostic methods are based on LIF of an organic tracer added to the liquid at sub-percent concentration levels and on spontaneous Raman scattering of liquid water. The simultaneous detection of two-dimensional liquid film thickness information from both methods is enabled by an image doubler and appropriate filtering in both light paths.[20]
Chapter 2
Scanning optical microscopy
Microscopy is the science of using microscopes to view objects and regions of objects that cannot be seen with the naked eye. There are four branches of microscopy: optical, electron, scanning probe and X-ray.
In our project, we focus on the optical branch. It involves the diffraction, reflection or refraction of electromagnetic radiation interacting with the sample and the collection of the scattered radiation in order to create an image. This process can be carried out by wide-field irradiation of the sample or by scanning a fine beam over it. The instrument of this technique is the optical microscope: a type of microscope that commonly uses visible light and collects the light transmitted, reflected or emitted by the sample through a system of lenses to generate a magnified image of the sample. This image can be observed directly by eye or captured using a CCD camera.
2.1 Microscope
Referring to the previous definition, a microscope is an instrument which gives a magnified image of an object.
The fundamental part of a microscope is a lens (or a collection of lenses) that magnifies an object and forms an image of it. This lens is called the objective lens.
We start by considering a system formed by a simple thin lens with a certain focal length $f$ (Figure 2.1), called a 2f system.

Figure 2.1: 2f configuration

If we put an object at distance $p$ from the lens, the lens will create an image of the object at distance $q$, such that
$$\frac{1}{p} + \frac{1}{q} = \frac{1}{f} \qquad (2.1)$$
The imaginary line that defines the path along which light propagates is called the optical axis. It is perpendicular to the principal planes, the planes in a lens system at which all the refraction can be considered to happen. The object plane and the image plane are the planes, perpendicular to the optical axis, which contain respectively the object and the image. An important parameter in microscopy is the lateral magnification $M$, the ratio between the height of the image and that of the object. This ratio is also equal to the ratio of the distances of the image and of the object from the lens, $M = q/p$. To create a magnified image of the object using a lens, the lateral magnification must be $M > 1$ and so $q > p$.
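A minimal numerical companion to eqn. 2.1 (function name and distances are ours, purely illustrative):

def image_distance(p, f):
    """Image distance q from the thin-lens equation 1/p + 1/q = 1/f (eqn 2.1)."""
    return 1.0 / (1.0 / f - 1.0 / p)

p, f = 30.0, 20.0            # object 30 mm from a lens with f = 20 mm
q = image_distance(p, f)
print(q, q / p)              # -> 60.0 mm and M = q/p = 2.0 (magnified image, q > p)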
2.2 The 4f system
Modern microscopes do not have such a simple configuration. Their basic configuration is slightly different: it is a so-called 4f system (Figure 2.2), equivalent to two 2f configurations in tandem.
The object is at distance $f_0$ from the first lens, called the objective lens. The image formed by the optical system is at distance $f_1$ from the second lens, called the tube lens, where $f_0$ and $f_1$ are the focal lengths of the first and second lens. The distance between the two lenses is $f_0 + f_1$. Since the focal point of a lens is conjugated with a point at infinity, the light coming from the object is collimated by the first lens. In the same way, this collimated beam is focused by the second lens in its focal point.
The lateral magnification of this configuration is
$$M = \frac{f_1}{f_0} \qquad (2.2)$$
Also in this case, to create a magnified image of the object, $M > 1$ and so $f_1 > f_0$.
So, we see that in this configuration the light in the region between the two lenses is collimated. This region is the ideal place to put a filter or a stop to obtain its proper effect. There is also an advantage from the point of view of the analysis of the system: in this region we have the Fourier transform of the light coming from the object, so it is possible to switch to the Fourier domain. Another advantage of this configuration is that, since the light between the two lenses is collimated, the inter-lens distance does not need to be equal to the sum of the two focal lengths but can, in principle, be arbitrary. This is an important degree of freedom in the design of the system. When the distance between the two lenses is no longer the sum of their focal lengths, the system can no longer be called a 4f system; for simplicity, we will refer to this type of configuration as a non-ideal 4f optical system (especially in reference to those present in our microscope).
Now we introduce an important parameter of an optical system: the numerical aperture $NA$, related to the maximum acceptance angle for the entering light,
$$NA = n \sin(\theta) \qquad (2.3)$$
where n is the refractive index of the medium and θ is the maximal half-angle of the cone of light that can enter the lens. For a configuration with only the two lenses, the numerical aperture is limited by the aperture of the objective lens.
Instead, if a stop is placed in the region between the two lenses, it limits the light entering the optical system and therefore limits the $NA$.
Let’s consider a 4f system with a circular aperture stop in the common focal plane of the lenses (Figure 2.2).
Figure 2.2: 4f configuration
Considering a point-like source in the object plane, we study the distribution of the field obtained at the image plane.
Since the pupils of the system (the entrance and exit pupils, i.e. the images of the aperture stop formed by the two lenses on the object side and on the image side, respectively) are at infinity, in the following discussion we will use the word pupil to indicate the aperture stop.
2.2.1 Lateral resolution
The spatial distribution of the source light can be decomposed into plane waves, each totally described by its wavevector $\vec{k}$, with modulus
$$k = \frac{n}{\lambda} \qquad (2.4)$$
where $n$ is the refractive index and $\lambda$ is the wavelength. Initially, we consider monochromatic light which, in the context of this thesis work, can be reflected or emitted by the sample. So $\lambda$ is fixed and, therefore, the modulus of the wavevector is fixed too. What identifies each plane wave describing the spatial distribution of the field is then the direction of $\vec{k}$, which is the propagation direction of that particular plane wave.
The general wavevector $\vec{k}$ can be decomposed into two components, perpendicular and parallel to the optical axis (along $z$), respectively $\vec{\kappa}_\perp$ and $\vec{\kappa}_z$, such that
$$k^2 = \kappa_\perp^2 + \kappa_z^2 \qquad (2.5)$$
Since the modulus of the wavevector is fixed, all the wavevectors associated with the plane waves of interest lie on a hemispherical surface in Cartesian space with radius given by eqn. 2.4. Besides, the assumption of a monochromatic source has another implication: the direction of each plane wave in which we decompose the field is univocally identified by the associated vector $\vec{\kappa}_\perp$. The decomposition of the point-like source into spatial plane waves is useful because it is the basis of Fourier analysis. In general, a lens performs the Fourier transform of a field distribution located in one focal plane onto the other focal plane. Focusing on the 4f configuration, on the pupil plane we will have the Fourier transform of the field distribution in the object plane. We have decomposed the source into plane waves, each with its own $\vec{k}$, because the Fourier transform of a plane wave is a Dirac delta. Hence, each plane wave is univocally associated with a point in the $xy$ pupil plane. Obviously, the coordinates of this point are related to the wavevector of the plane wave:
$$\vec{\rho}_P = \frac{\vec{\kappa}^0_\perp}{k}\, f_0 \qquad (2.6)$$
This formula is valid in paraxial approximation, which means that we assume to work with small angles.
The second lens performs the Fourier transform of the delta on the pupil plane, obtaining new plane waves, each described by a different wavevector (equal in modulus to the original one, but different in direction). The relation between the position of the Dirac delta and the corresponding new wavevector is
$$\vec{\rho}_P = -\frac{\vec{\kappa}^1_\perp}{k}\, f_1 \qquad (2.7)$$
The linear combination of all the plane waves will result in the spatial distribution of the field at the image plane. This distribution is different from the field distribution at the object plane for two main reasons. Firstly, the image of a plane wave that at the object plane is univocally identified by $\vec{\kappa}^0_\perp$ is also a plane wave, as already said, but with a different spatial period, identified by the modulus of the new perpendicular component of the wavevector, $\vec{\kappa}^1_\perp$. Using eqns. 2.2, 2.6 and 2.7, we obtain
$$\vec{\kappa}^1_\perp = -\frac{\vec{\kappa}^0_\perp}{M} \qquad (2.8)$$
Secondly, only a portion of all the plane waves into which we decompose the field at the object plane will contribute to create the image: the dimension of the pupil limits the maximum acceptance angle of the entering light ($NA$), so the plane waves with a direction beyond the limit angle will not be imaged by the system. This means that a point in the object plane may not be imaged into a point in the image plane.
Let’s do the discussion in mathematical terms. We consider a plane wave at the object plane. It is described by a field:
E0p~ρ0q ei2π~κK~ρ (2.9)
The objective lens performs the Fourier transform of the field, so at the pupil plane we have $\varepsilon_0(\vec{\kappa})$. The pupil cuts the plane waves which do not respect the limit imposed by the $NA$. The pupil can be represented as a circular function that is 1 for all points whose distance from the center is less than the radius and 0 otherwise.
The field distribution at the pupil plane is then proportional to the product between the Fourier transform of the object field and the pupil function:
$$E_P(\vec{\rho}_P) \propto \varepsilon_0\!\left(\frac{k}{f_0}\,\vec{\rho}_P\right) P(\vec{\rho}_P) \qquad (2.10)$$
The second lens performs the Fourier transform of the field at the pupil plane. So, the field distribution at the image plane is the Fourier transform of the one at the pupil plane, which in turn is the Fourier transform of the field distribution at the object plane. Using the previous equations we can conclude that
$$\varepsilon_1(\vec{\kappa}^1_\perp) \propto \varepsilon_0(M \vec{\kappa}^1_\perp)\; P\!\left(\frac{M \vec{\kappa}^1_\perp}{k}\, f_0\right) \qquad (2.11)$$
From this last equation we see that the transfer function of the optical system is a scaled version of the pupil function. Hence, we can define the so-called amplitude transfer function
$$H(\vec{\kappa}^0_\perp) = P\!\left(\frac{\vec{\kappa}^0_\perp}{k}\, f_0\right) \qquad (2.12)$$
It is equal to 1 for $|\vec{\kappa}^0_\perp| \leq R\,\frac{k}{f_0}$, with $R$ the radius of the pupil, and 0 otherwise. If we take the inverse Fourier transform of $H$, we obtain the amplitude spread function
$$\mathcal{H}(\vec{\rho}) = \left(\frac{k}{f_0}\right)^{2} \tilde{P}\!\left(\frac{k}{f_0}\,\vec{\rho}\right) \qquad (2.13)$$
where $\tilde{P}$ is the inverse Fourier transform of the pupil function. Considering the pupil as a circle, so the pupil function as a circular function, the inverse Fourier transform is a sinc profile. In this particular case the amplitude spread function can therefore be expressed as
$$\mathcal{H}(\vec{\rho}) = \pi \left(\frac{NA}{\lambda}\right)^{2} \mathrm{sinc}\!\left(\frac{2\pi NA}{\lambda}\,|\vec{\rho}|\right) \qquad (2.14)$$
The field distribution at the image plane is the convolution between the field distribution at the object plane and the amplitude spread function. As said before, $\mathcal{H}$ is the impulse response of the system, so a point-like source at the object plane is imaged as a sinc profile.
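The chain object field, Fourier transform, pupil cut, Fourier transform can also be reproduced numerically. Below is a minimal sketch (grid size and pupil radius in arbitrary units, our own choices) that obtains the amplitude spread function as the inverse FFT of a circular pupil and, from its squared modulus, the incoherent PSF discussed in the next sections:

import numpy as np

N = 512
x = np.arange(N) - N // 2
KX, KY = np.meshgrid(x, x)                   # spatial-frequency grid, arbitrary units

kappa_max = 40                               # pupil radius = NA-imposed cutoff (eqn 2.12)
pupil = ((KX**2 + KY**2) <= kappa_max**2).astype(float)

# Amplitude spread function: inverse Fourier transform of the pupil (eqn 2.13)
H = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(pupil)))
psf = np.abs(H) ** 2                         # incoherent PSF, up to constants (eqn 2.19)

print(np.unravel_index(psf.argmax(), psf.shape))   # -> (256, 256): peak at the center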
Looking at eqn. 2.12, it is worth noticing that the dimension of the pupil imposes a limit on which plane waves can enter the optical system. In particular, it defines a spatial bandwidth: all the plane waves described by a perpendicular wavevector outside the bandwidth are cut off by the system. The spatial bandwidth of the system for $\kappa_\perp$ is
$$\Delta\kappa_\perp = \frac{2 R k}{f_0} \qquad (2.15)$$
If we work in paraxial approximation, we can find a simple relation between the radius of the pupil and the numerical aperture, therefore we can write
$$\Delta\kappa_\perp = \frac{2 NA}{\lambda} \qquad (2.16)$$
The resolution of the system, i.e. the minimum distance between two distinguishable points, is the inverse of the spatial bandwidth:
$$\Delta\rho = \frac{\lambda}{2 NA} \qquad (2.17)$$
If the $NA$ is maximum ($NA = n$), the system will collect all the light coming from the source, but the resolution will still be limited by diffraction [21].
2.2.2 Intensity propagation
In the previous discussion a fundamental assumption was made: the fields are monochromatic, so their time dependence is harmonic (which is true if the light comes from a laser source). This assumption is not valid in general.
Considering the most general case, the image field is constructed from a superposition of object fields. So, light fields cannot be treated deterministically but must instead be treated statistically. If we are dealing with incoherent light, it is better to work with intensities. If the fields are taken to be explicitly time dependent, the intensity of the light field can be defined as
$$I(\vec{\rho}) = \langle\, |E(\vec{\rho}, t)|^2\, \rangle \qquad (2.18)$$
The new spread and transfer functions can be derived using the Van Cittert-Zernike theorem. The impulse response of the system is called the point spread function (PSF) and is proportional to the squared modulus of the amplitude spread function:
$$PSF(\vec{\rho}) = \frac{1}{k^2}\, |\mathcal{H}(\vec{\rho})|^2 \qquad (2.19)$$
Instead, the transfer function of the system is called the optical transfer function (OTF) and is the Fourier transform of the PSF. Exploiting the Van Cittert-Zernike theorem, the OTF can be retrieved as the autocorrelation of $H$.
The resolution of the system, according to the Abbe criterion, is the same as in the case of a coherent light source. In the incoherent case, however, the physical meaning is slightly different: the spatial bandwidth does not involve the direction of the plane waves, but gives the cut-off frequency at which the modulation factor (the value of the OTF), which multiplies the sinusoidal intensity pattern at the object plane, drops to zero.
Alternatively, we can define the resolution according to the Rayleigh criterion, which states that two points can be distinguished when the center of the diffraction pattern of one falls on the first minimum of the diffraction pattern of the other. The value of the resolution is then given by the position of the first zero of the PSF [21]:
$$\Delta\rho = 1.22\, \frac{\lambda}{2 NA} \qquad (2.20)$$
2.2.3 Axial resolution
Until now we have considered the simplifying case in which the object and image planes are fixed, either by the thin lens in a single-lens imaging geometry or by the focal lengths $f_0$ and $f_1$ in a 4f imaging geometry. When these restrictions apply, the imaging is said to be in focus. Relaxing them, the imaging system presents object and image planes out of focus relative to one another.
In this section, the goal is to understand what happens to the spread and transfer function in out-of-focus imaging.
For a point-like source, the PSF can be interpreted as a 2D probability of receiving a photon at a given position in the image plane. We can extend this concept to 3D space. Out-of-focus imaging can be thought of as a combination of two operations: in-focus imaging followed by an additional defocus. The operational functions for in-focus imaging are given by H or $\mathcal{H}$ for monochromatic fields, or by the PSF or OTF for incoherent intensities. Since defocus corresponds to additional free-space propagation from an in-focus to an out-of-focus plane, the corresponding operational functions are $D(\vec{\rho}, z)$ and $\mathcal{D}(\vec{\kappa}_\perp, z)$, the free-space propagator and its Fourier transform:
$$\mathcal{D}(\vec{\kappa}_\perp, z) = e^{\,i2\pi\sqrt{\kappa^2 - \kappa_\perp^2}\,z} \qquad (2.21)$$
To obtain the field distribution in an out-of-focus plane near the image plane, it is necessary to convolve the field at the image plane with the free-space propagator. So, starting from the object plane, the 3D impulse response becomes the convolution between H and D. Moving to the Fourier domain, the 3D transfer function is simply the product of $\mathcal{H}$ and $\mathcal{D}$. We can write the following mathematical definitions:
$$E_1(\vec{\rho}) = E_0(\vec{\rho}) \otimes \left[H(\vec{\rho}) \otimes D(\vec{\rho}, z)\right] \qquad (2.22)$$
$$\mathcal{E}_1(\vec{\kappa}_\perp) = \mathcal{E}_0(\vec{\kappa}_\perp)\left[\mathcal{H}(\vec{\kappa}_\perp)\,\mathcal{D}(\vec{\kappa}_\perp, z)\right] \qquad (2.23)$$
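Eqn. 2.23 lends itself to a compact numerical sketch (the sampling step, wavelength and spot size below are illustrative only, not the parameters of our device): defocus is applied by multiplying the angular spectrum of the field by $\mathcal{D}(\vec{\kappa}_\perp, z)$:

```python
import numpy as np

def defocus(field, dx, lam, z):
    """Angular-spectrum defocus (eqn 2.23): multiply the spatial
    spectrum of the field by the free-space propagator D(kappa, z)."""
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx, d=dx)      # spatial frequencies (cycles/m)
    ky = np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kappa = 1.0 / lam                  # total spatial frequency n/lambda, n = 1
    arg = kappa**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    D = np.exp(1j * 2 * np.pi * kz * z) * (arg > 0)   # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * D)

# Example: a ~1 um Gaussian spot spreads after 10 um of defocus
dx, lam = 0.2e-6, 0.5e-6
x = (np.arange(256) - 128) * dx
X, Y = np.meshgrid(x, x)
E0 = np.exp(-(X**2 + Y**2) / (1.0e-6) ** 2)
E1 = defocus(E0, dx, lam, z=10e-6)
print(np.abs(E1).max() < np.abs(E0).max())   # True: the peak amplitude drops
```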
To define the 3D PSF, it is necessary to follow the same steps: the 3D PSF is proportional to the squared modulus of the 3D H of the system, while the 3D OTF is obtained as the autocorrelation of the 3D $\mathcal{H}$. Carrying out the calculations, one retrieves the expression of the 3D PSF and finds that it has a sinc² profile along $\vec{\rho}$, as in the 2D case, and also along z.
Looking at the position of the first zero of the PSF, we can define the axial resolution according to the Rayleigh criterion, as done previously for the lateral resolution:
$$\Delta z = \frac{2\lambda n}{NA^2} \qquad (2.24)$$
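To give a feel for the numbers (purely illustrative values, not the specifications of the device designed in this thesis), the lateral and axial resolution formulas evaluate as follows:

```python
lam, NA, n = 532e-9, 0.5, 1.0        # illustrative values only

d_abbe     = lam / (2 * NA)          # eqn 2.17: 532 nm
d_rayleigh = 1.22 * lam / (2 * NA)   # eqn 2.20: ~649 nm
d_axial    = 2 * lam * n / NA**2     # eqn 2.24: ~4.26 um

print(f"lateral (Abbe):     {d_abbe * 1e9:.0f} nm")
print(f"lateral (Rayleigh): {d_rayleigh * 1e9:.0f} nm")
print(f"axial (Rayleigh):   {d_axial * 1e6:.2f} um")
```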
2.3 Study of the propagation of a Gaussian beam through a non-ideal 4f system
During the study and design of the device, we performed a theoretical simulation of the propagation of a Gaussian beam through a non-ideal 4f system. We want to simulate the characteristics of the excitation beam on the sample.
The non-ideality of the 4f system is very likely in a real setup: the distance between the two lenses ends up longer than the sum of the two focal lengths because of the added optomechanical components. Furthermore, a laser beam can in most cases be described as a Gaussian beam. This type of beam has the characteristic that its amplitude envelope in the transverse plane is given by a Gaussian function, which also implies a Gaussian intensity profile. This fundamental transverse Gaussian mode describes the intended output of most lasers, as such a beam can be focused into the most concentrated spot. When such a beam is refocused by a lens, its transverse phase dependence is altered, resulting in a different Gaussian beam. For a Gaussian beam we have to consider not only the radius of curvature R, but also the beam waist w0, which measures the beam width at its focus, i.e. the spot size.
We now define the parameter most commonly used to describe a Gaussian beam, the complex beam parameter q
$$\frac{1}{q} = \frac{1}{R} - i\,\frac{\lambda}{\pi w^2} \qquad (2.25)$$
For the sake of simplicity, we define the Rayleigh range of the beam zR, that is, the distance along the propagation direction from the waist to the point where the area of the cross-section has doubled, i.e. where the spot size equals $\sqrt{2}\,w_0$. We can then rewrite eqn. 2.25 as
$$\frac{1}{q} = \frac{1}{R} - i\,\frac{1}{z_R} \qquad (2.26)$$
Let us write the equations that give the real quantities R and w when q is known
$$R(q) = \left[\Re\!\left(\frac{1}{q}\right)\right]^{-1} \qquad (2.27)$$
$$w(q) = \sqrt{\frac{-\lambda}{\pi\,\Im\!\left(\frac{1}{q}\right)}} \qquad (2.28)$$
To treat the evolution of the beam along its optical path, we use ray transfer matrix analysis, also known as ABCD matrix analysis. It is a mathematical framework for performing ray tracing calculations in problems that can be solved considering only paraxial rays. Each optical element (a surface, an interface, a lens or a stretch of free propagation) is described by a 2 x 2 ray transfer matrix which operates on a vector describing an incoming light ray to produce the outgoing ray. An entire optical system can be described by multiplying the matrices that represent each element of the ray path. We can apply this technique because, in the case we are considering, the paraxial approximation is valid: all ray directions (normal to the wavefronts) form small angles θ with the optical axis of the system, such that the approximation sin θ ≈ θ remains valid.
We recall that the matrix for propagation over a distance d is
$$M_d = \begin{pmatrix} 1 & d \\ 0 & 1 \end{pmatrix} \qquad (2.29)$$
The matrix for crossing a lens of focal length f is
$$M_{lens}(f) = \begin{pmatrix} 1 & 0 \\ -\frac{1}{f} & 1 \end{pmatrix} \qquad (2.30)$$
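The calculations of this section were carried out in Mathematica, as noted below; as an illustrative companion, here is a minimal Python sketch (focal lengths are arbitrary example values) that composes the element matrices of the non-ideal 4f system of Figure 2.3:

```python
import numpy as np

def M_d(d):
    """Free propagation over a distance d (eqn 2.29)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def M_lens(f):
    """Thin lens of focal length f (eqn 2.30)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def M_4f(fc, fo, d, h):
    """Non-ideal 4f system of Figure 2.3: input plane at fc before the
    first lens, lens separation d, output plane at h after the second lens.
    Matrices act right-to-left in the order the ray meets the elements."""
    return M_d(h) @ M_lens(fo) @ M_d(d) @ M_lens(fc) @ M_d(fc)

# Ideal case (d = fc + fo, h = fo): pure magnification -fo/fc with B = 0
fc, fo = 50e-3, 100e-3
print(np.round(M_4f(fc, fo, fc + fo, fo), 6))
```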
Figure 2.3: 4f system
We consider a non-ideal 4f system, as in Figure 2.3, where fc and fo are the focal lengths of the two lenses, d is the distance between the lenses and h is the distance between the second lens and the position of the waist. The matrix of this optical system is
$$M_{4f} = \begin{pmatrix} \dfrac{(f_c - d)\,f_o + h\,(d - f_c - f_o)}{f_c f_o} & \dfrac{f_c\,(f_o - h)}{f_o} \\[6pt] \dfrac{d - f_c - f_o}{f_c f_o} & -\dfrac{f_c}{f_o} \end{pmatrix} \qquad (2.31)$$
When a Gaussian beam enters an optical system described by an ABCD matrix, the q parameter undergoes a transformation such that, at the exit plane of the optical system, it is given by
$$q_{out} = \frac{A\,q_{in} + B}{C\,q_{in} + D} \qquad (2.32)$$
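A minimal Python sketch of this transformation (continuing the previous one; all values are illustrative) might read:

```python
import numpy as np

def q_transform(q_in, M):
    """Propagate the complex beam parameter through an ABCD matrix (eqn 2.32)."""
    (A, B), (C, D) = M
    return (A * q_in + B) / (C * q_in + D)

def q_at_waist(w0, lam):
    """q at a waist of radius w0, where R is infinite (eqn 2.33)."""
    return 1j * np.pi * w0**2 / lam

def w_of_q(q, lam):
    """Beam radius from the q parameter (eqn 2.28)."""
    return np.sqrt(-lam / (np.pi * np.imag(1.0 / q)))

# Example with the ideal 4f matrix of the previous sketch (M_4f):
lam = 632.8e-9
q_out = q_transform(q_at_waist(1e-3, lam), M_4f(50e-3, 100e-3, 150e-3, 100e-3))
print(w_of_q(q_out, lam))   # ~2e-3 m: the waist is magnified by fo/fc = 2
```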
All the calculations reported in this section are performed using the program Mathematica.
We start by considering a Gaussian beam that has its waist at the input plane of the optical system. Since the parameter R at the waist is infinite, the input q parameter is given by
$$q_{in} = i\,\frac{\pi w_{in}^2}{\lambda} \qquad (2.33)$$
When the beam passes through the non-ideal 4f system, the q parameter can be calculated using eqn. 2.32
$$q_{ABCD} = \frac{i\pi\left[f_o\,(d - f_c) + h\,(f_c + f_o - d)\right]w_{in}^2 + f_c^2\lambda\,(h - f_o)}{i\,(f_c + f_o - d)\,\pi w_{in}^2 + f_c^2\lambda} \qquad (2.34)$$
The corresponding expressions for R and w are
$$R_{ABCD} = \frac{\pi^2\left[f_o\,(d - f_c) + h\,(f_c + f_o - d)\right]^2 w_{in}^4 + f_c^4\lambda^2\,(h - f_o)^2}{\pi^2\left[f_o\,(d - f_c) + h\,(f_c + f_o - d)\right](f_c + f_o - d)\,w_{in}^4 + f_c^4\lambda^2\,(h - f_o)} \qquad (2.35)$$
$$w_{ABCD} = \frac{\sqrt{\pi^2\left[f_o\,(d - f_c) + h\,(f_c + f_o - d)\right]^2 w_{in}^4 + f_c^4\lambda^2\,(h - f_o)^2}}{\pi f_c f_o\, w_{in}} \qquad (2.36)$$
Eqn. 2.34 describes the q parameter at the output plane of a generic two-lens system with the object in the front focal plane of the first lens. Using this equation, we can explore the effect of specific implementations of the optical system.
• In this first scheme, we set the output plane at the back focal plane of the second lens, so, with reference to Figure 2.3, h = fo. This corresponds to a "geometrical optics" approach, because it assumes that the input and output planes are conjugated. The beam width wABCD → wg in the output plane is then expected to be equal to the waist in the input plane win multiplied by the lateral magnification ML = fo/fc. So, setting h = fo, we calculate q, R and w
$$q_g = \frac{i\pi f_o^2 w_{in}^2}{i\,(f_c + f_o - d)\,\pi w_{in}^2 + f_c^2\lambda} \qquad (2.37)$$
$$R_g = \frac{f_o^2}{f_c + f_o - d} \qquad (2.38)$$
$$w_g = \frac{f_o\,w_{in}}{f_c} \qquad (2.39)$$
It is worth noticing that, in this case, the "beam radius" wg of the Gaussian beam is indeed given by the expected value ML win. Yet, the output plane does not correspond to the new waist unless the distance d between the lenses is equal to the sum of the focal lengths, d = fo + fc, which is the condition required to get an infinite radius R at the output plane. In fact, only in this case are both the intensity and the phase front of the beam at the output plane a replica of the input. Actually, in this special case, the optical system performs a double Fourier transform of the input beam [22], which yields a perfect replica up to a scale factor.
• In the second case, we take advantage of the coherence of the optical field and evaluate the q parameter in the diffractive regime. We can find the distance h that gives the new waist of the beam after the optical system; this corresponds to imposing that the real part of the q parameter vanishes. The required value of h can be obtained from eqn. 2.27
$$h_w = \frac{\left[\pi^2\,(f_c - d)(f_c + f_o - d)\,w_{in}^4 + f_c^4\lambda^2\right]f_o}{f_c^4\lambda^2 + (f_c + f_o - d)^2\,\pi^2 w_{in}^4} \qquad (2.40)$$
If we substitute hw into eqn. 2.34, we obtain
$$q_w = \frac{w_{in}^2\left[i f_c^2\lambda - (f_c + f_o - d)\,\pi w_{in}^2\right]f_c^2\,\pi\lambda f_o^2}{\left[f_c^2\lambda + i\,(f_c + f_o - d)\,\pi w_{in}^2\right]\left[f_c^4\lambda^2 + (f_c + f_o - d)^2\,\pi^2 w_{in}^4\right]} \qquad (2.41)$$
The new waist is
$$w_w = \frac{w_{in}\,f_c f_o\,\lambda}{\sqrt{f_c^4\lambda^2 + (f_c + f_o - d)^2\,\pi^2 w_{in}^4}} \qquad (2.42)$$
As we expect,
$$\Re(q_w) = 0 \qquad (2.43)$$
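A hedged numerical cross-check of eqns 2.40 and 2.42 (illustrative values, reusing the M_4f, q_transform, q_at_waist and w_of_q helpers sketched above; this stands in for, and is not, the thesis's Mathematica computation):

```python
import numpy as np

# Illustrative parameters: the lens spacing exceeds fc + fo by 20 mm
fc, fo, d = 50e-3, 100e-3, 170e-3
lam, w_in = 632.8e-9, 0.5e-3
c = fc + fo - d                      # vanishes for an ideal 4f system

# Waist position after the second lens (eqn 2.40) ...
h_w = (np.pi**2 * (fc - d) * c * w_in**4 + fc**4 * lam**2) * fo \
      / (fc**4 * lam**2 + c**2 * np.pi**2 * w_in**4)

# ... and the new waist size there (eqn 2.42)
w_w = w_in * fc * fo * lam / np.sqrt(fc**4 * lam**2 + c**2 * np.pi**2 * w_in**4)
print(f"h_w = {h_w * 1e3:.1f} mm, w_w = {w_w * 1e6:.1f} um")

# Cross-check with the ABCD propagation: at h_w the beam is at a waist
q = q_transform(q_at_waist(w_in, lam), M_4f(fc, fo, d, h_w))
print(np.isclose(w_of_q(q, lam), w_w))   # True
print(abs(np.real(1.0 / q)) < 1e-9)      # True: R is infinite at the waist
```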
This simulation will be used for the theoretical estimation of the spatial resolution of the Raman section. We will estimate the laser spot size on the sample.
Chapter 3
Hyperspectral imaging by spatial scanning of a sample surface
3.1 Hyperspectral imaging
In our project, we exploit dispersive spectroscopy by using a spectrometer whose scheme and functions are described in the next section. Our goal is to collect Raman and fluorescence spectra from each point of the sample surface by employing a point scanning approach. In this way, we will ultimately collect Raman and fluorescence hyperspectral imaging datasets of the sample.
We briefly recall here the main concepts related to hyperspectral imaging (HSI). Spectral imaging, also known as imaging spectroscopy, refers to a set of methods and devices for acquiring the light spectrum at each point of an image scene. The method on which we focus is HSI: hyperspectral imaging acquires the full continuous spectrum of light for each point (x, y) of the scene, so the resulting data give rise to a three-dimensional data-cube (hypermatrix). This cube contains information about the intensity of the collected radiation for each point and wavelength (x, y, λ). The goal of HSI is thus to obtain the spectrum for each pixel in the image with the purpose of finding objects, identifying materials or detecting processes.
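In code, such a data-cube is naturally represented as a 3D array; a minimal sketch (array sizes and wavelength range are placeholders, not the specifications of our device):

```python
import numpy as np

# The hyperspectral data-cube: two spatial axes plus one spectral axis
nx, ny, n_lambda = 64, 64, 1024              # placeholder sizes
cube = np.zeros((ny, nx, n_lambda))          # intensity at (x, y, lambda)
wavelengths = np.linspace(400e-9, 800e-9, n_lambda)

# The full spectrum of one scene point is a "pencil" of the cube ...
spectrum = cube[10, 20, :]

# ... while a monochromatic image of the scene is a slice of it
band = np.argmin(np.abs(wavelengths - 550e-9))
image_550nm = cube[:, :, band]
```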
There are two main techniques for acquiring the three-dimensional dataset of a hyperspectral cube: spectral scanning and spatial scanning. The choice of the technique depends on the specific application.
In spectral scanning, each 2D sensor output represents a monochromatic (single λ), spatial (x, y) map of the scene (Figure 3.1(a)). HSI devices based on this technique typically employ optical band-pass filters (either tunable or fixed). The scene is spectrally scanned by exchanging one filter after another while the platform remains stationary. The hypercube is constructed by imaging the entire sample at each wavelength available in the set. The main limitation is the discrete number of filters.
In spatial scanning, each 2D sensor output represents a full slit spectrum (x, λ). These slit spectra are obtained by projecting a strip of the scene onto a slit and dispersing the slit image with a prism or a grating, so the image is analyzed line by line (with a push-broom scanner). With these line-scan systems, the spatial dimension is collected through platform movement or scanning (Figure 3.1(b)). A special case of line scanning is point scanning (with a whisk-broom scanner): a point-to-point mapping in which the spectrum is collected point by point through a raster scan (Figure 3.1(c)). The latter is the technique that we employ to acquire the hyperspectral cube dataset of our sample.
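The acquisition loop of such a whisk-broom scan reduces to a nested raster over the sample grid; the sketch below is schematic (move_stage_to and acquire_spectrum are hypothetical placeholders for the motor and spectrometer interfaces, not the actual control software described in Chapter 5):

```python
import numpy as np

def raster_scan(xs, ys, move_stage_to, acquire_spectrum, n_lambda):
    """Whisk-broom acquisition: visit each (x, y) point of the grid,
    record one full spectrum there, and fill the hyperspectral cube.
    move_stage_to / acquire_spectrum are hypothetical callables standing
    in for the real motor and spectrometer interfaces."""
    cube = np.zeros((len(ys), len(xs), n_lambda))
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            move_stage_to(x, y)                # position the sample point
            cube[iy, ix, :] = acquire_spectrum()
    return cube
```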
Figure 3.1: (a) spectral scanning; (b) line (push-broom) scanning; (c) point (whisk-broom) scanning.