
• The definition of frequency is ambiguous for asynchronous trains of spikes. The most common solution is to count the spikes in time intervals; however, the length of the interval influences the measured evolution of the frequency in time, as illustrated by the sketch below.
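
To make the ambiguity concrete, the following minimal NumPy sketch (a hypothetical spike train, not the thesis code) counts the same spikes with three different bin lengths: short bins give a noisy frequency estimate, long bins smooth out the fast fluctuations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Poisson spike train: 100Hz average rate over 10s.
rate, T = 100.0, 10.0
n_spikes = rng.poisson(rate * T)
spike_times = np.sort(rng.uniform(0.0, T, n_spikes))

# Estimate the instantaneous frequency by counting spikes in time bins.
# The estimate depends strongly on the bin length.
for bin_len in (0.01, 0.1, 1.0):          # 10ms, 100ms, 1s bins
    edges = np.arange(0.0, T + bin_len, bin_len)
    counts, _ = np.histogram(spike_times, edges)
    freq = counts / bin_len               # spikes per second in each bin
    print(f"bin = {bin_len*1e3:6.0f} ms -> "
          f"mean {freq.mean():6.1f} Hz, std {freq.std():6.1f} Hz")
```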

Given these problems, it is clear that SNNs are not capable of offering the same level of performance as ANNs, for which training presents fewer non-idealities.

The Poisson input group fires at a steady frequency of 100Hz or, at a certain moment (10s), the frequency is stepped up to 125Hz for 1s. The NMSE is evaluated over a time interval of 8s, while the spikes are time-binned in intervals of 100ms. Examples of the Input and reproduced Output with Delays of 0, 0.4 and 1s are plotted below.
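
The input generation and the error metric can be sketched as follows: a minimal NumPy version using a Bernoulli approximation of the inhomogeneous Poisson process, and one common NMSE convention (names and normalization are assumptions, not the thesis code).

```python
import numpy as np

rng = np.random.default_rng(1)

dt, T = 1e-4, 20.0                      # 0.1ms simulation step, 20s run
t = np.arange(0.0, T, dt)

# Rate profile of the step case: 100Hz, stepped to 125Hz at t = 10s for 1s.
rate = np.full_like(t, 100.0)
rate[(t >= 10.0) & (t < 11.0)] = 125.0

# Inhomogeneous Poisson process, approximated by one Bernoulli draw per step.
spike_times = t[rng.random(t.size) < rate * dt]

# Time-bin the spikes in 100ms intervals, as done for the NMSE evaluation.
bin_len = 0.1
edges = np.arange(0.0, T + bin_len, bin_len)
input_freq = np.histogram(spike_times, edges)[0] / bin_len

def nmse(target, output):
    # MSE normalized by the power of the target (one common convention;
    # the thesis may normalize differently).
    return np.mean((output - target) ** 2) / np.mean(target ** 2)
```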

In all the cases below, the same input is presented, in order to better compare the quality of the reconstruction with different delays.

(a) Input/Output comparison, steady input (b) Input/Output comparison, 3s step input

Figure 4.9: Reconstruction without Delay. The upper plots represent the time-binned evolution of the spike count, i.e. the frequency. The lower plots are instead obtained by displaying the Input values on the x-axis and the Output on the y-axis, in order to evaluate their correlation.

When no delay is applied, the Liquid simply has to provide the Output with the same features in time that it receives from the Input. As a consequence, the task is relatively simple and, as shown in Fig. 4.9, the accuracy of the reconstruction is high. Without the frequency step in the input, the Output is even able to follow the fast fluctuations of the input frequency. When the input presents the frequency step, the Output still reproduces most of the frequency fluctuations, even though the step increases the difficulty of the task and thus slightly lowers the accuracy of the Output.

All these considerations are confirmed by the Correlation Plots, in which the Input is plotted on the x-axis and the Output on the y-axis: ideally, if the Output were able to perfectly follow the Input, the dots would be aligned along a straight line. The closer the dots lie to this ideal line, the more accurate the Output.
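
A correlation plot of this kind can be built directly from the time-binned frequencies. A minimal sketch with hypothetical data (in the thesis figures the arrays come from the actual simulation):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical time-binned frequencies of Input and Output.
rng = np.random.default_rng(2)
input_freq = rng.normal(100.0, 10.0, 80)
output_freq = input_freq + rng.normal(0.0, 5.0, 80)

plt.scatter(input_freq, output_freq, s=10)
lims = [input_freq.min(), input_freq.max()]
plt.plot(lims, lims, 'k--', label='perfect reconstruction')  # ideal line
plt.xlabel('Input frequency (Hz)')
plt.ylabel('Output frequency (Hz)')
plt.legend()
plt.show()
```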

(a) Input/Output comparison, steady input (b) Input/Output comparison, 3s step input

Figure 4.10: Reconstruction for a Delay of 400ms. The steady frequency case exhibits a good level of reproduction of the input frequency, although without the precision achieved without the delay. When a frequency step is introduced, the level of accuracy drops, since the task is harder.

The figures above are obtained by shifting the Output by the Delay it is given, in order to judge the quality of the reconstruction, as sketched below. With a 400ms delay, the accuracy is in general worse, as expected: the Liquid is asked to provide the Output with information it received some time before. The ability to retrieve this information is related to the recurrent architecture of the connections, which forms the short-term memory. By storing information about past inputs in the complex recurrent dynamics of the Network, the Liquid is in principle able to provide some information about the past to the Output. Of course, the quality of the reconstruction is lower than in the previous case, since the readout has to linearly select the information about past inputs in the Liquid, ignoring that about the present ones.
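
The alignment between the Output and the delayed target can be sketched as follows (a minimal version of the shift just described; function and variable names are hypothetical):

```python
import numpy as np

def delayed_pairs(input_freq, output_freq, delay_s, bin_len=0.1):
    # Pair each Output bin with the Input bin it should reproduce,
    # i.e. the one `delay_s` earlier.
    shift = int(round(delay_s / bin_len))    # delay expressed in bins
    if shift == 0:
        return input_freq, output_freq
    # The first `shift` Output bins have no corresponding delayed target.
    return input_freq[:-shift], output_freq[shift:]

# Usage: align a 400ms-delay run, with 100ms bins.
x = np.arange(80.0)                          # stand-in Input frequencies
y = np.arange(80.0)                          # stand-in Output frequencies
target, aligned_output = delayed_pairs(x, y, delay_s=0.4)
```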

Moreover, the complexity of the task is greatly increased by the step in the input frequency, for which the reconstruction is of rather poor quality.

(a) Input/Output comparison, steady input (b) Input/Output comparison, 3s step input

Figure 4.11: Reconstruction for a Delay of 1000ms. In the case of steady input frequency, some of the features are still reproduced: the dynamics of the Liquid is rich enough to reproduce, to some extent, the oscillations of the input frequency. With the frequency step, the Output only learns the average frequency of the input and misses the step.

Increasing the Delay to 1s, one expects to see a further decrease in accuracy. This is true for the case of the frequency step: the Output is seen to mainly learn the average frequency, losing the ability to reproduce the fast frequency oscillations. This behavior is expected to saturate the accuracy for higher Delays: as a matter of fact, the output weights can always learn the average frequency, producing in addition random oscillations due to the dynamics of the Liquid.

For the case of steady average input frequency, instead, the accuracy has already saturated: no major difference in behavior is seen with respect to the 400ms delay case. The Output mainly reproduces a signal at the same average frequency with some oscillations, which in some cases resemble those of the input. Some correlation with the Input is still present, but the accuracy of the no-Delay case is far from being reached.

Results of the DAE

The NMSE and Accuracy are evaluated for different Delays in order to estimate the memory of the Reservoir. To test the Accuracy, the DAE is performed 10 times, each time varying the onset time of the frequency step, if present, and the structure of the input. This means that the Poisson input group, while still firing at the same average frequency, produces different frequency oscillations every time. The average NMSE and Accuracy are plotted; a sketch of this evaluation loop is given below.
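
The averaging procedure can be sketched as follows; `run_dae_trial` is a hypothetical placeholder standing in for the full simulate-train-evaluate pipeline, so that only the structure of the loop is shown.

```python
import numpy as np

rng = np.random.default_rng(4)

def run_dae_trial(delay_s, step_onset_s, rng):
    # Placeholder for the full pipeline (generate a fresh Poisson input
    # with a step at `step_onset_s`, run the Liquid, train the linear
    # readout, evaluate). Dummy (nmse, accuracy) values keep the loop
    # below runnable.
    nmse = 0.1 + 0.3 * delay_s + 0.02 * rng.standard_normal()
    return nmse, 1.0 - nmse

delays = [0.0, 0.1, 0.2, 0.4, 0.7, 1.0]
avg_results = {}
for delay in delays:
    scores = [run_dae_trial(delay, rng.uniform(2.0, 15.0), rng)
              for _ in range(10)]                 # 10 repetitions per delay
    avg_results[delay] = np.mean(scores, axis=0)  # mean (NMSE, accuracy)
    print(f"delay {delay:.1f} s -> NMSE {avg_results[delay][0]:.3f}, "
          f"accuracy {avg_results[delay][1]:.3f}")
```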


(a) Accuracy of the DAE for the steady input (b) Accuracy of the DAE for the 3s step input

Figure 4.12: Results of the Delayed-AutoEncoder for the cases of No-Step and 3-second-Step input frequency. Accuracy is stable regardless of the Delay for the case of steady input frequency, while it decreases for the case of the 3-second Step.

In the case of the Steady Input, the reconstruction is of high quality, as also confirmed by Fig. 4.12. For small Delays, the Output is able to capture even the small fluctuations of the input frequency in time, due to the stochasticity of the Poisson Input neurons. This is confirmed by the Correlation plot, in which the points are distributed along the straight line representing the maximum correlation between Input and Output. When instead the Delay exceeds 200ms, the reconstruction is of worse quality: the Output mainly learns the mean input frequency and produces fluctuations which are correlated only to some extent with those of the Input. This is why the behavior of the Accuracy is almost independent of the Delay once it exceeds a certain threshold.

If the Input presents a frequency step, the Output faces a more complex task, since simply learning an average output frequency generates a larger error. As a matter of fact, the error is in general larger than in the steady-input case for every Delay. It is relevant to observe that the reconstruction is of good quality when the Delay is small, while the accuracy decreases once the Delay exceeds 200ms. This is attributed to the Fading Memory of the Reservoir, by which the information about the Input can be retrieved from the Reservoir only for small Delays. When the Delay is greater than 0.7s, a situation similar to the steady-input case occurs: the Output learns the average frequency and produces random oscillations, but completely forgets the frequency step. Still, the accuracy stabilizes around 0.85; this does not mean that the Liquid is able to recall past inputs, but rather that the output weights have successfully learned the mean frequency of the Input, as the baseline sketched below illustrates.
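
The claim that the saturated accuracy reflects a mean-learning readout rather than genuine memory can be checked against a simple baseline: a predictor that always outputs the mean input frequency has a Delay-independent error, which sets the floor the accuracy saturates to. A minimal sketch with hypothetical numbers and one common NMSE convention (the thesis metric and its exact relation to the plotted Accuracy may differ):

```python
import numpy as np

def nmse(target, output):
    # MSE normalized by the power of the target; the thesis may adopt
    # a different normalization.
    return np.mean((output - target) ** 2) / np.mean(target ** 2)

rng = np.random.default_rng(5)

# Hypothetical time-binned input frequency: 100Hz with a 1s step to 125Hz,
# plus Poisson-like fluctuations (100ms bins over 8s).
target = np.full(80, 100.0)
target[30:40] = 125.0
target += rng.normal(0.0, 10.0, target.size)

# A readout that has only learned the mean frequency: its error does not
# depend on the Delay.
baseline = np.full_like(target, target.mean())
print(f"mean-only baseline NMSE: {nmse(target, baseline):.4f}")
```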

Finally, a behavior common to both the steady-input and the frequency-step case: the accuracy is best when the Delay is 100ms. The interpretation of this fact is that the activity of the Input takes time to spread in the Liquid, since the input spikes are processed by Neurons and Synapses whose time constants are on the 10ms scale. This means that even a single spike event takes some time to have an effect on the Liquid. Given the level of connectivity (5%) and the size of the Reservoir (200 Neurons), each input spike is on average processed by about 0.05 × 200 = 10 Neurons, and the time it takes to produce an effect on the Liquid is certainly on the 10ms scale. It is then no surprise that the reconstruction is of the best quality for a Delay of 100ms: the Liquid contains just the information it received from the input a short time before, which has spread through the Network.

Intrinsic Plasticity

This chapter of the Thesis is devoted to the implementation of Intrinsic Plasticity (IP) in a Spiking Recurrent Neural Network, with the aim of controlling the dynamics of the whole system by acting on the input resistances of the Neurons, implemented by Memristors. As a matter of fact, in Spiking Neural Networks (SNNs) with a high enough degree of plausibility with respect to biological Neural Systems, Neurons receive current pulses - resulting from Action Potentials - which are integrated by an RC group. The equivalent membrane resistance Rmem then has a crucial role in modulating the integration of the incoming spikes (see the sketch after the following list). This work proposes to act on this resistance in order to control not only the activity of the single Neuron, but also that of the whole Network. Since the membrane resistance Rmem is a property of the Neuron, an adaptive rule concerning that variable falls in the category of Intrinsic Plasticity. This does not mean that real Neurons perform Intrinsic Plasticity only by varying their input resistance, but this simple concept comes with convenient implications:

• featuring a complementary circuit for the operation of the input resistance, as shown in [53], IP can be implemented for on-chip operation

• it will be shown that the switching of Memristive states is very energy efficient, thus allowing IP to operate at very low power

• despite the stochasticity of the Memristors, changing the input resistances of the Neurons is shown to play an effective role in the dynamic behavior of the whole Network
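
To make the role of Rmem concrete, the following minimal leaky integrate-and-fire sketch (plain Python with illustrative, hypothetical parameters, not the thesis circuit values) integrates the same train of current pulses with two different memristive resistance states:

```python
import numpy as np

def simulate_lif(input_current, R_mem, C_mem=200e-12,
                 V_rest=0.0, V_th=20e-3, V_reset=0.0, dt=1e-4):
    # Leaky integrate-and-fire membrane: a larger R_mem lengthens the time
    # constant tau = R_mem * C_mem and raises the steady-state response
    # V = R_mem * I, so the same pulse train drives the Neuron closer
    # to threshold.
    tau = R_mem * C_mem
    V = V_rest
    spike_times = []
    for k, I in enumerate(input_current):
        V += dt * (-(V - V_rest) + R_mem * I) / tau
        if V >= V_th:
            spike_times.append(k * dt)
            V = V_reset
    return spike_times

# Same random train of 10nA current pulses, two memristive states of the
# input resistance: the high-resistance state integrates more effectively.
rng = np.random.default_rng(6)
I_in = (rng.random(20000) < 0.01) * 10e-9        # ~100 pulses/s over 2s
print(len(simulate_lif(I_in, R_mem=200e6)), "spikes in the high-R state")
print(len(simulate_lif(I_in, R_mem=50e6)), "spikes in the low-R state")
```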

Considering the beneficial effects observed in most experiments on the mammalian visual cortex [54], Intrinsic Plasticity promises to increase the energy efficiency of Spiking Neural Network based systems, while maximizing the volume of processed information.

All these factors make the implementation of IP a promising feature for Neuromorphic Computing, able to increase the degree of fidelity to biology and possibly enhance the performance of the Network and the efficiency of the chip, without increasing its size.

