
Figure A.1 shows a summary table of the entire network; the two auxiliary networks of GoogLeNet, which contain the two additional output layers, are omitted.

Figures A.2, A.3, A.4, A.5, and A.6 show the full architecture of the network. Note that the initial part of the network no longer contains parallel layers; instead it follows the classic structure of all the CNNs seen so far. Starting from the first inception layer, the width of the network also increases.
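The widening effect can be sketched as follows. This is a toy illustration, not GoogLeNet itself: every branch is simulated with a random 1×1 projection (in the real module the branches use 1×1, 3×3, and 5×5 convolutions plus pooling), but the key mechanism is the same, namely that the parallel branch outputs are concatenated along the channel axis, so the module's output width is the sum of the branch widths. The branch channel counts below are those of inception module (3a) from the GoogLeNet paper.

```python
import numpy as np

def inception_module(x, branch_widths):
    """Toy inception module: each parallel branch maps the input
    feature map (channels, H, W) to its own number of output channels,
    simulated here as a per-pixel linear map (i.e. a 1x1 convolution).
    The branch outputs are concatenated depth-wise, which is what
    widens the network."""
    c_in = x.shape[0]
    outputs = []
    for c_out in branch_widths:
        w = np.random.randn(c_out, c_in)           # random 1x1 "conv" weights
        outputs.append(np.einsum('oc,chw->ohw', w, x))
    return np.concatenate(outputs, axis=0)         # stack along channel axis

x = np.random.randn(192, 28, 28)                   # input to inception (3a)
y = inception_module(x, [64, 128, 32, 32])         # branch widths from the paper
print(y.shape)                                     # (256, 28, 28): 64+128+32+32 channels
```

The output width (256 channels) is simply the sum of the four branch widths, which is why the network grows wider with every inception module stacked on top of the previous one.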

Figure A.5: GoogLeNet architecture (part 4/5)


First of all, I would like to thank Prof. Carlo Fantozzi for always guiding, listening to, and advising me throughout the entire course of this thesis, dedicating much of his time to me and also providing a great deal of useful material for various parts of the work.

I then thank my family and my relatives (with a special greeting to my grandfather) for supporting me throughout all these years of study and for always offering me advice in times of need.

I thank all the fellow students I met along my university path for all the good times we spent together.

A thank-you is also due to my friends, for all the evenings they put up with my stories over the course of this work.