ING-INF/01 ELETTRONICA
Massimiliano Donati
Hardware/Software co-design,
implementation and testing of embedded
systems in challenging applications
Anno 2013
Scuola di Dottorato in Ingegneria “Leonardo da Vinci”
Corso di Dottorato di Ricerca in
Ingegneria dell’Informazione
Autore:
Massimiliano Donati
_______________________
Relatori:
Prof. Luca Fanucci _______________________
Prof. Sergio Saponara _______________________
UNIVERSITÀ DI PISA
Nowadays embedded systems are commonly deployed in a wide range of applications. These special-purpose computing systems are designed to perform a target function, or a class of similar functions, under distinctive constraints related to the intended application and form factor, power and performance, system resources and features, affordable cost, and assumptions about end-user behaviour. Differently from general-purpose systems, embedded systems are not programmable by the end-user. In this sector, design engineers have two primary goals. First, to build a system able to perform the required computation and to meet the user needs without violating the imposed constraints. Second, to optimize the overall performance, effectiveness, reliability, size and cost of the final system. Because all design metrics are interrelated and depend on the hardware/software partitioning and on the interaction among hardware and software components, the abstraction of separating software from hardware does not work. Instead, a co-design approach that starts from the hardware/software partitioning and integrates paradigms from both hardware and software design and implementation is required to maximize the success of the final system.
This work presents the research achievements in the field of embedded systems for telemedicine, assistive technology and high-performance computing applications, with particular focus on human-machine interfaces and architectural aspects. For each system the requirements and challenges, the golden design metrics and the desired integration level have been defined, starting from the end-user needs. Then the hardware/software co-design, implementation and testing phases have been completed. The first system is an interactive gateway for telemonitoring of vital signs using non-invasive biomedical sensors, characterized by its usability and its integration with the electronic health record. The second study concerns a heavily parallel digital signal processing platform, originally conceived for space applications, that provides a significant increase in performance with respect to its predecessors.
Conferences
1. Bacchillone T., DONATI M., Saponara S. and Fanucci L., “A flexible home gateway system for telecare of patient affected by chronic heart failure”, IEEE International Symposium on Medical Information and Communication Technology (ISMICT), 2011, DOI: 10.1109/ISMICT.2011.5759814
2. DONATI M., Bacchillone T., Saponara S. and Fanucci L., “A flexible home monitoring platform for patients affected by chronic heart failure directly integrated with the remote Hospital Information System”, SPIE Microtechnologies, 2011, DOI: 10.1117/12.886465
3. Fanucci L., Roncella R., Iacopetti F., DONATI M., Calabrò A., Leporini B. and Santoro C., “Improving Mobility of Pedestrian Visually-Impaired Users”, AAATE - Association for the Advancement of Assistive Technology in Europe Conference, 2011, DOI: 10.3233/978-1-60750-814-4-595
4. Fanucci L., Bacchillone T., DONATI M., Saponara S., Passino C., Costalli F., Petrucci S., Sanchez-Tato I., Pascual F., Zlatko V. and Hrvatin O., “Health@Home: lesson learnt and future perspective in the home monitoring of patients affected by chronic heart failure”, Ambient Assisted Living (AAL) Forum, 2011
5. Saponara S., DONATI M., Bacchillone T., Sanchez-Tato I., Carmona C., Fanucci L. and Barba P., “Remote monitoring of vital signs in patients affected by chronic heart failure”, IEEE Sensors Applications Symposium (SAS), 2012, DOI: 10.1109/SAS.2012.6166310
6. Errico W., Colonna A., Piscopiello G., Tosi P., Cordiviola E., Bacci B., Pii V., Fanucci L., Saponara S., DONATI M., Vincenzi A., Reiter F., Nuzzolo F., Leupers R., Odendahl M. and Yakoushkin S., “DSPACE: a new space DSP development”, Eurospace DASIA - The International Space System Engineering Conference, 2012, European Space Agency, (Special Publication ISBN: 978-929092265-0)
7. Odendahl M., Yakoushkin S., Leupers R., Errico W., DONATI M. and Fanucci L., “A next generation digital signal processor for European space missions”, International (ESTEL), 2012, DOI: 10.1109/ESTEL.2012.6400064
8. Colonna A., Piscopiello G., Tuccio G., Errico W., Bigongiari F., Tosi P., Rachiele S., Cordiviola E., Bacci B., Pii V., Fanucci L., Saponara S., DONATI M., Vincenzi A., Reiter F., Nuzzolo F., Leupers R., Odendahl M. and Yakoushkin S., “Data signal processor for space applications”, Let’s Embrace Space – FP7 Space Conference, 2012, DOI: 10.2769/31208
9. Saponara S., DONATI M., Fanucci L., Odendahl M., Leupers R. and Errico W., “DSPACE hardware architecture for on-board real-time Image/video processing in European space missions”, SPIE Conference on Real Time Image and Video Processing, 2013, DOI: 10.1117/12.2002096
10. Saponara S., Fanucci L., DONATI M., Odendahl M., Leupers R. and Errico W., “Next-generation digital signal processor for European space applications”, SPIE Newsroom Defense and Security, 2013, DOI: 10.1117/2.1201303.004719
11. DONATI M., Iacopetti F., Mandoloni D., Giometti F. and Fanucci L., “ePhone - A Technical Aid to Ease Accessibility to Android Smartphones for Motor Skill Impaired Users”, AAATE - Association for the Advancement of Assistive Technology in Europe Conference, 2013, (DOI:10.3233/978-1-61499-304-9-506)
12. DONATI M., Fanucci L., Saponara S., Errico W., Colonna A., Piscopiello G., Tuccio G., Odendahl M., Leupers R., Spada A., Pii V., Cordiviola E., Nuzzolo F. and Reiter F., “A new space Digital Signal Processor design”, APPLEPIES International Conference on Electronic Applications, 2013
13. Benini A., DONATI M., Iacopetti F. and Fanucci L., “User-friendly Single-lead ECG Device for Home Telemonitoring Applications”, IEEE International Symposium on Medical Information and Communication Technology (ISMICT), 2014
14. DONATI M., Benini A., Fanucci L., Bani S., Naldini M. and Bartolozzi A., “A flexible ICT platform for domestic healthcare of patients affected by chronic diseases”, IEEE International Symposium on Medical Information and Communication Technology (ISMICT), 2014
15. Fanucci L., Roncella R., Iacopetti F., DONATI M. and Giannelli N., “A mobility aid system for visually impaired people on the historical walls of the Lucca city, Tuscany, Italy”, Mediterranean conference on control and automation (MED), 2014 (accepted)
Journals
1. Fanucci L., Saponara S., Bacchillone T., DONATI M., Barba P., Sanchez-Tato I. and Carmona C., “Sensing devices and sensor signal processing for remote monitoring of vital signs in CHF patients”, IEEE Transactions on Instrumentation and Measurement (TIM), vol 62(3), pp 553-569, 2012, DOI: 10.1109/TIM.2012.2218681
2. DONATI M., Bacchillone T., Fanucci L., Saponara S. and Costalli F., “Operating protocol and networking issues of a telemedicine platform integrating from wireless home sensors to the Hospital Information System”, Journal of Computer Networks and Communications (JCNC), vol 2013, pp 1-12, 2013, DOI: 10.1155/2013/781620
3. DONATI M., Fanucci L., Saponara S., Errico W., Colonna A., Piscopiello G., Tuccio G., Bigongiari F., Odendahl M., Leupers R., Spada A., Pii V., Cordiviola E., Nuzzolo F. and Reiter F., “A new space digital signal processor design”, Springer - Lecture Notes on Electrical Engineering (LNEE), 2014, DOI: 10.1007/978-3-319-04370-8_5
Patents
1. Fanucci L., DONATI M., Iacopetti F. and Benini A., “ELETTROCARDIOGRAFO ERGONOMICO E DI SEMPLICE UTILIZZO PER TELEMONITORAGGIO RESIDENZIALE”, 2013, N. PI2013A000096
Others
1. Fanucci L., Saponara S., DONATI M. and Bacchillone T., “A flexible home gateway system for telecare of patients affected by chronic heart failure”, Ambient Assisted Living (AAL) Forum PhD Workshop, 2011. (short article and poster)
2. Fanucci L., DONATI M. and Mandoloni D., “e(asy)Phone: a simpler approach to mobile phones”, Ambient Assisted Living (AAL) Forum PhD Workshop, 2011. (short article and poster)
3. Fanucci L. and DONATI M. “Health@Home: lesson learnt and future perspective in the home monitoring of patients affected by chronic heart failure”, Ambient Assisted Living (AAL): promoting your AAL project to representatives of regional authorities, 2013. (poster)
1 Introduction . . . . 1
1.1 Embedded systems overview . . . 2
1.1.1 A general model for embedded systems . . . 5
1.1.2 Main constraints in embedded systems . . . 6
1.2 Inside an embedded system . . . 9
1.2.1 Processor . . . 10
1.2.2 Memory . . . 13
1.2.3 Peripherals and interfaces . . . 15
1.2.4 Human-Machine interface . . . 16
1.2.5 Power supply . . . 17
1.2.6 Dedicated hardware . . . 17
1.2.7 Application software and Operating systems . . . 18
1.3 The design flow . . . 19
1.3.1 The Hardware/Software co-design flow . . . 21
2 An interactive gateway for telemedicine applications . . . 23
2.1 The telemedicine for chronic diseases . . . 24
2.2 The function of the gateway . . . 27
2.2.1 The system requirements . . . 28
2.2.2 The system features . . . 29
The pool of sensors . . . 29
The data acquisition . . . 30
The data transmission . . . 30
The data storage . . . 32
The alarms detection and management . . . 32
The human-machine interface . . . 33
The operating protocol . . . 33
2.3 The gateway detailed design . . . 34
2.3.1 Communication interfaces and data formats . . . 34
ECG processing . . . 42
SpO2 and plethysmographic wave processing . . . 44
Blood pressure processing . . . 45
Weight processing . . . 45
2.4 The gateway implementation . . . 46
2.4.1 The application software . . . 46
2.4.2 The graphical user interface . . . 53
2.4.3 The 5-keys keypad . . . 55
2.5 The results and the future perspectives . . . 55
3 A high-performance DSP for harsh environments . . . 59
3.1 The computation demand in space applications . . . 60
3.1.1 The required features . . . 61
3.2 State-of-the-art analysis . . . 62
3.2.1 The TigerSHARC family . . . 63
3.2.2 The SHARC family . . . 64
3.2.3 The Texas Instruments TMS320C67x family . . . 65
3.3 The system-on-chip architecture . . . 69
3.3.1 The data processing unit . . . 71
Datapath . . . 73
Control pipeline . . . 75
Control and status register file . . . 77
Interrupts . . . 77
3.3.2 The system-on-chip peripherals . . . 79
Caches . . . 81
DMA . . . 84
Memory Controller . . . 84
Control Logic and DPU controllability . . . 85
Spacewire controller . . . 86
3.4 The implementation approach . . . 87
3.5 Some notes on the Software Development Environment . . . 88
3.6 Results . . . 90
1.1 Major application areas of embedded systems . . . 4
1.2 The general representation model for embedded systems . . . 5
1.3 Example of metrics competition within embedded systems . . . 10
1.4 The embedded system general architecture . . . 11
1.5 Metrics competition for different implementations of the processor . . . 14
1.6 Memory requirements for the different processor implementations . . . 15
1.7 Examples of human-machine interfaces of various complexity . . . 16
1.8 Example of hardware accelerators mounted in the I/O space of the processor . . . 18
1.9 The increasing complexity of the embedded software in different applications . . . 19
1.10 The waterfall design flow . . . 20
1.11 The general hardware/software co-design strategy for embedded systems . . . 21
2.1 General model for the telemedicine healthcare services . . . 27
2.2 The overall ICT structure for the provisioning of innovative healthcare services . . . 28
2.3 Specification of the data acquisition process . . . 30
2.4 Specification of the data transmission process . . . 31
2.5 Specification of the manual alarm process . . . 32
2.6 Comparison among communication technologies for the sensors . . . 35
2.7 Comparison among communication technologies for the gateway . . . 36
2.8 Data format for the communication with the collection server . . . 37
2.9 Block diagram of the gateway architecture . . . 38
2.10 Gateway elements: resources, data formats and final layout . . . 39
2.11 Sensors equipment and positioning . . . 41
2.12 ECG main points and steps of the algorithm for QRS detection . . . 42
2.13 a) ECG signal; b) filtered signal plus R peaks; c) R envelope signal . . . 43
OP are also visible (1 month) . . . 45
2.16 Example of weight trend from 20th to 28th July . . . 46
2.17 Components diagram of the gateway application software . . . 47
2.18 Statechart diagram of the gateway application software . . . 49
2.19 Static class diagram of the gateway application software . . . 51
2.20 Example of the graphical user interface in IDLE state . . . 54
2.21 Example of the graphical user interface that requires a measurement . . . 54
2.22 Membrane 5-key keypad installed in the gateway . . . 55
2.23 Feedbacks aggregation result of patients . . . 56
3.1 TigerSHARC architecture block diagram . . . 63
3.2 Comparison among the SHARC DSPs generations . . . 65
3.3 The ADSP21469 functional block . . . 66
3.4 The landscape of Texas Instrument floating-point DSPs . . . 67
3.5 The TMS320C6713 functional block . . . 68
3.6 The global system-on-chip block diagram . . . 69
3.7 The Data Processing Unit (DPU) functional block diagram . . . 71
3.8 Processing capabilities of each computational unit . . . 72
3.9 DPU datapath . . . 74
3.10 Processor pipeline . . . 76
3.11 DPU pinout for the interrupts support . . . 79
3.12 The Data Cache block diagram . . . 82
3.13 The Instruction Cache block diagram . . . 83
3.14 Finite State Machine of the DPU . . . 85
3.15 Connection between the DPU and the control logic block . . . 86
3.16 Spacewire interface block diagram . . . 86
3.17 The LISA development approach . . . 88
3.18 Approach for the Software development environment . . . 89
3.19 Software Development architecture . . . 89
3.20 FPGA-based demonstration board layout . . . 90
3.21 Demo board with I/O expansion mezzanine board in desktop version . . . 91
1.1 Main categories of embedded systems . . . 4
1.2 The main constraints of embedded systems design . . . 9
1.3 The main elements of an embedded system . . . 11
2.1 The prevalence of chronic diseases in Tuscany, Italy and Europe (2013) . . . 26
2.2 Biomedical sensors partitioning and desired sampling capabilities . . . 29
2.3 Example of measurement activities in the Operating protocol . . . 34
2.4 Feedbacks aggregation result of clinicians . . . 56
3.1 The requirements for the space-qualified DSP established by ESA . . . 62
3.2 Computational delay vs type of instructions . . . 77
3.3 Control and status registers description . . . 78
3.4 System-on-chip peripherals . . . 80
3.5 Registers of the DMA controller . . . 84
3.6 Measured data throughputs on spacewire point-to-point link . . . 92
Introduction
Starting from the first modern embedded system, the Apollo Guidance Computer developed in the early 1960s to control the spacecraft during the lunar missions of the NASA Apollo program, in a few decades embedded systems have become an integral part of modern life, and they are commonly deployed in a wide range of consumer and critical applications. Examples of embedded systems are stand-alone devices like a digital camera or a blood pressure monitor, parts of larger systems like the airbag controller in a car, or distributed control systems in industrial automation. One of the main drivers of such extraordinary evolution is the continuous advance in hardware fabrication (e.g. processors, memories, peripherals, FPGAs, PCBs, etc.), which has dramatically reduced the cost of hardware while increasing its performance and integration level. Other drivers are the improvements in the battery and packaging sectors, and the availability of dedicated software, programming languages and operating systems for embedded systems.
In general, embedded systems are specialized computing systems designed to carry out specific tasks. In contrast to general-purpose computing systems (e.g. PCs), embedded ones are developed with specific constraints about the target application and the expected end-users, and they are not programmable by the end-user to change their function in the same way a PC is. Indeed, hardware and software components, along with additional mechanical and other technical parts if required, are highly specialized for the assigned function in order to meet the appropriate design metrics (e.g. performance, size, power, cost, reliability, etc.). Embedded system design is a complex set of trade-offs the designer has to deal with. It is important to know the effects that different hardware choices can have on the software and vice versa. The implementation of an embedded system optimized for its purpose depends on the hardware/software partitioning and on the choice of the appropriate hardware and software resources at the early stages of the project. Moreover, the use of a co-design approach including methodologies from hardware and software design in an integrated fashion is recommended.
This chapter gives the reader a preliminary overview of the embedded system field, drawing the identikit of a general system, underlining the main design metrics and describing in detail the benefits of the hardware/software co-design strategy. The following chapters present the results of the research activities that led to the design and development of embedded systems belonging respectively to the telemedicine, assistive technology and high-performance computing domains.
1.1 Embedded systems overview
An embedded system is commonly defined in the literature as a computing system that is primarily designed and optimized to perform a specific function or a class of similar functions [1], albeit with different options and configurations [2]. Moreover, it has specific constraints and trade-offs. In particular, embedded system constraints are related to the intended applications and form factors, power, resources and features, and assumptions about end-user behaviour [3]. From this definition it follows that an embedded system is a special-purpose computing machine, with hardware, software and additional mechanical or other technical parts [4], designed and assembled to carry out a specific computational task within a restricted working environment. In that respect, the native function assigned to the system at design-time cannot be changed during its lifetime and no reprogramming features are provided for the end-user, while the existence of dedicated hardware and software resources within the embedded system may pass completely unnoticed by the end-user.
This is in direct contrast with general-purpose computing systems, which are designed to perform sufficiently well over a broad range of tasks coming from several areas. Indeed, general-purpose systems are engineered to make the common case fast, without any optimization for particular tasks. The rationale is to accommodate the largest possible set of applications that the intended end-user segment may demand of those systems. For example, a personal computer is a word processor one minute and a multimedia player or a gaming console the next, simply by running different software on the same hardware. Of course, it performs well in all cases, but an embedded system like a DVD player or a native gaming console probably accomplishes those tasks better and wastes fewer resources, being narrowly designed for those purposes.
The previous definition of embedded system is wider than the original one given by the IEEE in 1992: an embedded computer system is “a computer system that is part of a larger system and performs some of the requirements of that system; for example, a computer system used in an aircraft or rapid transit system” [5]. In fact, while the IEEE definition fits well for systems that are subparts of others, like a controller board within an industrial equipment, it fails to cover all the stand-alone devices (e.g. MP3 players, GPS navigators, digital cameras, ovens, etc.) that evolved with the advances in technology and the decreasing cost of implementing hardware and software components.
Among stand-alone devices, modern smartphones deserve a particular mention. They are considered embedded systems even though they present some general-purpose capabilities (i.e. the possibility to run a wide range of software apps that implement different functions), and they are designed to perform a variety of primary functions other than phone calling (e.g. digital camera, web browser, GPS navigation, etc.). Smartphones implement one of the modern trends in the embedded system world, the so-called device convergence, that is the fusion of different specialized devices into a single one, without significantly penalizing the cost, the size and the performance of the original equipment [4]. Another recent example of device convergence is the infotainment system of high-end cars, containing navigation device, video player, parking assistance, voice-controlled applications, internet access devices, lane departure system, GPS connectivity and Bluetooth-enabled headphones.
To summarize, in this context system refers to a set of one or more electronic components with computational capabilities, possibly surrounded by other technological elements. Embedded can mean:
• the purpose of the system is strongly dedicated and its behaviour cannot be changed from the native one;
• the components of the system are internal to a larger system that contains them, and not necessarily visible.
The embedded system device convergence can be viewed as the attempt to merge some “classic” embedded systems into a single system, with the aim of increasing the value of the final system by more than the increase in production costs.
Embedded systems are ubiquitous and they have permeated most aspects of modern life, being widely deployed in a variety of consumer, automotive, industrial, telecommunications, space, medical and military applications. They come in many forms, ranging from a simple stand-alone device, to electronic cards hosted in stationary installations, to a system composed of many separate networked nodes that work collaboratively to control a nuclear plant [6]. Additionally, they span different categories, including very small and very large systems as well. According to some estimations, embedded systems already outnumber humans on the planet and they are expected to reach 40 billion by 2020. Examples of embedded systems involved in daily life are the alarm clock, the telephone and the television, the printer and the scanner, the camera, the smartphone, the ADSL modem, and also most domestic appliances like the washing machine, the dishwasher, the oven, etc. In addition, there are many invisible embedded systems that work for people every day. For example, a car contains around 50 processors for different purposes like engine control, brakes, audio, air-conditioning, etc. They are also found in a variety of forms across industry, in factory automation and robotics. Moreover, the intensive use of embedded systems like surgery equipment, monitoring instrumentation, implantable devices, etc. produces benefits in the healthcare sector. Finally, several different kinds of embedded systems are present in critical applications like aircraft stability control or on-board data processing in satellites. Figure 1.1 shows the main application areas and some manifestations of embedded systems for each of them. Independently of the application area, embedded systems can be classified into four main categories
Figure 1.1: Major application areas of embedded systems [4]
according to the functions they perform. Table 1.1 shows these classes, highlighting their main features and examples. However, hybrid embedded systems also exist.
Table 1.1: Main categories of embedded systems

Category | Description | Examples
General computing | Applications similar to desktop computing realized in an embedded package | Gaming console, set-top-box, wearable devices
Control systems | Closed-loop feedback control and real-time monitoring | Nuclear power, flight control, vehicle engines
Signal processing | Computations involving important streams of data | Radar, sonar, digital camera, electrocardiographic devices
Communication and networking | Switching and data transmission | Telephone systems, Internet router, access point
Figure 1.2: The general representation model for embedded systems
1.1.1 A general model for embedded systems
Although personal computers are the most widely understood class of computing systems, embedded systems dominate the landscape of computing systems, employing 99% of the worldwide production of processing units (e.g. microprocessors, microcontrollers, digital signal processors, etc.).
In the simplest sense, an embedded system is an applied, special-purpose computing system. Regardless of the differences among the single devices and the different constraints imposed by the application fields, it is possible to provide a general model valid for all embedded systems (see Figure 1.2). This is a high-level representation of the embedded system architecture, compliant with all the possible combinations of hardware and software elements typically present inside these kinds of systems.
In general, an embedded system is modelled as a system that includes input and output capabilities and computational resources, and it exists in some sort of environment that provides the energy required to work and receives the heat produced at run time. The classical execution envisages that a set of inputs coming from the environment are processed according to the function assigned to the system in order to determine the outputs, which are conveyed back to the environment. For example, a system in charge of controlling the stability of the temperature inside a furnace uses a temperature sensor as input and acts on the warmer to adjust the final temperature, while an MP3 player operated by the user receives the inputs from the interface (e.g. buttons, touchscreen, etc.) and provides the sounds of the selected audio track. Inputs may come from interactions with the end-user or from connected devices like sensors or other external systems, even if neither of these is essential to the general model. Outputs may be provided to the end-user through dedicated interfaces or may be used to operate connected actuators or systems. Only a few embedded systems do not provide output to the environment; for example, data logger devices receive and store data without directly outputting them.

The internal architecture can be divided into three main layers into which all components fall. All embedded systems have at least the hardware layer, and may also include optional software layers (system software and application software). The lowermost layer comprises all the major physical components located on the printed circuit board, that is, all the semiconductor elements (e.g. processing core, memories, etc.), buses and related electronics. The uppermost layer is the application layer, which includes the specific software developed to carry out the dedicated task. This can sit directly on top of the hardware layer in case of firmware programming, or a specific system software layer can be in between. In the latter case, a dedicated operating system is present on the board to manage and allocate the system resources. In the following, the most important components of the hardware and software layers are described in detail.
1.1.2 Main constraints in embedded systems
Differently from general-purpose devices, which have always been designed with few constraints, embedded systems are specialized computing systems that embody many constraints and design trade-offs. A good balancing of those trade-offs allows the designer to optimize the embedded systems for the specific purposes they are conceived for. These trade-offs are not only related to the intended application, but also concern, for example, the time-to-market, the costs, the usability, the dimensions and so on. The first obvious constraint is about the functionality of the embedded system. The design goal is to implement a system that performs the desired task correctly (correctness), is responsive to external solicitations (responsivity), and does not incur failures (robustness). In doing so the system may assume different forms and features, it may or may not care about the user experience, and it may be based on different hardware and software resources. The following is the list of the constraints and aspects that the embedded systems designer has to consider.
• Application: because an embedded system is typically designed for a well-defined and restricted application, the intended use of the system poses many design constraints and trade-offs. While it is difficult to identify a predefined set of constraints, these in general depend on the application area and often they are strictly related to the particular application. Application constraints can be roughly classified into two categories: reactive and execution constraints. The former specify deadlines, throughput and jitter, while the latter regard the available processor speeds, power and hardware failure rates. For example, an application controlling the temperature of an industrial furnace deals with short and guaranteed reaction times, while a gaming console ensures a fluid user experience.
• Performance: the performance indicates how well an embedded system performs its function. The designer has to deal with many constraints in order to select the right resources, so that the system is considered good enough for the desired task. The classical measures of performance in general-purpose systems (i.e. clock frequency) do not apply at all. Natural constraints about the performance are the processing power, the throughput, the accuracy, the precision, the resolution, the response time, the bandwidth, the latency, etc. For example, in a digital camera the sensor resolution or the processing time of images is more important than the clock frequency. Each specific application poses its own performance constraints and the role of the designer is to match such constraints by selecting the appropriate system resources.
• Environment: some environments in which the embedded system is expected to work may pose specific constraints due to their harshness. For example, systems for space applications have to deal with radiation, while in automotive the main concerns are heat and vibrations. Other examples of environmental constraints are RF interference, water resistance, physical abuse, power fluctuations, etc.
• Form factor: the physical aspect, the size, the encumbrance, the weight and the package of an embedded system represent an important set of constraints, mainly driven by the intended use. For example, wearable or human-centric devices have to be smaller than networking equipment such as routers or switches because they have to comply with the portability requirement. However, they are reasonably large with respect to implantable systems for medical applications. Moreover, embedded systems increasingly require a high degree of system-level integration, which means the integration of a number of functionalities onto a small semiconductor device. There is also the matter of device convergence, that is the fusion of different specialized devices into a single one.
• Power: it is one of the predominant design constraints. Depending on the target application, the provisioning of the energy required to operate the system is very different. Embedded systems range from high-performance systems connected to the high-voltage power line consuming several watts to battery-operated systems designed to operate for years with a single battery (i.e. consuming microwatts). Another important constraint about power in embedded systems is heat dissipation, because they often work in small spaces and no mechanical dissipation methods can be applied.
• System resources: the system resources are all the hardware and software components that the system consists of. They include processors, memories, peripherals, interfaces, etc. The constraints posed by the system resources are the most challenging for the design engineers. One of the crucial points of embedded system design is to balance the trade-offs about the resources in order to meet the requirements of the application in terms of performance, minimize the implementation costs and be effective with respect to the end-users. In the embedded systems world, many resources are available and the designer has to choose or to develop the best ones in order to set up a valid system for the intended application, taking into consideration also the other constraints. For example, the choice of the processing unit concerns not only the required performance, but also the power budget, the dissipation, the possibility to have a real-time operating system, the unit cost and so on. Once fixed and assembled, the pool of resources employed in the system is relatively static for the entire life of the embedded system.
• User assumptions: the profile of the intended user of the system plays a fundamental role, originating constraints especially related to the user experience with the system. This is more evident for those systems that provide a direct interaction with the end-user via, for example, buttons, a display or a touchscreen, but it is in general valid also for those devices that are hosted in other systems and work without interacting directly with humans. Because embedded systems are strongly application-oriented, the user expects that they behave deterministically and reliably, without incurring long response times, system failures or maintenance actions. For example, a GPS navigator with a long delay in updating the information about the path may provide the guidance information too late to the driver, leading to wrong directions. This represents a frustrating user experience.
• Costs and time-to-market: very often embedded systems also have some sort of economic constraints that can significantly affect the design and production processes. Embedded systems are extremely sensitive to both design and production costs, which are respectively the non-recurring monetary cost of designing the system and the unit cost of manufacturing each copy of the system. Costs pose important constraints during the selection of the system resources, which often lead to the impossibility of selecting some resources, the involvement of programmable ready-to-use boards especially for the prototyping phase, or the selection of the system resources based on the number of units to be produced. In fact, a well-known trade-off between production and design costs exists, and it is affected by the expected production volume. For example, it is usually not convenient to develop custom hardware for low-volume embedded systems, while for high-volume systems the design cost is amortized over the volume and represents less of a constraint. Additionally, the time to bring the final system to the market is a constraint and poses trade-offs, because missing the market window could mean loss of money. Sometimes, to reach the market on time, the designer needs to select the resources based on their immediate availability, the suppliers and the time required to install the resource in the final system.
The list of constraints summarized in Table 1.2 gives an idea of how complex the design of embedded systems is and how many different variables the designer has to consider in order to achieve the goal: design and implement a system with the desired functionality. It is useful to remark that not all the constraints are present in all embedded systems; often, depending on the application area or the specific kind of application, some constraints are more important than others. This means that the overall functionality of the system depends on some golden constraints that have to be carefully treated, while the designer has more degrees of freedom for the rest of the constraints.
Constraint               Description
Functionality            Performing the assigned function correctly and reliably
Application              Matching the reactive and execution requirements of the application
Performance              Ensuring adequate performance to accomplish the task
Environment              Taking into consideration the hosting environment in which the system works
Form Factor              Selecting the appropriate dimensions, form and weight of the system
Power                    Ensuring the adequate power supply for the system to run
Resources                Selecting the right hardware and software elements the system is made of
User Assumptions         Considering the final user behaviour during the constraints evaluation
Cost and Time-to-Market  Considering the available budget and the market strategy

Table 1.2: The main constraints of embedded systems design

Design metrics are measurable features of the system implementation. They originate from the constraints, in the sense that a metric can be influenced by one or more constraints, and a constraint can determine more than one metric. The key design challenge in embedded systems is to optimize simultaneously numerous design metrics, which in other words means achieving the special-purpose functionality of the system while complying with the imposed constraints.
However, optimizing several design metrics at the same time is a hard task because the design metrics compete: improving one may worsen others. Figure 1.3 shows the relation among four main metrics. For example, it is difficult to increase the performance without incurring greater costs and affecting the power budget. On the other hand, reducing the size while maintaining the same performance may pose heat dissipation problems. For these reasons, a designer must be comfortable with both hardware and software technologies in order to choose the best one for a given application and its constraints.
1.2 Inside an embedded system
Now that embedded systems and their typical constraints have been defined in detail, it is useful to discuss the internal architecture, and to examine in depth the possible hardware and software components. Defining and understanding the architecture of an embedded system is essential for a good system design, since the architectural model is the first abstract representation of the system that the designer has to produce according to the analysis of the requirements.
Figure 1.4 shows an architectural model (system architecture) that is general enough to match the whole embedded systems landscape. It includes all the elements that can be found in embedded systems. Evidently, depending on the target application and the constraints, not all elements are implemented, or part of the elements can exist externally to the system. This model is a generalization of the embedded system that represents the internal architecture as a composition of elements with their inter-relationships, instead of showing the implementation details (i.e. software source code, hardware devices and circuits). The latter are instead contained in the structural model (system structure), which represents the implementation of the architecture with the physical resources and the algorithms selected by the designer.
Depending on the kind and the complexity of the target application, the processing core can range from an ASIC to a general-purpose microprocessor, the memory can vary in technology and organization, and the power requirements can be very different. Many forms of power supply and heat dissipation can be employed. Moreover, the embedded system can receive input directly from the end-user or through specific sensors, and produce output in various ways (actuators, LCD displays, LEDs, application-specific interfaces, etc.). From the software point of view, the application program can run directly on the hardware resources in the simpler cases, or leverage an embedded operating system, possibly with real-time scheduling capabilities in case of time-critical tasks. Hereafter the main elements (see Table 1.3) are described and analysed in detail, providing information on the related constraints that can be considered when the designer has to move from the architectural to the structural model of the embedded system.
1.2.1 Processor
The processor is the central element of the embedded system, being in charge of providing the computational power that enables the system to execute the application-dependent algorithm developed and implemented by the designer. Moreover, the processor coordinates the rest of the components within the system, arbitrating the communications, defining the timing of the operations, detecting failures and so on. To accommodate the wide range of computation demands and constraints in embedded systems, many types of processors can be involved. In hardware-based systems (i.e. FPGA and
Component                   Description
Processor                   Computational element of the system executing the application-dependent algorithm
Memory                      Container for program instructions, data and configurations
Peripherals and Interfaces  Input and output ports of the system to interact with the environment
Human-Machine Interface     Particular interface for the interaction with the human user
Power supply                Source of power for the system to run
Dedicated hardware          Parts of the algorithm implemented in hardware to speed up the operations
Application software        Software implementation of the application-dependent algorithm
Operating system            Dedicated operating system for embedded applications

Table 1.3: The main elements of an embedded system
ASIC) the processor corresponds to the hardware circuitry that implements the algorithm. In software-based systems (i.e. microprocessor, microcontroller, DSP) the algorithm is implemented using a programming language and run on a programmable hardware platform, possibly equipped with a dedicated operating system.
The following is a list of the main processor manifestations in embedded systems.
• ASICs: in the segment of hardware-based systems, processors based on application-specific integrated circuits are the most optimized from the technical point of view. The designer uses hardware description languages (HDL) to map the application algorithm directly onto silicon, realising a fully customized processor that can be surrounded, externally or in the same die (i.e. system-on-chip paradigm), by the other required elements like memories, interfaces, etc. ASICs allow a very high integration level and present the lowest power consumption and the highest performance. Conversely, they are the costliest solution and require very long development times.
• FPGAs: a field programmable gate array is an integrated circuit that provides predefined hardware resources (i.e. programmable logic blocks) that can be configured using HDL to implement specific algorithms. Like an ASIC, the FPGA allows the designer to build a customized processor or system-on-chip, but since it relies on fixed blocks that are wired together during the programming phase, the result is less optimized, even if it maintains a high level of integration. Performance and power consumption get worse, while the costs are dramatically lower than for ASICs. Moreover, in some cases FPGAs allow reprogramming and updates (i.e. SRAM-based devices), improving the flexibility of the processor.
• DSPs: digital signal processors represent a class of specialized processors with internal architectures and instruction sets optimized for the fast processing of digital signals (e.g. FIR filtering, data compression, audio equalization, etc.). Peculiar characteristics of DSPs are circular addressing, the hardware multiply-and-accumulate instruction (MAC), the address generation unit, single-instruction-multiple-data (SIMD) operations, very long instruction word (VLIW) techniques and many other features that speed up signal processing. They usually have the program memory separated from the data memory. In this case, the designer implements the application algorithm using programming languages like C/C++ or directly using the processor-dependent assembly language and specific programming tools. DSPs provide better performance and lower latency with respect to the rest of the software-programmable processors, and they do not pose particular constraints about power. They are cheaper than all hardware implementations of the processor and allow reprogramming for maintenance or feature upgrades.
• Microprocessors and Microcontrollers: these general-purpose processors represent the most general and flexible way to implement the processing element of the embedded system architecture. Both types present an internal computational unit designed to execute sequentially the instructions of the application program. An immense range of such devices exists, with complex (CISC) or reduced (RISC) instruction sets, with registers varying in quantity and width, with different clock frequencies, including one or more processing units, possibly cache-aided, and so on. There is a substantial difference: microprocessors contain only the circuitry to execute the instructions and require external memories and peripherals to contain the program and the data and to communicate with the environment; microcontrollers also contain memories, peripherals and communication interfaces within the native architecture. In the case of microprocessors or microcontrollers, the key point is the application software that transforms such general elements into specialized ones dedicated to a restricted task. Even if implementing the system function in software leads to a less optimized overall performance, these cheap processors are widely deployed in embedded systems. In some cases (especially with powerful microprocessors), the power constraints may represent a drawback. Anyway, given the wide availability of microprocessors and microcontrollers, the designer has a high probability of finding a solution that meets the application requirements and mitigates the drawbacks of choosing a general-purpose software-programmable processor.
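The multiply-and-accumulate pattern that distinguishes DSPs from the other processors above can be illustrated with a plain C FIR filter; on a DSP, the inner loop below would typically compile to a single MAC instruction per tap. The function name and coefficients are illustrative, not taken from any particular device:

```c
#include <stddef.h>

/* FIR filter sketch: y[n] = sum_k h[k] * x[n-k].
 * On a DSP, the multiply-and-accumulate in the inner loop maps to
 * one hardware MAC instruction per coefficient, and the x[n-k]
 * access pattern is served by circular addressing. */
float fir_filter(const float *h, size_t taps, const float *x, size_t n)
{
    float acc = 0.0f;
    for (size_t k = 0; k < taps && k <= n; k++)
        acc += h[k] * x[n - k];   /* multiply-and-accumulate step */
    return acc;
}
```

With a two-tap moving-average filter h = {0.5, 0.5}, for instance, each output sample is simply the mean of the two most recent inputs.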
The choice of the processor plays an important role in the performance of the embedded system and significantly affects many other aspects, such as the power consumption, the heat dissipation and the flexibility of the system to be easily upgraded in the future. In addition, the costs and the development time are influenced by this component. By selecting a processor compliant with the application requirements, the designer begins to identify the hardware/software partitioning, which is one of the key points for the success of the final system.
Figure 1.5 shows that specializing the processor according to the application requirements towards customized in-hardware implementations results in better performance and power consumption than using programmable solutions, while it leads to a worsening in terms of flexibility, because such processors cannot, or can hardly, be upgraded once implemented. In other words, the software-based implementations of the application-dependent algorithm are more flexible than the in-hardware ones, but they introduce a general worsening of the other metrics.
Considering instead the costs and the development times as golden metrics, software-based solutions require less time to be implemented and tested than hardware-based solutions, while the development costs differ by several orders of magnitude. The cost of the previous classes of processors ranges from a few euros for microcontrollers, to tens of euros for microprocessors, DSPs and FPGAs, to millions of euros for full-custom silicon processors. Given this gap, especially for in-hardware solutions, FPGA prototyping is often used to assess the system, and it precedes the ASIC implementation once the system is fully working.
1.2.2 Memory
The memory represents another important element of the embedded system architecture, along with the processor. There is a close connection between the processor and the memory in embedded systems, and often the memory amount and type are strongly influenced by the processor. Memory may vary from a few registers in hardware-based solutions to large portions of RAM and non-volatile memory to support a general-purpose processor (software-based solutions). It can be contained in the processor, as happens in microcontrollers, or exist as separate elements connected to the processor (e.g. in microprocessors). In ASIC and FPGA solutions, the designer decides, if necessary, whether to
Figure 1.5: Metrics competition for different implementations of the processor [7]
include the memory in the processor or rely on external memory.
The memory can be classified into two main classes: volatile (i.e. the RAM family and registers) and non-volatile (i.e. the ROM family and other permanent supports). The content of non-volatile memory survives when the power is removed. In general, volatile memory is more expensive than non-volatile, and for this reason embedded systems usually have small amounts of RAM compared to ROM. The memory has in general two important functions within an embedded system: providing non-volatile storage for the program instructions in software-based solutions, and providing volatile storage for data to be processed, intermediate results and status information created throughout the run time of the system. Often, to deal with harsh environments, particular techniques have to be used to protect memories and registers: Triple Modular Redundancy (TMR) and Error Detection and Correction (EDAC).
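The TMR idea can be sketched in a few lines of C: a word is stored in three copies and read back through a bitwise majority vote, so that a single corrupted copy is outvoted by the other two. Real designs triplicate the registers in hardware with a voter circuit, so this software version is only illustrative:

```c
#include <stdint.h>

/* Triple Modular Redundancy, software sketch: keep three copies of a
 * word and recover the value by bitwise majority voting. A bit flip
 * in any single copy is corrected by the other two. */
typedef struct {
    uint32_t copy[3];
} tmr_word_t;

void tmr_write(tmr_word_t *w, uint32_t value)
{
    w->copy[0] = w->copy[1] = w->copy[2] = value;
}

uint32_t tmr_read(const tmr_word_t *w)
{
    uint32_t a = w->copy[0], b = w->copy[1], c = w->copy[2];
    return (a & b) | (a & c) | (b & c);  /* bitwise majority vote */
}
```

Note that TMR only masks single-copy upsets; EDAC codes are used instead when storage overhead matters more than voting speed.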
The processor and the memory can be organized following the Harvard architecture or the Von Neumann architecture. The first includes separate memories for data and program, while in the latter data and program occupy different areas of the same memory.
The way the application algorithm is designed is correlated with the presence and the size of the memory within the embedded system. Often the programmable processors are chosen based on the quantity of internal or addressable memory, while in FPGAs a part of the resources is dedicated to memory.
Figure 1.6 shows that the more flexible solutions rely on larger amounts of memory, and that increasing the memory worsens the energy efficiency and the performance, mainly due to the memory access time needed to retrieve program instructions, read data or write the results of computations.
Figure 1.6: Memory requirements for the different processor implementations [8]
1.2.3 Peripherals and interfaces
Peripherals are in general devices that allow a system to communicate with the outside environment. In particular, in embedded systems all components connectable to the system processor, externally via some interface or permanently attached in the internal architecture, are considered peripherals. They expand the communication capabilities of the system.
Three kinds of peripherals exist: input, output and storage. Input peripherals are usually sensors to measure the external environment (e.g. accelerometers, temperature or pressure sensors, etc.), buttons or keypads to allow the end-user to submit information to the system, and so on. The main output peripherals are actuators to return the results of the computation to the environment (e.g. motors, etc.), displays (7-segment, LCD, monitors, etc.) and printers to show information to the end-user, among others. Flash drives, hard disks and the like provide storage capabilities.
Peripherals are connected to the processor through interfaces. The following is a list of some possible interfaces available for input, output and storage peripherals.
• Serial Communication Interfaces (SCI): RS-232, RS-422, RS-485 etc.
• Synchronous Serial Communication Interface: I2C, SPI, SSC and ESSI (Enhanced Synchronous Serial Interface)
• Universal Serial Bus interface (USB)
• Video interfaces: VGA, DVI, HDMI, etc.
• Network interfaces: Ethernet, LonWorks, etc.
• Fieldbus interfaces: CAN-Bus, LIN-Bus, PROFIBUS, etc.
• Discrete IO interfaces: General Purpose Input/Output (GPIO)
• Communication interfaces for harsh environments: SpaceWire, MIL, SpaceFiber, etc.
• Analog to Digital/Digital to Analog (ADC/DAC) interfaces
• Debugging and programming interfaces: JTAG, ISP, ICSP, BDM Port, BITP, and DP9 ports.
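As an illustration of the simplest interface in the list, GPIO pins are usually driven by setting and clearing bits in a memory-mapped register. The sketch below uses an ordinary variable in place of a real register, and the address shown in the comment is purely hypothetical, since the actual mapping is device-specific:

```c
#include <stdint.h>

/* GPIO sketch. On real hardware gpio_out would be a fixed,
 * device-specific register address, for example:
 *   #define GPIO_OUT (*(volatile uint32_t *)0x40020014)  // hypothetical
 * Here an ordinary variable stands in so the code runs anywhere;
 * volatile reflects that hardware may observe every access. */
static volatile uint32_t gpio_out;

void gpio_set(unsigned pin)   { gpio_out |=  (1u << pin); }  /* drive high */
void gpio_clear(unsigned pin) { gpio_out &= ~(1u << pin); }  /* drive low  */
int  gpio_get(unsigned pin)   { return (gpio_out >> pin) & 1u; }
```

The read-modify-write pattern shown here is the common case; some devices additionally provide atomic set/clear registers to avoid it.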
1.2.4 Human-Machine interface
The human-machine interface (HMI) represents the way the interaction between the embedded system and its user occurs. The end-user may want to interact with the system for several reasons: to control or change the running of the system, to receive feedback or to check the system status. These interactions occur through one or more of the peripherals described above (displays, LEDs, buttons, etc.), connected using the interfaces offered by the system. Some systems are not provided with a real HMI because they do not require a direct interaction with the end-user during the run time. Figure 1.7 shows some examples of HMIs of different complexity.
The complexity of the HMI depends on the target function. While for embedded systems dedicated to control or signal processing functions the HMI is not the primary concern (i.e. status information to assess that everything is going well is enough), it is a crucial aspect of those systems that require a heavy interaction with the user during the operations. In such cases, the goal of the designer is to employ an HMI that makes it easy, efficient and user-friendly to operate the system in the way that produces the desired results. This generally means that the user needs to provide minimal input to achieve the desired output, and that the system minimizes undesired outputs.
The complexity of the HMI is strongly correlated with the concept of usability, that is, the measure of how simple the system is to operate. This is another important design metric, especially for consumer embedded systems and particular equipment like aid systems for people with disabilities. Usability issues may determine the total failure of the embedded system, even if all other metrics are perfectly optimized.
1.2.5 Power supply
The embedded system consumes energy to perform the computation, and one of the tasks of the designer is to provide the right power supply to the embedded system. The power required depends on the individual consumption of the internal components, which in turn depends on the performance needed for the assigned task, and so on. According to the power supply, embedded systems can be classified into three categories.
• Battery operated: they are usually portable devices, and in this case one or more batteries are present within the system. The capacity and the duration of the battery are critical points and have to be selected according to the requirements of the application domain. For example, it would be intolerable if the battery of a wireless mouse had to be replaced every day, while it is acceptable to recharge the battery of a smartphone once a day. In applications such as sensor networks, where systems have to run for a long time on small batteries, particular energy saving policies can be implemented (e.g. timed activation, etc.).
• Power line operated: such systems get their energy from the power line through a plug. These systems are usually small to large stand-alone devices like ovens, TVs, etc. Differently from the battery case, there is no problem of duration because the power supply is continuous, but dissipation constraints may arise.
• Powered by the hosting system: this class includes those systems that get the energy required to run from the larger system in which they are installed. They often come as boards and have neither batteries nor a plug. Instead, specific connections in the interface with the hosting environment allow the system to be supplied with the required energy.
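The timed-activation policy mentioned for battery-operated sensor nodes can be sketched as a duty-cycled loop: the node wakes, samples, and spends the rest of each period in a low-power state. The `read_sensor` and `low_power_sleep` functions below are placeholders for platform-specific calls (an ADC read and the MCU sleep mode), not a real API:

```c
/* Placeholder hooks: on a real node these would be an ADC read and
 * the microcontroller's low-power sleep mode. */
static int  read_sensor(void)            { return 21; /* fake reading */ }
static void low_power_sleep(unsigned ms) { (void)ms;  /* enter sleep  */ }

/* Timed activation: a short active phase to sample, then a long
 * inactive phase that dominates the period and saves energy.
 * Returns the number of samples taken. */
unsigned duty_cycle_run(unsigned cycles, unsigned period_ms)
{
    unsigned samples = 0;
    for (unsigned i = 0; i < cycles; i++) {
        (void)read_sensor();        /* brief wake-up: acquire data  */
        samples++;
        low_power_sleep(period_ms); /* sleep for the rest of period */
    }
    return samples;
}
```

The average current draw scales roughly with the ratio of active time to period, which is why such policies can stretch a small battery over years.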
1.2.6 Dedicated hardware
Sometimes complex and challenging applications are implemented with software-based solutions, simply by selecting high-performance processors, for cost reasons that do not permit a full-custom or semi-custom hardware design. Anyway, it is possible that some operations recur often or require specific attention to match the performance requirements of the target application. In such cases, if an economic margin exists, it is possible to delegate specific parts of the computation of the algorithm to application-specific hardware blocks designed ad hoc using small FPGAs or other integrated circuits. Those accelerators are attached to the main processor, via PCIe, I2C, USB, etc., to quickly execute certain key computational functions.
Hardware accelerators allow greater throughput for those operations that would otherwise be a bottleneck for the entire embedded system. They provide performance increases for applications with computational functions that spend a great deal of time in a small section of code (e.g. FFT, encoding, etc.). Accelerators also provide critical speedups for low-latency I/O functions. The typical connection is shown in Figure 1.8.
Figure 1.8: Example of hardware accelerators mounted in the I/O space of the processor
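From the software side, such an accelerator commonly appears as a small set of memory-mapped registers: the driver writes the operands, sets a start bit, polls a status flag and reads back the result. The register layout, the multiply operation and the in-software hardware model below are all hypothetical, chosen only to keep the sketch self-contained and runnable:

```c
#include <stdint.h>

/* Register block of a hypothetical accelerator. On real hardware this
 * struct would be mapped at a fixed bus address (e.g. in PCIe BAR or
 * I/O space); here it is an ordinary struct so the sketch can run. */
typedef struct {
    volatile uint32_t operand_a;
    volatile uint32_t operand_b;
    volatile uint32_t control;   /* bit 0: start */
    volatile uint32_t status;    /* bit 0: done  */
    volatile uint32_t result;
} accel_regs_t;

/* Stand-in for the hardware itself: this toy "accelerator" multiplies. */
static void accel_hw_model(accel_regs_t *r)
{
    if (r->control & 1u) {
        r->result = r->operand_a * r->operand_b;
        r->status |= 1u;     /* raise the done flag  */
        r->control &= ~1u;   /* clear the start bit  */
    }
}

/* Driver side: write operands, ring start, poll for done, read back. */
uint32_t accel_multiply(accel_regs_t *r, uint32_t a, uint32_t b)
{
    r->operand_a = a;
    r->operand_b = b;
    r->status  = 0;
    r->control |= 1u;               /* start the operation           */
    while (!(r->status & 1u))       /* poll the done flag            */
        accel_hw_model(r);          /* real code would spin or sleep
                                       until an interrupt fires      */
    return r->result;
}
```

Polling as shown suits very short operations; for longer ones the accelerator would typically raise an interrupt instead, freeing the processor in the meantime.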
1.2.7 Application software and Operating systems
In software-based embedded systems, the application software is specific to the dedicated function. It implements control laws, finite state machines and signal processing algorithms with application-specific human-machine interfaces, instead of general applications like word processors or spreadsheets.
The application software is the executable implementation of the algorithms dedicated to solving the specific problem. The software designer uses specific CAD tools (i.e. compilers, assemblers, programmers, debuggers, simulators), often provided by the processor manufacturer, to code the algorithms in a supported programming language and make them runnable. On the other hand, in hardware-based solutions there is no application software, and the system designer uses CAD tools to express the algorithms in a form that is synthesizable on hardware resources.
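A minimal example of the control-oriented application code described here is a finite state machine. The two-state thermostat below is entirely hypothetical (states, thresholds and function name are invented for illustration), but it has the typical shape of firmware control logic: a periodic step function that maps the current state and inputs to the next state:

```c
/* Hypothetical thermostat firmware: a two-state machine with
 * hysteresis, called periodically by the main loop or a timer. */
typedef enum { HEATER_OFF, HEATER_ON } heater_state_t;

/* One control step: takes the current state and the measured
 * temperature in tenths of a degree, returns the next state. */
heater_state_t heater_step(heater_state_t s, int temp_decidegrees)
{
    switch (s) {
    case HEATER_OFF:
        if (temp_decidegrees < 195)   /* below 19.5 C: switch on  */
            s = HEATER_ON;
        break;
    case HEATER_ON:
        if (temp_decidegrees > 205)   /* above 20.5 C: switch off */
            s = HEATER_OFF;
        break;
    }
    return s;
}
```

The gap between the two thresholds (hysteresis) prevents the output from chattering when the temperature hovers around the setpoint.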
Figure 1.9 shows the evolution of some areas of embedded systems in terms of software size over time. The growth rate has accelerated in recent years, as the diffusion of embedded systems has also grown.
The usual programming languages range from processor-specific assembly, which allows fast and optimized programs but has a high development time, to C/C++, which is widely used for its efficiency close to assembly and requires less development time, to high-level languages like Java or Python, which are portable and allow a further reduction of the development time while producing slower programs.
Figure 1.9: The increasing complexity of the embedded software in different applications [9]

In the simplest cases the application runs directly on the hardware resources of the processor, as happens for example in DSPs or microcontrollers; it is then called firmware. When the complexity of the application requires microprocessors, several peripherals, etc., the application program often leverages an embedded operating system (eOS). Those dedicated operating systems are software environments that provide a layer between the application software and the hardware resources, which improves for example the resource scheduling and allocation, the memory management, the I/O operations and so on. Particularly interesting are the real-time operating systems, which, in addition to the classical functions of a normal eOS, guarantee that specific computations are executed within specified time deadlines. For example, they are used in critical control applications like flight control, etc.
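The deadline-oriented scheduling just mentioned often surfaces to the programmer as a periodic task skeleton. The POSIX-style sketch below is one possible approximation on a hosted system, not a real RTOS API: the key idea it shows is sleeping until an absolute release time, so that the period does not drift with the job's own execution time:

```c
#define _POSIX_C_SOURCE 200112L
#include <time.h>

/* Advance a timespec by a number of milliseconds. */
static void timespec_add_ms(struct timespec *t, long ms)
{
    t->tv_nsec += ms * 1000000L;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec  += 1;
    }
}

static int tick_count = 0;
static void tick(void) { tick_count++; }  /* stand-in for the real job */

/* Run `job` every period_ms milliseconds for `iterations` cycles,
 * using absolute wake-up times to avoid cumulative drift. */
int run_periodic(void (*job)(void), long period_ms, int iterations)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < iterations; i++) {
        job();                              /* the time-critical work   */
        timespec_add_ms(&next, period_ms);  /* next absolute release    */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return iterations;
}
```

A real RTOS adds what this sketch cannot: priority-based preemption and an admission analysis that proves every deadline is met even in the worst case.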
1.3 The design flow
In general, a design flow is a series of steps to be followed during the design of a system, which allow going from the initial requirements, coming from the analysis of the problem, to the tested implementation of the final system. Some of the steps can be performed by tools or can be automated exploiting other systems; other steps must be carried out directly by the designer. Very often the design steps involve a group of designers organized in a team, in which they collaborate to accomplish the common goal established in the particular design step.
Figure 1.10 shows the classical waterfall design methodology. This top-down design model is largely one-way and goes from higher levels of abstraction to more detailed design steps, even if bottom-up feedback is present to check the correctness of the design work. Going ahead through the steps it is possible to achieve the desired system starting from the initial idea of its functionality, while backward jumps to previous phases are needed in case of bugs. It is important to note that a bug found in the early phases is cheaper, in terms of time, effort and money, to fix than the same bug found later on in the process. The evolution of the waterfall model is the successive refinement model, which is based on several iterations of the basic model, or of some of its steps, in order to refine the system more and more towards the final realization.
The main stages of the waterfall model are:
• Requirements: the first step of the process includes the analysis of the problem and its constraints in order to figure out what the system is required to do and what the expected customers want from the system to consider it a success. Requirements represent an informal description of the system features and behaviour.
• Specification: a more precise and formal description of the system requirements, containing enough information to begin the design of the system architecture. Often a specification comes from a single requirement, but some specifications may result from the combination of several requirements. The specification is the contact point between the customers and the designer.
• Architecture design: the purpose of this stage is to produce an abstract block diagram that satisfies the specifications, representing the system as a set of interacting blocks without specifying implementation details (e.g. the kind of processor, what will be done in software and what in hardware, what will be done by special-purpose hardware, etc.).
• Structure design: this step refines the block diagram produced in the previous step into a more detailed structural diagram. It includes the decision about the hardware/software partitioning (i.e. what will be realized in hardware and what in software), the choice of hardware resources, and a schematic representation of the algorithms, independent of the actual implementation that will be done in the next step. Data formats, division of functionality into modules and algorithm selection are all part of this stage.
• Implementation: the actual implementation phase. The designer builds, develops or simply acquires the required components, both hardware and software, to build a system that reflects the structural model.
• Integration and testing: all the developed parts are tested and assembled to form the final system. System-level testing procedures, certifications and other operations required before the system reaches the end users are performed in this stage.
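To make the hardware/software partitioning decision of the structure design stage concrete, consider a function such as a CRC that can be mapped either to a software routine or to a dedicated peripheral. A minimal sketch follows; the `USE_HW_CRC` compile-time switch and the peripheral register address are hypothetical placeholders, not taken from any real device:

```c
/* Illustrative hardware/software partitioning choice for a CRC-8 computation
 * (polynomial 0x07). The compile-time switch and the memory-mapped register
 * address are hypothetical. */
#include <stddef.h>
#include <stdint.h>

#ifdef USE_HW_CRC
/* Hardware path: feed bytes to a memory-mapped CRC accelerator. */
#define CRC_DATA_REG (*(volatile uint8_t *)0x40001000u)  /* hypothetical address */
uint8_t crc8(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        CRC_DATA_REG = buf[i];          /* accelerator updates its state */
    return CRC_DATA_REG;                /* read back the result */
}
#else
/* Software path: bitwise CRC-8, portable but slower. */
uint8_t crc8(const uint8_t *buf, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}
#endif
```

The rest of the system calls `crc8()` through the same interface regardless of the partitioning, which is exactly what allows the structure design stage to revisit the decision without touching the other modules.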
1.3.1 The Hardware/Software co-design flow
Embedded computing systems are particular computing systems that pose numerous design challenges [10]. Like many other computing systems, embedded systems are made of hardware and software parts. However, embedded systems have a peculiar feature: since they commonly have a more narrowly defined purpose, designers benefit from more precise domain information, which allows the design of architectures and systems truly optimized for the assigned task. The realization of optimized embedded systems requires hardware as well as software design and development. In this respect, a co-design approach that includes paradigms from hardware and software design in an integrated fashion achieves better results and maximizes the target design metrics.
Figure 1.11 shows the hardware/software co-design methodology for combined projects in which part of the computation is realized by software and part by hardware resources. With respect to the classical waterfall model, the front-end activities (i.e. requirements, specification and architectural design) remain unchanged, because they refer to an abstract and global view of the system, considering simultaneously both hardware and software aspects. Similarly, the back-end integration and testing stages consider the entire system and are the same as in the traditional waterfall model. In the middle, the key point of the co-design flow is the structural design, where the required computation is split and assigned to different hardware or software modules, the interfaces among the different modules are defined and the specific system resources are chosen. Consequently, all hardware and software components are designed (or assembled) and implemented according to the structural design, and then tested independently before the system integration stage.
It is clear that the opportunity to optimize the final embedded system and to meet all the design metrics passes through a good hardware/software partitioning and the selection of the appropriate technology in the structural design stage. For this reason, it is important to have a multidisciplinary design team, combining hardware, software and control-theory know-how. Moreover, the possibility to develop hardware and software modules in parallel within the team reduces the required development time and enables a strong optimization of those resources and their interactions.
The following chapters describe the results of the research activity resulting from the application of the hardware/software co-design strategy to complex and challenging embedded systems, where reliability, usability and performance are the key metrics.
An interactive gateway for telemedicine applications
Telemedicine consists of the use of information and communication technologies (ICT) to deliver health services to remote patients and to facilitate the information exchange among patients, primary care physicians and specialists at a distance from each other [11]. This healthcare model makes it possible to overcome geographic, time and social barriers. Telemedicine allows real-time, automatic and remote monitoring of vital signs, emergencies and lifestyle changes over time, in order to manage the risks associated with the independent living of chronic patients and vulnerable people at their own domicile. In this way it is possible to promptly recognize, act on and sometimes prevent dangerous alterations of the health status without resorting to hospitalization. For the group of chronic diseases (e.g. Chronic Heart Failure, Chronic Obstructive Pulmonary Disease (COPD) and diabetes), whose incidence and prevalence are increasing due to the general aging of the population, telemedicine has great potential to reduce management costs and to ensure a better quality of life for the patients.
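The automatic monitoring described above ultimately reduces, on the embedded side, to comparing each acquired vital sign against clinical thresholds and raising an alert on out-of-range values. A minimal sketch follows; the 50–120 bpm heart-rate limits and the function names are illustrative assumptions, not the thresholds of the actual system:

```c
/* Illustrative out-of-range check for a stream of heart-rate samples.
 * The thresholds below are placeholders; real limits are set per patient
 * by the clinician. */
#include <stdio.h>

#define HR_MIN_BPM 50
#define HR_MAX_BPM 120

/* Returns the number of samples outside the allowed range,
 * printing an alert line for each of them. */
int check_heart_rate(const int *samples, int n)
{
    int alerts = 0;
    for (int i = 0; i < n; i++) {
        if (samples[i] < HR_MIN_BPM || samples[i] > HR_MAX_BPM) {
            printf("ALERT: sample %d = %d bpm out of range\n", i, samples[i]);
            alerts++;
        }
    }
    return alerts;
}
```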
The scientific community has shown interest in telemedicine since 1974, when the first paper presenting telemedicine as a new application of communication technology was published. Nevertheless, a series of limiting factors slowed down the diffusion of telemedicine for many years: first of all the cost of the enabling equipment and the complexity of the measurement devices, which were difficult for patients to operate autonomously. In addition, the lack of widespread access to the public network limited the diffusion of telemedicine. The recent technological advances and the research into innovative solutions to provide effective healthcare services have led to a significant decrease in the cost and complexity of telemedicine devices. This is one of the key points for a large-scale deployment of telemedicine, along with the increasing coverage of the Internet connection. Another factor that impacts the deployment of telemedicine is the recent position of clinicians and administrators, who have acknowledged telemedicine as a cost-effective paradigm of intervention for chronic diseases.
From the technological point of view, one of the main enabling factors of telemedicine is the availability of embedded systems for the acquisition, processing and exchange of vital signs and other information related to the status of the patient. Considering the