UNIVERSITÀ DEGLI STUDI DI PISA
Dipartimento di Ingegneria dell’Informazione
Dottorato di Ricerca in Ingegneria dell’Informazione
ING-INF/01 ELETTRONICA
XVIII ciclo, 2003
Tommaso Ramacciotti
___________________________
Supervisors:
Prof. Luca Fanucci
___________________________
(Università di Pisa)
Prof. Roberto Saletti
___________________________
(Università di Pisa)
Contents
1 Introduction ...8
2 Dynamic Reconfiguration on FPGA ...11
2.1 Terminology ...11
2.2 Design Structure ...12
2.3 Reconfiguration Classes ...14
2.4 Scheduling ...15
2.5 Partitioning ...15
2.6 Reconfigurable VHDL Code ...15
2.7 The Reconf Design Flow...16
2.7.1 Front-end tools...18
2.7.2 Back-end tools...22
2.7.3 Modular design step by step ...27
2.8 Design Organization ...34
2.8.1 Directory structure...34
2.8.2 Automated partitioning...36
2.8.3 Dynamic design from scratch...36
3 Design Strategies ...37
3.1 Design Steps ...37
3.2 Manual Design For Reconfiguration ...38
3.2.1 Usage ...38
3.2.2 Design Strategies ...39
3.2.3 Design Rules...39
3.3 Dynamic Module Behaviour ...42
3.4 Possible Limitations ...43
3.5 Performance Issues...43
4 Application Types...45
4.1 Stream-Type Applications...45
4.2 Control-Type Applications ...47
4.3 Thinking Dynamically From The Outset...48
4.4 Existing Designs ...49
6 Extension Of The ATSTK94 Platform...54
6.1 AT94K Fpslic ...54
6.2 Extending The AT94K Power ...55
6.3 Data Sharing between AVR and FPGA ...56
6.4 FPGA Run-Time Reconfiguration...56
6.5 Reconfiguration Function ...58
6.6 Generation of Reconfiguration Bitstreams ...59
6.7 Definition of auxiliary design tools ...59
7 Examples ...62
7.1 An Initial Example...62
7.1.2 Running the example ...73
7.2 State Sharing In Reconfigurable Designs...75
7.2.1 Implementation...77
7.2.2 Sample execution...79
7.3 Saw tooth example...80
7.3.1 Regular macro approach...80
7.3.2 Supermacro approach ...81
7.3.3 Reconfiguration Process...81
7.3.4 Implementation...83
7.3.5 Reconfiguration speed ...83
8 Demonstrator for a space application ...86
8.1 Design consideration for a space application ...86
8.2 Objectives of the Demonstrator...88
8.3 A new design approach...88
8.4 Description of the Demonstrator ...90
8.4.1 Mission And Objective...90
8.4.2 Experiment...91
8.4.3 Temperature...92
8.4.4 Pressure...92
8.4.6 Electric Field...93
8.4.7 Heating Surface Temperature...93
8.5 Expected Benefits achievable with Dynamic reconfiguration...94
8.6 Possible Limitations ...96
9 Application Definition ...97
9.1 Application Basics...97
9.2 Application Detailed Architecture...98
9.2.1 Old Electronic Design ...98
9.2.2 New Electronic Design ...100
9.3 D_FPGA Modules ...102
9.4 D_module scheduling ...105
9.4.1 Timing requirements...107
9.4.2 D_module Pre_fetch...108
9.4.3 Reconfiguration process ...109
9.4.4 C–Code definition (Demonstrator Configuration Controller) ...109
9.5 Use of the project tools...111
9.5.1 Front-End tools...111
9.5.2 Back-end tools...113
9.5.3 Design flow applied...113
9.6 Dynamic Design files ...114
10 Definition of Interface between Simulator and Controller ...116
10.1 Functional architecture ...117
10.2 Communication between Controller and Simulator ...117
10.3 Definition of Simulation Platform...119
11 Results ...121
12 Conclusion ...123
Acronyms And Abbreviations ...127
Bibliography ...129
Figure 1 - An example of the D_Reconfigurable application... 14
Figure 2 - D_FPGA design flow... 16
Figure 3 - Constraints editor, general view... 18
Figure 4 - Detailed View of the partitioner... 19
Figure 5 - Global organization of the partitioning algorithm. ... 21
Figure 6 - The back-end modular design tool - design flow... 24
Figure 7 - Example of the static part of a modular design... 26
Figure 8 - The static part along with one d_module. ... 26
Figure 9 - Creating a reconfigurable macro... 28
Figure 10 - The modular place & route tool. The reconfigurable macro represents both dynamic modules D_M1 and D_M2... 31
Figure 11 - Placement of the first design configuration, unloading the d_modules. ... 32
Figure 12 - Placement of the second configuration, new nets. ... 33
Figure 13 - Structure of a design directory. ... 34
Figure 14 - A typical stream-type application. ... 45
Figure 15 - A stream-type application implemented using dynamic reconfiguration. .... 46
Figure 16 - A typical control-type application... 47
Figure 17 - Implementation-dependent part of the RECONF flow. ... 50
Figure 18 - Bitstream logical organization. ... 52
Figure 19 - AT94K with an external memory. Arrows denote the direction of the data ports... 54
Figure 20 – Reconfiguration Flash memory interface ... 55
Figure 21 - The AT94K-based reconfiguration platform. ... 57
Figure 22 - Reconfiguration routine ... 58
Figure 23 – Context switching... 59
Figure 24 - Bitfile storage format ... 60
Figure 25 - Screen shot of file format converter... 60
Figure 26 - The simple example. ... 63
Figure 27 - The reconfiguration scenario... 64
Figure 29 - Setting up the ‘Open a Dynamic Macro’ dialog for a supermacro... 66
Figure 30 - A sample Context Browser window. ... 67
Figure 31 - Setting up the design directory for a modular design. ... 68
Figure 32 - The Temporal System Planner – initial state. ... 69
Figure 33 - The Temporal System Planner – all design configurations processed... 71
Figure 34 - The Temporal System Planner – the bitstream generation options... 72
Figure 35 - Generating the bitstream files ... 73
Figure 36 - State sharing - design structure. ... 76
Figure 37 - FPSLIC bitstream generation... 78
Figure 38- SCR setting during FPSLIC bitstream generation. ... 79
Figure 39 – Regular macro, design structure... 80
Figure 40 – Super macro, design structure ... 81
Figure 41 - Details of the Reconfiguration Mechanism... 82
Figure 42 Measurement of Reconfiguration speed (scope#1) ... 84
Figure 43 - Measurement of Reconfiguration speed (scope#2)... 85
Figure 44- Probability of SEU intrinsically reduced because of the reduction of exposed area at any given time ... 87
Figure 45- Serial reconfiguration... 90
Figure 46 - Functional block diagram for one cell ... 91
Figure 47 - Functional block diagram of the original GABRIEL experiment set-up... 94
Figure 48 - Interconnections between the system under test (MCM board) and the simulation platform... 97
Figure 49 - Gabriel MCM block diagram ... 98
Figure 50- Architecture of current system (FPGA) ... 100
Figure 51 - General Architecture of Controller Board... 101
Figure 52 - Internal Architecture of the system implemented into the D_FPGA... 103
Figure 53 - Final version of D_FPGA based controller board... 105
Figure 54 - D_modules switching context... 106
Figure 55 – Timing of the dynamic reconfiguration... 106
Figure 56- Error detection and correction... 107
Figure 57 - D_Module scheduling and FPGA area allocation... 108
Figure 58- Super macro use ... 109
Figure 59 - Simplified flow chart of reconfiguration controlling mechanism (AVR implementation) ... 110
Figure 63 - General Architecture of the Demonstrator... 117
Figure 64 - Timing / Handshaking of the communication protocol ... 118
Figure 65 - Control Panel of experiment simulator ... 120
Figure 66 - Hierarchy of LabView Virtual instruments composing the application ... 120
List of Tables
Table 1 - File list for “File Format Converter” application (LabView 6.1)...61
Table 2 - Comparison of different design strategies... 95
Table 3 - AVR C project file list... 110
Table 4 - FET Output file list... 115
Table 5 - BET Output file list ... 115
Table 6 - File list for Simulation Platform application (LabView 6.1) ...119
Nowadays the vast majority of SRAM-based FPGA designs result in the generation of a bitstream that is loaded at start-up into the device configuration memory (whether the storage is SRAM- or Flash-based), stays in place for the whole operating time of the device, and may be completely reloaded only when the system is reset.
At any given time the entire device configuration is present in the configuration memory, and no change of functionality is possible unless a new bitstream is generated and loaded through the usual process.
With dynamic reconfiguration this paradigm no longer holds, and new possibilities open up for the generation of highly flexible and modifiable designs, giving the designer more degrees of freedom in the development of complex and adaptable systems with limited resources.
In fact, dynamic reconfiguration of digital devices is a means to increase the functionality of a design with fixed implementation resources. Although this topic has been researched for almost a decade, only recently has it come into use in industrial applications. This delay has two major causes: a lack of suitable devices and a lack of design tools with a suitable methodology.
In the framework of the European Community funded project RECONF a dedicated design flow was developed in answer to this deficiency.
For clarity, in the remainder of the document a Dynamically Reconfigurable FPGA will be referred to as a D_FPGA. This does not mean that a D_FPGA is a special silicon device much different from an ordinary FPGA: dynamic reconfiguration can be achieved with any device that allows partial access to the configuration memory while the digital system is operating (some Xilinx and Atmel products have this feature).
Dynamic reconfiguration is all about loading and unloading modules (that is, pieces of bitstream performing different tasks) during normal operation of the FPGA, without halting the system. But of course there is much more to it. In the first place one needs to know how to partition a classic design (a VHDL description of a digital system [8]) into smaller pieces that can fit at different times in a given device while guaranteeing the same performance. Some modules will be loaded and unloaded at predefined time intervals, or upon the generation of external or internal triggers (the dynamic modules). Others will need to stay in place at all times because they implement basic, time-invariant tasks of the system (the static part). A dynamic design is therefore composed of a set of loadable/unloadable dynamic modules and a static module. The static module also needs to contain a part that takes care of the loading/unloading mechanism, and a data-management subsystem that is responsible for storing and retrieving internal information (FSM states, temporary registers) and for keeping valid interface values when the driving d_modules are not present in hardware. This part is called the Configuration Controller, and it is generated once all the dynamic modules and the static part have been defined, along with their time-scheduling constraints and their mutual interfaces.
Introduction
The logical steps involved in the generation of a dynamically reconfigurable design, starting from a classic static design, can be divided into two main branches, termed the Front-end flow and the Back-end flow.
The Front-end flow is implemented by a set of tools named the Front-end tools that carry out the following tasks:
• Partitioning of a static design into sub-modules, comprising the dynamic modules and the static part.
• Scheduling, based on user-defined constraints, which defines the events (external or internal signals and time delays) that trigger the load/unload of the modules.
• Generation of the Configuration Controller (either a HW version described in VHDL to be implemented in the static part, or a SW version coded in ANSI C to be run on an external microprocessor), which generates the triggers for reconfiguration and accesses the external bit-file memory where the dynamic modules reside.
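As a rough illustration of the SW variant of the Configuration Controller, a time-triggered scheduling step might look like the sketch below. All names and the descriptor layout are assumptions for illustration, not the actual RECONF tool output.

```c
#include <stdint.h>

/* Hypothetical descriptor for one dynamic module's bitstream
   (illustrative, not the RECONF file format). */
typedef struct {
    uint32_t offset;    /* byte offset of the partial bitstream in external memory */
    uint32_t size;      /* bitstream size in bytes */
    uint32_t period_ms; /* reload period for time-triggered modules, 0 = event-triggered */
} dmodule_desc;

/* Platform hook; its implementation is device-specific (e.g. AVR
   access to the FPGA configuration port). Placeholder name. */
extern void load_partial_bitstream(uint32_t offset, uint32_t size);

/* One scheduling step: load the first module whose period has elapsed.
   Returns the index of the module just (re)loaded, or -1 if none was due. */
int schedule_step(dmodule_desc *mods, int n, uint32_t *last_load, uint32_t now)
{
    for (int i = 0; i < n; i++) {
        if (mods[i].period_ms != 0 && now - last_load[i] >= mods[i].period_ms) {
            load_partial_bitstream(mods[i].offset, mods[i].size);
            last_load[i] = now;
            return i;
        }
    }
    return -1;
}
```

Event-triggered modules (period 0 here) would instead be serviced from an interrupt or signal-polling path, as discussed later for the AVR implementation.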
The Back-end flow is implemented by a set of tools named the Back-end tools that carry out the following tasks:
• Modular place & route: the basic operation performed by the Modular Place & Route Tool is the placement and routing of modular designs on the FPGA, based on the information included in an associated constraints file. The tool determines the different configurations that the design actually takes in time; these design configurations are in fact the static part of the design together with the maximal sets of dynamic modules that coexist in time.
• Generation of the bitstreams: the initial bitstream for the static part, the bitstreams for each d_module, and the bitstreams that perform the unloading are generated. A file is created containing all the bitstreams for the d_modules (this is stored in the external memory), together with an index file that describes the order of the bitstreams and their sizes.
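The exact container and index formats are tool-defined; purely as an illustration, an index entry and a lookup over it might be modelled as follows (all field names and sizes are assumptions, not the real RECONF layout):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative layout of one index entry describing a partial bitstream
   stored in the external configuration memory. */
typedef struct {
    char     name[16];  /* d_module instance name, zero-padded */
    uint32_t offset;    /* start of the bitstream inside the container file */
    uint32_t size;      /* bitstream length in bytes */
    uint8_t  is_unload; /* 1 if this bitstream clears (unloads) a slot */
} index_entry;

/* Locate a bitstream by module name; returns its entry index or -1. */
int find_bitstream(const index_entry *idx, int count, const char *name)
{
    for (int i = 0; i < count; i++)
        if (strncmp(idx[i].name, name, sizeof idx[i].name) == 0)
            return i;
    return -1;
}
```

The Configuration Controller would use such a lookup to translate a scheduling decision ("load module X now") into an offset/size pair for the external memory access.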
A dynamically reconfigurable design cannot be successfully achieved unless a proper design strategy is adopted from the outset. In fact, reconfiguration adds a new dimension to the design process. All design practices used for classical designs apply to reconfigurable designs, but in addition the designer has to be well aware of exact timing schedules and scenarios. Moreover, the design must be augmented to model the reconfiguration process in simulation, so that the final behaviour can be compared to the corresponding static reference design. A set of rules and design steps is proposed and explained in order to obtain the best results and performance.
The design tools have been developed during the course of the project with a strong focus on practical application scenarios and real cases. The case described in this thesis is the process-control task of an experiment module originally developed by Kayser Italia Srl for a space payload that flew on a Russian rocket. The experiment concerns thermo-fluid dynamics in zero-gravity conditions. Dynamic reconfiguration has been applied to the redesign of the controlling electronics of that system. The experiment had closed-loop controls for the pressure, temperature, and critical heat flux of an experiment chamber, where a platinum wire immersed in a coolant liquid was taken to the boiling point to study the heat-transfer efficiency in zero gravity under an applied high-voltage electric field. The original system, based on an OTP FPGA and a microprocessor, has been redesigned using an ATMEL FPSLIC AT94K, a system-on-chip device composed of an AT40K FPGA (40K gates) and an AVR enhanced-RISC 8-bit microcontroller with SRAM, UARTs, and other common peripherals. The aim was to obtain the same functionalities as the original design, with unchanged or better performance and a reduction of power consumption and board surface (these two parameters being of primary importance for space payloads). The process controls of the experiment (pressure, temperature, and heat flux) run at relatively low speed (hundreds of Hz for the fastest control), which suggests that a dynamic implementation in different time slots can meet the requirements of the system. The FPGA implements, in its static part, functions such as the bus interface, the watchdog timer, the system reset, the AD converter control, and the memory address decoder. The three controls, which are of PID type, are implemented as dynamic modules and are loaded and unloaded sequentially at predefined time intervals. Thanks to the possibility of dynamically reloading a module, the system can be made tolerant to faults. We have explored the possibility of implementing the static part with doubly redundant circuitry and an error-detection system. If an event occurs that corrupts the bitstream (for example a Single Event Upset due to cosmic radiation, which is relatively likely to happen in space), it can be detected by comparing the outputs of the two redundant parts; in that case the whole static part is reloaded from the radiation-hardened external memory where the bitfiles reside. In this scheme the static part is actually a dynamic module that behaves like a real static part as long as no error occurs; for this reason we have named it Pseudo-Static.
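The three regulators are standard PID loops. As a minimal sketch (illustrative, not the thesis' actual implementation or coefficients), one discrete PID step could look like this in C:

```c
/* State of one PID regulator. In the dynamic design this state is exactly
   the internal context the data-management subsystem must save before the
   d_module is unloaded and restore when it is reloaded. */
typedef struct {
    float kp, ki, kd; /* proportional, integral, derivative gains */
    float integral;   /* accumulated error */
    float prev_err;   /* error at the previous sample */
} pid_state;

/* One PID update at sample period dt; returns the actuator command. */
float pid_step(pid_state *s, float setpoint, float measured, float dt)
{
    float err = setpoint - measured;
    s->integral += err * dt;
    float deriv = (err - s->prev_err) / dt;
    s->prev_err = err;
    return s->kp * err + s->ki * s->integral + s->kd * deriv;
}
```

Because the controls run at hundreds of Hz at most, each PID module only needs to occupy the FPGA for a small fraction of its sample period, which is what makes the sequential load/unload scheme viable.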
The Configuration Controller for this application has been implemented in the AVR microcontroller embedded in the AT94K device. The AVR microcontroller shares some data lines with the FPGA and can receive interrupts from it. When it is time to reload a module, the external memory is accessed by the microcontroller, which has special functions to access the configuration memory of the FPGA core for reconfiguration.
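Combining the fault-tolerance scheme with the AVR-resident controller, the error-triggered reload of the Pseudo-Static part can be sketched as follows. Both hook functions are hypothetical placeholders for the device-specific comparison logic and configuration-memory access, not real AT94K API names.

```c
#include <stdint.h>

/* Device-specific hooks (placeholders): the first reads the comparator
   that checks the two redundant copies of the static logic, the second
   rewrites the pseudo-static bitstream from the external flash. */
extern uint8_t redundant_mismatch(void);
extern void    reload_pseudo_static(void);

/* Called periodically, or from an FPGA interrupt: if the duplicated
   static logic disagrees, assume an SEU and reload the pseudo-static
   part from the radiation-hardened external memory. */
int check_and_scrub(void)
{
    if (redundant_mismatch()) {
        reload_pseudo_static();
        return 1; /* reload performed */
    }
    return 0;
}
```

The same polling/interrupt path that services the time-scheduled PID modules can invoke this check, so fault recovery reuses the ordinary reconfiguration machinery.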
In order to assess the performance of the redesigned system implemented with a D_FPGA, a simulator has been developed to replace the real experiment module. The simulator is based on a LabView® application and a data-acquisition board that communicates through a dedicated analogue and digital bus with the D_FPGA board. The software implements second-order systems that emulate the behaviour of the physical parameters (temperature, pressure, etc.) inside the experiment chamber. From the simulator's graphical user interface the user can inject disturbances into the control loops and observe the system while it recovers its steady state.
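A second-order emulation of this kind can be sketched as a forward-Euler integration of a damped second-order system; the state layout, coefficients, and step size below are illustrative, not those of the actual LabView application:

```c
/* Discrete second-order plant: y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u,
   integrated with a simple forward-Euler step of size dt. */
typedef struct {
    float y;    /* output, e.g. chamber temperature */
    float v;    /* first derivative of the output */
    float wn;   /* natural frequency (rad/s) */
    float zeta; /* damping ratio */
} plant2;

float plant_step(plant2 *p, float u, float dt)
{
    float acc = p->wn * p->wn * (u - p->y) - 2.0f * p->zeta * p->wn * p->v;
    p->v += acc * dt;
    p->y += p->v * dt;
    return p->y;
}
```

Injecting a disturbance then amounts to perturbing `y` or `u` from the GUI and watching the closed loop drive the plant back to its setpoint.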
Further laboratory measurements are carried out to estimate the power consumption and the reload time of the dynamic modules, in order to assess the technology also with respect to the original design based on a non-reconfigurable FPGA.
A summary of the main results, with considerations on the advantages and drawbacks of dynamic reconfiguration, will be presented in the conclusions.
Dynamic Reconfiguration on FPGA
Dynamic reconfiguration of digital devices is a means to increase the functionality of a design with fixed implementation resources. Although this topic has been researched for almost a decade, only recently has it come into use in industrial applications. This delay has two major causes: a lack of suitable devices and a lack of design tools with a suitable methodology.
The RECONF design flow was created in answer to this deficiency. The basic objective was to create design tools for dynamic reconfiguration that would make the reconfiguration issues transparent to the designer (or at least minimize the required extra design effort).
This part of the document provides basic design guidelines that should help a designer in implementing dynamically reconfigurable designs on field-programmable gate arrays (FPGAs) using the RECONF design flow. Its focus on one particular class of devices does not reduce its applicability; the presented topics are relevant to any digital reconfigurable circuits.
2.1 Terminology
This section defines basic terms connected with dynamic reconfiguration used in this document.
The dynamic reconfigurable FPGA (D_FPGA) is a field-programmable gate array with a capability to change the behaviour of one part of its logic infrastructure while the rest is running. From now on we will assume D_FPGAs when talking about target devices or target technology.
The static part is the part of an input design that is active during the whole application runtime. It is placed in the “static” area of a target device, which is kept intact all the time. In addition to its standard function it has to provide the infrastructure to load and unload the other (dynamic) parts of the design, namely system scheduling, data management, and interface management.
The dynamic parts (dynamic modules, dmodules, d_modules, etc.) are independent parts of the input design that need not be active during the whole application runtime. They share common areas (slots) inside a target device; this is based on the assumption that they are not required to run at the same time in parallel. They are loaded to and unloaded from a target device as requested by the system scheduler.
The supermacro (smacro) is a special dynamic macro that represents a set of dynamic modules that must be declared with the same interface, and that are not required to function at the same time. These two facts enable the back-end tools to use more efficient routing algorithms for generation of the partial bitstreams.
The system scheduler ensures exact load/unload timing of all d_modules so as to guarantee proper function of the application. Its behaviour is based on results generated by the system scheduling process in the implementation design flow. An automated partitioning tool can automatically generate a controller with the corresponding behaviour according to a design constraint file (DCF) specified by a designer. The system scheduler can be placed in the static part of a design or it can be implemented externally using a C program in a micro-controller.
The load and unload logic ensures that all d_modules required for a proper operation are placed and/or removed from the D_FPGA. The system scheduler controls the operation of this logic.
Data management takes care of internal states of d_modules. It saves all marked signals before a module is unloaded and it restores them whenever the corresponding d_module is loaded again. The designer must specify these signals in the constraint file.
Interface management handles interfacing between different dynamic parts and the static part. It holds the last values of interface lines when the dynamic modules that generated them are removed. The designer must specify these interface lines in the constraint file.
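Taken together, data and interface management amount to a shadow register bank held in the static part. A minimal software model of the save/restore behaviour is sketched below; the real implementation is VHDL generated by the tools, and `CTX_WORDS` is an assumed context size:

```c
#include <stdint.h>

#define CTX_WORDS 8 /* number of marked state signals per d_module (assumed) */

/* Shadow storage in the static part for one d_module's marked signals. */
typedef struct {
    uint16_t saved[CTX_WORDS];
    uint8_t  valid; /* set once a context has been captured */
} dmodule_ctx;

/* Called just before unloading: capture the marked signals. */
void ctx_save(dmodule_ctx *c, const uint16_t *live)
{
    for (int i = 0; i < CTX_WORDS; i++)
        c->saved[i] = live[i];
    c->valid = 1;
}

/* Called just after reloading: restore the signals if a context exists.
   Returns 1 on restore, 0 if the module is being loaded for the first time. */
int ctx_restore(const dmodule_ctx *c, uint16_t *live)
{
    if (!c->valid)
        return 0;
    for (int i = 0; i < CTX_WORDS; i++)
        live[i] = c->saved[i];
    return 1;
}
```

Interface management is the same idea applied to the module's output lines: the last driven values are latched in the static part so downstream logic keeps seeing them while the module is absent.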
2.2 Design Structure
A design based on dynamic reconfiguration can be conceptually structured into two basic classes: one class is present in hardware during the whole application runtime, whereas only small parts of the other class are loaded in and unloaded from hardware on demand during the runtime. The first class is usually referred to as the static part and the other as the dynamic part. Since it is more convenient to consider all subparts of the dynamic part as self-contained modules, they will be individually referred to as dynamic modules or d_modules.
In the approach described in this document the static part can contain a (re)configuration controller (CC) that is responsible for the proper loading/unloading of d_modules (the CC can also be implemented in an external microcontroller). The static part moreover contains a data-management subsystem that is responsible for storing and retrieving internal information (FSM states) and keeping valid interface values when the driving d_modules are not present in hardware.
The dynamic modules are the actual functional units that perform the majority of the computation. Each d_module is identified by its unique instance name and described by the VHDL (EDIF) code of the corresponding entity (macro). The loading/unloading of a d_module is triggered by user-specified dynamic constraints. The constraints can be divided into those based on specific time intervals and those derived from external events. The first group can be scheduled at compile time, but the second group is the problematic one; it is the task of the designer to decide whether the modules from the second group will really be implemented as reconfigurable hardware or whether they will be included in the static part.
A special kind of d_module is the super-macro, which is in fact a special d_module wrapper that can be implemented in a very efficient way. The super-macro concept reduces the design size by avoiding the logic and routing that would be required if all the dynamic modules contained in the super-macro were to be put in the FPGA individually. If the super-macro is present in all design configurations, then the content of the super-macro does not depend on any other dynamic modules loaded in the application. On the other hand, all dynamic modules that can be placed in one super-macro must be declared with the same interface.
The supermacro concept represents a special design structure that cannot be found in conventional designs. It can replace a set of design units with the same (or nearly the same) interface when the design implementation requires data calculated by only one unit at a time. Unfortunately, such situations are very hard to detect with an automated design partitioner; it is generally hard to find matching ports without further knowledge of the application function. A typical example of such a situation is image filtering, where one filter from a set of filters is applied to the image data to obtain the requested result.
A d_module can have inputs and outputs connected to other d_modules or to the static part (including pads). d_modules can be hierarchical; they can include other d_modules with different dynamic constraints. This allows the use of incremental design techniques and easy integration of two reconfigurable designs with their associated constraints in one D_FPGA.
Classical top-down hierarchy rules must apply to d_module constraints. A constraint applied to a hierarchical d_module is also applied to all its internal modules, but a constraint applied to a given d_module is not applicable to the upper levels of the hierarchy. Figure 1 shows an organization of a dynamically reconfigurable design. The depicted application consists of a static part (the blue rectangle) and two d_modules (the yellow rectangles). Note that the spatial arrangement of the shown design is very simple. It does not exactly describe the detailed hardware implementation, but it is sufficient to explain the methodology.
The static part can include the configuration controller and logic required for data and interface management. All inputs/outputs of the application are managed by the static part that communicates with d_modules through a fixed interface.
Several prepared design partitions can be plugged into a single d_module. Any partition must include a union of ports that appear in all partitions that belong to a particular d_module. There is no need to use all ports in each partition (see D_MODULE 1).
The interconnection between d_modules and the static part is set at compile time and cannot be modified during the application runtime. The connections inside d_modules can be modified by means of reconfiguration.
Figure 1 - An example of the D_Reconfigurable application
2.3 Reconfiguration Classes
From the application point of view three classes of dynamic reconfiguration can be identified.
The first class is based on switching between 2 or more d_modules that have the same interface and that share the same area on the FPGA (supermacros). If the sizes of d_modules are different, the biggest one determines the allocated size of the area. Given the fact that they cannot be present at the same time on the FPGA, different d_modules are functionally linked only through their interface.
The second class of dynamic reconfiguration is based on replacing several d_modules with a bigger one; the replaced d_modules need not be functionally linked.
The third class of dynamic reconfiguration is based on partitioning the functionality of a functional module into many sub-modules and thus implementing the same function more efficiently as a result of an increased functional density of the design. In the RECONF flow this partitioning is done automatically without any explicit link to the use of module scheduling inside the application, but it does not provide a cycle-accurate implementation.
The first class is a subset of the second, but it allows the designer to create explicit links between two or more d_modules to facilitate the partitioning and to optimise the efficiency of the implementation.
2.4 Scheduling
The designer has the possibility to define scheduling constraints for each hierarchical or elementary d_module. The RECONF flow supports specification of the constraints using both a graphical tabular entry and a text-based entry.
The designer can enter scheduling constraints as signal events, such as “after signal s rises”, “until signal s falls”, or “starting when s1 and s2 are active and finishing when s1 is no longer active”. This case may be limited to the first reconfiguration class, and the designer must check the time consistency resulting from the constraints.
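Evaluated in software, a constraint such as “load after s1 rises, unload when s1 falls” reduces to edge detection on the trigger signal. A minimal sketch, purely illustrative and not RECONF tool output:

```c
#include <stdint.h>

/* Remembered trigger level for one scheduling constraint. */
typedef struct {
    uint8_t prev;
} edge_det;

/* Evaluate a "load after s1 rises, unload when s1 falls" constraint.
   Returns +1 for a load request, -1 for an unload request, 0 otherwise. */
int eval_constraint(edge_det *d, uint8_t s1)
{
    int req = 0;
    if (s1 && !d->prev)
        req = +1; /* rising edge: trigger load */
    if (!s1 && d->prev)
        req = -1; /* falling edge: trigger unload */
    d->prev = s1;
    return req;
}
```

The hardware version of the scheduler implements the same edge detection with a registered copy of the trigger signal inside the static part.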
For each dynamic or static module additional constraints can be entered that determine the final sequencing of the computation inside the module. The designer can specify if the implementation of a d_module must be cycle accurate with respect to the VHDL behavioural description or if automatic partitioning can be used inside the module that will delay output results.
2.5 Partitioning
The partitioning process is the most critical one in the RECONF flow. A partitioning can depend on external (performance, size) or internal (synchronisation reference, etc.) design criteria.
Partition granularity is not only design-dependent, but it can also depend on the targeted device (on its dynamic reconfiguration capabilities).
Data management concerns all storage elements present inside a d_module. The designer must be able to specify whether data generated inside a d_module must be saved and restored at each reconfiguration event or whether it can be lost. In the first case, the RECONF front-end tools implement automatic context swapping, the output being the corresponding supplementary VHDL files. In the second case, storage elements can be shared between d_modules without any restrictions.
For external D_FPGA interface management, the designer has the ability to describe the states of the outputs when d_modules that are normally driving these outputs are removed from the FPGA. Features like pull-up or drive strength must be maintained if desired.
2.6 Reconfigurable VHDL Code
Dynamic VHDL output from the front-end tools is fully compatible with the initial design specification, taking into account the dynamic reconfiguration process. This output can be used for functional simulation and for synthesis.
This output consists of:
• A primary VHDL design description, if possible, without any modification,
• Specific VHDL files for dynamic reconfiguration management created by the front-end tools.
If necessary, the front-end tool can do modifications in the primary files, but these must be clearly defined, identified, and readable. The supplementary VHDL files can be created from the scheduling constraints entered by the designer and if necessary from automatic analysis done by the tool.
These outputs allow the designer to simulate and discover as soon as possible (i.e. before the synthesis process) all inconsistencies between the primary VHDL description, the scheduling constraints entered by the designer, the output of the front-end tools, and the simulation stimuli used to validate the application.
As the front-end tools do not depend on a specific target technology, the dynamic VHDL output does not reflect with accuracy the timing characteristics of the dynamic reconfiguration process inside the hardware; still, some parameters are available for the designer to instruct the front-end tools about the estimated final performance (for example, the reconfiguration times for each d_module).
If necessary, different dynamic VHDL code can be generated for simulation and for synthesis purposes. The VHDL code used in the synthesis process is compatible with the simulation code.
2.7 The Reconf Design Flow
Figure 2 - D_FPGA design flow
Figure 2 represents a simplified view of the RECONF design flow for dynamically reconfigurable FPGAs. Initially a design is entered in the form of its static VHDL description. Since any design entry method is usually converted to a VHDL code, we will assume only VHDL inputs in the following text. To support design reuse the flow supports all common VHDL structures that are compatible with current synthesizers or simulators.
These include generics, packages, hierarchical VHDL constructions, multiple instantiation of the same VHDL entity, and full use of common VHDL library organisation. The input VHDL specification does not need any attributes or directives specific to the D_FPGA design flow, even if they are transparent to a standard design flow (pragmas, VHDL attributes, etc.). All D_FPGA-specific constraints must come from additional input files. Such attributes or directives could, however, be present in the VHDL code automatically generated by the scheduler, as long as they do not require specific features from the RTL synthesiser.
Existing design flows for FPGAs include a variety of specific design entry methods and tools. The presented D_FPGA design flow therefore aims to ensure compatibility with most current design tools, to maximise efficiency in design reuse, and to keep the designers' learning period short.
The RECONF flow accepts standard VHDL inputs, which ensures portability to different targets and the ability to use standard tools for automatic VHDL code generation, graphical capture, simulation, and synthesis. If macros or pre-synthesised black boxes are used in a design, the designer must supply the corresponding behavioural VHDL code for simulation purposes.
The primary VHDL code that is input to the design flow must be suitable and sufficient for a standard FPGA design flow. It must represent the entire functionality of a design within its application domain. If some additional non-VHDL design elements are supplied (e.g. scheduling constraints), they should not modify the external behaviour of the FPGA as long as the FPGA inputs are maintained in its defined functional domain. These supplementary inputs should only maximise the FPGA efficiency or restrict the functional domain.
The primary VHDL code is processed by the main part of the front-end tools - System Scheduling, where the static design is partitioned. The partitioning is based on user-specified constraints. The output of the scheduling process is a set of static VHDL descriptions. Each of them is a subset of the input description of the design, and each will go through the synthesis, placement, and routing processes.
Within the development tools, the designer can easily switch between a dynamically reconfigurable and a standard static FPGA implementation. This choice does not imply any change inside the primary VHDL database, so the designer can clearly distinguish between functional bugs and wrong constraints for dynamic reconfiguration. As can be seen in Figure 2, the blocks in the design flow that are specific to dynamic reconfiguration can easily be bypassed; what then remains is a traditional FPGA design flow. The back-end tools need to be adapted to each device architecture used, but the use of standard input and output descriptions minimizes this effort. Note that the inputs to the back-end tools are the outputs produced by today’s standard synthesis tools.
Both functional and timing simulation can be done using a standard static VHDL simulator [9][4]. Either the description produced after the system scheduling process or the back-annotated one that is based on the synthesis, placement, and routing processes are input to the VHDL post-processing block that provides the necessary additions to account for the dynamic behaviour of the FPGA.
A VITAL VHDL output from the back-end tools is available for post-placement and routing timing simulation. It enables a designer to perform time-accurate simulation for all parts of the design, including mechanisms that implement dynamic reconfiguration and
context swap. Transient states associated with the reconfiguration are modelled, but they may not always be time-accurate. If a transient appears at the input of an active module while this module is not supposed to use this input, the transient is modelled to verify the immunity of the active module.
2.7.1 Front-end tools
The front-end tools form the basic interface between the designer and the RECONF flow. They take an input design specified in terms of a synthesizable, static VHDL code and transform it according to the above description. The front-end tools important to the designer are the constraints editor, partitioner, and system scheduler.
2.7.1.1 Constraints editor
Figure 3 - Constraints editor, general view
The final goal of the constraints editor is to partition the VHDL description according to the specified constraints, and for that several tasks have to be performed. Figure 3 shows a general view of the constraints editor with its main parts. In this figure shaded boxes represent tasks, while the other boxes represent some form of data: inputs, outputs or intermediate results. The external inputs to the constraints editor are a static VHDL description, and a set of constraints that will be applied to it defined in a specific format referred to as DCF, for Dynamic Constraints Format. The outputs generated by the constraints editor are a set of VHDL descriptions and a log file.
Input files
The static description accepted by the constraints editor is a subset of VHDL as defined in IEEE standard 1076.6 [1] plus extensions that cover other constructs accepted by the most widely used synthesis tools. A restriction imposed on the input description by the
constraints editor is that for all libraries used within the input VHDL description, except for those defined by the IEEE, the source files must be made available.
The constraints file contains the definition of d_modules and the conditions triggering their loading and unloading, as well as context data saving and interface management information. The purpose of the optional input DCF file is to let a designer supply hints that will help the partitioner.
The commands in the DCF file can be divided into four types:
1. D_module definitions.
2. Definition of the conditions triggering the loading and unloading of d_modules.
3. Specification of signals that need to be saved when a d_module is removed.
4. Specification of interface signals that need to maintain a valid value after a d_module is removed.
More information about the DCF file syntax can be found in [1].
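The real DCF syntax is defined in [1]; the four command types above can nevertheless be illustrated with a small, hypothetical model of their parsed form. All names and fields below are illustrative assumptions, not the actual tool's data structures.

```python
# Hypothetical, simplified model of parsed DCF commands. The real DCF
# syntax is defined in [1]; names and fields here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class DModuleConstraints:
    name: str                                            # type 1: d_module definition
    load_condition: str = ""                             # type 2: trigger expressions
    unload_condition: str = ""
    saved_signals: list = field(default_factory=list)    # type 3: context data to save
    held_interfaces: list = field(default_factory=list)  # type 4: interfaces to hold

def signals_to_preserve(c: DModuleConstraints) -> set:
    """Signals the static part must register or latch when the module is removed."""
    return set(c.saved_signals) | set(c.held_interfaces)

filt = DModuleConstraints(
    name="d_filter",
    load_condition="start = '1'",
    unload_condition="done = '1'",
    saved_signals=["acc_reg"],
    held_interfaces=["data_out"],
)
print(sorted(signals_to_preserve(filt)))  # signals needing static-part support
```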
VHDL processing
The task of the VHDL parser is to process the input files that represent the VHDL description and to convert it into an internal representation format (IRF). An important assumption at this stage is that the VHDL code is correct. Neither syntactic nor semantic checks are performed at this stage.
The task of the partitioner is to apply the commands identified by the DCF parser to the IRF, which represents the input description. It can be decomposed into several subtasks as shown in Figure 4. Each of the subtasks modifies the IRF until the final result is achieved.
Figure 4 - Detailed View of the partitioner
The first subtask is in charge of partitioning the input description into the d_modules defined in the DCF file. The static part of the design is defined implicitly: any part of the description not included in a d_module belongs to the static part. In general the input/output interface of the original description is not modified, because what usually happens is that one or more of its processes are extracted as d_modules.
Data saving constraints are then implemented. The method chosen to achieve this goal is to define registers in the static part of the design that hold the value of the desired signals when d_modules are no longer active or present.
The approach followed for interface management constraints is to define latches in the static part of the design. These latches are in transparent mode when the respective d_modules are present on the FPGA, and in retention mode when they are not present or active.
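The transparent/retention behaviour of such an interface latch can be sketched behaviourally; the flow actually generates VHDL for this, so the Python class below is only an illustrative model.

```python
# Behavioural sketch of an interface latch: transparent while the
# d_module is present, retaining the last value otherwise.
# (Illustrative model only; the real flow generates VHDL for this.)
class InterfaceLatch:
    def __init__(self, initial=0):
        self.stored = initial

    def output(self, module_present: bool, module_value):
        if module_present:            # transparent mode: pass and capture
            self.stored = module_value
        return self.stored            # retention mode: hold last value

latch = InterfaceLatch()
print(latch.output(True, 7))    # module active: value passes through -> 7
print(latch.output(False, 99))  # module removed: last valid value held -> 7
```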
Next, the VHDL structure has to be completed for each d_module and the static part to reflect all the modifications done.
Finally, the IRF has to be converted back to text format, and the different output VHDL descriptions written.
Outputs
Two different outputs are generated: VHDL output files and a log file. The VHDL output files correspond to the different d_modules and are ready to be processed by the back-end tools. The log file contains several types of information: details of any modifications made, warning messages, and error messages.
2.7.1.2 Partitioner
The Vpart partitioner tool (see Figure 5) automatically analyses a static design and creates a set of d_modules that can be implemented in different time windows in hardware. This is accomplished by performing a partitioning process based on the original VHDL description provided. The partitioning process can be guided by supplying a constraints file prepared by the designer.
An efficient implementation of d_modules in hardware guarantees that the generated hardware contexts are time-independent, so that the global execution of the system requires just one execution cycle of these contexts. Apart from this constraint, the partitioning technique keeps the cut size of the partitioning result as small as possible. This requirement is imposed by the need to minimise the buffer resources required to communicate signals between hardware contexts. Another goal of the partitioning technique is to balance the sizes of the resulting hardware contexts, which improves the final functional density of the implementation.
The designer can interact with this tool during the automatic partitioning process and help it optimise the final results. This interaction can provide accurate estimations of the area occupied by the resources implemented in each hardware context. The area estimation that drives the partitioning process is based on a high-level description of the system functionality and therefore cannot be very accurate. However, this rough estimation can be improved using a technology-dependent database, and the results improve further if the user supplies exact data concerning the area required to implement each d_module.
Figure 5 - Global organization of the partitioning algorithm.
The partitioning algorithm
As can be seen in Figure 5, the first part of the flow is driven by the concurrent execution of two independent processes. The first one determines the time dependencies between the code elements that constitute the static VHDL description of the system to be partitioned. The main goal of this step is to build a representation based on a directed hypergraph, which constitutes the base for the partitioning algorithm. The vertices of this graph correspond to code elements; the edges correspond to the signals that connect the elements.
At the same time another process tries to determine the size (i.e., the number of physical resources) required to implement these code elements. If the size is determined by extracting information only from the VHDL input code, the estimation is not very accurate. An external, technology-dependent database is provided so as to improve the estimation results.
Once both processes are completed the partitioning takes place, finally producing partial VHDL descriptions that correspond to the d_modules to be implemented in hardware. A log file records information about the execution of the partitioning tool (e.g., files opened, location of the files and results, ...).
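The two partitioning objectives named above (small cut size, balanced context sizes) can be illustrated with a toy model of the graph representation. The element names, sizes, and signals below are invented for the example; the real tool works on the IRF, not on these structures.

```python
# Toy illustration of the two partitioning objectives: minimise the cut
# size (signals crossing context boundaries, which need buffering) and
# balance the context sizes. All names and sizes are invented.
def cut_size(edges, partition):
    """edges: (src, dst, signal) tuples; partition: element -> context id."""
    return sum(1 for s, d, _ in edges if partition[s] != partition[d])

def context_sizes(sizes, partition):
    """Total estimated resources per hardware context."""
    totals = {}
    for elem, ctx in partition.items():
        totals[ctx] = totals.get(ctx, 0) + sizes[elem]
    return totals

sizes = {"p1": 40, "p2": 35, "p3": 45}            # estimated resources per element
edges = [("p1", "p2", "s_a"), ("p2", "p3", "s_b"), ("p1", "p3", "s_c")]
partition = {"p1": 0, "p2": 0, "p3": 1}           # two hardware contexts
print(cut_size(edges, partition))                 # -> 2 signals need buffering
print(context_sizes(sizes, partition))            # -> {0: 75, 1: 45}
```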
2.7.1.3 Configuration controller generator (CCG)
The configuration controller generator is in charge of generating both hardware and software descriptions of the configuration controller. The configuration controller is in charge of generating signals that control loading and unloading of d_modules as well as data management.
The configuration controller has two basic running modes: an initialization mode and a run-time mode. During the initialization mode, the configuration controller has to load in hardware the static part and some of the generated d_modules. In the run-time mode the configuration controller monitors a set of signals and detects events that trigger loading and unloading of d_modules. On these events the configuration controller proceeds to load/unload d_modules and to manage context data used inside the d_modules and interface signals generated by them.
Configuration files (bitstreams) reside in an external memory; the process of loading a configuration in hardware can also be seen as a memory-write operation, operating with both data and address signals. This means that (un)loading dynamic modules can be seen as a memory management problem. An important consideration for the configuration controller is the organization and management of the memory that contains the configuration files corresponding to the d_modules and the static part of the design. The main task of the configuration controller during run-time is the transfer of data between two memories.
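The run-time mode described above (monitor trigger signals, then transfer the matching bitstream between memories) can be sketched as follows. The memory model, the trigger table, and all names are illustrative assumptions, not the generated controller's actual interface.

```python
# Sketch of the configuration controller's run-time mode: monitor trigger
# signals and, on an event, copy the matching bitstream from external
# configuration memory into (a model of) the FPGA configuration memory.
# All names and the memory model are illustrative assumptions.
def run_time_step(signals, triggers, config_mem, fpga_mem):
    """triggers: signal name -> (module, base_address, length)."""
    for sig, (module, base, length) in triggers.items():
        if signals.get(sig):
            # (un)loading is essentially a memory-to-memory transfer
            fpga_mem[base:base + length] = config_mem[module]
            return module
    return None

config_mem = {"d_fir": [0xA, 0xB], "d_iir": [0xC, 0xD]}
fpga_mem = [0] * 4
loaded = run_time_step({"start_fir": True},
                       {"start_fir": ("d_fir", 0, 2),
                        "start_iir": ("d_iir", 2, 2)},
                       config_mem, fpga_mem)
print(loaded, fpga_mem)   # d_fir loaded into its address window
```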
Inputs
The inputs to the tool are the dynamic constraints format (DCF) file and the configuration files that correspond to the different modules and the static part. The DCF file contains information about the signal combinations triggering the loading of d_modules and about data management. The data management information includes signals that need to be saved or restored, and interface signals that need to maintain or update their values when d_modules become active or inactive. The configuration files are generated by the back-end tools.
Outputs
The configuration controller generator produces two outputs. The first corresponds to the case where the configuration controller is implemented inside an FPGA (VHDL description), the second to the case where the configuration controller is implemented using an external micro-controller (C description).
2.7.2 Back-end tools
The basic operation that the Modular Place & Route Tool performs is the placement and routing of modular designs on the FPGA based on the information included in an associated DCF file. In order to do so the tool determines different configurations that the design actually takes in time. These design configurations will in fact be the static part of the design along with the maximum sets of dynamic modules that will coexist in time (see Section 2.2). In other words, the tool must decode the information in the DCF file in order to form all the different groups of d_modules that coexist in time. When these groups are combined with the static part, they form design configurations. The tool must identify the modular design configurations so that it can determine if the device chosen is large enough to implement the largest design configuration - and perform effective placement of the d_modules.
The DCF file uses clock cycles and signal-related constraints that determine the loading and unloading of the dynamic modules. The tool can easily decode the information about the clock-cycle constraints and determine the points in time when the configuration changes, but this is not the case for signal-related constraints, or when the clock constraints use
more than one signal for synchronizing the clock. For this reason the DCF file supports another type of constraint, the exclusive relation. This type of constraint states which dynamic modules may coexist; this is controlled either by a single signal constraint, or by signals that synchronize the reconfiguration clock with the rest of the dynamic modules the design uses. After processing this type of constraint, the tool is able to dismiss several combinations and identify the possible configurations that the design invokes in time.
2.7.2.1 Tool overview
The primary goal of this tool is to manage the reconfiguration data correctly. The main features of the RECONF back-end tools are:
• Configuration file manager that generates and manages bitstream files
• Generators that generate back-annotated files such as VHDL, SDF (standard delay format) and constraint files
• Report generators
• Graphical user interface that generates and presents the design files
Figure 6 shows the structure of the modular design flow implemented in the back-end tools. The modular place & route tool inputs netlists in the EDIF format, back-end tool constraint files generated during a previous run of the back-end tools (pinout, timing or mapping constraints), and DCF (Dynamic Constraints Format) files generated by the front-end tools. The DCF file includes the information about reconfiguration cycles that the tools need to possess.
Figure 6 - The back-end modular design tool - design flow.
2.7.2.2 Configuration file manager
The main task of the configuration file manager is to generate bitstream files that are used for programming the FPGA, in other words, to implement the whole modular design (all reconfiguration cycles). This task is more complicated than the corresponding one for a regular design because of the existence of different reconfigurable areas on the FPGA. The process steps that the tool follows are described below (note, however, that these steps
assume that all different design configurations have been processed and each d_module has been assigned a fixed location on the FPGA):
1. Unload all d_modules so that only the static part of the design is present on the FPGA.
2. Generate the initial bitstream file for the static part.
3. Load the first d_module at its fixed location while the static part is preserved.
4. Generate the bitstream file for the static part along with the first d_module.
5. Compare the two files; the resulting file is the bitstream file for the first d_module alone, as it contains only the differences (the static part remains unchanged).
6. Unload the first d_module and load the second one.
7. Repeat steps 4 to 6 for each d_module.
8. The bitstreams that implement the unloading of the d_modules are formed by “undoing” the changes made during their loading.
At the end of the process the initial bitstream for the static part, the bitstreams for each d_module and the bitstreams that remove the d_modules are generated. The final step is the management of the bitstream concatenation over the d_module’s bitstreams and the initial bitstream file in order to provide a global bitstream for the whole modular design that will in turn be loaded into the configuration memory of the FPGA.
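The differencing at the heart of this process can be sketched as a frame-wise comparison of two full-device bitstreams: only the frames that changed when the d_module was loaded end up in its partial bitstream, and the "undoing" bitstream is obtained by swapping the roles of the two files. The frame model below (a list of address/word pairs) is a toy assumption, not any vendor's actual bitstream format.

```python
# Toy model of difference-based partial bitstream generation: a bitstream
# is a list of (address, word) frames; the partial bitstream for a
# d_module contains only the frames that differ. (Illustrative format.)
def diff_bitstream(static_only, static_plus_module):
    """Return the frames that differ -> partial bitstream for the module."""
    return [(addr, new) for (addr, old), (_, new)
            in zip(static_only, static_plus_module) if old != new]

static_bs = [(0, 0x00), (1, 0x3C), (2, 0x00)]
with_mod  = [(0, 0x00), (1, 0x3C), (2, 0xF0)]     # module only touches frame 2
load_bs   = diff_bitstream(static_bs, with_mod)   # loads the module
unload_bs = diff_bitstream(with_mod, static_bs)   # "undoes" the changes
print(load_bs, unload_bs)
```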
Figure 7 presents an example of the static part of a modular design (depicted in green); this floorplan gives the initial bitstream file.
Figure 7 - Example of the static part of a modular design.
Figure 8 shows the loading of a d_module along with the static part. If a bitstream is generated for this floorplan and compared to the initial bitstream, a bitstream for the d_module is obtained.
The bitstream that removes the d_module resets the programming actions made during loading of the d_module.
The process performed by the configuration file manager is automated, but it assumes that the user is familiar with the sequence of the bitstream generation process steps.
2.7.2.3 Files for back-annotation
The FPGA reconfiguration tool is responsible for generation and management of all the remaining data that are required in the RECONF design flow and that can be obtained from a well placed and routed modular design. These data are typically used for back-annotation, timing analysis and error reports; these data are generated by all common design tools for regular designs, but need modifications or even additions in order to fully support the characteristics of a modular design. A detailed description of the files that are generated is as follows:
• A back-annotated VITAL VHDL netlist for each module of the modular design as well as for each design configuration
• One SDF (Standard Delay Format) file for each configuration, as well as one SDF file for the global design, to allow the simulator to take both the dynamic and static delays into account
• A bitstream size estimation file that gives information about the size of the concatenated bitstream files
• A floorplan constraint file, generated by the modular floorplanner as feedback to the front-end tools in response to any errors detected during the modular placement & routing of the design
• Report files about the dynamic usage of silicon; this feature helps the designer identify critical areas of the design
The generation of the above files is based on techniques used for regular designs enriched to utilize the new characteristics arising from dynamic reconfiguration.
2.7.3 Modular design step by step
2.7.3.1 Creating the design’s library
The tool reads in a modular design by processing all the necessary netlists. First, the netlists of dynamic modules and regular macros are read so that the modular design's library can be created. Simple macros are created as usual, while dynamic modules are further classified by the tool as swappable (smacro) and non-swappable.
The non-swappable macros are created basically as regular macros by reading the corresponding edif files, but they acquire a new capability - loading and unloading. They also store their fixed location on the FPGA, but that happens in a later stage – after they are placed for the first time.
Swappable macros are logically grouped into reconfigurable supermacros. The tool reads the corresponding DCF file and processes the information it holds; based on this information the tool groups different swappable dynamic modules into new reconfigurable macros. Each reconfigurable macro identified is created from the different d_modules it holds; each d_module forms a view of the reconfigurable macro. The user is allowed to change these d_modules by deleting or adding netlist files, but this requires deep knowledge of the reconfiguration mechanism and careful handling.
Figure 9 illustrates how the designer can create a new reconfigurable macro. First, the tool places and routes each view of the macro (view d_module). When all d_modules are placed and routed successfully, the tool creates a new reconfigurable macro with the size given by its biggest view. The d_modules that are smaller are enlarged using some “dummy” cores, which prevents the resources from being used by the static part. As the final step the created macro is added to the design library.
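The sizing rule just described (the macro takes the size of its biggest view, and smaller views are padded with "dummy" cores) can be sketched simply. The view names and (rows, cols) sizes below are invented for the example.

```python
# Sketch of reconfigurable-macro sizing: the macro's bounding box is the
# maximum over its views, and smaller views are padded with "dummy" area
# up to that size so the static part cannot use it. Sizes are invented.
def macro_bounding_box(views):
    """views: view name -> (rows, cols). Returns a box covering every view."""
    rows = max(r for r, _ in views.values())
    cols = max(c for _, c in views.values())
    return rows, cols

views = {"d_m1": (4, 6), "d_m2": (5, 3)}      # two views of one macro
box = macro_bounding_box(views)
# dummy area each view needs to fill the common bounding box
padding = {v: (box[0] * box[1]) - r * c for v, (r, c) in views.items()}
print(box, padding)   # box covers the biggest extent in each dimension
```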
2.7.3.2 Setting up the design
After the design library is created, the designer can open the design by reading the netlist file that corresponds to the top-level entity of the design. This top-level entity (generated by the front-end tools) includes the static part along with all the d_modules that the design uses instantiated as black boxes. All the d_modules must exist in this top-level design so as to preserve the whole structure of the static part during logic optimisation and mapping, and also to have information about all connections between the d_modules and the static part.
2.7.3.3 Modular mapping
The next phase of the design flow is modular mapping. The mapping tool analyses the static and the dynamic parts of the design with respect to the elements of the vendor library and maps them to the selected architecture. The result of this procedure is a compact and fast design. In this phase the tool identifies any structures that must not be trimmed (like macro interface cores) and excludes them from optimisation.
2.7.3.4 Temporal system planner
The temporal system planner is a sub-module of the place and route tool. Its main task is to organize and implement the reconfiguration process in hardware. This tool decodes the information included in the DCF file and uses it to identify different modular design configurations. This is a pre-processing stage of the modular design and gives the tool an ability to proceed with modular placement and routing.
At the end of its run the temporal system planner creates a new view and presents it to the user. This view holds each design configuration that the tool has identified while the ordering of the configurations is based on their timing properties.
2.7.3.5 Modular floorplanning
Modular floorplanning forms the core of the modular design. This tool uses the information created by the temporal system planner and the modular mapping tool. The modular floorplanner creates a floorplan for each design configuration, which enables the later placement and routing of the whole design. At the end of this phase each d_module and each reconfigurable macro is assigned a specific location on the FPGA, while the static part - the static modules and static nets (i.e. nets that connect static modules) - is locked and preserved. If no errors are detected during this process, the tool proceeds to bitstream generation. The process can be summarised as follows:
1. Place & route the first configuration.
2. Assign each d_module used in this first configuration a fixed location.
3. Lock the static part of the design and all static nets.
4. Unload the d_modules that do not belong to the next configuration.
5. Lock the d_modules that are still loaded.
6. Place the new set of d_modules according to the new design configuration (modular placement).
7. Route the new nets (modular routing).
8. For the rest of the configurations execute steps 3 to 7.
9. If no errors are found, generate the bitstream files.
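The per-configuration loop in the steps above can be sketched as follows; `place()` and `route()` stand in for the modular placer and router, and all data structures are illustrative assumptions.

```python
# Sketch of the floorplanning loop: for each configuration, unload stale
# d_modules, place and route only the new ones, and keep every module
# bound to its fixed location once placed. Structures are illustrative.
def process_configurations(configs, place, route):
    locked = set()                    # modules already bound to fixed locations
    loaded = set()
    for cfg in configs:
        for m in loaded - set(cfg):   # unload modules absent from this config
            loaded.discard(m)
        new = [m for m in cfg if m not in locked]
        place(new)                    # modular placement of new modules only
        route(new)                    # route only the newly appearing nets
        locked |= set(new)            # fixed locations are kept from now on
        loaded = set(cfg)
    return locked

placed, routed = [], []
all_bound = process_configurations([["d1", "d2"], ["d2", "d3"]],
                                   placed.extend, routed.extend)
print(sorted(all_bound))   # every d_module ends up with a fixed location
```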
In more detail, the tool starts by placing the first design configuration that was identified by the temporal system planner. The modular placer handles this first placement as if it were a regular design. The locations and orientations of the d_modules that belong to this configuration are identified; the d_modules are assigned a fixed position. At this stage the static part of the design is fully placed and routed.
After the placement and routing of the first configuration, the static part of the design is locked in order to preserve its placement and routing. Then the appropriate loading and unloading of d_modules is performed in order to form the next design configuration. The static part remains locked, as do any d_modules that also belong to this new configuration. The modular placer finds locations for the new d_modules, while the modular router routes only the new nets that appear on the FPGA due to the appearance of the new d_modules. This process is repeated until every design configuration is placed and routed on the FPGA.
To explain the appearance of the new nets that was discussed in the previous paragraph let us consider a simple example: suppose that a sample design uses a d_module that is connected to a pin and that this d_module is not a part of the first configuration. The approach that is used by the front-end tools is to connect this pin to a flip-flop that belongs to the static part and to have the d_module connected to this flip-flop upon loading. The net that connects the d_module to the flip-flop does not appear on the FPGA until this d_module is loaded since the net does not have a source or destination and the net cannot be defined before this time. The modular placer and router do not form this net until the configuration that holds this d_module is processed (an example of such a net can be seen in Figure 10 and Figure 11).
When this process is finished and if no errors are detected the tool can unload – load each d_module one by one to its fixed location and the bitstream files can be generated.
2.7.3.6 Modular place
The modular place tool is responsible for the placement of the dynamic modules. As was already explained, the static part will not be placed alone, but as a part of
the first design configuration. This approach leads to better placements of the design as the placer takes into account at least some of the d_modules that will accompany the static part. Moreover, the modular placement is not performed on one d_module at a time, but on groups of d_modules when possible, which results in better placements. The first time a d_module is placed it is assigned a fixed location that remains unchanged until the end of the process when all design configurations are placed and each d_module is bound to a fixed location. At this stage it is possible to load single d_modules to fixed locations.
Modular place performs several kinds of operations. The first places dynamic modules at fixed locations with fixed orientations on the FPGA. The second operation is the unloading of d_modules, which is implemented by clearing the area they use.
Another operation performs all swaps between dynamic modules within a bounding box of the corresponding reconfigurable macro. The first view (i.e. d_module) of a reconfigurable macro is placed as if it were a regular d_module and the reconfigurable macro is assigned a fixed location, but the swapping process is implemented by removing the previous d_module and placing the new one in the same location and area that the previous one occupied. The way the reconfigurable macros are created guarantees that the previously used area is sufficient for all d_modules within the macro.
Figure 10 illustrates a swap of two d_modules that are in one reconfigurable macro. Its left part shows an FPGA (denoted as D_FPGA); the static part of the modular design is depicted in green, while the reconfigurable macro is shown in black. The macro has two views (it represents two different d_modules with the same interface). The modular floorplanning tool implements the swap of these two d_modules.
Figure 10 - The modular place & route tool. The reconfigurable macro represents both dynamic modules D_M1 and D_M2.
2.7.3.7 Modular route
The modular routing tool is in charge of routing and preserving the nets of the static part, the macro nets (i.e. nets internal to a d_module), and the nets that connect d_modules to other structures (i.e. other d_modules or the static part).
When a new configuration is processed, some nets become functional while others lose their functionality. A net is functional if at least one source and one destination are used. When a d_module is unloaded, some nets lose their sources/destinations. Vice versa, when a d_module is loaded, some nets obtain sources/destinations. The modular routing tool must route the nets that become functional without corrupting the ones that have already been processed (i.e. the static nets). Moreover, modular routing should unroute the nets that lose their functionality.
To be more precise, we should distinguish between two categories of nets that lose ports due to removing d_modules: the nets that lose their functionality and the ones that continue to be functional. The first ones are fully unrouted by the tool, while the latter ones are unrouted up to a certain point when they are no longer connected to the port of the d_module that was unloaded (see Figure 11). The nets that become functional are fully routed as expected.
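The classification described above can be sketched directly: after removing a d_module's ports, a net stays functional only if it still has at least one source and one destination. The net and port structures below are toy assumptions, not the tool's internal format.

```python
# Sketch of the net classification after a d_module is unloaded: a net is
# functional only if at least one source and one destination remain.
# (Toy data structures, not the tool's internal representation.)
def classify_after_unload(net, removed_ports):
    sources = [p for p in net["sources"] if p not in removed_ports]
    dests = [p for p in net["dests"] if p not in removed_ports]
    if sources and dests:
        return "partially_unroute"    # still functional: trim to live ports
    return "fully_unroute"            # no longer functional

net_a = {"sources": ["static.q"], "dests": ["dm1.in", "dm2.in"]}
net_b = {"sources": ["dm1.out"], "dests": ["static.d"]}
removed = {"dm1.in", "dm1.out"}       # ports of the unloaded d_module
print(classify_after_unload(net_a, removed))   # keeps static.q -> dm2.in
print(classify_after_unload(net_b, removed))   # lost its only source
```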
The modular routing tool is also in charge of routing all the new views of the reconfigurable macros; here routing refers both to macro-routing (the nets inside the macro) and to preserving the routing between the macro and the static part of the design, or even other reconfigurable structures, while the macro is loaded. The nets between the reconfigurable macro interface ports and the static design remain unchanged at least up to the bounding box of the reconfigurable macro. For more details see Figure 10.
Figure 11 illustrates an example of how a modular design with two configurations is processed by the modular place and route tool. Figure 11.a shows the first configuration with four d_modules (in red) and the static part (in green). The nets that appear in the figures follow the same colour convention, so the static nets are green while the nets that are subject to change are red. If we consider that the second configuration holds completely different d_modules, then the next step is to unload all the d_modules and lock the static part; this is shown in Figure 11.b. The net that has a green and a red part loses one port when the d_module it is connected to is unloaded, but it remains functional as if it had both source and destination.
Figure 12 shows the second configuration, the new d_modules are placed, the static part is preserved, the previous static nets are preserved and some new nets have appeared.
The next phase in the tool flow is the generation of the bitstream files. This is performed by the following sequence: unload all the d_modules, generate the bitstream for the static part, load one d_module, generate the bitstream of the static part along with the d_module and compare these two bitstreams. By loading/unloading all the d_modules one by one all bitstream files can be generated.
Figure 12 - Placement of the second configuration, new nets.
2.7.3.8 Incremental design changes
When the modular place & route tool has already processed a modular design and the designer later wishes to add a new dynamic module, there is an easy way to do so without re-running the whole modular design flow. The only requirement is that the static part must remain unchanged, i.e. the new d_module must not be connected to a new pin or to a net that did not previously exist. A new dynamic module can then be inserted into the existing implementation of the design by reading its netlist and the corresponding DCF.
If the newly added d_module is a swappable one, then it is necessary to update only the reconfigurable macro that holds it. If it is not swappable, then the configuration that includes it must be reprocessed so that the d_module is assigned a fixed location and orientation.
2.8 Design Organization
2.8.1 Directory structure
An effective design of dynamically reconfigurable applications must be followed by an implementation procedure starting with design entry and ending with the final application. To separate the different implementation phases of a given design, a suitable directory structure must be set up, one that provides a good design arrangement in all design phases while retaining the convenience of the automated design process.