Academic year: 2021

The Software Defined Radio (SDR)

1.1 An overview of SDR

The Software Defined Radio (SDR) was the aim of many radio developments for a number of years. The roots of software defined radios can be traced back to the days when software was first used within radios and radio technology.

The basic concept of the SDR is that the radio can be totally configured or defined by software, so that a common hardware platform can be used across a number of areas, with the software changing the configuration of the radio for the function required at a given time. The radio can then be re-configured as upgrades to standards arrive, if it is required to meet another role, or if the scope of its operation changes.

One major initiative that uses the SDR is a military venture known as the Joint Tactical Radio System (JTRS). With this approach a single hardware platform can communicate using any of a variety of waveforms simply by reloading or reconfiguring the software for the particular application required. This is a particularly attractive proposition, especially for coalition-style operations where forces from different countries may operate together: radios can be re-configured to enable communications between troops from different countries.

The SDR concept is equally applicable to the commercial world. One application is cellular base stations, where standards upgrades occur frequently. With a generic hardware platform, standards upgrades can easily be incorporated. Migrations, for example from UMTS to HSPA and on to LTE, could be accommodated simply by uploading new software and reconfiguring, without any hardware changes, despite the fact that different modulation schemes and frequencies may be used.

There are many opportunities for the software defined radio concept, and as time progresses and the technology moves forward it will be possible to use the concept in new areas.

1.1.1 History

The term “software radio” was coined in 1984 by a team at the Garland Texas Division of E-Systems Inc. (now Raytheon). A classified, yet fairly well known, “Software Radio Proof-of-Concept” laboratory was developed at E-Systems that popularized software radio within various government agencies. This software radio was a digital baseband receiver that provided programmable interference cancellation and demodulation for broadband signals, typically with thousands of adaptive filter taps, using multiple array processors accessing shared memory.


Perhaps the first software defined radio transceiver was designed and implemented by Peter Hoeher and Helmuth Lang at the German Aerospace Research Establishment (DLR, formerly DFVLR) in Oberpfaffenhofen, Germany, in 1988. Both transmitter and receiver of an adaptive digital satellite modem were implemented according to the principles of software defined radio, and a flexible hardware periphery was proposed.

The term “Software Defined Radio” was coined in 1991 by Joseph Mitola, who published the first paper on the topic in 1992. Though the concept was first proposed in 1991, software defined radios have their origins in the defense sector in both the U.S. and Europe since the late 1970s (e.g., Walter Tuttlebee described a VLF radio that used an ADC and an 8085 microprocessor).

One of the first public software radio initiatives was a U.S. military project named SpeakEasy.

The SpeakEasy project was started in 1991 and was the first large-scale software radio.

SpeakEasy was motivated by the communications interoperability problems that resulted from different branches of the military services having dissimilar (non-interoperable) radio systems. This lack of communications interoperability can be directly linked to disasters in several conflicts. The primary goal of the SpeakEasy project was to use programmable processing to emulate more than ten existing military radios, operating in frequency bands between 2 and 2000 MHz, on a single platform. The designers chose the fastest DSP available at the time, the Texas Instruments TMS320C40 processor, which ran at 40 MHz. Since this was not enough processing power to implement all of the waveform processing, the system boards were each designed to support four ’C40s as well as some FPGAs.

In 1994, Phase I was successfully demonstrated; however, it involved several hundred processors and filled the back of a truck. Moore’s Law provides a doubling in speed every eighteen months, and since it had taken three years to build the system and write all of the software, two doublings had taken place. This seemed to indicate that the number of processors could be reduced by a factor of four. However, SpeakEasy could not take advantage of the newer, faster processors, and the reason was the software.

The software was tied to ’C40 assembly language, plus some specialized glue code to get the four ’C40s to work together. The observation was that it had taken three years to write software for a platform that Moore’s Law made obsolete in eighteen months. Furthermore, a software radio pushes most of the complexity of the radio into software, so software development could easily become the largest, most expensive part of the system.

These observations led to software portability being a key goal of a DARPA-sponsored MIT research project, named SpectrumWare, formed in 1994 to investigate the potential for building software radio systems that implemented as much as possible in software, while leveraging Moore’s Law and the price/performance curves of industry-standard servers. The SpectrumWare architecture consisted of a front-end responsible for RF up/down conversion as well as A/D and D/A conversion. The system I/O was handled by a PCI-bus DMA engine called the GuPPI, and the processing was implemented on a standard PC running Linux. There were small extensions to the OS to implement a zero-copy I/O scheme to reduce the processor overhead required for data transfer. This architecture was designed to support a portable object-oriented software architecture, as well as to leverage high-volume commercial components.

1.1.2 Definition

Although it may sound trivial, creating a definition for the Software Defined Radio is not as simple as it seems. A robust definition is needed for many reasons, including regulatory applications, standards issues, and enabling SDR technology to move forward more quickly.

Many definitions have been offered for the SDR. The SDR Forum itself has defined the two main types of radio:

• Software Controlled Radio: Radio in which some or all of the physical layer functions are software-controlled. In other words this type of radio only uses software to provide control of the various functions that are fixed within the radio.

• Software Defined Radio: Radio in which some or all of the physical layer functions are software-defined. In other words, the software is used to determine the specification of the radio and what it does. If the software within the radio is changed, its performance and function may change.

Another definition that captures the essence of the SDR is that it has a generic hardware platform on which software runs to provide functions including modulation and demodulation, filtering (including bandwidth changes), and other functions such as frequency selection and, if required, frequency hopping. By reconfiguring or changing the software, the performance of the radio is changed.

Traditional hardware-based radio devices limit cross-functionality and can only be modified through physical intervention. This results in higher production costs and minimal flexibility in supporting multiple waveform standards. By contrast, SDR technology provides an efficient and comparatively inexpensive solution to this problem, allowing multi-mode, multi-band and/or multi-functional wireless devices that can be enhanced through software upgrades.


Figure 1.1: Wireless Innovation Forum Generalized Functional Architecture

SDR defines a collection of hardware and software technologies where some or all of the radio operating functions, also referred to as physical layer (PHY) processing, are implemented through modifiable software or firmware operating on programmable processing technologies.

These devices include Field Programmable Gate Arrays (FPGA), Digital Signal Processors (DSP), General Purpose Processors (GPP), Programmable System on Chip (SoC) or other application specific programmable processors. The use of these technologies allows new wireless features and capabilities to be added to existing radio systems without requiring new hardware.

In an ideal world the signal from the antenna would be converted directly to digits and all the processing would be undertaken under software control; in this way there are no limitations introduced by the hardware. To achieve this, the digital/analogue conversion for transmission would need to handle relatively high power, dependent upon the application, and the conversion on the receive side would need to have very low noise. As a result a fully software-defined radio is not normally possible. Figure 1.1 shows the Generalized Functional Architecture defined by the Wireless Innovation Forum (WINNF).
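The converter noise constraint just described can be quantified with the standard rule of thumb for an ideal N-bit ADC, SNR ≈ 6.02·N + 1.76 dB. A small illustrative sketch (the bit widths shown are examples, not a statement about any particular SDR platform):

```python
def ideal_adc_snr_db(bits: int) -> float:
    """Theoretical SNR (dB) of an ideal N-bit ADC for a full-scale sine wave."""
    return 6.02 * bits + 1.76

# Comparing some common converter resolutions: every extra bit buys
# roughly 6 dB of dynamic range, which is why direct sampling at the
# antenna places such heavy demands on the ADC.
for bits in (12, 14, 16):
    print(f"{bits}-bit ADC: ~{ideal_adc_snr_db(bits):.1f} dB SNR")
```

This explains why the boundary between analogue and digital processing cannot always sit right at the antenna: the required dynamic range may simply exceed what available converters provide.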


1.1.3 Levels

It is not always feasible or practicable to develop a radio that incorporates all the features of a fully software defined radio. Some radios may only support a number of features associated with SDRs, whereas others may be fully software defined. In order to give a broad appreciation of the level at which a radio may sit, the SDR Forum (now called the Wireless Innovation Forum, WINNF) has defined a number of tiers. These tiers can be explained in terms of what is configurable.

• Tier 0: A non-configurable hardware radio, i.e. one that cannot be changed by software.

• Tier 1: A software controlled radio where limited functions are controllable. These may be power levels, interconnections, etc. but not mode or frequency.

• Tier 2: In this tier a significant proportion of the radio is software configurable; the term Software Defined Radio (SDR) is often used for this tier. There is software control of parameters including frequency, modulation and waveform generation/detection, wide/narrow band operation, security, etc. The RF front end still remains hardware-based and non-reconfigurable.

• Tier 3: The Ideal Software Radio, or ISR (Figure 1.2), where the boundary between configurable and non-configurable elements exists very close to the antenna, and the front end is configurable. It could be said to have full programmability.

Figure 1.2: Ideal Software Radio block scheme

• Tier 4: The Ultimate Software Radio, or USR, is a stage further on from the ISR. Not only does this form of software defined radio have full programmability, but it is also able to support a broad range of functions and frequencies at the same time. For example, since cellphones contain many different radios and standards, a software-definable multifunction phone would fall into this category.


Although these SDR tiers are not binding in any way, they give a way of broadly summarising the different levels of software defined radio that may exist.

1.1.4 Benefits

The benefits of SDR are very large. For Radio Equipment Manufacturers and System Integrators, SDR enables:

• A family of radio “products” to be implemented using a common platform architecture, allowing new products to be more quickly introduced into the market;

• Software to be reused across radio “products”, reducing development costs dramatically;

• Over-the-air (OTA) or other remote reprogramming, allowing “bug fixes” to occur while a radio is in service, thus reducing the time and costs associated with operation and maintenance.

For Radio Service Providers, SDR enables:

• New features and capabilities to be added to existing infrastructure without requiring major new capital expenditures;

• The use of a common radio platform for multiple markets, significantly reducing logistical support and operating expenditures;

• Remote software downloads, through which capacity can be increased, capability upgrades can be activated and new revenue-generating features can be inserted.

For End Users, from business travelers to soldiers on the battlefield, SDR technology aims to:

• Reduce costs in providing end-users with access to ubiquitous wireless communications, enabling them to communicate with whomever they need, whenever they need to and in whatever manner is appropriate.

1.1.5 Waveform portability

Apart from the fact that the SDR can reconfigure itself, another major advantage is that of waveform portability. There are several reasons for the need for SDR waveform portability:

• Cost savings: With the waveforms for various transmissions, military and commercial, costing huge sums to develop, there is a real need to be able to re-use waveforms on different projects and this is likely to involve very different platforms.


• Obsolescence mitigation: A similar requirement comes as hardware technology develops and it becomes necessary to transfer existing waveforms onto newer platforms.

• Interoperability: To provide complete interoperability a customer may request the use of a particular waveform being used across the equipment from several manufacturers.

Complete SDR waveform portability is not always easy to achieve, so measures must be incorporated at the earliest stages of the design to ensure the optimum level of portability: elements such as the Software Communications Architecture (SCA) and CORBA, the middleware used within the SCA. In addition to SCA and CORBA, good structured programming techniques are needed: short-cuts that work on one platform are unlikely to work on another. It is often necessary to re-compile the code for different platforms, so all code should be in a format that can be compiled on the foreseeable platforms.

1.1.6 Security

Another area of growing importance is SDR security. Many military radios, and many commercial radio systems, need to ensure that transmissions remain secure, and this is important for all types of radio. With an SDR, however, there is a further element of security: ensuring that the software within the radio is upgraded securely. With the growing use of the Internet, many SDRs will use this medium to deliver their updates. This presents an opportunity for malicious software to be delivered that could modify the operation of the radio or prevent its operation altogether. Accordingly, SDR software security needs to be considered wherever the Internet is used for software delivery or wherever security weaknesses could be exploited maliciously.

1.1.7 Interoperability testing

With the need to transfer waveforms from one radio or platform to another, full interoperability testing is necessary. This must assure that the code can be transported from one platform to another and provides the correct functionality for the waveform in question. To achieve this, waveforms generally need to be certified and accredited.

The SDR is a reality today, and it is being used in many areas. However, there are a number of limitations that prevent SDRs being used in as many applications as some would like. One is the sheer processing power required, and the resulting power consumption. A power-consumption/processing-power trade-off must be made, and this is one of the core decisions to be taken at the outset. As a result it is not feasible, for example, to use SDR for cellphone designs, but cellphone base-stations do use it: power consumption and space are normally not issues there, and the software can be upgraded to track evolving standards. Software defined radios are also being used by the military, and some handheld designs are already appearing. As technology progresses software defined radios will be used in more applications, yet there will always be a decision to be made, as the SDR is not the right choice for every radio. For small, cheap radios where changes will be few, the SDR is definitely not right. But for more complicated systems where length of service is an issue and change is likely, the SDR is definitely a good option to consider.

1.2 SDR hardware architecture

The hardware for a software defined radio is a particularly important element of the overall design. While the whole idea of the radio is that it is fundamentally driven by software, it still needs the basic hardware to enable the software to run.

The SDR hardware presents some interesting challenges to the hardware development engineer. The performance of the hardware will define exactly how much can be done within the software.

The interface between software-controlled and hardware-controlled functions needs to be as close to the antenna as possible to provide greater levels of software control and hence reconfigurability; this brings greater challenges in terms of design, performance and cost.

As a result, decisions need to be made at the earliest stages of any design to determine where the boundary will be, based on functionality required, performance and costs.

Although there are many different levels of SDR and many ways in which a software defined radio may be designed, it is possible to give some generalised comments about the basic structures that are used.

Apart from the control and management software and its associated hardware, an SDR can be considered to contain a number of basic functional blocks:

• RF Amplification: These elements provide the RF amplification of the signals travelling to and from the antenna. On the transmit side the amplifier increases the RF signal to the required transmit power; direct conversion by the DAC alone will not give the required output level. On the receive side, signals from the antenna need to be amplified before passing further into the receiver. If antenna signals are converted directly into digital form, quantisation noise becomes an issue even if the frequency limits are not exceeded.

• Frequency conversion: In many designs, some analogue processing may be required. Typically this involves the conversion of the signal to and from the final radio frequency. In some designs this analogue section may not be present and the signal will be converted directly to and from the final frequency. Some intermediate frequency processing may also be present.

• Digital conversion: At this stage the signal is converted between the digital and analogue formats. This conversion is in many ways at the heart of the equipment. When undertaking these conversions there are issues to consider: on the receive side, the maximum frequency and the number of bits needed to give the required quantisation noise are of great importance; on the transmit side, the maximum frequency and the required power level are among the major issues.

• Baseband processor: The baseband processor is at the very centre of the SDR. It performs many functions, including digital frequency conversion of the incoming or outgoing signal. The element that converts the outgoing signal from the base frequency up to the required output frequency, ready for conversion from digital to analogue, is known as the Digital Up Converter (DUC); on the receive side a Digital Down Converter (DDC) brings the signal down in frequency. The signal also needs to be filtered and demodulated, and the required data extracted for further processing. One of the key issues of the baseband processor is the amount of processing power required: the greater the level of processing, the higher the current consumption, and in turn this requires additional cooling, etc. This may have an impact on what can be achieved if power consumption and size are limitations. The format of any processing also needs to be considered: general processors, DSPs, ASICs and in particular FPGAs may be used.
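The frequency translation performed by the DUC/DDC is, at its core, a multiplication of the sample stream by a complex exponential. A toy sketch of the down-conversion step (the sample rate and IF are made-up illustrative values; the filtering and decimation that would follow in a real DDC are omitted):

```python
import cmath

def digital_down_convert(samples, f_shift, fs):
    """Shift a complex sample stream down in frequency by f_shift Hz:
    the core mixing operation of a DDC (filtering/decimation omitted)."""
    return [s * cmath.exp(-2j * cmath.pi * f_shift * n / fs)
            for n, s in enumerate(samples)]

fs = 64e6       # illustrative sample rate
f_if = 5.75e6   # illustrative intermediate frequency
# A pure tone at the IF lands at 0 Hz (DC) after down-conversion.
tone = [cmath.exp(2j * cmath.pi * f_if * n / fs) for n in range(1000)]
baseband = digital_down_convert(tone, f_if, fs)
```

After this mixing step the signal of interest sits at baseband, where low-rate filtering and demodulation can be done in software.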

1.3 The GNU Radio project and the USRP/USRP2 peripheral

GNU Radio is a free and open-source software development toolkit that provides signal processing blocks to implement software radios. It can be used with readily-available low-cost external RF hardware to create software defined radios, or without hardware in a simulation- like environment. It is widely used in hobbyist, academic and commercial environments to support both wireless communications research and real-world radio systems.


GNU Radio applications are primarily written using the Python programming language, while the supplied performance-critical signal processing path is implemented in C++ using processor floating-point extensions, where available. Thus, the developer is able to implement real-time, high-throughput radio systems in a simple-to-use, rapid-application-development environment.

While not primarily a simulation tool, GNU Radio does support development of signal processing algorithms using pre-recorded or generated data, avoiding the need for actual RF hardware. Strictly speaking, hardware is not part of GNU Radio, which is purely a software library.

However, GNU Radio supports several radio front-ends:

• Sound interface – cheap and easy

• USRP – open source spinoff with RF frontends

• Comedi

• Perseus

The most commonly used hardware is the USRP family of devices by Ettus Research, LLC.

USRPs connect to a host computer through a high-speed USB or Gigabit Ethernet link, which the host-based software uses to control the USRP hardware and transmit/receive data. Some USRP models also integrate the general functionality of a host computer with an embedded processor that allows the USRP Embedded Series to operate in a standalone fashion.

The USRP family was designed for accessibility, and the majority of the products are open source. The board schematics for some USRP models are freely available for download; all USRP products are controlled with the open source USRP Hardware Driver (UHD).

The USRP product family includes a variety of models that use a similar architecture.

A motherboard provides the following subsystems: clock generation and synchronization, FPGA, ADCs, DACs, host processor interface, and power regulation. These are the basic components that are required for baseband processing of signals. A modular front-end, called daughterboard, is used for analog operations such as up/down-conversion, filtering, and other signal conditioning. This modularity permits the USRP to serve applications that operate between DC and 6 GHz.

In stock configuration the FPGA performs several DSP operations, which ultimately provide translation from real signals in the analog domain to lower-rate, complex, baseband signals in the digital domain. In most use-cases, these complex samples are transferred to/from applications running on a host processor, which perform DSP operations. The code for the FPGA is open-source and can be modified to allow high-speed, low-latency operations to occur in the FPGA.

The USRP Hardware Driver (UHD) is the device driver provided by Ettus Research for use with the USRP product family. It supports Linux, MacOS, and Windows platforms; several frameworks, including GNU Radio, LabVIEW, MATLAB and Simulink, use UHD.

The functionality provided by UHD can also be accessed directly with the UHD API, which provides native support for C++. Any other language that can import C++ functions can also use UHD. This is accomplished in Python through SWIG, for example. UHD provides portability across the USRP product family: applications developed for a specific USRP model will support other USRP models if proper consideration is given to sample rates and other parameters.

The USRP family features a modular architecture with interchangeable daughterboard modules that serve as the RF front end. Several classes of daughterboard modules exist: receivers, transmitters and transceivers.

1.3.1 USRP1

The USRP1 (Figure 1.3) is the first USRP product and consists of:

• Four high-speed analog-to-digital converters (ADC), each capable of 64 MS/s at a resolution of 12-bit, 85 dB SFDR (Spurious-Free Dynamic Range) (AD9862).

• Four high-speed digital-to-analog converters (DAC), each capable of 128 MS/s at a resolution of 14-bit, 83 dB SFDR (AD9862).

• An Altera Cyclone EP1C12Q240C8 FPGA.

• A Cypress EZ-USB FX2 high-speed USB 2.0 controller.

• Four extension sockets (2 TX, 2 RX) in order to connect 2–4 daughterboards.

1.3.2 USRP2

The USRP2 (Figure 1.4) was developed after the USRP and was first made available in September 2008. It has reached end of life and has been replaced by the USRP N200 and USRP N210. The USRP2 was not intended to replace the original USRP, which continued to be sold in parallel to the USRP2. The USRP2 contains:


• Two 100 MS/s, 14-bit, LTC2284 ADCs, 72.4 dB SNR and 85 dB SFDR for signals at the Nyquist frequency.

• Two 16-bit AD9777 DACs, 160 MS/s without interpolation, up to 400 MS/s with 8x interpolation.

• A Xilinx Spartan 3-2000 FPGA.

• Gigabit Ethernet interface.

• SD card reader.

Figure 1.3: USRP1 motherboard

Figure 1.4: USRP2 Ettus Research peripheral


Figure 1.5: Simple USRP block diagram

1.3.3 The FPGA

For GNU Radio users, understanding what goes on in the USRP FPGA is probably the most important part. As shown in Figure 1.5, all the ADCs and DACs are connected to the FPGA, which plays a key role in the USRP system. Basically, it performs high-bandwidth math and reduces the data rate to something that can be sent over USB 2.0.

The FPGA connects to a USB2 interface chip, the Cypress FX2. Everything (FPGA circuitry and USB microcontroller) is programmable over the USB2 bus.

The standard FPGA configuration includes Digital Down Converters (DDCs) implemented with 4-stage cascaded integrator-comb (CIC) filters. CIC filters are very high-performance filters using only adds and delays. For spectral shaping and rejection of out-of-band signals, 31-tap halfband filters are cascaded with the CIC filters to form the complete DDC stage. The standard FPGA configuration implements 2 complete DDCs. There is also an image with 4 DDCs but without halfband filters. This allows 1, 2 or 4 separate RX channels.
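As a sketch of the idea (not the actual FPGA implementation), an N-stage CIC decimator can be written with nothing but running sums, a downsampler, and differences; its DC gain is (R·M)^N for decimation factor R and differential delay M:

```python
def cic_decimate(x, R, stages=4, M=1):
    """Minimal N-stage CIC decimator: integrators at the input rate,
    a downsampler by R, then combs at the reduced output rate.
    Uses only additions and delays, the defining property of CIC filters."""
    # Integrator section: N cascaded running sums at the full input rate.
    for _ in range(stages):
        acc, out = 0, []
        for s in x:
            acc += s
            out.append(acc)
        x = out
    # Decimation: keep every R-th sample.
    x = x[::R]
    # Comb section: N cascaded differences (delay M) at the output rate.
    for _ in range(stages):
        x = [s - (x[i - M] if i >= M else 0) for i, s in enumerate(x)]
    return x

# A constant input of 1 settles to the DC gain (R*M)**stages = 4**4 = 256
# once the filter's transient has passed.
y = cic_decimate([1] * 64, R=4, stages=4)
```

In the USRP the CIC output is then cleaned up by the halfband filters, which compensate for the CIC's droopy passband.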

In the 4-DDC implementation, the RX path has 4 ADCs and 4 DDCs. Each DDC has two inputs, I and Q, and each of the 4 ADCs can be routed to either the I or the Q input of any of the 4 DDCs. This allows multiple channels to be selected out of the same ADC sample stream.

The DDC first down-converts the signal from the IF band to baseband, and then decimates the signal so that the data rate can be handled by USB 2.0 and is reasonable for the computer's computing capability. The complex input signal (at IF) is multiplied by a constant-frequency (usually also the IF) complex exponential. The resulting signal is also complex and centered at 0. Then the signal is decimated by a factor N. Figure 1.6 shows the DDC block scheme.

Figure 1.6: USRP Digital Down Converter block scheme

The decimator can be treated as a low-pass filter followed by a downsampler. Suppose the decimation factor is N. Looking at the digital spectrum, the low-pass filter selects the band [-Fs/N, Fs/N], and the downsampler then reduces the spectrum from [-Fs, Fs] to [-Fs/N, Fs/N]. So in fact we have narrowed the bandwidth of the digital signal of interest by a factor of N. Regarding the bandwidth, the USRP can sustain 32 MB/s across the USB. All samples sent over the USB interface are 16-bit signed integers in I/Q format, i.e. 16-bit I and 16-bit Q data (complex), which means 4 bytes per complex sample. This results in 8 mega-complex-samples/s across the USB (32 MB/s / 4 bytes). Since complex processing is used, this provides a maximum effective total spectral bandwidth of about 8 MHz by the Nyquist criterion. Of course, much narrower ranges can be selected by changing the decimation rate. For example, suppose we want to design an FM receiver. The bandwidth of an FM station is generally 200 kHz, so we can select the decimation factor to be 250. The data rate across the USB is then 64 MHz/250 = 256 kHz, which is well suited to the 200 kHz bandwidth without losing any spectral information. The decimation rate must be in [8, 256].

Finally the complex I/Q signal enters the computer via the USB.
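The arithmetic above is easy to reproduce; a small sanity check of the USB-limited sample rate and the FM-receiver decimation example, using the figures given in the text:

```python
ADC_RATE = 64e6           # USRP1 ADC rate, samples/s
USB_THROUGHPUT = 32e6     # sustained bytes/s over USB 2.0
BYTES_PER_SAMPLE = 4      # 16-bit I + 16-bit Q per complex sample

# USB-limited complex sample rate: 32 MB/s / 4 B = 8 M complex samples/s,
# i.e. about 8 MHz of usable bandwidth with complex sampling.
max_complex_rate = USB_THROUGHPUT / BYTES_PER_SAMPLE
print(max_complex_rate / 1e6, "M complex samples/s")

# FM broadcast example: a 200 kHz channel with decimation factor 250.
decimation = 250
assert 8 <= decimation <= 256          # valid USRP decimation range
output_rate = ADC_RATE / decimation    # 64 MHz / 250 = 256 kHz
print(output_rate / 1e3, "kHz")
```

Picking the decimation factor is therefore a matter of choosing the smallest output rate that still covers the bandwidth of the signal of interest.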

On the TX path, the story is much the same, except in reverse. A baseband I/Q complex signal is sent to the USRP board; the Digital Up Converter (DUC) interpolates the signal, up-converts it to the IF band and finally sends it through the DAC. The DUCs on the transmit side are actually contained in the AD9862 CODEC chips, not in the FPGA (as shown in Figure 1.7). The only transmit signal processing blocks in the FPGA are the CIC interpolators. The interpolator outputs can be routed to any of the 4 CODEC inputs. With multiple TX channels (1 or 2), all output channels must run at the same data rate (i.e. the same interpolation ratio). Note that the TX rate may differ from the RX rate.

Figure 1.7: USRP Digital Up Converter block scheme

The USRP can operate in full duplex mode. In this mode, the transmit and receive sides are completely independent of one another. The only consideration is that the combined data rate over the bus must be less than 32 MB/s.
