
Measurement and Optimization of LTE Performance


In order to achieve higher channel capacity, and thus throughput, carrier aggregation (CA) was introduced by 3GPP in Release 10. CA allows mobile network operators to combine a number of separate LTE carriers in frequency to provide a wider spectrum to their end-users. CA technology aggregates multiple small band segments into a virtual bandwidth of up to 100 MHz to achieve a higher data rate. The extra channels, called Component Carriers, can have different widths and can be allocated and de-allocated as needed. Each station, however, is always connected to a licensed carrier, called the Primary Component Carrier (PCC). While connected to the PCC, the station can connect to other, unlicensed carriers, called Secondary Component Carriers (SCCs); several SCCs can be accessible to a UE at the same time. According to the UE's QoS demand and the cell capacity, configuration information can be sent over the PCC to dynamically add or remove SCCs while the user is served by the eNodeB. There are two operation modes for Licensed-Assisted Access (LAA): supplemental downlink (SDL) and TDD. SDL is the simplest form, where the unlicensed band is used mainly for downlink transmission, since downlink traffic is typically the target of user traffic offloading. In TDD mode, the unlicensed band is adopted for both downlink and uplink, much as the LTE TDD system operates in licensed bands; TDD mode grants the flexibility to dynamically shift resource allocation between downlink and uplink. On the user side, however, implementation complexity increases with mechanisms such as listen-before-talk (LBT) features and radar detection requirements.
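
As a rough illustration of this PCC/SCC bookkeeping, here is a minimal Python sketch; the class name, the carrier widths and the simple bandwidth cap are invented for illustration, and real CA is negotiated via RRC signaling, which this toy model ignores:

```python
# Minimal illustrative model of carrier aggregation (hypothetical names):
# a UE always holds one licensed PCC and may add/drop unlicensed SCCs,
# subject to a 100 MHz aggregate-bandwidth cap.

MAX_AGGREGATE_MHZ = 100

class CarrierAggregation:
    def __init__(self, pcc_mhz):
        self.pcc_mhz = pcc_mhz        # licensed Primary Component Carrier
        self.sccs = []                # unlicensed Secondary Component Carriers

    def total_bandwidth(self):
        return self.pcc_mhz + sum(self.sccs)

    def add_scc(self, scc_mhz):
        """Add an SCC if the 100 MHz aggregate cap is respected."""
        if self.total_bandwidth() + scc_mhz <= MAX_AGGREGATE_MHZ:
            self.sccs.append(scc_mhz)
            return True
        return False

    def drop_scc(self, scc_mhz):
        """Release an SCC when QoS demand decreases."""
        if scc_mhz in self.sccs:
            self.sccs.remove(scc_mhz)

ue = CarrierAggregation(pcc_mhz=20)
print(ue.add_scc(20))   # True: 40 MHz aggregate
print(ue.add_scc(20))   # True: 60 MHz aggregate
print(ue.add_scc(60))   # False: would exceed the 100 MHz cap
```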

Design and performance evaluation of algorithms for wireless self-organizing systems


formulation of the combined coverage and connectivity problems as a multi-objective optimization problem. Moreover, we provide more detailed results by validating the proposed algorithm in several communication scenarios. Starting from 2007, Stuckmann and Zimmermann [79] envisaged SDR technology as one of the four main objectives for developing European technologies for systems beyond 3G. Specifically, spectrum and resource management that makes efficient use of existing spectrum resources can be realized in a feasible and effective way through the application of the SDR concept. The importance of this kind of technology is shown in [80], where the authors propose to optimize the throughput of the network, working under different channel conditions, by considering an automatic modulation switching method to reconfigure the transceivers of SDR systems. We propose a similar approach but with a different purpose, namely a multi-objective algorithm whose goals are both coverage and connectivity; in addition, the technique is energy-aware because, when several solutions are feasible, the best one in terms of energy consumption is selected. Finally, in [81], the authors analyse different modulation techniques in combination with SDR, again outlining the importance of this kind of technology for future mobile communication systems.
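
The energy-aware selection rule can be pictured with a small sketch; the candidate solutions, the targets and the energy figures below are invented for illustration, not taken from the thesis:

```python
# Hedged sketch: among candidate configurations that satisfy both the
# coverage and connectivity targets, pick the one with the lowest energy
# consumption, mirroring the energy-aware tie-breaking described above.

candidates = [
    # coverage fraction, connectivity fraction, energy cost (J) -- all invented
    {"coverage": 0.96, "connectivity": 0.99, "energy": 120.0},
    {"coverage": 0.97, "connectivity": 0.98, "energy": 95.0},
    {"coverage": 0.90, "connectivity": 0.99, "energy": 60.0},
]

COVERAGE_TARGET = 0.95
CONNECTIVITY_TARGET = 0.95

feasible = [c for c in candidates
            if c["coverage"] >= COVERAGE_TARGET
            and c["connectivity"] >= CONNECTIVITY_TARGET]

best = min(feasible, key=lambda c: c["energy"])
print(best)  # {'coverage': 0.97, 'connectivity': 0.98, 'energy': 95.0}
```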

Machine learning in Industrial turbomachinery: development of new framework for design, analysis and optimization


Regardless of the specific application field, optimization is the selection of a best element, with regard to one or more criteria, from a set of available alternatives. To make this selection, we need to measure the performance of all available samples. Several approaches to multi-objective optimization problems (MOOPs) have been presented in the open literature, and the choice depends on various aspects, in particular the level of knowledge of the objective function and the fitness landscape [35]. Among these methods, Evolutionary Algorithms (EAs) often perform well in approximating solutions to all types of problems: ideally, they make no assumption about the underlying fitness landscape, and they are able to find a good spread of solutions in the obtained solution set. Furthermore, genetic methods are independent of the level of knowledge of the objective function and the fitness landscape, which makes them less sensitive to numerical noise. Over the past decade, a number of Multi-Objective Evolutionary Algorithms (MOEAs) have been proposed [36], [37], primarily because of their ability to find multiple Pareto-optimal solutions in a single simulation run. Among the different EAs, the most popular belong to the family of Genetic Algorithms (GAs); among these, we focused on NSGA-II [38]. When dealing with real-world optimization problems, the number of objective-function calls needed to find a good solution can be high. Furthermore, in optimization problems based on computer experiments, where the simulation acts as the objective function, the computational cost of each run can severely restrict the number of available calls to the fitness function. Shape optimization is one of the most important applications of CFD; however, a single CFD simulation may require days to complete.
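
As a point of reference for what NSGA-II ranks, here is a self-contained sketch of Pareto dominance and non-dominated filtering for a minimization problem; the objective values are invented, and this is only the building block, not the full algorithm:

```python
# Illustrative sketch (not the thesis code): Pareto dominance for a
# minimization MOOP, the building block NSGA-II uses to rank candidates.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto front of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: two conflicting objectives (e.g., pressure loss vs. weight).
population = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5), (1.5, 4.0)]
print(non_dominated(population))
# [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (1.5, 4.0)] -- (2.5, 3.5) is dominated
```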

21- OPTIMIZATION OF BLADE PROFILES OF CROSS FLOW TURBINE


In areas where the supply of grid power is very difficult, Cross Flow Turbines (CFTs) are used for low-head power production. Results are presented for the optimum profile of the blade leading edge. Four different profiles (flat tip, round tip, pointed tip and oval tip) are modeled and simulated with ANSYS CFX, and their performance is then compared in terms of efficiency. The complete turbine is analyzed for the determination of the leading edge of the blade in the first stage and the trailing edge in the second stage. The design and simulation conditions are based on the CFT installed at Chitral city, Pakistan. The results show that the efficiency of the round tip blade profile is better than that of the other three profiles.

Optimization of dendritic cell based immunotherapies


4.2.2 LIg-DC showed a high level of variability in CD14 and CCR7 surface expression. Phenotypically, all lots of DC products were positive for CD80, CD83, CD86, CD123, CD11c, CD38, CD54 and HLA-DR (all > 95%) by flow cytometry (Figure 4.1). The markers showing significant degrees of variability among DC products were CD14 (ranging from 14% to 90% CD14+) and CCR7 (ranging from 5% to 90%). This variability depended on both manufacturing and inter-patient factors, but only for CD14 was the inter-patient variability substantially greater than the manufacturing variability (Figure 4.2). Interestingly, when we analyzed DCs for differential expression between those from patients that achieved a decreasing-slope log PSA clinical response (RespDC) and those from patients that did not (NonRespDC), we observed a trend, with RespDC expressing higher levels of CCR7 and lower levels of CD14 than NonRespDC (not statistically significant). To analyze how well CCR7 or CD14 levels were able to discriminate RespDC vs NonRespDC, we used receiver operating characteristic (ROC) analysis. The underlying assumption of ROC analysis is that a variable under study (e.g., % of CCR7+ DC) is used to discriminate between two mutually exclusive states (i.e., RespDC vs NonRespDC). For these analyses, the ROC curve is an easy visualization tool because it illustrates the performance of the variable under study by plotting sensitivity vs specificity for each possible cut-off; the area under the curve (AUC) summarizes the overall ROC curve and can be considered a summary statistic of its ability to classify cases correctly. A perfect test would have an AUC of 100%; a worthless test would have an AUC of 50%. According to an arbitrary guideline, AUC values may be classified as follows: 90%–100%, excellent; 80%–90%, good; 70%–80%, fair; 60%–70%, poor; 50%–60%, fail (Hanley & McNeil 1982).
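
A minimal sketch of such a ROC analysis, using scikit-learn and invented numbers standing in for the study's data, could look as follows:

```python
# Hedged sketch with made-up values (not the study's data): using ROC/AUC to
# test how well %CCR7+ discriminates RespDC from NonRespDC, as described above.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# 1 = responder (RespDC), 0 = non-responder (NonRespDC)
response = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
ccr7_pct = np.array([85, 70, 60, 20, 35, 90, 15, 40, 38, 25])  # % CCR7+ DC

auc = roc_auc_score(response, ccr7_pct)
fpr, tpr, thresholds = roc_curve(response, ccr7_pct)

print(f"AUC = {auc:.2f}")  # 0.96: 'excellent' on the arbitrary scale above
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"cut-off >= {th:5.1f}: sensitivity {t:.2f}, specificity {1 - f:.2f}")
```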

Global Optimization, Ordinary Differential Equations and Infinity Computing


Another difficulty in a convincing demonstration lies in the existence of methods having a completely different structure. A typical example is the principal trouble arising when one needs to compare a deterministic method A with a stochastic algorithm B: method A, applied to a certain set of functions, always returns the same results, while method B should be run several times and the results of these runs always differ. Consequently, method A is compared with some average characteristics of method B. In the literature, there exist some approaches for a graphical comparison of methods, for example operational characteristics (proposed in 1978 in [78], see also [214, 215]), subsequently generalized as performance profiles (see, e.g., [46]) and re-considered later as data profiles (see, e.g., [141]). Although they are very similar, performance profiles are mainly based on the relative behavior of the considered solvers on a chosen test set, while operational characteristics (and data profiles, which are quite close to operational characteristics) are more suitable for analyzing the performance of a black-box optimization solver (or solvers) with respect to an expensive function-evaluations budget, independently of the behavior of the other involved methods on the same benchmark set.
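
The performance-profile idea can be made concrete with a short sketch (the solver timings below are invented): each solver's score at tolerance tau is the fraction of problems it solves within a factor tau of the best solver on that problem.

```python
# Minimal sketch of the performance-profile idea (Dolan-More style), with
# invented timings: rho_s(tau) is the fraction of problems a solver s brings
# within a factor tau of the best solver on each problem.
import numpy as np

# rows = problems, columns = solvers; entries = cost (e.g., function evaluations)
T = np.array([[100., 150.],
              [ 80.,  60.],
              [200., 400.],
              [ 50.,  55.]])

ratios = T / T.min(axis=1, keepdims=True)   # r_{p,s} = t_{p,s} / min_s t_{p,s}

def profile(ratios, s, tau):
    """Fraction of problems where solver s is within factor tau of the best."""
    return np.mean(ratios[:, s] <= tau)

for tau in (1.0, 1.5, 2.0):
    print(tau, [profile(ratios, s, tau) for s in range(T.shape[1])])
# tau = 1.0 gives the share of problems each solver wins outright.
```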

Development and applications of an innovative wearable system based on time-of-flight technology for the measurement of the human movement


obstacles between transmitter and receiver, no other BLE devices in the environment). For those localisation applications requiring a higher accuracy, a further improvement could be obtained by increasing the number of BLE devices, following a "fingerprinting" or trilateration approach [113]–[115]. However, for proximity sensing applications which require very high accuracy (resolution down to 0.1 m), BLE cannot be used proficiently, while other technologies such as time-of-flight DS, IR-LI or US can. These approaches exploit the information provided by a redundant number of nodes to optimize the final position estimate. It is quite evident that the performance of the latter optimisation methods would benefit from a reduction of the errors affecting the estimate of each inter-node distance. Furthermore, sensor fusion algorithms based on the use of a MIMU may benefit from the additional information provided by a BLE unit to improve the position estimate [116]. This approach has great potential for monitoring human behavior in indoor environments, and it opens interesting applications in various fields, such as fall detection, depression monitoring and rescuer navigation.
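
As an illustration of how redundant nodes feed a position estimate, here is a least-squares trilateration sketch; the anchor layout, noise level and linearization trick are generic textbook choices, not the thesis implementation:

```python
# Hedged sketch (not the thesis algorithm): linearized least-squares
# trilateration from redundant anchor nodes, the kind of position estimate
# that benefits from reduced per-link ranging error.
import numpy as np

anchors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
true_pos = np.array([2.0, 3.0])
rng = np.random.default_rng(0)
d = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, 4)

# Subtracting the first range equation from the others linearizes the system:
# 2 (a_i - a_0)^T x = |a_i|^2 - |a_0|^2 - d_i^2 + d_0^2
A = 2 * (anchors[1:] - anchors[0])
b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
     - d[1:] ** 2 + d[0] ** 2)

est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)  # close to [2.0, 3.0]; accuracy improves as ranging noise shrinks
```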

Analysis of open graded pavement performance through microstructure and hydraulic simulation


To prevent these problems and to control the conditions of the mix, one of the primary requirements for flexible pavements, since the advent of asphalt paving technology, is that hot mix asphalt be impermeable. By minimizing moisture infiltration, adequate support from the underlying unbound materials is obtained. In 1993, Superior Performing Asphalt Pavement (Superpave) was introduced as part of the Strategic Highway Research Program. With the adoption of the Superpave mix design system, open-graded pavements have been produced with coarser gradations. This application of novel materials limited distresses; however, other issues related to higher permeability values have to be considered. The traditional approach used to evaluate the expected drainage capability of open-graded pavement is controversial: it is based on laboratory tests using hydraulic permeameters or on field measurements, and it is unable to estimate permeability correctly, due to many approximations in measurement and to the wide variety of variables that influence the hydraulic behavior of asphalt pavement. It often happens that the mixture is designed adopting an overestimated value of hydraulic conductivity. This increases the susceptibility of the asphalt pavement to moisture-induced damage, promotes the oxidation of the asphalt, and produces many disadvantages and problems from an economic and environmental point of view.

Process design optimization based on metamodeling and metaheuristic techniques


combined use of a genetic algorithm and a NN to optimize the weight initialization phase; more particularly, in their approach the genetic algorithm is used as a complementary tool for weight optimization, which in turn serves the NN to perform robust prediction of machining process performance. Also for machining processes, Yao and Fang (26) used a NN to predict the development of chip breakability and surface finish at various tool wear states. Moving from machining processes to forming operations, NNs are widely utilized to predict both quantitative and qualitative variables: Ambrogio and Gagliardi (27) designed a customized toolbox combining a NN and Design of Experiments (DoE) for predicting opposite performance in the porthole die extrusion process; Ozcan and Figlali (28) proposed a NN for estimating the cost of stamping dies, as an alternative to analytical and conventional methods; finally, Saberi and Yussuff (29) introduced a NN to evaluate the outcomes of advanced manufacturing technology implementations and to predict company performance in technology adoption as high, low, or poor. Even though many successful applications of NNs in the manufacturing field are reported in the literature, the possible benefits deriving from the use of this tool are not always fully achieved, due to the use of random network parameters or trial-and-error approaches.
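
A compact sketch of the GA-as-complementary-tool idea follows; every detail is invented (a toy regression task, a tiny 2-4-1 network, truncation selection), and in the cited approaches the GA's best individual would then seed gradient-based training:

```python
# Illustrative sketch: a genetic algorithm searching for good initial weights
# of a tiny feed-forward NN, instead of purely random initialization.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (64, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)           # toy regression target

def unpack(w):                                    # 2-4-1 network, 17 weights
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16:17]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

pop = rng.normal(0, 1, (30, 17))                  # population of weight vectors
for _ in range(50):
    fitness = np.array([mse(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]       # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(17) < 0.5               # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, 17))  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(w) for w in pop])]
print("GA-selected initial weights reach MSE:", round(mse(best), 4))
```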

Environmental and physiological parameters measurement in images and video


Chapter 6 addressed the problem of reliable pulse rate (PR) evaluation from a PPG signal. The proposed method evaluates the PR from the time interval between two consecutive PPG peaks. It uses an adaptive threshold to detect the peaks, taking into account the standard deviation of the PPG, and a statistical analysis of the collected time intervals to reject outliers. The obtained results proved reliable across different smartphones with respect to: (i) the heterogeneous characteristics of the cameras, (ii) changes in the chromatic and geometrical features of the frame when the LED is used, and (iii) wrong finger position on the smartphone. The proposed method was validated against the Ambulatory Blood Pressure (ABP) system Spacelabs 90207. The results confirm the correctness, suitability and reliability of the proposed technique. In particular, the proposed method shows a maximum PR evaluation error of 2 pulses per minute (ppm), which is fully compatible with the ±2 ppm accuracy declared in the ABP datasheet.

Chapter 7 described a method for continuous and non-invasive blood pressure estimation from a PPG signal. The blood pressure was estimated by a feed-forward artificial neural network with two hidden layers, with 35 neurons in the first layer and 20 in the second; the two output neurons estimate the systolic and diastolic blood pressure. This configuration showed better performance than other architectures. The Multiparameter Intelligent Monitoring in Intensive Care waveform dataset was used to train the neural network: in total, more than 15000 pulsations were analysed, and 21 parameters were extracted from each of them. The mean error and standard deviation of the obtained results were 3.80 ± 3.46 mmHg for systolic and 2.21 ± 2.09 mmHg for diastolic pressure. This fulfils the American National Standard of the Association for the Advancement of Medical Instrumentation, where the maximum accepted error is 5 ± 8 mmHg. The mean relative error was less than 4 ± 3.5%.

In conclusion, the research was dedicated to providing novel image and video processing techniques and to showing how they can be used for the measurement of various environmental and physiological parameters. Using the camera as a measuring sensor is indeed very attractive: it permits the creation of a "universal" measurement instrument, where new types of measurement can be enabled just by changing the software. The advantage of such an approach is that any imaging device can be used to acquire information about the measured object: static camera, digital camera, video camera, webcam, smartphone camera, etc. A specific algorithm installed on a computer, smartphone, or even on a reprogrammable integrated circuit can then provide the appropriate measurement results.
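
The network shape described in Chapter 7 (21 extracted pulse parameters in, systolic and diastolic pressure out) is easy to state in code; this sketch assumes PyTorch and tanh activations, neither of which the excerpt specifies:

```python
# Sketch of the described architecture: 21 -> 35 -> 20 -> 2.
# The activation choice is an assumption, not stated in the excerpt.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(21, 35),  # first hidden layer: 35 neurons
    nn.Tanh(),
    nn.Linear(35, 20),  # second hidden layer: 20 neurons
    nn.Tanh(),
    nn.Linear(20, 2),   # outputs: systolic and diastolic blood pressure
)

pulse_features = torch.randn(8, 21)   # batch of 8 pulsations, 21 params each
sbp_dbp = model(pulse_features)       # shape (8, 2)
print(sbp_dbp.shape)
```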

Feasibility of a frequency-modulated, wireless, MEMS acceleration evaluator (ALE) for the measurement of low-frequency and low-amplitude vibrations


It should be pointed out that ALE does not embed any ADC. Considering the accelerometer resolution (0.14·10⁻³ m·s⁻²), if a traditionally used 16-bit ADC were installed, the resolution of the system would be lowered to 1.50·10⁻³ m·s⁻², one order of magnitude higher than the sensor's resolution; installing that ADC would nullify the decision to use such a sensitive accelerometer. The SF1600 should be matched with a 24-bit ADC to achieve the designed performance and keep the conversion resolution equal to 5.84·10⁻⁶ m·s⁻². Such an ADC, however, is extremely power-demanding and not well suited to low-power applications. To overcome this difficulty, ALE converts the sensor output voltage to a FM signal using a V/F converter instead of a conventional ADC. The AD650, manufactured by Analog Devices, Inc., is selected to convert the sensor output to a sequence of pulses. The AD650 V/F/V (voltage-to-frequency or frequency-to-voltage) converter is a monolithic converter that can operate up to 1 MHz [131]. V/F converters are electronic circuits that supply as output a square wave whose frequency is proportional to the input voltage; it is important to point out that no analyses or computations are made on the amplitude of the input signal. In the proposed MEMS accelerometer system, the AD650 receives the analog signal coming from the SF1600 and converts it into FM pulses in the range 0–100 kHz; the 0 V sensor output is converted to 50 kHz. This technique is quite slow, but with high-sampling-rate devices (on the order of MHz) and a narrow sensor bandwidth (0 to 1.5 kHz) it is possible to overcome this limitation [132]. Furthermore, because the converter does not consider the amplitude of the signal (which degrades with transmission distance) but only its frequency, this device makes the signal more immune to noise [133], [134]. Finally, since the output of the accelerometer is an extremely low-frequency DC signal, converting it into an AM signal is difficult and may cause distortion; to obtain more accurate data, it is preferable to convert the signal into a frequency value before transmitting it.
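
The offset V/F mapping and the ADC resolution comparison can be reproduced in a few lines; the ±5 V input span and the ~98 m·s⁻² full-scale acceleration are assumptions chosen to be consistent with the figures quoted above, not datasheet values:

```python
# Worked sketch of the offset V/F mapping (0 V -> 50 kHz, output 0-100 kHz)
# and of the 16-bit vs 24-bit ADC resolution comparison described above.
V_FULL_SCALE = 5.0       # assumed bipolar input range, volts
F_CENTER_HZ = 50_000     # 0 V maps to the mid-band frequency
F_SPAN_HZ = 100_000      # total output range 0-100 kHz

def volts_to_freq(v):
    """Linear offset mapping: -5 V -> 0 Hz, 0 V -> 50 kHz, +5 V -> 100 kHz."""
    return F_CENTER_HZ + (v / V_FULL_SCALE) * (F_SPAN_HZ / 2)

print(volts_to_freq(0.0))   # 50000.0
print(volts_to_freq(2.5))   # 75000.0

FS_ACCEL = 98.0             # assumed full-scale span in m/s^2 (approx. +/-5 g)
for bits in (16, 24):
    print(bits, "bit LSB:", FS_ACCEL / 2**bits, "m/s^2")
# 16 bit -> ~1.5e-3 m/s^2, worse than the 0.14e-3 m/s^2 sensor resolution
# 24 bit -> ~5.8e-6 m/s^2, which preserves it
```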

Development of biomechanical-based analysis tools for the evaluation of infringements and performance in race-walking


Recent history shows that, at various levels and in many disciplines, technological evolution has radically changed the way sport is approached from the monitoring and training point of view; consequently, the performance of athletes has improved. The use of technology in sport allows a large amount of data to be collected with different tools. Video and tracking technology, wearable devices, fitness trackers, equipment design and clothing, as well as the novel materials introduced in recent years, have strongly influenced performance. The adoption of these new tools helps to understand and evaluate the dynamic evolution of the general state of the athlete's physical condition. More recently, technologies have also been tested to help the judging system in many sports. This influence is also evident in the world of race-walking. Race-walking is a long-distance discipline within the track and field program characterized by two possible infringements (bent knee and loss of ground contact), while at the same time the best chronometric performances are required. It is worth noticing that nowadays judges can rely only on their subjective observations (made by the human eye); to date, technology is not used to support judging decisions. With the current method, there is a critical issue in race-walking competitions: the very short duration of loss-of-ground-contact events makes it difficult to properly identify a correct or incorrect gesture. This is a major problem, since the quest for performance optimization might determine a good or bad final result: for example, increasing the step length by even a single centimeter can lead to a time improvement of about 2 minutes over 50 km, greater than the gap between first and fourth place at the Olympic Games.
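
A back-of-the-envelope check of the step-length claim, where the baseline step length and the cadence are assumed values rather than measured data:

```python
# Rough arithmetic behind "1 cm longer steps ~ 2 minutes over 50 km",
# assuming a 1.20 m baseline step and a constant cadence of 195 steps/min.
RACE_M = 50_000
STEP_M = 1.20
CADENCE_SPM = 195           # steps per minute, held constant

def race_minutes(step_m):
    return (RACE_M / step_m) / CADENCE_SPM

gain = race_minutes(STEP_M) - race_minutes(STEP_M + 0.01)
print(f"{gain:.1f} minutes saved over 50 km")  # ~1.8 min, i.e. about 2 minutes
```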

Bi-criteria network optimization: problems and algorithms


is becoming an increasingly pervasive technology. According to different studies, new services like High Definition (HD) video, tactile applications (see [36]), the Internet of Things (IoT) (see [10]), and extremely low-delay applications will dominate the scene in the forthcoming years. In addition, the number of users will continue to increase notably, especially from growing economies. As a result, the network itself will have to evolve from a monolithic architecture towards a converged, flexible, and high-performance solution. To this end, new paradigms, like Network Function Virtualization (NFV) [56], have been proposed in recent years. Moreover, several initiatives are currently devoted to the design of 5G networks (see [8]), which are expected to turn into reality by 2020. In this context a so-called superfluid approach has been defined, meaning that network functions and services are decomposed into reusable components, denoted as Reusable Functional Blocks (RFBs), which are deployed on top of physical nodes. RFBs have notable features, including: i) RFB chaining, to implement more complex functionality and provide the required service to users; ii) platform independence, i.e., RFBs can be realized via software functions and can run on several hardware solutions; and iii) high flexibility and performance, since RFBs can be deployed where and when they are really needed (hence the superfluid attribute of the architecture). In this context the main question is whether it is possible to efficiently manage a 5G superfluid network based on RFBs. To answer this question, a 5G architecture was considered, modeling the needed components in terms of RFBs and the infrastructure resources in terms of physical nodes and hardware features. Then the problem of managing a set of RFBs in order to serve the users of a 5G network with a high-definition video distribution service was mathematically formulated. The model has been tested on a simple yet representative case study. The results pointed out that the proposed approach is a first step towards a more comprehensive solution. In order to present the obtained mathematical model, Section 2.3.1 gives a description of the 5G architecture under consideration.

Performance measurement in mutual fund analysis


Value/income funds invest in stocks that regularly distribute dividends. This category includes index funds, which aim to match the returns of a specific market index, like the S&P 500 Composite Stock Price Index, by investing in a representative sample or all of the companies included in the index. In riskier circumstances, growth funds target stocks that potentially yield larger capital gains in exchange for the absence of regular dividends. Stocks have been shown to have better long-term performance than other kinds of investments like corporate bonds, government bonds, and treasury securities. However, in the short term, a stock's value can fluctuate over a spectrum of high and low values. There are numerous reasons why stock prices can rise and fall, such as the overall health of the economy or the supply of and demand for specific services. Hence, market risk poses the greatest potential gamble for investors in stock funds.

1- BEGINNERS-GUIDE-TO-MEASUREMENT-IN-ELECTRONIC-AND-ELECTRICAL-ENGINEERING


manufacturers to design MEMS to the needs of each particular system they operate in. More accurate knowledge of the product's output and energy requirements will also affect the choice of device by potential consumers, who will now be able to select only those devices with performance optimised for their particular sector. The technique should help improve the performance, functionality and reliability of MEMS around the world.


Modeling, Simulation and Optimization in Logistics


We can now see how OptQuest for Arena allows the modeler to carry out the two phases of optimization model definition. Here we refer to information from an applied research study [73] and from the "OptQuest for Arena User's Guide" [9]. In the first phase, the modeler must specify the controls (i.e., the controllable parameters associated with the system being modeled) whose values OptQuest is allowed to select, establish upper and lower limits for each control, and define linear and non-linear constraints (non-linear constraints involve output variables of the simulation model, so their feasibility is checked after the solution has been evaluated through simulation). The set of controls the modeler is allowed to select from includes all user-defined variables of the Arena model (e.g., the number of completions of workpieces in a production line) and all model resource capacities (e.g., the number of servers in a service station); a snapshot of the OptQuest GUI for Arena is shown in Figure 2.3. In the second phase, the modeler must define the objective function, which may combine any statistic defined in the model, typically system performance measures. (The User's Guide advises against overly complex objectives, even though the solver can also handle very complex objective functions.)
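
In tool-agnostic terms, the two phases amount to declaring bounded controls, constraints, and an objective over simulation outputs. A hypothetical sketch of that structure follows (all names invented; OptQuest itself is configured through its GUI, not through code like this):

```python
# Hedged, tool-agnostic sketch of the two-phase setup described above.
controls = {
    "num_servers": {"low": 1, "high": 10},   # resource capacity control
    "buffer_size": {"low": 5, "high": 50},   # user-defined variable control
}

def linear_constraint(x):
    # Depends only on controls, so it can be checked before simulating.
    return x["num_servers"] + x["buffer_size"] <= 55

def nonlinear_constraint(outputs):
    # Involves simulation outputs, so it can only be checked after the run.
    return outputs["avg_wait_min"] <= 4.0

def objective(outputs):
    # Any statistic defined in the model, e.g. throughput to maximize.
    return outputs["throughput_per_hour"]

candidate = {"num_servers": 4, "buffer_size": 30}
print(linear_constraint(candidate))  # True: this candidate may be simulated
```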

Optimization and applications of the microbiological survey method MBS


Concerning the use of the MBS method as a point-of-care test (POCT) for the diagnosis and management of urinary tract infections (UTIs), the first step was the development and preliminary in vitro validation of new MBS reagents for the detection of bacteria in urine and for the evaluation of their susceptibility or resistance to a panel of antibiotics. These were tested in a preliminary clinical study, performed in collaboration with the "Azienda Ospedaliera Sant'Andrea" of Rome, which demonstrated the great potential of the MBS POCT as a diagnostic tool for rapid and accurate detection of the bacteria causing UTIs. A comparative analysis between the results obtained with the MBS method and the results of urine cultures, conducted by the hospital laboratory, was performed using Receiver Operating Characteristic (ROC) analysis. The results demonstrated that the MBS POCT was able to reveal the presence of a significant bacterial load in urine, and hence diagnose a clinical UTI, in only 5 hours (Area Under the Curve = 0.93). More importantly, the MBS POCT showed much higher accuracy (90.2%), sensitivity (91.2%) and specificity (89.8%) than urine dipsticks, which are widely used for a presumptive diagnosis of UTI. In a relatively short time compared to standard methods, the MBS method was able to give an accurate indication of UTI and a preliminary evaluation of the antibiotic susceptibility of the infecting bacteria, ensuring a prompt diagnosis and guiding the antibiotic choice long before the conventional antibiotic susceptibility test is completed. Different aspects linked to the specific composition of the reagent, the operating procedures and the manufacturing process were also optimized, in order to further improve the overall performance of the MBS POCT and meet the essential premarketing requirements mandatory for all in vitro diagnostic devices.
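
For reference, the quoted metrics are simple functions of a confusion matrix; the counts below are invented to roughly reproduce the reported sensitivity and specificity, not the study's actual data:

```python
# Quick reference sketch (invented counts) for the metrics quoted above.
tp, fn = 93, 9     # infected samples: correctly / incorrectly classified
tn, fp = 97, 11    # sterile samples: correctly / incorrectly classified

sensitivity = tp / (tp + fn)            # true positive rate
specificity = tn / (tn + fp)            # true negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"accuracy {accuracy:.1%}")
# sensitivity 91.2%, specificity 89.8%, accuracy 90.5%
```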

Development of novel methodologies for the optimization of production processes of biopharmaceuticals


Process Analytical Technology (PAT) is defined by the FDA as a "system for designing, analyzing and controlling manufacturing through timely measurements of critical quality and performance attributes of raw and in-process materials and processes, with the goal of ensuring final product quality". The goal of implementing PAT is defined therein as enhancing the understanding and control of a production process. This broad definition encompasses the testing of raw material for batch consistency as well as online sensors that provide feedback for process control. In this context, rapid methods placed at-line, i.e. close to the process, can be considered PAT applications, since they both contribute to the understanding of how individual steps impact product quality and accelerate and facilitate process development and optimization decisions. There are many tools available that enable process understanding for scientific pharmaceutical development; these tools can provide effective and efficient means for acquiring information to facilitate process understanding and continuous improvement. From a physical, chemical and biological perspective, pharmaceutical products and processes are complex multi-factorial systems. Methodical experiments based on multivariate statistical principles provide useful means for identifying and studying the effects and interactions of product and process variables; traditional one-factor-at-a-time experiments cannot address these kinds of interactions. These tools enable the identification and evaluation of product and process variables that may be critical to product quality and performance.
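
A minimal sketch of the contrast between multivariate DoE and one-factor-at-a-time (OFAT) experimentation, with invented factor names:

```python
# A 2^3 full factorial design enumerates every factor combination, which is
# what lets multivariate DoE estimate interactions that OFAT cannot see.
from itertools import product

factors = {"temperature": (-1, +1), "pH": (-1, +1), "feed_rate": (-1, +1)}
design = list(product(*factors.values()))   # 8 runs cover all combinations
for run in design:
    print(dict(zip(factors, run)))
# OFAT would vary one factor at a time around a baseline (4 runs here) and
# could never estimate, e.g., the temperature x pH interaction.
```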

Female management and academic spin-off: role of the financial context and effect on finance and performance


Surette 1998; Barsky et al. 1997; Powell and Ansic 1997), or in terms of corporate strategy formulation (Adams and Ferreira, 2009; Ahern and Dittmar, 2012; Weber and Zulehner, 2010). Such higher risk aversion of women has an impact on two fundamental aspects. First, firms managed by women CEOs are characterized by less risky choices, profits and cash flows with a lower level of volatility, and a higher chance of survival than male CEO-led companies (Faccio et al., 2016; Arano et al., 2010; Bernasek and Shwiff, 2001; Booth and Nolen, 2012; Borghans et al.; Croson and Gneezy, 2009). In addition, risk aversion can even lead women to "leave money on the table" by giving up risky projects, even when they show a positive NPV, with a reduction of efficiency in the capital allocation process due to underinvestment problems (Faccio et al., 2016). Huang and Kisgen (2013, p. 18) also found that women with executive roles within firms are more conservative than men, stating that "I also confirm that women are less likely to make acquisitions than their matched male CFOs" and again "..women are less likely to make acquisitions, and women are less likely to issue debt". Therefore, managers and especially female CEOs tend to be more risk-averse, showing a willingness to avoid investment and funding opportunities that present a higher risk profile but a higher expected value. As a result, women are supposed to have a propensity to adopt prudential and more conservative behaviors. It is such risk-avoidance behavior that can lead to distortions in the investment policies of the company (Faccio et al., 2016). Not only are women less risk-prone, they also tend to be less overconfident and to behave less competitively: as suggested by Beckmann and Menkhoff (2008), women, compared to men, prefer to shy away from competition. If gender diversity influences risk propensity and overconfidence, it is likely that this different approach will be reflected in cash policy decisions, since cash is a liquid asset that allows for managerial discretionary spending (Jensen, 1986; Opler et al., 1999; Harford et al., 2008).

Development and optimization of analytical protocols based on microextraction techniques for clinical screening and environmental control


In in-tube SPME and in-tip SPME, the diffusion of the analytes is mediated by flow-through. In-tube solid phase microextraction (IT-SPME) was introduced in 1997 by Eisert and Pawliszyn [136]. IT-SPME is a sample preparation technique that involves an open tubular capillary column as the SPME device. Compared with the conventional SPME fiber, this SPME approach is a fully automated analytical technique that provides higher analytical efficiency; IT-SPME also overcomes some problems related to the use of the conventional fiber SPME, such as fragility. In IT-SPME, organic compounds in aqueous samples are extracted from the sample into the internally coated stationary phase of a capillary. The extracted compounds are then desorbed by introducing a stream of the mobile phase, or by using a static desorption solvent, and the desorbed compounds are injected into the LC column for analysis. The principal advantage of IT-SPME is the automation of the SPME/HPLC process, allowing extraction, desorption and injection to be performed continuously using an autosampler. IT-SPME can be used with all commercial GC columns, thus increasing the number of available stationary phases. This technique requires lower sample volumes and is versatile thanks to a wide range of available coatings [137, 138]. The main disadvantage of IT-SPME is that samples must be very clean, because the capillary can become blocked. In-tip SPME is another recent SPME approach. This technique uses a procedure similar to MEPS, in which a solid packing material is inserted into pipette tips and sample preparation takes place on the packed bed. This allows simple and fast use and a lower cost per sample. The extraction is done off-line and only part of the sample is injected into the chromatograph, so the sensitivity is not as high as with online MEPS. The relevant disadvantage of in-tip SPME extraction is the requirement of sample pretreatment, such as filtration or dilution of complex matrices [139, 140]. Silica and monolithic particles with relatively large through-pores are used as sorbents, and the number and properties of available stationary phases are growing.
