
A LONG WAY FOR SUCCESS: DRUG DEVELOPMENT

The discovery and development of a new drug is a complex process, full of hidden obstacles. In order to respond to the great variety of therapeutic needs and meet national and international regulatory requirements, these processes, which themselves demand an integrated multidisciplinary approach, must be conducted with absolute scientific rigor. In fact, the discovery and development of a new drug can take several years of work (between 7 and 15), and the average cost is very high (experts estimate this can be around 800 million €) (fig. 1).

Figure 1. Diagram of costs for drug development.


The basic principles of the discovery and development of a drug may be summarized under two headings: discovery and development.

Discovery

The discovery stage can last up to 6 years and experts estimate its average cost to be 335 million €.

The proteins produced by transcription of our genes ensure that our body’s main biological functions are carried out. A faulty gene or protein is often what lies behind a disease. To treat a particular disease, it is first necessary to identify the biological targets (i.e., a protein or other possible targets) known to be involved in this disease’s etiology, and then discover the compound, or compounds, that have an effective and specific therapeutic capability and a minimum number of side-effects. This stage is composed of different phases: target identification/validation, lead identification, lead optimization and preclinical development. For various reasons (inactivity or weak activity, toxicity), a large number of new molecules never reach the development stage. Specialists in the biological sciences and pharmaceutical chemistry work in close collaboration throughout the entire process of drug discovery.

Development and clinical trials

The development stage of the drug can take as long as a decade, at an average cost estimated at 467 million €.

Once the compound, or compounds, have been chosen, they must be transformed into a drug. This process involves several series of trials on animals and humans, all intended to ensure that the drug may be administered to humans with minimum possible risk and that it is superior to, or otherwise complements, existing drugs with the same therapeutic function. These trials are subject to the rigorous controls required by the regulatory authorities, such as the US Food and Drug Administration (FDA) or the European Medicines Agency (EMEA). In addition to specialists in biology and therapeutic chemistry, the discovery of a new drug involves the collaboration of pharmaceutical R&D specialists and clinical research teams, composed of doctors, nurses and other health specialists.

As can be observed, before arriving in a pharmacy for sale, drugs must be analyzed under different “magnifying glasses” to investigate every aspect (activity, efficacy, toxicity, dispensing method), and each kind of analysis requires time and economic resources. The next section describes the different stages of drug development.

DRUG DEVELOPMENT

The road to success in the field of pharmaceutical and pharmacological research is long, bumpy and full of afterthoughts. For this reason it is important to have an idea of the work to be done and of the difficulties that may be met along the way. To understand this better, this section analyzes the different work phases of drug development, each with a short description. They are (fig. 2):

DISCOVERY

• Target identification/validation
• Lead identification
• Lead optimization
• Preclinical development

CLINICAL TRIALS

• Phase I
• Phase II
• Phase III
• Phase IV


Figure 2. Description of phases for drug development.

TARGET IDENTIFICATION / VALIDATION

For the pharmaceutical industry, the discovery of a new drug presents an enormous scientific challenge, and consists essentially in the identification of new molecules or compounds. The identification of therapeutic targets requires knowledge of a disease’s etiology and the biological systems associated with it. Molecular biology has revolutionized the process of drug discovery. Today, the collective contribution of genomics, proteomics and bioinformatics allows for the much more rapid and precise discovery of those genes and/or proteins involved in the etiology of certain diseases.

The duration of this stage varies from several months to several years. In order to ensure the successful development of new drugs, the pharmaceutical industry requires considerable scientific and financial resources, and must form strategic alliances with industrial partners, the university research community and companies conducting research under contract.



LEAD IDENTIFICATION

Once the therapeutic target has been identified, scientists must then find one or more leads (e.g., chemical compounds or molecules) that interact with the therapeutic target so as to induce the desired therapeutic effect. In order to discover the compounds whose pharmacological properties are likely to have the required therapeutic effects, researchers must test a large variety of them on one or more targets. The pharmaceutical companies possess veritable libraries of synthetic or natural compounds, ready to be tested. To test the chosen compounds in large numbers, scientists use an entirely automated process known as high-throughput screening. In general, of the thousands of compounds tested, barely 1% will qualify for further and more probing analysis. Specialists in process chemistry synthesize them in order to produce quantities sufficient to meet R&D needs. First of all, biologists ensure that the chosen compounds have the desired therapeutic effect on the target. Then, they test the compounds’ relative toxicity or, in the case of a vaccine, their viral activity, using in vitro cellular and/or tissue systems. Finally, they check their bioavailability in vivo on animals. The compounds that demonstrate an ability to act specifically and selectively on the therapeutic target, are well absorbed and show minimal toxic effects are patented (the whole process takes between 4 and 6 months). They become candidates for subsequent transformation into drugs.

LEAD OPTIMIZATION

The purpose of this stage is to optimize the molecules or compounds that demonstrate the potential to be transformed into drugs. To optimize these molecules, scientists use very advanced techniques. For example, using X-ray crystallography and in silico (computer) modeling, they study how the selected molecules link themselves to the therapeutic target, for example, a protein or an enzyme. These data allow the medicinal chemists to modify the structure of the selected molecules or compounds, if necessary, by screening, thereby creating structural analogues. This phase requires close collaboration between the biologists and chemists, who form a feedback loop.

This optimization stage aims at developing new substances that are more effective than known compounds. These new substances are then subjected to a specific evaluation involving broader biological tests. In fact, the probability that a chemical substance will become a drug depends greatly on its pharmacokinetic and pharmacodynamic (PK/PD) evaluation. These studies are conducted both on in vitro systems, for example, on human intestinal and hepatic cells, and on in vivo systems such as mice and/or rats.

This discovery phase, with a duration of 4 to 6 months, concludes with between ten and twenty substances with promising biological and chemical properties being labeled “candidate drugs”; these will be tested on animals in the preclinical development stage.

PRECLINICAL TRIALS

The development potential of a candidate molecule depends essentially on its capacity to be administered to humans and show therapeutic effectiveness, with an acceptable level of side-effects. Before testing candidate molecules on humans in clinical trials, scientists must show that the candidates do not present an unacceptable level of risk, given the expected therapeutic benefit. The protocols for clinical trials must be subjected to strict monitoring and evaluation. Chemists, biochemists, pharmacologists, toxicologists and histologists continue to evaluate the pharmacokinetic, pharmacodynamic and toxicological properties of the compound in vitro and in vivo (on animals). Of the ten to twenty compounds from lead optimization, after preclinical trials (with a minimum duration of 4 to 6 months) only between one and five will have the characteristics required for phase I clinical studies. Depending on the results obtained in the preceding experiments, chemists and pharmacists develop different dosages and pharmaceutical formulations, taking particular account of the physicochemical and metabolic properties of the molecule and the biomedical characteristics of the targeted therapeutic application.

PHASE I

The aim of the first clinical trials (phase I) on humans is to help evaluate and understand the behavior of the molecule or compound on healthy subjects. These trials are carried out using a small number of healthy subjects. Clinical researchers and pharmacologists aim to establish the pharmacodynamic and pharmacokinetic profile of the compound for the first time on humans. Thus, they acquire deeper knowledge of several parameters, including, for example:

• the impact of the molecule on the organism (metabolism, bone tissue, blood, liver, brain activity, etc.) and various secondary effects;
• the differing reactions to the drug of men and women;
• the interaction between food and the absorption of the drug.

Biomarkers and genetic tests are increasingly used to determine the clinical effectiveness and safety of a drug. This stage (around 18 months) allows the retention of only those compounds with a promising pharmacokinetic profile and little or no side-effects. These tests are carried out on the most promising compounds (between one and five). Generally, between 1 and 3 molecules will be selected for phase II.

PHASE II

The phase II clinical trials aim to assess the safety and degree of effectiveness of those (1 to 3) promising compounds that have successfully completed the previous stages. This phase lasts between 12 and 24 months. If the response of patients is not in conformity with the study’s objectives, the trials are immediately stopped. This phase allows:

• study of the side-effects and risks associated with short-term use;
• analysis of the compound’s impact on metabolism;
• optimization of the dosage and treatment duration.

These tests are documented in compliance with the requirements of the regulatory authorities. At the end of these trials, after 12 to 24 months, 1 or 2 molecules are retained for phase III clinical testing.

PHASE III

Clinical trials in phase III have several objectives, for example:

• proving the therapeutic effectiveness of the compound under realistic usage conditions;
• demonstrating that the compound is as effective as existing drugs or forms of treatment, if not more so;
• rigorously comparing the patient groups treated with the candidate drug with the control groups to whom a placebo has been given;
• adjusting the dosage, according to the age of patients, the pathologies from which they are suffering and the other drugs they are receiving;
• testing different methods of administering the drug (tablet, syrup, inhaler, etc.).

These trials, which are conducted over several years, along with other tests on rodents, allow for even more precise understanding of the drug’s safety profile, the occurrence of undesirable side-effects, and long-term toxicity. The pharmaceutical company prepares and then submits an application to present a drug to the regulatory authorities, who examine it and decide if the drug can be sold in the country under their jurisdiction. The examination process of these authorities can take more than a year. Chemists, engineer-chemists and process chemistry specialists apply their expertise to the production of the drug in very large quantities. As soon as the drug is approved, large scale production is launched and the product commercialized.

PHASE IV

Once a drug is approved and made commercially available, research on the drug enters phase IV. During this phase, research is mainly aimed at:

• spotting possible undesirable side-effects associated with long-term use by a large number of patients;
• extending use of the drug to different classes of patients, such as children, for example;
• finding new therapeutic opportunities or new formulations that may increase the effectiveness of the drug and allow it to be administered to a larger number of patients;
• demonstrating that the benefits of treatment justify the reimbursement of the cost of the drug by public and private health insurance programs.

These studies are in part required by the regulatory authorities.
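Taken together, the stage durations and attrition figures quoted in this section describe a steep funnel. The sketch below (Python) simply tallies them; the starting pool of 10,000 screened compounds, and the use of midpoints of the quoted ranges, are illustrative assumptions rather than figures from the text.

```python
# Sketch of the drug-development funnel using the figures quoted in this
# section. The initial pool of 10,000 screened compounds is an assumed,
# illustrative number; durations use midpoints of the quoted ranges.
stages = [
    # (name, duration in months, survivors entering the next stage)
    ("High-throughput screening", 5,  100),   # ~1% of the pool qualify
    ("Lead optimization",         5,   15),   # 10-20 "candidate drugs"
    ("Preclinical trials",        5,    3),   # 1-5 reach phase I
    ("Phase I",                  18,    2),   # 1-3 selected for phase II
    ("Phase II",                 18,    1),   # 1-2 retained for phase III
    ("Phase III",                48,    1),   # several years; 1 approved
]

pool, elapsed = 10_000, 0
for name, months, survivors in stages:
    elapsed += months
    print(f"{name:28s} month {elapsed:3d}: {survivors} of {pool} go on")
    pool = survivors

print(f"Overall survival: ~{1 / 10_000:.2%} of screened compounds")
```

Even with these rough numbers, the cumulative timeline (about 99 months, before a regulatory review that can itself exceed a year) approaches the 7 to 15 years quoted at the beginning of the chapter.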


BIOREACTORS AND DRUG DEVELOPMENT

As described above, drug development is a long and expensive process composed of different stages. A large part of the work consists of testing new potential drugs on animals to elucidate both their mechanisms of action and their specific targets, as well as their possible side effects and toxicity. Unfortunately, animal tests present two specific problems of different nature: one ethical and one scientific. While the ethical issues easily attract great attention, the scientific aspects are more important because they involve the real heart of the problem: the applicability of results obtained in animals and their transfer to humans. A large evolutionary distance exists between these two organisms, and a reduced capacity to apply this knowledge to human biology has often been demonstrated. For this reason it is increasingly important to offer research labs an alternative system for drug experimentation, or at least a complementary one, in which the human element is at the centre of every stage of study. A first step forward was the development, over the last 30 years, of standardized cell cultures, which consistently aided the comprehension of the biological and physiological mechanisms at the base of cell life. In fact, the possibility of selecting and culturing a specific cell type, purified from other non-essential tissues, can “clear” the response to specific stimuli of undesired interferences. Connected to the use of cell culture was the development of a whole set of new investigation techniques specific to this sector, such as molecular biology, proteomics and genomics. For many years this approach was useful for understanding biological processes but, as time went on, and with new knowledge and technological breakthroughs, it became imperative to consider the relationships between cells and the biological environment in which they live. It became necessary to develop the classic concept of cell culture further. Our aim, therefore, is to furnish new biotechnological tools able to recreate some physiological conditions.

We have focused our attention on three particular aspects of human physiology:


1. the effect of flow and, in particular, of shear stress on endothelial cells, which can modulate vascular responses during constriction and dilation;

2. gradient concentration of substances, which is fundamental in neural development, angiogenesis and morphogenesis;

3. allometric relationships between metabolic rate and body mass, to reproduce a metabolic human-like system.

For each one of these aspects we have designed and developed a specific bioreactor:

1. Laminar Flow Bioreactor, useful to recreate a controlled shear stress system;

2. Array Bioreactor, able to create different sets of concentrations on the same cell type;

3. Multi-Compartmental Bioreactor, designed with the help of allometric laws to simulate physiological metabolic conditions.

Each of these tools will be described in detail in the following chapters. But first, I begin with an introductory state of the art on similar bioreactors developed by other groups.

BIOREACTORS: STATE OF THE ART

Shear stress

Shear stress is one of the major physical forces acting in every organism. It operates fundamentally on endothelial cells, but when a bioreactor has to be designed the cells involved can be of different types, and bioengineers must take this particular aspect into account.

In fact, cells in most bioreactor systems are subjected to fluid-mechanical shear stress through the processes of agitation and aeration. For the rational design of reactor systems and scale-up strategies, an understanding of the effect of shear stress on cellular viability, metabolism, and recombinant protein productivity is required. It has been well established that sparging can damage cells by subjecting them to high shear forces at the air/liquid interface [1-7]. Shear stress has been found to have a range of effects depending on the cell type and the severity and duration of the exposure. At high levels of shear the viability and growth of cells can be affected by the shear environment [8-11]. Increasing the amount of stress or exposure time to hybridomas was found to increase damage and death rate of the cells [12,13] and increased cell detachment [14]. Both the growth stage of CRL-8018 hybridoma cells [13] and the duration and magnitude of exposure can influence the effect of shear stress on the cell. Dividing baby hamster kidney (BHK) cells were found to require more time for cell spreading compared with static controls when subjected to shear stress [15]. Murine hybridoma (TB/C3), insect cells (Sf9), and CHO cells in cell-cycle stages S1 and G2 were found to be more susceptible to shear stress than those cells in G1 phase when exposed to intense hydrodynamic forces in a turbulent flow capillary tube and in separate experiments by controlled agitation and aeration [16]. Under a shear stress severe enough to cause gas entrainment, cells were shown to lose the ability to maintain ion gradients, and passive transport increases, leading to loss of cell viability [17]. Prediction of cellular damage due to shear stress is difficult due to cell-line variations and unknown mechanisms for cell responses to shear stress.

Micromanipulation techniques have been used to measure the fragility of hybridomas to shear stress by measuring the membrane-bursting tension to predict the disruption of cells by laminar shear stress [18]. Losses of cells could be predicted within a maximum error of 30%, although this was only for short exposure periods (180s).

Another approach to predict damage to cells from shear was to identify a critical shear stress level, which, when exceeded, caused cellular viability to decrease rapidly. In reactor and viscometer studies [19], insect cell viability decreased markedly when exposed to 1 Nm−2 for more than 1 h. These studies calculated critical shear stress levels from cellular viability data, not metabolic characteristics, and the cells in these studies were cultured in media containing fetal bovine serum. The ability of the cell to respond quickly to the shear environment has been demonstrated by a number of studies. Gudi et al. [20] measured the activation of GTP-binding proteins in endothelial cells within seconds of the onset of flow at a shear stress of 1.0 Nm−2. Enhanced arachidonic acid metabolism and altered protein synthesis were detected in endothelial cells by Nollert at 2.5 Nm−2 [21]. The effect of sublytic levels of shear stress on the metabolism of various cell types was established by examining the induction of the transcriptional activator c-fos. In a range of cell types of human and animal origin, Ranjan [22] examined the induction of c-fos by shear stress. Exposed to a shear stress of 2.5 Nm−2 for 1 h, a consistent response in the cell types HUVEC, HeLa, BAEC, and Chinese hamster ovary (CHO) was found to occur within minutes after the onset of flow. The minimum exposure time required for shear stress to induce c-fos protein expression in HeLa cells was 1 min. This demonstrated the rapid response of cells to shear stimuli, and the potential for cell metabolism to be altered in response to shear stress. The mechanism of transduction of the mechanical stress stimuli to an intracellular response, including secondary messenger generation, protein activation, and modulation of gene expression, is not fully understood. The involvement of intracellular Ca2+ in a membrane event linked to the activation of the phospholipase C pathway was proposed as a possible mechanism for signal transduction [23,24].

Intracellular Ca2+ levels were not observed to increase during shear experiments with endothelial cells and arachidonic acid uptake [21], suggesting that other secondary messengers, such as inositol-(1,4,5)-trisphosphate (IP3) or other intracellular ions (e.g., H+), may be involved in signal transduction. A specific GTP-binding protein (G protein) has been identified as being activated by shear stress within 1 s of onset of flow in primary human umbilical vein endothelial cells at a shear stress of 1.0 Nm−2 [20]. This represented one of the earliest signal transduction events detected due to onset of shear. This study identified the G protein family as an important messenger in the shear stress response. The relationship between protein production by the cell and shear stress has been examined for various cell types, including endothelial cells.

Prostacyclin is a major metabolite produced by human umbilical vein endothelial cells, and the effect on its production was determined while the cells were subjected to a steady shear stress of 2.4 Nm−2 [25]. The onset of flow stimulated prostacyclin production linearly with increasing shear stress. Shear stress can thus cause many different effects on many cell types when they are placed in a bioreactor. In our work we have focused our attention on the effects of shear stress on endothelial cells, in order to develop a new tool specifically for this issue.
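The Laminar Flow Bioreactor itself is described in a later chapter; as a point of reference, wall shear stress in a rectangular flow channel is commonly estimated with the parallel-plate formula τ = 6μQ/(wh²). The sketch below (Python) applies it; the channel dimensions, flow rate and medium viscosity are illustrative assumptions, not the specifications of our device.

```python
# Wall shear stress in a parallel-plate flow chamber (a common laminar flow
# configuration). All numbers below are illustrative assumptions, not the
# dimensions of the Laminar Flow Bioreactor described in later chapters.
def wall_shear_stress(flow_rate_m3s: float, width_m: float, height_m: float,
                      viscosity_pa_s: float = 0.78e-3) -> float:
    """tau = 6*mu*Q / (w*h^2), valid for fully developed laminar flow
    between plates with h << w. Default viscosity is an assumed value for
    culture medium at 37 C, close to that of water."""
    return 6 * viscosity_pa_s * flow_rate_m3s / (width_m * height_m**2)

# Example: 10 mL/min through a 2 cm wide, 250 um high channel
q = 10e-6 / 60                      # flow rate in m^3/s
tau = wall_shear_stress(q, 0.02, 250e-6)
print(f"wall shear stress = {tau:.2f} Pa ({tau * 10:.1f} dyn/cm^2)")
```

With these numbers the monolayer sees about 0.6 Pa (6 dyn/cm²), of the same order as the 0.05 to 2.5 Pa values cited in the studies above.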

Effects of shear stress on endothelial cells

Flow shear stress is an important factor in endothelial-mediated regulation of the cardiovascular system. This includes the regulation of blood perfusion, coagulation and anticoagulation balance, and the exchange of macromolecules and water between the intravascular and the extra-vascular spaces. Shear stress modulates the endothelial function in part by regulation of gene expression, including vasoactive substances (nitric oxide, endothelin-1), growth factors, adhesion molecules, chemotactic molecules, coagulation factors and proto-oncogenes [26]. It is important to note that minor changes in fluid flow on the order of 0.05 to 0.1 Pa (0.5 to 1 dyne/cm2) can have a significant influence on biological processes such as transmigration of leukocytes and tumor cells [27]. Another important factor is flow type, which varies at different locations throughout the cardiovascular system. Flow separation zones generally occur at branches and bends, where atherosclerotic lesions are typically found [28]. In most regions of the artery, however, unidirectional laminar flow is encountered. Increased levels of laminar flow cause flow-dependent cell alignment that is associated with reorganization of cytoskeleton proteins, cell–cell junction proteins, and focal contacts [29-37]. It has also been found that unidirectional laminar flow protects endothelial cells from apoptosis even at low shear-stress levels [38,39], and stabilizes the endothelial barrier function [40]. In biomedical research, the cone-and-plate configuration bioreactor is frequently used to recreate a shear stress condition. It appears to have been originally introduced by Mooney and Waters [41]. Pelech and Shapiro [42] analyzed a related problem of a rotating flexible disk and a fixed plate.

(15)

15

They found that the ratio between centrifugal and friction forces is the parameter that governs secondary flow. The configuration of the device consists of a fixed plate and a rotating cone (fig. 3).

Figure 3. Setup of the rheological in vitro system. Rheometer with optical (A). Schematic diagram of the cone-plate configuration with cell culture and flow domain (B).

For a sufficiently low Reynolds number1, rotation of the cone generates a stable three-dimensional laminar flow. The resulting wall shear-stress distribution acts on the endothelial cell culture mounted on the plate. Several experimental and mathematical attempts have been made to investigate the flow between the cone and plate for small angles, α (α << 1). Fewell and Hellums [43] computed a numerical solution. More recently, this type of flow was investigated, both analytically and experimentally, by Sdougos [44] and Grad and Einav [45].

1 In fluid mechanics, the Reynolds number is the ratio of inertial forces (vsρ) to viscous forces (μ/L) and is used to determine whether a flow will be laminar or turbulent. It is the most important dimensionless number in fluid dynamics and provides a criterion for determining dynamic similitude. Laminar flow occurs at low Reynolds numbers (Re < 2100), where viscous forces are dominant, and is characterized by smooth, constant fluid motion, while turbulent flow, on the other hand, occurs at high Reynolds numbers (Re > 4000) and is dominated by inertial forces, producing random eddies, vortices and other flow fluctuations. The transition between laminar and turbulent flow is often indicated by a critical Reynolds number (Recrit), which depends on the exact flow configuration and must be determined experimentally.
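For the cone-and-plate geometry just described, a small cone angle gives a radius-independent shear rate ω/α, so the whole monolayer experiences a single, uniform wall shear stress τ = μω/α. A minimal sketch (Python); all device values are illustrative assumptions, not those of a specific apparatus:

```python
import math

# Cone-and-plate device: for a small cone angle alpha, the shear rate
# omega/alpha is the same at every radius, so the cell monolayer on the
# plate sees one uniform wall shear stress tau = mu * omega / alpha.
mu    = 0.78e-3             # medium viscosity [Pa s] (assumed, ~water at 37 C)
rho   = 1000.0              # medium density [kg/m^3]
alpha = math.radians(1.0)   # cone angle [rad]
rpm   = 60.0                # cone rotation speed
omega = rpm * 2 * math.pi / 60.0    # angular velocity [rad/s]

tau = mu * omega / alpha
print(f"uniform wall shear stress: {tau:.2f} Pa")

# Rough laminar-flow check with the generic Reynolds number of footnote 1,
# Re = rho*v*L/mu, taking the rim speed v = omega*r and the local gap
# L = alpha*r as the length scale.
r  = 0.02                   # cone radius [m]
re = rho * (omega * r) * (alpha * r) / mu
print(f"Re ~ {re:.0f} -> {'laminar' if re < 2100 else 'check secondary flow'}")
```

Raising the rotation speed, or using a smaller cone angle, raises τ proportionally, while the Reynolds estimate indicates how much margin remains before secondary flow becomes a concern.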


A report by Nagel [46] provided the first evidence that physiologically relevant levels of laminar shear stress can differentially regulate the expression of endothelial-leukocyte adhesion molecules. This selective up-regulation of ICAM-1, in contrast to the coordinate induction of ICAM-1, VCAM-1, and E-selectin by soluble mediators, suggests that hemodynamic forces, in addition to humoral stimuli, may play a significant role in vivo in patho-physiological conditions such as inflammation and atherosclerosis. Using a modification of the cone-plate apparatus, this group has also demonstrated [47] the topographical patterns of activation of various transcription factors (NF-kB, Egr-1, c-Jun, and c-Fos) in a specially designed in vitro disturbed laminar shear stress model, which incorporates regions of significant spatial shear stress gradients similar to those found in atherosclerosis-prone arterial geometries in vivo (e.g., arterial bifurcations, curvatures, ostial openings); they demonstrated that endothelial cells subjected to disturbed laminar shear stress exhibit increased levels of nuclear localized NF-kB, Egr-1, c-Jun, and c-Fos, compared with cells exposed to uniform laminar shear stress or maintained under static conditions.

Gradient concentration

The development of a gradient concentration is closely connected to “Microfluidics”, the science and technology of manipulating fluids in networks of channels with dimensions of 5–500 µm. Microfluidics has several features that have attracted users in biology, chemistry, engineering and medicine. It requires only small volumes of samples and reagents, produces little waste, offers short reaction and analysis times, is relatively cheap, and has reduced dimensions compared with other analytical devices.

In addition, microfluidics offers structures with length scales that are comparable to the intrinsic dimensions of prokaryotic and eukaryotic cells, collections of cells, organelles, and the length scale of diffusion of oxygen and carbon dioxide in tissues (every cell, in vivo, is no more than 100 µm from a capillary). These characteristics make microfluidics particularly useful in studying biology and biomedicine [48]. The number of applications of microfluidics in biology, analytical biochemistry, and chemistry has grown as a range of new components and techniques have been developed and implemented for introducing, mixing, pumping, and storing fluids in microfluidic channels. Despite rapid progress in this area, there remain several unsolved problems: preparing and introducing samples; interfacing microfluidic channels with the human hand; working with a range of sample volumes (e.g. Vacutainers, a drop of blood, a biopsy sample, a single cell); and portability. In this specific situation, manipulation of laminar streams of fluids (fig. 4) in microfluidic channels makes it possible to create gradients of almost arbitrary complexity of small molecules, growth factors and other proteins in solution and on surfaces.

Figure 4. Laminar streams of solutions of dye (in water) flowing in a microfluidic channel. The fluid is flowing from the six channels on the left into the central channel on the right where flow is laminar.

Methods that use a common attachment scheme based on biotin and avidin are particularly well developed [49]. Microfluidics enables the formation of gradients that cannot be generated using other techniques, including very steep gradients of concentration that extend over several orders of magnitude, gradients with complicated profiles, and gradients of gradients [49-54] (fig. 5A,B).
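The splitting-and-recombining network of fig. 5A can be modelled in a few lines. The sketch below (Python) assumes, for simplicity, that each internal outlet carries a 1:1 mixture of its two parent streams; in a real device the channel flow resistances set the exact mixing ratios, which is how a linear profile such as that of fig. 5B is obtained.

```python
# Idealized model of the microdiluter ("Christmas tree") network of fig. 5:
# at each row of nodes, n streams split and recombine into n+1 streams.
# Here each new internal stream is a 1:1 mix of its two parents; the real
# device's channel resistances set the exact ratios.
def next_row(conc):
    return [conc[0]] + [(a + b) / 2 for a, b in zip(conc, conc[1:])] + [conc[-1]]

row = [0.0, 1.0]          # two inlets: buffer, and dye at unit concentration
for _ in range(4):        # four splitting stages -> six outlet streams
    row = next_row(row)

print([round(c, 3) for c in row])
# -> [0.0, 0.062, 0.312, 0.688, 0.938, 1.0]: a monotonic gradient; weighted
#    splitting in the physical device straightens this into a linear profile.
```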


Figure 5. A microdiluter system in which two fluids are repeatedly split at a series of nodes, combined with neighboring streams, and mixed. At the end of the network of channels, the streams of fluid carrying different concentrations of green and red dye are combined and produce a gradient (A). A linear gradient of fluorescein produced using the microdiluter shown in (A) (B).

Several groups have studied the behavior and differentiation of cells in response to chemical signals in channels. One of the earliest examples of interactions between cells and chemical gradients was by Jeon et al., who studied the chemotaxis of human neutrophils on a gradient of interleukin-8 in a microfluidic channel [55]. Other groups have studied the migration and behavior of neutrophils on gradients of proteins [51,56]. Pihl and co-workers used a microfluidic system to profile molecules for pharmacological activity [54]. The authors created an integrated device in which gradients of drugs were produced and their activity against voltage-gated K+ ion channels was measured (at different concentrations) by patch clamping individual CHO cells exposed to different regions of the gradient. Chung et al. created gradients of growth factors to study the growth and differentiation of human neural stem cells [57].

Gunawan and co-workers studied the migration and polarity of rat intestinal cells on gradients of extracellular matrix proteins in microfluidic channels [58]. Laminar flow can also be used to create gradients in temperature. Lucchetta created a thermal gradient in a microfluidic channel whose dimensions were designed to study the effect of temperature on the development of Drosophila melanogaster embryos by exposing different parts of the same embryo to fluids at two different temperatures [59].


Metabolic system

Cellular function in the native environment is influenced by a variety of biochemical factors, which interact to create complex signaling mechanisms. This complex set of biochemical and mechanical signals that regulate cellular metabolic function in vivo is incompletely understood. A systematic examination of how changes in environmental conditions may lead to phenotype shifts will improve understanding of cellular biology and pathology, and would be of benefit in developing appropriately targeted therapies. Recreating this level of control in vitro is challenging, and requires the recreation and characterization of models that simulate in vivo circuitry.

Although much work has been done to develop culture models of single tissues, integration of multiple tissue models towards the development of the so-called “human-on-a-chip” is a relatively new concept [60]. In vivo, interactions between multiple organ systems are important for the maintenance of homeostasis, and such interactions can also mediate the toxicity of pharmaceuticals. For this reason metabolism is one of the most important aspects in drug development. Comprehension of its action mechanisms, of its biological pathways, and of the biochemical modifications or degradations it performs can improve knowledge about possible unknown side effects of a drug. In fact, through specialized enzymatic systems, xenobiotics often are converted from lipophilic chemical compounds into more readily excreted polar products. The rate of metabolism is an important determinant of the duration and intensity of the pharmacological action of drugs. Drug transformation can result in toxication or detoxication (the activation or deactivation of the chemical). While both occur, the major metabolites of most drugs are detoxication products, but they also give opportunities for drug-drug and drug-chemical interactions or reactions. Over the years, various mathematical approaches have been developed that try to predict possible biotransformation products, interactions and unpredictable effects, together with the biological systems and tools to use in this particular research field. The two major metabolic approaches to the problem are metabolic scaling, defined in his works by Kleiber in the 1930s and developed by West, and Physiologically-Based Pharmacokinetic (PBPK) models. Both are described in the following paragraphs.

Allometric scaling

In 1932, Kleiber published a paper [61] showing that standard metabolic rates among mammals varied with the three-quarters power of body mass: the so-called "elephant to mouse curve", termed "Kleiber's law". Since that date, this and similar allometric scaling phenomena have been widely and often intensively investigated. These investigations have generated continuing debates. At least three broad issues remain contentious, each compounded on the one hand by the problem of obtaining valid data (in particular, finding procedures by which reliable and reproducible measures of standard metabolic rate can be obtained, especially in poikilotherms2) and on the other by statistical considerations (in particular, the validity of fitting scattered points to a straight line on a semi-logarithmic plot). The first issue is disagreement as to whether any consistent relationship obtains between standard metabolic rate and body mass. Since the 1960s there has been a measure of consensus: a consistent allometric scaling relationship does exist, at least among homoiotherms3. Second, assuming that some version of Kleiber's law (a consistent metabolic scaling relationship) applies to at least some taxa4, there are disagreements about the gradient of the semi-log plot. Numerous discussions have developed around the value of b in the formula

2 Creatures whose internal temperatures vary, often matching the ambient temperature of the immediate environment

3 Organisms maintaining a constant internal temperature, usually above that of their environment.

4 A taxon (plural taxa or taxons), or taxonomic unit, is a grouping of organisms (named or unnamed). Once named, a taxon will usually have a rank and can be placed at a particular level in a hierarchy.


B = aM^b

where B = standard metabolic rate, M = body mass, and a and b are constants. Kleiber and many subsequent investigators claimed that b = 0.75, and on this matter too a measure of consensus has obtained since the 1960s. Once again, however, not all biologists agree. A significant minority of investigators holds that b = 0.67; and other values have been suggested, at least for some organisms. Third, assuming a consistent scaling relationship and an agreed value of b, the question arises whether Kleiber's law is to be interpreted mechanistically, and what its physical or biological basis is. For those who claim that b = 0.67, this issue is simple: standard metabolic rate depends on the organism's surface to volume ratio. But for proponents of the majority view, that b = 0.75, the issue is not simple at all. Relevant data have been reviewed periodically since then [62-73] and recent developments have rekindled interest in the field. Many biological variables other than standard metabolic rate also reportedly fit quarter-power scalings. Examples include lifespans, growth rates, densities of trees in forests, and numbers of species in ecosystems [67]. Some commentators infer that Kleiber's law is, or points to, a universal biological principle, which they have sought to uncover. Different models have been developed to correlate various dimensions (and tissues) to metabolic rate. Patterson [74] showed how water movements and organism size might affect such delivery and hence determine metabolic rate. Using simple geometrical models of organisms (plates, cylinders and spheres), he derived b values ranging from 0.31 to 1.25, more or less consistent with the experimental values.
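To make the stakes of the 0.75-versus-0.67 dispute concrete, the short sketch below (Python) compares the two exponents over the classic mouse-to-elephant range; the species masses are illustrative round numbers, and the coefficient a cancels because rates are reported relative to the mouse.

```python
# Kleiber's law B = a * M**b: compare the two contested exponents over the
# mouse-to-elephant mass range. Masses are illustrative round numbers; the
# coefficient a cancels out because rates are reported relative to the mouse.
masses_kg = {"mouse": 0.03, "rat": 0.3, "dog": 15.0,
             "human": 70.0, "horse": 500.0, "elephant": 5000.0}

m0 = masses_kg["mouse"]
for name, m in masses_kg.items():
    r075 = (m / m0) ** 0.75        # B/B_mouse under b = 0.75
    r067 = (m / m0) ** 0.67        # B/B_mouse under b = 0.67
    print(f"{name:9s} {m:8.2f} kg   B/B_mouse: {r075:9.1f} (b=0.75)"
          f"   {r067:9.1f} (b=0.67)")
```

At the elephant end of the range the two exponents disagree by a factor of about 2.6, which is why the choice of b matters for any interspecies extrapolation.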

Experiments revealed the relative importance of diffusion and mass transfer (convective movement) in the supply of materials. The two main attractions of this model are (1) good agreement with a wide range of data and (2) derivation from basic physical principles without ad hoc biological or other assumptions. Many skeptics claim that the "true" value of b is 0.66 or 0.67 because the principal determinant of metabolic scaling is the surface-to-volume ratio of the organism; hence, assuming constant body density, the surface-to-mass ratio. Heusner [75] reported that b is approximately 0.67 for any single mammalian species and suggested that the interspecies value of 0.75 is a statistical artifact. He argued that metabolic rate data for small and large mammals lie on parallel regression lines, each with a gradient of approximately 0.67 but with different intercepts (i.e. values of a, termed the "specific mass coefficients"). Hayssen and Lacy [76] found b = 0.65 for small mammals and b = 0.86 for large ones, again suggesting that b = 0.75 is a cross-species "average" with no biological significance; but it is questionable whether their data were measurements of standard metabolic rate in all cases. According to Heusner, the ratio B/M^0.67 is a mass-independent measure of standard metabolism. Variations indicate the effects of factors other than body mass. Of course, if B varies as M^0.67, the interesting problem is not the index (b) in the Kleiber equation but the constant relationship between specific mass coefficient (a) and body size. This point was developed by Wieser [77], who distinguished the ontogeny of metabolism, which comprises several phases but follows the surface rule (M^0.67) overall, from the phylogeny of metabolism, which concerns the mass coefficients (a). Another application of allometric laws was McMahon's model of the elasticity expressed by bone and muscle. For the former, assuming that a vertical column displaced by a sufficiently large lateral force buckles elastically, McMahon [78] applied this reasoning to bone dimensions in stationary quadrupeds. In a running quadruped the limbs support bending rather than buckling loads, but the vertebral column receives an end thrust that generates a buckling load. It follows that all bone proportions change in the same way with animal size. McMahon also applied this argument to muscles [78].

The Economos model tries instead to correlate metabolic rate with ground gravitation: an increased gravitational field increases energy metabolism in animals [79,80]. This model [81] is difficult to assess: it is not clear why the two proposed factors, surface area dependence and gravitational loading, should combine for all animals (and other taxa) in just the right proportions to generate a 0.75-power dependence on body mass. All these models are based on physical dimensions, such as surface, mass and force, correlated with each other. When we want to analyze possible applications of Kleiber's law at the organ, tissue and cell levels, we must also consider other factors. Standard metabolic rate (B) is usually measured as oxygen consumption rate, which correlates with nutrient utilization [66,73] and rates of excretion of nitrogenous and other wastes [61]; so research in the field has been dominated by respiratory studies. Stahl [82] described the scaling of cardiovascular and hematological data. Standard metabolic rate has two main components: service functions, e.g. the operation of heart and lungs; and cellular maintenance functions, e.g. protein and nucleic acid turnover. Krebs [83] elucidated this second component by studying tissue slices; his investigation has since been extended. Oxygen consumption per kg decreases with increasing M in all tissues, but tissues do not all scale identically. Horse brain and kidney have half the oxygen consumption rates of mouse brain and kidney, but the difference between these species in respect of liver, lung and spleen is 4-fold [84-86]. Spaargen [87] suggested that tissues that use little oxygen constitute different percentages of body mass in large and small mammals, leading to a distortion of the surface law (B = M^(2/3)), which would otherwise be valid. More recently, various experiments on the scaling of cell elements [88,89,90] (e.g. numbers of mitochondria per gram of liver, liver mass vs. hepatocyte oxygen consumption, liver mass vs. mitochondrial number per hepatocyte) have shown values for the constant b not so close to 0.75. It is possible to conclude that:

(a) different organs make different contributions to the scaling of whole-organism metabolic rates;

(b) differences at the cellular level make relatively small contributions to scaling at the organ level;

(c) these differences at cellular level might disappear altogether after several generations in culture.


The most striking conclusion is (b). It implies that allometric scaling of metabolic rate does not, after all, for the most part reside in cellular function but at higher levels of physiological organization. One of the first models developed following these basic assumptions was Coulson's flow model. Coulson relates tissue or organ oxygen consumption rates to circulation times, i.e. to the rate of supply of oxygen and nutrients. Coulson's approach contrasts with traditional biochemical measurements: the principal variable is not the concentration of a resource but the supply rate; metabolic activity depends on encounter frequency, not concentration. Obviously, it is within the cell that the reactant molecules are passed over the catalysts. Flow theories are not the only way to explain Kleiber's law. Banavar et al. [91,92] and Dreyer and co-workers [93,94] have shown that the Kleiber relationship can be deduced from the geometries of transport networks, without reference to fluid dynamics. Broadly, these authors argue that as a supply network with local connectivity branches from a single source (in a mammalian circulatory system, the heart is the source), the number of sites supplied by the network increases. Natural selection has optimized the efficiency of supply. According to Banavar et al. [91], deviations from Kleiber's law indicate inefficiency or some physiological compensation process. A multi-cause rather than a single-cause account of allometric scaling was elaborated by Darveau and co-workers [95]. Their "allometric cascade" model holds that each step in the physiological and biochemical pathways involved in ATP biosynthesis and utilization has its own scaling behavior and makes its own contribution (defined by a control coefficient between 0 and 1) to the whole-organism metabolic rate. This idea is inherently plausible, and the model is attractive because it draws upon recent advances in metabolic control analysis in biochemistry [96] and physiology [97]. It emphasizes that standard metabolic rate is determined by energy demand, not supply. However, their data cover only some three orders of magnitude of body mass, whereas many studies have involved much wider ranges. To conclude, several explanatory or quasi-explanatory models have been proposed for the allometric scaling of metabolic rate with body mass. Most of them have significant attractions, particularly the most recent ones, but none of them can be unreservedly accepted. The variability of experimental data leaves room for doubt that Kleiber's law is universally or even widely applicable in biology, yet most workers in the field presume that it is. Even if such doubts are set aside, no model has yet addressed every relevant issue. As can be observed, all these models are only a mathematical fit to data collected from different studies. There is no bioengineering system that "translates" the mathematics into a biological tool able to predict the metabolic destiny of a substance in a more specific manner. To fill this gap, in our lab we have designed and realized the Multi-Compartmental Bioreactor, described in the next section.

Physiologically-based pharmacokinetic (PBPK) models

Pharmacokinetic (PK) models are used to make rational predictions of the concentration of a chemical throughout the body. PK modeling has evolved over the past several decades. In recent years, biologically based models have been developed which apply first principles, such as material balance, and incorporate physiological parameters; they were applied initially to describe the kinetics of therapeutic drugs [98], then extended to environmental chemicals. These models include the physiology and anatomy of the animal species being described, as well as parameters such as blood flow, ventilation rates, metabolic constants, tissue solubility, and binding to macromolecules. Subsequent models have also incorporated the quantitative description of biological events, such as macromolecular binding, glutathione depletion/regeneration, enzyme induction or inhibition, cytotoxicity, mutation, and the effects during stages of gestation and bone development. Collectively, these models are known as physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) models. While PBPK/PD modeling was first extended from pharmaceutical agents to environmental chemicals in 1974 [99], it was not until 1984 that PBPK/PD modeling was popularized, with the publication of the styrene model [100]. In recent years physiological modeling has emerged as a versatile approach to the assessment and characterization of hazardous substances [101]. The most common application of PBPK/PD modeling is in dose extrapolation: from high dose to low dose, from one route of exposure to another, and from laboratory animal to human [102]. In a PBPK/PD model each compartment is treated as an individual organ or tissue, arranged in a precise anatomical configuration and connected by the cardiovascular system. The transfer of chemicals between compartments is governed by actual blood flow rates and tissue solubilities (partition coefficients). Since PBPK/PD models are constructed in conformance with anatomical and physiological reality, they can be used to estimate dose-effect data over a wide range of exposure conditions. If they are validated, they can even be used to predict the outcomes of exposure conditions which have not been experimentally tested. Thus PBPK/PD models can provide a means of extrapolating from the high-dose situation commonly used in laboratory studies to the low-dose condition relevant to environmental exposure. The next area where PBPK/PD modeling has shown some promise is route-to-route extrapolation. Traditionally, route-to-route extrapolation is based on the total administered dose (fig. 6).

Figure 6. Comparison of the traditional approach and the PBPK/PD approach in route-to-route extrapolation. The traditional approach is based on total administered dose. The PBPK/PD approach is based on the time-course profile of tissue dosimetry.

In the absence of empirical data, it assumes complete absorption from all exposure routes. Obviously, extrapolation based on this principle is primitive and, more importantly, quite inaccurate. PBPK/PD modeling, on the other hand, accounts for the rates of uptake and potential first-pass effects from the different routes, and provides a complete time-course profile of the tissue dosimetry. However, it is unlikely that there would ever be a ‘perfect’ route-to-route extrapolation, i.e., an exactly matched concentration time-course profile from 2 different exposure routes. For a chemical which produces a systemic effect, an extrapolation based on target tissue dose integrated over time is certainly more rational and potentially more accurate than one based on the total administered dose. Specific examples include the dermal-to-inhalation extrapolation of organic chemical vapors [103], inhalation-to-oral extrapolation of trichloroethylene [104] and methylene chloride [105], oral-to-dermal extrapolation of ethyl acrylate [106], and oral-to-inhalation extrapolation of chloroform [107]. For ethical reasons, toxicity studies are seldom conducted on humans, but rather on laboratory animals. Risk assessors are therefore faced with having to extrapolate data from laboratory animals. Historically, interspecies extrapolation is based on the principle of body size (fig. 7A).

Figure 7. Comparison of traditional approach and the PBPK/PD approach in species extrapolation. Based on exposure concentration or applied dose (A). Based on delivered or target tissue dose (B).

The administered dose is scaled according to the ratio of body weight or body surface area. Despite its popularity, this form of dosimetric scaling is only marginally accurate for intraspecies extrapolation, and is rarely acceptable for interspecies extrapolation [108]. The apparent unreliability of this approach is in part due to its failure to consider PK differences between species. In the PBPK/PD approach (fig. 7B) interspecies extrapolation is based on the target tissue dose. The target tissue dose takes into account the PK and metabolic differences. Using the PBPK/PD model, the administered dose in the rodent is transformed to a target tissue dose. If PD data are not available, it is assumed that rodent and human tissues have similar sensitivity. The tissue dose is then converted to an administered dose in the human using the human PBPK/PD model. When PD data are included in the model, the extrapolation procedures are still similar. The only difference is that the biologically effective dose, which will have accounted for the difference in tissue sensitivity, would replace the tissue dose as the basis for the extrapolation. PBPK/PD modeling provides a means of estimating the tissue doses of chemicals and their metabolites over a wide range of exposure conditions in different animal species. It can provide a biologically based means of extrapolating from animal results to predict effects in human populations.
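To make the compartmental bookkeeping concrete, the sketch below (Python) integrates a minimal flow-limited PBPK model: each tissue obeys V·dC/dt = Q·(C_arterial − C_tissue/P), venous blood is the flow-weighted mixture of the tissue outflows, and the liver adds saturable (Michaelis-Menten) metabolism. Every parameter value is an invented, illustrative number, not taken from any of the cited models.

```python
# Minimal flow-limited PBPK sketch: blood plus two tissues, with saturable
# (Michaelis-Menten) metabolism in the liver. All parameter values are
# invented, illustrative numbers, not taken from the cited models.
dt, t_end = 0.001, 10.0                            # time step and horizon [h]
V = {"blood": 5.0, "liver": 1.8, "rest": 60.0}     # compartment volumes [L]
Q = {"liver": 90.0, "rest": 260.0}                 # blood flows [L/h]
P = {"liver": 3.0, "rest": 1.5}                    # tissue:blood partition coefficients
vmax, km = 30.0, 0.5                               # liver metabolism [mg/h], [mg/L]

C = {"blood": 100.0 / V["blood"], "liver": 0.0, "rest": 0.0}   # 100 mg IV bolus
t = 0.0
while t < t_end:
    q_tot = sum(Q.values())
    # Venous return: flow-weighted mix of tissue outflow concentrations C_t/P_t.
    venous = sum(Q[k] * C[k] / P[k] for k in Q) / q_tot
    dC_blood = q_tot * (venous - C["blood"]) / V["blood"]
    dC = {k: Q[k] * (C["blood"] - C[k] / P[k]) / V[k] for k in Q}
    dC["liver"] -= vmax * C["liver"] / (km + C["liver"]) / V["liver"]  # metabolism
    C["blood"] += dt * dC_blood
    for k in Q:
        C[k] += dt * dC[k]
    t += dt

print({k: round(v, 3) for k, v in C.items()})      # concentrations at t_end [mg/L]
```

The same structure scales to the full organ set of a real model simply by adding entries to the V, Q and P dictionaries; interspecies extrapolation then amounts to swapping in the other species' physiological parameters.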

These techniques have been applied to the human cancer risk assessment of methylene chloride [109], ethylene dichloride [110], tetrachloroethylene [111], trichloroethylene [112], 1,4-dioxane [113], ethyl acrylate [114], chloroform [115], and dioxin [116]. The various forms of extrapolation are by far the most popular applications of PBPK/PD modeling. However, such models have also been used in other areas of toxicological assessment. One of these is the experimental design of toxicity studies. It is well recognized that PK and metabolism play an important role in regulating the toxicity of many chemical substances. Almost all metabolic and many excretion processes utilize specific enzymes or binding proteins, which have limited capacity and may become saturated at high substrate concentrations. When these processes are saturated, the tissue dosimetry will not be linearly related to the externally administered dose. PBPK/PD analysis of the dose-dependent processes provides an understanding of the relationship between external and internal dosimetrics under various exposure conditions. Recognition of these kinetic behaviors is essential to the proper design of toxicological experimentation. It is particularly relevant for dose selection in traditional cancer bioassays, which emphasize the use of a maximum tolerated dose. A comprehensive description of the influence of saturable processes on the delivery of a chemical to target tissues, such as in a PBPK/PD model, can be very helpful in the correct selection of dosing regimen and test species, as well as the timing of interim sacrifice. In addition to uses for improving toxicity study design, PBPK/PD models have been used to examine chemical interactions. An example of this is the work of Andersen et al. [117] with a dichloroethylene and trichloroethylene mixture. In this approach a PBPK model is constructed for the 2 chemicals individually (fig. 8).

Figure 8. Application of PBPK/PD modelling to evaluate metabolic interactions in chemical mixtures.

The 2 models are linked via a mass-balanced differential rate equation for the liver compartment that has been generalized to account for the various mechanisms of metabolic interaction. The PBPK models are then tested by optimizing the fit to a series of uptake data in a closed-chamber inhalation exposure to the chemical mixture.

These kinetic analyses, coupled with in vitro interaction data, can help to delineate the correct mechanistic possibility. Other examples of analyses of metabolic interaction utilizing PBPK/PD modeling include the binary mixtures of hexane with methyl-n-butyl ketone or 2,5-hexanedione [118], and a mixture of benzene and toluene [119]. Haddad et al. [120], Cahill et al. [121] and Cole et al. [122] developed a PBPK model that predicts tissue concentrations of benzene and its key metabolites in mice using metabolic parameters obtained in vitro. The PBPK model’s tissue compartments include the liver, richly perfused and poorly perfused tissues, and adipose tissue. Two additional compartments, the stomach and the alveolar gas-exchange region, were also included to describe oral and inhalation exposures, respectively. This model was later extended to take into account the zonal distribution of enzymes and metabolism in the liver, rather than treating the liver as one homogeneous compartment [123].
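For the competitive case of such a liver-compartment linkage, each chemical inflates the other's apparent Km in the shared metabolic pathway. A sketch of that rate law follows (Python; this is the standard competitive-inhibition form, with placeholder constants rather than fitted values).

```python
# Competitive metabolic interaction between two chemicals sharing one liver
# enzyme: each raises the other's apparent Km. Standard competitive form;
# all constants are illustrative placeholders, not fitted values.
def liver_rates(c1, c2, vmax1=20.0, km1=0.3, vmax2=35.0, km2=0.6,
                ki1=0.3, ki2=0.6):
    """Return metabolism rates (v1, v2) [mg/h] for liver concentrations
    c1, c2 [mg/L]; ki_j is the inhibition constant of the competing chemical."""
    v1 = vmax1 * c1 / (km1 * (1 + c2 / ki2) + c1)
    v2 = vmax2 * c2 / (km2 * (1 + c1 / ki1) + c2)
    return v1, v2

print(liver_rates(1.0, 0.0))   # chemical 1 alone
print(liver_rates(0.0, 1.0))   # chemical 2 alone
print(liver_rates(1.0, 1.0))   # mixture: both rates are suppressed
```

Plugging these coupled rates into the liver term of each chemical's PBPK model links the two models, slowing the predicted clearance of both chemicals when they are co-administered.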


Most PBPK models have used single-valued parameters and hence are deterministic, though parameter distributions are increasingly being used to explicitly account for variability and uncertainty. When PBPK models are extended to humans, accounting for the multiple sources of variability that affect dosimetry in humans is especially important. This hierarchy of variances includes population-wide variability, variability among different studies, differences between individuals within each study, and uncertainty in measurements taken from each individual. To properly account for the variability and uncertainty at any of these levels, PBPK models should be integrated into a statistical framework that acknowledges these sources of variation PBPK models have become popular in toxicology, however, because human dosimetry data are rare for environmental pollutants without therapeutic value. PBPK models overcome this data limitation because they make use of measured values for tissue compartments and blood flows (physiological parameters), which again are presumed to represent the larger population. When a PBPK model fails to adequately fit pharmacokinetic data, however, one may consider varying the physiological parameters to fit the data, but updating population physiological parameters based on observations from a small sample could well result in posterior distributions not truly representative of the population as a whole. Recently, Viravaidya et al. [124] have developed a multichamber microbioreactor, called µCCA (microscale Cell Culture Analog) (fig. 9A,B) to study bioaccumulation, distribution and toxicity in different tissue compartments connected via circulating fluid. The system is a simple four-chamber (“lung”-“liver”-

“fat”-“other tissue”) designed on the basis of a physiologically based pharmacokinetics (PBPK) model of a rat. This device consists of an array of channels or chambers containing cultured mammalian cells selected to mimic different animal “organ”

systems. Design parameters such as “organ” compartment residence times and flow distribution are based on a corresponding PBPK model, and the fluidics is designed to mimic essential features of the circulatory system with re-circulating culture medium as a blood surrogate. Unlike other in vitro systems (usually static), a µCCA has the potential to mimic the dose dynamics that would occur in an animal or human. The

(31)

31

The tissue compartments in this device are currently modeled by simple immortalized cell lines, but the ability to study the interaction of high-fidelity tissue models in a microfluidic network would clearly be quite powerful. To test their bioreactor, Viravaidya and co-workers analyzed the pharmacokinetics of naphthalene in the presence [125] and absence [124] of fatty tissue, assessing how important this accumulation compartment is for the metabolism of such xenobiotics (fig. 10).

Figure 9. Schematic diagram of the μCCA design. Dimensions of each chamber are as follows (w x l x d): lung (2 mm x 2 mm x 20 μm), liver (3.5 mm x 4.6 mm x 20 μm), fat (0.42 mm x 50.6 mm x 100 μm), and other tissues (0.4 mm x 109 mm x 100 μm). According to the PBPK model, 25%, 9%, and 66% of the flow from the lung chamber goes to the liver, fat, and other-tissues chambers, respectively (A). Size of the μCCA (B).
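From the chamber geometry and flow split given in the caption, per-chamber residence times can be estimated directly; the total recirculation rate below is an assumed, illustrative value, not a figure from the published device.

```python
# Residence-time estimate for each µCCA chamber from caption geometry.
chambers = {  # width (mm), length (mm), depth (mm), fraction of total flow
    "lung":  (2.00, 2.0,   0.020, 1.00),  # the full stream passes the lung
    "liver": (3.50, 4.6,   0.020, 0.25),
    "fat":   (0.42, 50.6,  0.100, 0.09),
    "other": (0.40, 109.0, 0.100, 0.66),
}
Q_total = 2.0  # µL/min, hypothetical recirculating flow rate

for name, (w, l, d, frac) in chambers.items():
    volume = w * l * d          # mm^3, numerically equal to µL
    q = Q_total * frac          # flow through this chamber
    print(f"{name:>5}: V = {volume:6.3f} µL, Q = {q:4.2f} µL/min, "
          f"tau = {volume / q:5.2f} min")
```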



Figure 10. Cells on the µCCA chip after 6 h of medium circulation. A: L2 cells in the lung chamber, with an enlarged image of the L2 cells. B: HepG2/C3A cells in the liver chamber, with an enlarged image of the C3A cells. C: differentiated 3T3-L1 cells in the fat chamber.

The presence of fat reduced naphthalene-induced glutathione depletion in the lung and liver cells, owing to the sequestration of toxic naphthalene metabolites (e.g., naphthoquinone) in the adipocytes and, in turn, to the reduced production of downstream products of naphthalene metabolism (i.e., the H2O2 generated by redox cycling of naphthoquinone).

Viravaidya’s µCCA is, like the MCB for allometric scaling, a “cellular translation” of the PBPK model. These two systems offer an alternative approach to determining the biotransformation characteristics of a compound of interest. With some modifications, future devices could be particularly beneficial to pharmaceutical and chemical companies for assessing the efficacy and safety of lead compounds.


BIBLIOGRAPHY

1. Bavarian F, Fan LS, Chalmers JJ. Microscopic visualization of insect cell–bubble interactions. I: Rising bubbles, air–medium interface, and the foam layer. Biotechnol Progr 1991;7:140–150.

2. Cherry RS, Hulle CT. Cell death in the films of bursting bubbles. Biotechnol Progr 1992;8:11–18.

3. Garcia-Briones M, Chalmers JJ. Cell–bubble interactions. Mechanisms of suspended cell damage. Ann NY Acad Sci 1990:219–229.

4. Handa A, Emery AN, Spier RE. On the evaluation of gas–liquid interfacial effects on hybridoma viability in bubble column bioreactors. Dev Biol Stand 1987;66:241–253.

5. Handa-Corrigan A, Emery AN, Spier RE. Effect of gas–liquid interfaces on the growth of suspended mammalian cells: Mechanisms of cell damage by bubbles. Enzyme Microb Technol 1989;11:230–235.

6. Jordan M, Sucker H, Einsele A, Widmer F, Eppenberger HM. Interactions between animal cells and gas bubbles: The influence of serum and Pluronic F68 on the physical properties of the bubble surface. Biotechnol Bioeng 1994;43:446–454.

7. Trinh K, Garcia-Briones M, Hink F, Chalmers JJ. Quantification of damage to suspended insect cells as a result of bubble rupture. Biotechnol Bioeng 1994;43:37–45.

8. Garcia-Briones M, Chalmers JJ. Flow parameters associated with hydrodynamic cell injury. Biotechnol Bioeng 1994;44:1089–1098.

9. Gregoriades N, Clay J, Ma N, Koelling K, Chalmers JJ. Cell damage of microcarrier cultures as a function of local energy dissipation created by a rapid extensional flow. Biotechnol Bioeng 2000;69:171–182.

10. McDowell C, Papoutsakis ET. Increased agitation intensity increases CD13 receptor surface content and mRNA levels, and alters the metabolism of HL60 cells cultured in stirred tank bioreactors. Biotechnol Bioeng 1998;60:239–250.

11. Michaels JD, Mallik AK, Papoutsakis ET. Sparging and agitation-induced injury of cultured animal cells: Do cell-to-bubble interactions in the bulk liquid injure cells? Biotechnol Bioeng 1996;51:399–409.

12. Kretzmer G, Jammrich U, Schugerl K. Advances in animal cell biology and technology for bioprocesses. In: Spier RE, Griffith JB, editors. London: Butterworths; 1990. pp. 172–174.

13. Peterson JF, McIntire LV, Papoutsakis ET. Shear sensitivity of cultured hybridoma cells (CRL-8018) depends on the mode of growth, culture age and metabolic concentration. J Biotechnol 1988;7:229–246.

14. Kretzmer G, Schugerl K. Response of mammalian cells to shear stress. Appl Microbiol Biotechnol 1991;34:613–616.

15. Ludwig A, Tomeczkowski J, Kretzmer G. Influence of shear stress on adherent mammalian cells during division. Biotechnol Lett 1992;14:881–884.

16. Al-Rubeai M, Singh RP, Emery AN, Zhang Z. Cell cycle and cell size dependence of susceptibility to hydrodynamic forces. Biotechnol Bioeng 1995;46:88–92.

17. Al-Rubeai M, Emery AN, Chalder S, Goldman MH. A flow cytometric study of hydrodynamic damage to mammalian cells. J Biotechnol 1993;31:161–177.

18. Born C, Zhang Z, Al-Rubeai M, Thomas CR. Estimation of disruption of animal cells by laminar shear stress. Biotechnol Bioeng 1992;40:1004–1110.

19. Tramper J, Williams JB, Joustra D, Vlak JM. Shear sensitivity of insect cells in suspension. Enzyme Microb Technol 1986;8:33–36.

20. Gudi SRP, Clark CB, Frangos JA. Fluid flow rapidly activates G proteins in human endothelial cells. Involvement of G proteins in mechanochemical signal transduction. Circ Res 1996;79:834–839.

21. Nollert MU, Diamond SL, McIntire LV. Hydrodynamic shear stress and mass transport modulation of endothelial cell metabolism. Biotechnol Bioeng 1991;38:588–602.

22. Ranjan V, Waterbury R, Xiao Z, Diamond SL. Fluid shear stress induction of the transcriptional activator c-fos in human and bovine cells, HeLa and Chinese hamster ovary cells. Biotechnol Bioeng 1996;49:383–390.

23. Levesque MJ, Sprague EA, Schwartz CJ, Nerem RM. The influence of shear stress on cultured vascular endothelial cells: The stress response of an anchorage-dependent mammalian cell. Biotechnol Progr 1989;5:1–8.

24. Weisner TF, Berk BC, Nerem RM. A mathematical model of the cytosolic-free calcium response in endothelial cells to fluid stress. Proc Natl Acad Sci 1997;94:3726–3731.
