9. Simple linear regression


(1)

Regression methods

Regression methods search for the best relationship between a set of variables describing objects (samples) and a set of responses obtained for the same objects.

The type of relationship (model) is initially searched for by fitting it to the experimental measurements; once the model has been subjected to validation, i.e., once the quality of its description of the responses has been checked, a prediction of future responses can be made:

fitting → validation → prediction

Regression methods are thus mathematical models searching for functions f that relate a response Y to a certain number, m, of descriptors (variables):

$$Y = f(X_1, X_2, \ldots, X_m)$$

As shown in the general scheme reported in the next slide, the number and type of variables determine the specific definition of a regression model.

(2)

Univariate models:

$$Y = \beta_0 + \beta_1 X + \varepsilon$$

$$Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \varepsilon$$

$$Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \beta_3 X^3 + \varepsilon$$

$$Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \ldots + \beta_m X^m + \varepsilon \qquad \text{(polynomial of degree } m\text{)}$$

Bivariate models:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon$$

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_{12} X_1 X_2 + \varepsilon$$

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_{12} X_1 X_2 + \beta_{11} X_1^2 + \beta_{22} X_2^2 + \varepsilon$$

and, in general, with m variables:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \ldots + \beta_m X_m + \varepsilon$$

(3)

Linear regression

A linear model of the type:

$$Y = \beta_0 + \sum_{i=1}^{m} \beta_i X_i$$

with β0, β1, …, βm representing the model parameters, is the simplest type of regression model.

Actually, even a generic transformation of each variable, fi(Xi), can be used in the model:

$$Y = \beta_0 + \sum_{i=1}^{m} \beta_i f_i(X_i)$$

Examples of linear models (i.e., models linear in the parameters) are then:

$$Y = \beta_0 + \beta_1 \log X \qquad\qquad Y = \beta_0 + \beta_1 X_1 + \beta_2\,\mathrm{senh}(X_1)$$

whereas the following are examples of non-linear models:

$$Y = \beta_1 \exp(\beta_2 X) \qquad\qquad Y = \beta_1 \exp\!\big(\beta_2 (X - \beta_3)\big)$$

(4)

Simple linear regression

A linear model of the type:

$$Y = \beta_0 + \beta_1 X + \varepsilon$$

thus involving a single variable, represents simple linear regression, which is one of the most common models in analytical chemistry (it is used, for example, for calibration purposes).

Examples of sets of observations that can be treated with simple linear regression are shown in the following graphs: the average values of X and Y, representing the center of gravity (or centroid) of the set of measurements, are indicated in the left graph, whereas the right graph shows the case of replicated measurements at specific values of the independent variable (X).

(5)

A deterministic and an accidental (or stochastic) component can be distinguished in a simple linear model:

$$Y = \underbrace{\beta_0 + \beta_1 X}_{\text{deterministic component}} + \underbrace{\varepsilon}_{\text{accidental (stochastic) component}}$$

For the i-th observation the equation becomes:

$$Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$$

The main assumptions made for simple linear regression are:

εi ~ N(0, σ²), thus E(εi) = 0 and V(εi) = σ² for i = 1, 2, …, n (homoscedasticity);
Cov(εi, εj) = 0 for i ≠ j = 1, 2, …, n (uncorrelation).

Consequently:

E(Yi) = β0 + β1Xi for i = 1, 2, …, n;
V(Yi) = σ² for i = 1, 2, …, n;
Cov(Yi, Yj) = 0 for i ≠ j = 1, 2, …, n;
Yi ~ N(β0 + β1Xi, σ²).
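As a concrete illustration added here (not part of the original slides), the following Python sketch simulates observations according to Yi = β0 + β1Xi + εi with εi ~ N(0, σ²); all parameter values and names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical "true" model parameters, chosen only for illustration
beta0, beta1, sigma = 2.0, 0.5, 0.3

x = np.linspace(1.0, 10.0, 20)                        # fixed values of the independent variable
eps = rng.normal(loc=0.0, scale=sigma, size=x.size)   # i.i.d. errors: eps_i ~ N(0, sigma^2)
y = beta0 + beta1 * x + eps                           # Y_i = beta0 + beta1*X_i + eps_i

# Each Y_i is then N(beta0 + beta1*X_i, sigma^2), with constant variance
# (homoscedasticity) and uncorrelated errors.
print(y[:5])
```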

(6)

Parameters β0 and β1 of the model can be subjected to statistical inference.

One of the first approaches adopted to this end, and still one of the most common, is based on the least squares method, which was described in 1805 by the French mathematician Adrien-Marie Legendre. Carl Friedrich Gauss was also one of the first mathematicians to study linear regression.

The assumptions described above can be represented with the following graph:

[Figure: normal distributions of the responses centred on the regression line, with Yi ~ N(β0 + β1Xi, σ²), εi ~ N(0, σ²), E(Yi) = β0 + β1Xi, and f(ε) indicating the error density.]

(7)

Estimators of β0 and β1 are indicated as b0 and b1, and the general equation for the values predicted by the model is:

$$\hat{Y} = b_0 + b_1 X$$

Minimization of the squares of the differences between actual and modelled values leads to the following equations for the two estimators:

$$b_1 = \frac{S_{XY}}{S_{XX}} = \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_i (X_i - \bar{X})^2} \qquad\qquad b_0 = \bar{Y} - b_1 \bar{X}$$

Combining the equation for b0 with the general equation of the model, the following equation is easily obtained:

$$\hat{Y} = \bar{Y} + b_1 (X - \bar{X})$$

It is thus apparent that the least squares regression line passes through the data centroid (X̄, Ȳ), as emphasized in the following graph.
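These estimators are straightforward to compute; the following minimal Python sketch (an added illustration, with invented data) obtains b0 and b1 and checks the centroid and residual properties discussed in the next slides:

```python
import numpy as np

def simple_ls_fit(x, y):
    """Least squares estimates for Y = b0 + b1*X (a sketch of the textbook formulas)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.sum((x - x.mean()) ** 2)                   # S_XX
    sxy = np.sum((x - x.mean()) * (y - y.mean()))       # S_XY
    b1 = sxy / sxx                                      # slope
    b0 = y.mean() - b1 * x.mean()                       # intercept
    return b0, b1

# toy data, invented for illustration only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 3.6, 4.2, 5.1])
b0, b1 = simple_ls_fit(x, y)

residuals = y - (b0 + b1 * x)
print(b0, b1)
print(np.isclose(residuals.sum(), 0.0))                          # sum of residuals is ~0
print(np.isclose(np.sum((x - x.mean()) * residuals), 0.0))       # residuals uncorrelated with x
print(np.isclose(b0 + b1 * x.mean(), y.mean()))                  # line passes through the centroid
```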

(8)

Given a specific couple of values (Xi, Yi), the meaning of εi as the difference between the experimental value Yi and the value provided by the true model is evidenced in the following figure, which compares the true line Y = β0 + β1X with the fitted line Ŷ = b0 + b1X:

[Figure: true regression line Y = β0 + β1X and least squares line Ŷ = b0 + b1X, with the error εi and the residual ei indicated for the point (Xi, Yi).]

On the other hand, the difference between the experimental value Yi and the response value predicted by the inferred model is represented by ei, which is called the residual:

$$e_i = Y_i - \hat{Y}_i = Y_i - b_0 - b_1 X_i$$

(9)

If the experimental data are fitted properly, the mean of the residuals ei and their correlation with x are both equal to 0; in fact, the sum of all residuals is also equal to 0.

On the contrary, in the example shown in the figure on the right the model is clearly wrong, thus neither the mean of the residuals nor their correlation with x is equal to 0.

(10)

Interestingly, the equations for the estimators b0 and b1 can be obtained starting from the assumptions that the mean of the residuals (or their sum) and their correlation with X are equal to 0:

$$b_1 = \frac{S_{XY}}{S_{XX}}$$

(11)

Variances and distributions for estimators b0 and b1

Variances for the estimators of the regression line slope and intercept can be obtained by considering the general properties of the variance. Since b1 can be written as a linear combination of the Yi,

$$b_1 = \frac{S_{XY}}{S_{XX}} = \sum_{i=1}^{n} a_i Y_i \qquad \text{with} \qquad a_i = \frac{X_i - \bar{X}}{S_{XX}}$$

its variance is:

$$V(b_1) = \sum_{i=1}^{n} a_i^2\, V(Y_i) = \frac{\sigma^2}{S_{XX}}$$

(12)

Since b0 can also be written as a linear combination of the Yi,

$$b_0 = \bar{Y} - b_1 \bar{X} = \sum_{i=1}^{n} a_i Y_i \qquad \text{with} \qquad a_i = \frac{1}{n} - \frac{\bar{X}(X_i - \bar{X})}{S_{XX}}$$

(note that ai in this case is different from the one used for V(b1)), its variance is V(b0) = Σi ai² V(Yi). When each square of the binomial appearing in the second member is expanded and the summation of the squares is carried out, the following relations have to be taken into account:

(13)

$$\sum_{i=1}^{n} \frac{1}{n^2} = \frac{1}{n}, \qquad \sum_{i=1}^{n} (X_i - \bar{X}) = 0, \qquad \sum_{i=1}^{n} (X_i - \bar{X})^2 = S_{XX}$$

Consequently:

$$V(b_0) = \sum_{i=1}^{n} a_i^2\, V(Y_i) = \sigma^2 \sum_{i=1}^{n} \left[\frac{1}{n} - \frac{\bar{X}(X_i - \bar{X})}{S_{XX}}\right]^2 = \sigma^2\left(\frac{1}{n} + \frac{\bar{X}^2}{S_{XX}}\right)$$

(14)

The properties of b0 and b1 can thus be summarized as follows:

$$E(b_0) = \beta_0, \qquad E(b_1) = \beta_1, \qquad V(b_0) = \sigma^2\left(\frac{1}{n} + \frac{\bar{X}^2}{S_{XX}}\right), \qquad V(b_1) = \frac{\sigma^2}{S_{XX}}$$

It is worth noting that σ² is estimated by the residual mean square, MSRES, i.e., by the square of the residual standard deviation, sy/x.

The distributions for b0 and b1 can finally be obtained:

$$b_0 \sim N\!\left(\beta_0,\; \sigma^2\left(\tfrac{1}{n} + \tfrac{\bar{X}^2}{S_{XX}}\right)\right), \qquad\qquad b_1 \sim N\!\left(\beta_1,\; \tfrac{\sigma^2}{S_{XX}}\right)$$

(15)

Confidence intervals for b0 and b1 can then be expressed as follows:

$$b_1 \pm t_{\alpha/2,\,n-2}\, \frac{s_{y/x}}{\sqrt{S_{XX}}} \qquad\qquad b_0 \pm t_{\alpha/2,\,n-2}\, s_{y/x}\sqrt{\frac{1}{n} + \frac{\bar{X}^2}{S_{XX}}}$$
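A minimal Python sketch of these confidence intervals is added here for illustration (the data and the helper name are invented):

```python
import numpy as np
from scipy import stats

def slope_intercept_ci(x, y, alpha=0.05):
    """t-based confidence intervals for b0 and b1 (sketch of the formulas above)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    resid = y - (b0 + b1 * x)
    s_yx = np.sqrt(np.sum(resid ** 2) / (n - 2))        # residual standard deviation
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)
    ci_b1 = (b1 - t * s_yx / np.sqrt(sxx), b1 + t * s_yx / np.sqrt(sxx))
    half0 = t * s_yx * np.sqrt(1 / n + x.mean() ** 2 / sxx)
    ci_b0 = (b0 - half0, b0 + half0)
    return ci_b0, ci_b1

# invented example data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.0, 2.8, 3.7, 4.1, 5.2, 5.8])
print(slope_intercept_ci(x, y))
```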

A confidence interval can also be calculated for the response predicted by the linear regression model once a specific value of the variable X is fixed:

$$E(\hat{Y}) = E(b_0 + b_1 X) = E(b_0) + X\,E(b_1) = \beta_0 + \beta_1 X$$

Since the estimators b0 and b1 are NOT independent, the following equation has to be written for the predicted response variance:

$$V(\hat{Y}) = V(b_0 + b_1 X) = V(b_0) + X^2\,V(b_1) + 2X\,\mathrm{Cov}(b_0, b_1)$$

(16)

Covariance between b0 and b1 can be calculated starting from the following general definition:

$$\mathrm{Cov}(aY, bY) = ab\,\mathrm{Cov}(Y, Y) = ab\,V(Y)$$

As shown before, both b0 and b1 can be expressed as linear combinations of the Yi, Σi ai Yi, with ai having a different expression in the two cases:

$$a_i = \frac{X_i - \bar{X}}{S_{XX}} \quad \text{for } b_1 \qquad\qquad a_i = \frac{1}{n} - \frac{\bar{X}(X_i - \bar{X})}{S_{XX}} \quad \text{for } b_0$$

Consequently:

$$\mathrm{Cov}(b_0, b_1) = \sum_{i=1}^{n} \left[\frac{1}{n} - \frac{\bar{X}(X_i - \bar{X})}{S_{XX}}\right]\frac{X_i - \bar{X}}{S_{XX}}\; V(Y_i)$$

(17)

After combining the last three equations and considering that V(Yi) is always equal to σ² if homoscedasticity occurs, the following equation can be obtained:

$$\mathrm{Cov}(b_0, b_1) = -\frac{\bar{X}\,\sigma^2}{S_{XX}}$$

Considering the expressions for V(b0), V(b1) and Cov(b0, b1):

$$V(b_0) = \sigma^2\left(\frac{1}{n} + \frac{\bar{X}^2}{S_{XX}}\right), \qquad V(b_1) = \frac{\sigma^2}{S_{XX}}, \qquad \mathrm{Cov}(b_0, b_1) = -\frac{\bar{X}\,\sigma^2}{S_{XX}}$$

(18)

the variance of the predicted response becomes:

$$V(\hat{Y}) = \sigma^2\left(\frac{1}{n} + \frac{\bar{X}^2}{S_{XX}}\right) + X^2\,\frac{\sigma^2}{S_{XX}} - 2X\,\frac{\bar{X}\,\sigma^2}{S_{XX}}$$

The equation can be easily re-written in a compact form:

$$V(\hat{Y}) = \sigma^2\left[\frac{1}{n} + \frac{(X - \bar{X})^2}{S_{XX}}\right]$$

The following expression can subsequently be written, considering the normality of the distribution of the predicted Y values and using sy/x as an estimator of σ; the confidence interval for the predicted (mean) value of Y can thus be expressed as:

$$(b_0 + b_1 X) \;\pm\; t_{\alpha/2,\,n-2}\; s_{y/x}\sqrt{\frac{1}{n} + \frac{(X - \bar{X})^2}{S_{XX}}}$$

This expression is exploited by fitting programs to draw the so-called confidence bands together with the regression line, as shown, for example, in the next figure.
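The sketch below (an added illustration, with invented data) evaluates this interval on a grid of X values, which is essentially what fitting programs do when drawing confidence bands:

```python
import numpy as np
from scipy import stats

def confidence_band(x, y, x_new, alpha=0.05):
    """Confidence interval for the mean response at each value in x_new."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    s_yx = np.sqrt(np.sum((y - b0 - b1 * x) ** 2) / (n - 2))
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)
    y_hat = b0 + b1 * x_new
    half = t * s_yx * np.sqrt(1 / n + (x_new - x.mean()) ** 2 / sxx)
    return y_hat - half, y_hat + half        # lower and upper confidence limits

# invented example data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.0, 2.8, 3.7, 4.1, 5.2, 5.8])
grid = np.linspace(1.0, 6.0, 11)
lower, upper = confidence_band(x, y, grid)
print(np.round(lower, 2))
print(np.round(upper, 2))
```

The band is narrowest at X = X̄, in agreement with the comment in the next slide.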

(19)

[Figure: regression line Ŷ = b0 + b1X with confidence bands; the 95% confidence interval for the mean Y at X = 3.5 is highlighted.]

As apparent, the narrowest confidence interval is observed when an X value close to the centroid of the system is considered.

On the other hand, it seems that several actual responses can be located outside the confidence band in this case. This result will be discussed in detail in the following slides.

(20)

Partitioning of total deviations and ANOVA related to linear regression

As shown in the following figure, the difference between an experimental Y value and the average of Y values can be partitioned into two contributions:

[Figure: for an observation (Xi, Yi), the total deviation (Yi − Ȳ) is split into the regression deviation (Ŷi − Ȳ) and the residual deviation (Yi − Ŷi).]

$$(Y_i - \bar{Y}) = (\hat{Y}_i - \bar{Y}) + (Y_i - \hat{Y}_i)$$

(21)

A further development of this equation is:

$$\sum_i (Y_i - \bar{Y})^2 = \sum_i \left[(\hat{Y}_i - \bar{Y}) + (Y_i - \hat{Y}_i)\right]^2 = \sum_i (\hat{Y}_i - \bar{Y})^2 + \sum_i (Y_i - \hat{Y}_i)^2 + 2\sum_i (\hat{Y}_i - \bar{Y})(Y_i - \hat{Y}_i)$$

Indeed, the double product is equal to zero:

$$\sum_i (\hat{Y}_i - \bar{Y})(Y_i - \hat{Y}_i) = 0$$

(22)

$$\sum_i (Y_i - \bar{Y})^2 = \sum_i (\hat{Y}_i - \bar{Y})^2 + \sum_i (Y_i - \hat{Y}_i)^2$$

The total deviation of a specific value from the mean of the values can thus be partitioned into the regression deviation and the residual deviation:

Total deviation = Regression deviation + Residual deviation

$$SS_{TOT} = SS_{REG} + SS_{RES}$$

(23)

The following ANOVA table can thus be written:

Source of variation | SS              | df  | MS    | E(MS)
Regression          | Σi (Ŷi − Ȳ)²    | 1   | MSREG | σ² + β1² SXX
Residuals           | Σi (Yi − Ŷi)²   | n−2 | MSRES | σ²
Total               | Σi (Yi − Ȳ)²    | n−1 |       |

The expected value for the regression mean square, MSREG, is identical to the one for the regression sum of squares, SSREG (since df = 1). SSREG can be easily calculated:

$$SS_{REG} = \sum_i (\hat{Y}_i - \bar{Y})^2 = b_1^2\, S_{XX} \qquad\qquad E(SS_{REG}) = S_{XX}\, E(b_1^2)$$

(24)

The expected value for the square of b1 can be calculated starting from its variance and from an already described general property of the variance:

$$\mathrm{Var}(b_1) = E(b_1^2) - \left[E(b_1)\right]^2 \qquad\Rightarrow\qquad E(b_1^2) = \mathrm{Var}(b_1) + \left[E(b_1)\right]^2$$

Since:

$$\mathrm{Var}(b_1) = \frac{\sigma^2}{S_{XX}} \qquad \text{and} \qquad \left[E(b_1)\right]^2 = \beta_1^2$$

it follows that:

$$E(b_1^2) = \frac{\sigma^2}{S_{XX}} + \beta_1^2$$

Consequently:

$$E(SS_{REG}) = S_{XX}\, E(b_1^2) = S_{XX}\left(\frac{\sigma^2}{S_{XX}} + \beta_1^2\right) = \sigma^2 + \beta_1^2\, S_{XX} = E(MS_{REG})$$

(25)

The significance of the regression can be tested using hypothesis testing:

H0: β1 = 0   (regression not significant; the model reduces to Y = β0, i.e., Ŷ = Ȳ)
H1: β1 ≠ 0   (regression significant; Y = β0 + β1X)

Since

$$E(MS_{REG}) = \sigma^2 + \beta_1^2\, S_{XX}$$

if the H0 hypothesis is true both MSREG and MSRES are estimators of σ². The rejection criterion for H0 is thus:

$$\frac{MS_{REG}}{MS_{RES}} > F_{\alpha\,(1,\,n-2)}$$
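As an added illustration, the ANOVA quantities and the F-test for the significance of the regression can be sketched in Python as follows (data invented):

```python
import numpy as np
from scipy import stats

def regression_anova(x, y, alpha=0.05):
    """ANOVA quantities and F-test for the significance of the regression."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    y_hat = b0 + b1 * x
    ss_reg = np.sum((y_hat - y.mean()) ** 2)          # equals b1**2 * sxx
    ss_res = np.sum((y - y_hat) ** 2)
    ms_reg = ss_reg / 1
    ms_res = ss_res / (n - 2)
    f = ms_reg / ms_res
    f_crit = stats.f.ppf(1 - alpha, dfn=1, dfd=n - 2)
    return {"SS_REG": ss_reg, "SS_RES": ss_res, "F": f,
            "F_crit": f_crit, "significant": f > f_crit}

# invented example data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.0, 2.8, 3.7, 4.1, 5.2, 5.8])
print(regression_anova(x, y))
```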

(26)

Coefficient of determination and correlation coefficient

The coefficient of determination, denoted as R² or r², is the proportion of the variance in the dependent variable that is predictable from the independent variable. In the case of linear regression the mathematical definition is:

$$R^2 = \frac{SS_{REG}}{SS_{TOT}} = 1 - \frac{SS_{RES}}{SS_{TOT}}$$

R² thus lies between 0 and 1. Notably, R² = 0 when SSREG = 0 and R² = 1 when SSRES = 0, i.e., when all the observed points are perfectly located on the regression line (total fit of the adopted model).

An interesting relationship can be found between the coefficient of determination and the squared linear correlation coefficient, ρ²XY (whose estimator is indicated as r²XY): in simple linear regression the two quantities coincide, i.e., R² = r²XY.
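A short sketch (added here for illustration, with invented data) showing that, for simple linear regression, R² computed from the sums of squares coincides with the squared correlation coefficient:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # invented data
y = np.array([2.0, 2.8, 3.7, 4.1, 5.2, 5.8])

sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

r2_from_ss = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
r2_from_corr = np.corrcoef(x, y)[0, 1] ** 2     # squared linear correlation coefficient
print(r2_from_ss, r2_from_corr)                 # the two values coincide
```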

(27)

As shown in the following figure, R² becomes much lower than 1 when the variability of the data not explained by the regression becomes remarkable; yet an R² close to 0 does not necessarily mean that the variables are not related at all:

(28)

Since:

$$R^2 = \frac{SS_{REG}}{S_{YY}} = \frac{b_1^2\, S_{XX}}{S_{YY}}$$

and:

$$E(S_{YY}) = (n-1)\,\sigma^2 + \beta_1^2\, S_{XX}$$

R² is expected to increase when SXX is increased (i.e., when the x values are more spread out around their mean) and when σ² is decreased.

It is worth noting that a large value of R² does not necessarily imply that the regression model will be an accurate predictor.

(29)

Comparison between confidence and prediction intervals

When least squares regression is applied, a general model is assumed to be valid for Y:

$$Y = \beta_0 + \beta_1 X$$

Regression produces an estimate of the model parameters, thus leading to the equation:

$$\hat{Y} = b_0 + b_1 X$$

Given a particular value of X, x, an estimate of the expected value of Y (the latter is also called the conditional mean) is obtained:

$$E(Y \mid X = x) = \beta_0 + \beta_1 x \qquad\qquad \hat{E}(Y \mid X = x) = b_0 + b_1 x$$

It is important to point out that the latter is an estimate of the mean, not of a single actual value obtained for Y when X = x. A significant difference may exist between the estimate of the conditional mean and that of an actual value of Y.

(30)

In the following figure, the blue points correspond to the actual line of conditional means, i.e., to the values of the true model, whereas the yellow points represent the calculated regression line, which has been found, as always occurs with regression procedures, by rotation around the centroid:

The black square shows the value of the (conditional) mean of Y obtained when x = 3, which is very close to the actual mean.

However, actual Y values obtained for x = 3, represented by brown crosses, can be quite different from that value.

(31)

In order to obtain a better estimate of actual values, the so-called prediction interval has to be considered.

Such an interval must take into account both the uncertainty in the estimate of the conditional mean and the variability of the conditional distribution.

In order to find this interval, the so-called prediction error:

$$Y - \hat{Y} = (\beta_0 + \beta_1 X + \varepsilon) - (b_0 + b_1 X)$$

has to be considered. The expectation of the prediction error is:

$$E(Y - \hat{Y}) = 0$$

(32)

The variance of the prediction error can be obtained from the following variances:

$$V(\hat{Y}) = \sigma^2\left[\frac{1}{n} + \frac{(X - \bar{X})^2}{S_{XX}}\right]$$

$$V(Y) = V(\beta_0 + \beta_1 X + \varepsilon) = V(\varepsilon) = \sigma^2$$

Considering that β0 and β1 are constant values, so that their variance is zero, the following equation can be written:

$$V(Y - \hat{Y}) = V(Y) + V(\hat{Y}) = \sigma^2\left[1 + \frac{1}{n} + \frac{(X - \bar{X})^2}{S_{XX}}\right]$$

Given the normal distribution of the Y data, the corresponding standardized relation can be written if σ² is known.

(33)

Considering that σ can be estimated by the residual standard deviation, sy/x, the prediction interval for Y, at an α level of significance, can be expressed as:

$$(b_0 + b_1 X) \;\pm\; t_{\alpha/2,\,n-2}\; s_{y/x}\sqrt{1 + \frac{1}{n} + \frac{(X - \bar{X})^2}{S_{XX}}}$$

If m replicates are obtained for the response Y at a specific value of X, an average value of Y is obtained and the corresponding variance becomes σ²/m, thus the prediction interval for Y becomes:

$$(b_0 + b_1 X) \;\pm\; t_{\alpha/2,\,n-2}\; s_{y/x}\sqrt{\frac{1}{m} + \frac{1}{n} + \frac{(X - \bar{X})^2}{S_{XX}}}$$

It is apparent that, as emphasized by the figure in the next slide, the width of the prediction interval is larger than that of the confidence interval, since the term 1 (or 1/m) appears additionally under the square root sign.
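An added sketch comparing the confidence and prediction intervals at a chosen X value (data invented; with m = 1 the 1/m term reduces to 1, i.e., a single future observation):

```python
import numpy as np
from scipy import stats

def intervals_at(x, y, x0, alpha=0.05, m=1):
    """Confidence interval (mean response) and prediction interval at X = x0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    s_yx = np.sqrt(np.sum((y - b0 - b1 * x) ** 2) / (n - 2))
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)
    leverage = 1 / n + (x0 - x.mean()) ** 2 / sxx
    half_conf = t * s_yx * np.sqrt(leverage)            # confidence half-width
    half_pred = t * s_yx * np.sqrt(1 / m + leverage)    # prediction half-width (m replicates)
    y0 = b0 + b1 * x0
    return (y0 - half_conf, y0 + half_conf), (y0 - half_pred, y0 + half_pred)

# invented example data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.0, 2.8, 3.7, 4.1, 5.2, 5.8])
print(intervals_at(x, y, x0=3.5))                       # prediction interval is always wider
```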

(34)

[Figure: regression line Ŷ = b0 + b1X with confidence and prediction bands; the 95% prediction interval for Y at X = 3.5 is highlighted.]

As shown in the figure, if the precision of responses is poor a remarkable difference is observed between prediction and confidence bands.

(35)

Lack of fit in simple linear regression

A key aspect of linear regression is the evaluation of a possible lack of fit, indicating that the model is inadequate; the evaluation is based on the analysis of residuals:

[Figure: if a straight line is fitted to data actually following a curved model, the residual ui at Xi includes a random component qi and a systematic component (bias) Bi.]

As shown in the figure, if the linear model is not correct (because the true model is actually curved), a residual includes both a random component, qi, and a systematic component, or bias, Bi, the latter arising from the inadequacy (lack of fit) of the model.

(36)

If a lack of fit exists, the residual mean square, MSRES:

$$MS_{RES} = s^2_{y/x} = \frac{\sum_i (Y_i - \hat{Y}_i)^2}{n - 2}$$

cannot be considered an unbiased estimator of σ² (the pure error).

Application of ANOVA to the analysis of residuals enables a separation between the contributions due to pure error (p.e.) and to lack of fit (LoF). An independent estimate of σ², easily obtained by replicating the responses at each Xi, is required:

SSRES → SSp.e. (from replicates) + SSLoF (by subtraction) → F-test on MSLoF / MSp.e.

(37)

If ni replicates Yi1, Yi2, …, Yini, with mean Ȳi and variance si², are obtained at each of the h distinct values Xi (i = 1, 2, …, h), the pure error sum of squares, SSp.e., is:

$$SS_{p.e.} = \sum_{i=1}^{h}\sum_{j=1}^{n_i} (Y_{ij} - \bar{Y}_i)^2$$

Since the number of degrees of freedom is given by N − h, where:

$$N = \sum_{i=1}^{h} n_i$$

the pure error mean square, MSp.e., is calculated as:

$$MS_{p.e.} = \frac{SS_{p.e.}}{N - h}$$

(38)

The ANOVA table for simple linear regression, also including the contribution due to the lack of fit, is the following:

Source of variation | SS                  | df  | MS     | E(MS)
Total               | Σi Σj (Yij − Ȳ)²    | N−1 |        |
Regression          | by subtraction      | 1   | MSREG  | σ² + β1² SXX
Residuals           | Σi Σj (Yij − Ŷi)²   | N−2 | MSRES  | σ²
Lack of Fit         | by subtraction      | h−2 | MSLoF  | σ²p.e. + Σi Bi²/(N−2)
Pure error          | Σi Σj (Yij − Ȳi)²   | N−h | MSp.e. | σ²p.e.

(39)

In order to detect the presence of lack of fit the following statistic needs to be defined:

$$F = \frac{MS_{LoF}}{MS_{p.e.}}$$

which is distributed according to an F distribution with h−2 and N−h degrees of freedom.

The presence of lack of fit will thus be confirmed, at an α significance level, if:

$$\frac{MS_{LoF}}{MS_{p.e.}} > F_{h-2,\,N-h,\,\alpha}$$

It can be demonstrated that, as reported in the ANOVA table shown before, the expected value for MSLoF is:

$$E(MS_{LoF}) = \sigma^2_{p.e.} + \frac{\sum_i B_i^2}{N - 2}$$

Since E(MSp.e.) = σ²p.e., the absence of a significant difference between MSLoF and MSp.e. implies that all the bias values Bi are negligible (indeed, the sum of their squares is close to 0), which is consistent with the absence of lack of fit.
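A compact added sketch of the lack-of-fit test; replicates are assumed to be encoded simply as repeated x values (function and data names are invented):

```python
import numpy as np
from scipy import stats

def lack_of_fit_test(x, y, alpha=0.05):
    """Split SS_RES into pure-error and lack-of-fit parts and run the F-test."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    ss_res = np.sum((y - b0 - b1 * x) ** 2)

    levels = np.unique(x)                               # h distinct X values
    ss_pe = sum(np.sum((y[x == xv] - y[x == xv].mean()) ** 2) for xv in levels)
    h = levels.size
    ss_lof = ss_res - ss_pe                             # lack-of-fit SS by subtraction
    ms_lof, ms_pe = ss_lof / (h - 2), ss_pe / (n - h)
    f = ms_lof / ms_pe
    f_crit = stats.f.ppf(1 - alpha, dfn=h - 2, dfd=n - h)
    return {"SS_pe": ss_pe, "SS_LoF": ss_lof, "F": f,
            "F_crit": f_crit, "lack_of_fit": f > f_crit}

# invented replicated design
x = np.array([1, 1, 2, 2, 3, 3, 4, 4], float)
y = np.array([1.1, 0.9, 2.2, 1.8, 2.7, 3.1, 4.2, 3.8])
print(lack_of_fit_test(x, y))
```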

(40)

A numerical example

Suppose that the following X and Y values were obtained from a series of measurements:

X  | Y
90 | 81; 83
79 | 75
66 | 68; 60; 62
51 | 60; 64
35 | 51; 53

These sums can be easily calculated (each replicate is counted as a separate observation, so that n = N = 10):

ΣX = 629, ΣY = 657, ΣX² = 43161, ΣXY = 43189, ΣY² = 44249

X̄ = 62.9, Ȳ = 65.7, SXX = 3596.9, SXY = 1863.7, SYY = 1084.1

(41)

The slope of the regression line is:

$$b_1 = \frac{S_{XY}}{S_{XX}} = \frac{1863.7}{3596.9} = 0.518$$

The regression sum of squares, SSREG, is:

$$SS_{REG} = b_1^2\, S_{XX} = \frac{S_{XY}^2}{S_{XX}} = \frac{(1863.7)^2}{3596.9} = 965.66$$

(42)

Calculations required for the pure error sum of squares, SSp.e., are:

xi | Yij        | Ȳi    | Σj (Yij − Ȳi)²
90 | 81; 83     | 82    | 2
79 | 75         | 75    | 0
66 | 68; 60; 62 | 63.33 | 34.67
51 | 60; 64     | 62    | 8
35 | 51; 53     | 52    | 2

SSp.e. = 2 + 0 + 34.67 + 8 + 2 = 46.67

(43)

The following ANOVA table is thus obtained:

Source of variation | SS     | df | MS
Regression          | 965.66 | 1  | 965.66
Lack of Fit         | 71.77  | 3  | 23.92
Pure error          | 46.67  | 5  | 9.33
Total               | 1084.1 | 9  |

Since the ratio F = MSLoF/MSp.e. = 23.92/9.33 = 2.56 is lower than the critical value of the F distribution, F(3, 5, 0.05) = 5.41, the simple linear model is adequate at a 95% level of confidence.
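For reference, the whole numerical example can be reproduced with a few lines of Python (an added check; the values in the comments are those reported in the table above):

```python
import numpy as np

# data from the example: each replicate is a separate (x, y) observation
x = np.array([90, 90, 79, 66, 66, 66, 51, 51, 35, 35], float)
y = np.array([81, 83, 75, 68, 60, 62, 60, 64, 51, 53], float)
n = x.size

sxx = np.sum((x - x.mean()) ** 2)                       # 3596.9
sxy = np.sum((x - x.mean()) * (y - y.mean()))           # 1863.7
b1 = sxy / sxx                                          # 0.518
ss_tot = np.sum((y - y.mean()) ** 2)                    # 1084.1
ss_reg = sxy ** 2 / sxx                                 # 965.66
ss_res = ss_tot - ss_reg

levels = np.unique(x)                                   # h = 5 distinct X values
ss_pe = sum(np.sum((y[x == xv] - y[x == xv].mean()) ** 2) for xv in levels)   # 46.67
ss_lof = ss_res - ss_pe                                 # 71.77
ms_lof = ss_lof / (levels.size - 2)                     # 23.92
ms_pe = ss_pe / (n - levels.size)                       # 9.33
print(round(ss_reg, 2), round(ss_lof, 2), round(ss_pe, 2), round(ms_lof / ms_pe, 2))
```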

(44)

Calibration and inverse regression

A calibration experiment typically consists of two stages.

In the first stage, n observations (xi, yi) are collected from standards and used to fit a regression model, which in the simplest case can be expressed as:

$$y = \beta_0 + \beta_1 x + \varepsilon$$

The fitted model is often referred to as the calibration curve or standards curve:

$$S = b_0 + b_1 C$$

(45)

In the second stage m (m ≥ 1) values of the response are observed for an unknown predictor value x0.

Inverse regression (also called inverse prediction or discrimination) is the procedure, based on the calibration line, adopted to estimate the value x0 from a single observation y0 or from the mean of m replicated observations.

[Figure: calibration line with the observed response y0 inverted to give the estimated value x0.]

(46)

The x0 estimate is based on inverting the calibration curve at y0 (i.e., solving the fitted regression equation for x0) and is easily extended to polynomial and non-linear calibration problems:

$$\hat{x}_0 = \frac{y_0 - \hat{\beta}_0}{\hat{\beta}_1} \;\;\text{(single observation)} \qquad\Leftrightarrow\qquad \hat{x}_0 = \frac{\bar{y}_0 - \hat{\beta}_0}{\hat{\beta}_1} \;\;\text{(m replicates)}$$

where:

$$\bar{y}_0 = \frac{1}{m}\sum_{i=1}^{m} y_{0i}$$

Equivalently:

$$\hat{x}_0 = \bar{x} + \frac{y_0 - \bar{y}}{\hat{\beta}_1} = \bar{x} + \frac{S_{xx}}{S_{xy}}\,(y_0 - \bar{y})$$

Two approaches are usually adopted to calculate confidence intervals, also known as calibration intervals, for x̂0:

1) Wald interval
2) Inversion interval
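A minimal added sketch of this inverse-prediction step (the standards and the function names are invented):

```python
import numpy as np

def fit_calibration(x, y):
    """Fit the calibration line y = b0 + b1*x to the standards."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

def inverse_predict(b0, b1, y0):
    """Invert the calibration line at y0 (a single value or a set of replicates)."""
    return (np.mean(y0) - b0) / b1

# invented calibration standards (e.g., signal measured at known concentrations)
x_std = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y_std = np.array([0.05, 0.21, 0.39, 0.60, 0.82, 0.99])
b0, b1 = fit_calibration(x_std, y_std)

print(inverse_predict(b0, b1, 0.47))                 # single unknown response
print(inverse_predict(b0, b1, [0.46, 0.48, 0.47]))   # mean of m = 3 replicates
```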

(47)

Wald interval

According to the Wald approach, the following standard deviation can be estimated for x̂0:

$$s_{\hat{x}_0} = \frac{s_{y/x}}{\hat{\beta}_1}\left[\frac{1}{m} + \frac{1}{n} + \frac{(\hat{x}_0 - \bar{x})^2}{S_{xx}}\right]^{1/2}$$

thus an approximate 100(1−α)% confidence interval for x̂0 is given by:

$$\hat{x}_0 \;\pm\; t_{(1-\alpha/2),\,n+m-3}\; s_{\hat{x}_0} \;=\; \hat{x}_0 \;\pm\; t_{(1-\alpha/2),\,n+m-3}\; \frac{s_{y/x}}{\hat{\beta}_1}\left[\frac{1}{m} + \frac{1}{n} + \frac{(\hat{x}_0 - \bar{x})^2}{S_{xx}}\right]^{1/2}$$
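An added sketch of the Wald calibration interval, following the formula above (the data are invented):

```python
import numpy as np
from scipy import stats

def wald_calibration_interval(x, y, y0, alpha=0.05):
    """Point estimate and approximate Wald interval for x0 from the response(s) y0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    y0 = np.atleast_1d(np.asarray(y0, float))
    n, m = x.size, y0.size
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    s_yx = np.sqrt(np.sum((y - b0 - b1 * x) ** 2) / (n - 2))
    x0_hat = (y0.mean() - b0) / b1
    s_x0 = (s_yx / b1) * np.sqrt(1 / m + 1 / n + (x0_hat - x.mean()) ** 2 / sxx)
    t = stats.t.ppf(1 - alpha / 2, df=n + m - 3)
    return x0_hat, (x0_hat - t * s_x0, x0_hat + t * s_x0)

# invented calibration standards and unknown replicates
x_std = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y_std = np.array([0.05, 0.21, 0.39, 0.60, 0.82, 0.99])
print(wald_calibration_interval(x_std, y_std, [0.46, 0.48, 0.47]))
```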

(48)

From a graphical point of view, this interval can be obtained by exploiting the prediction bands, namely by extrapolating the x values corresponding to the upper (y+) and lower (y−) limits of the prediction interval for y0:

$$\hat{x}_0^{+} = \hat{x}_0 + t\, s_{\hat{x}_0} \qquad\qquad \hat{x}_0^{-} = \hat{x}_0 - t\, s_{\hat{x}_0}$$

[Figure: calibration line ŷ = β̂0 + β̂1x with confidence and prediction bands; the mean of the m responses, y0, is inverted to x̂0, with limits x̂0− and x̂0+.]

(49)

Inversion interval

A confidence interval for x̂0 can also be constructed by inverting a prediction interval for the response; it is thus called the "inversion interval".

From a graphical point of view, an approximate inversion interval can be obtained by drawing a horizontal line through the scatterplot of the standards at y0 and finding the abscissas of its intersections with the prediction bands of the calibration curve:

$$y_0 = \hat{y}(\hat{x}_0) = \hat{\beta}_0 + \hat{\beta}_1 \hat{x}_0$$

$$y_0 = (\hat{\beta}_0 + \hat{\beta}_1 \hat{x}_0^{-}) + t_{(1-\alpha/2),\,n-2}\; s_{y/x}\sqrt{1 + \frac{1}{n} + \frac{(\hat{x}_0^{-} - \bar{x})^2}{S_{xx}}}$$

(50)

Interestingly, for the simple linear calibration problem the Wald-based interval is equivalent to the approximate inversion interval:

[Figure: calibration line ŷ = β̂0 + β̂1x with prediction bands; for the response y0 the Wald interval and the (approximate) inversion interval around x̂0 essentially coincide.]

(51)

It is worth noting that the expression for s_x̂0 can be written in an alternative form, if the following relationships are considered:

$$\hat{y}_i = \bar{y} + \hat{\beta}_1 (x_i - \bar{x}) \qquad\Rightarrow\qquad \hat{y}_i - \bar{y} = \hat{\beta}_1 (x_i - \bar{x})$$

Indeed, the equation:

$$s_{\hat{x}_0} = \frac{s_{y/x}}{\hat{\beta}_1}\sqrt{\frac{1}{m} + \frac{1}{n} + \frac{(\hat{x}_0 - \bar{x})^2}{S_{xx}}}$$

is easily transformed into:

$$s_{\hat{x}_0} = \frac{s_{y/x}}{\hat{\beta}_1}\sqrt{\frac{1}{m} + \frac{1}{n} + \frac{(\bar{y}_0 - \bar{y})^2}{\hat{\beta}_1^2\, S_{xx}}}$$

(52)

As shown in the following figure, even a limited number of replicates can reduce significantly the uncertainty on x0:

m = 1:

$$s_{\hat{x}_0} = \frac{s_{y/x}}{\hat{\beta}_1}\sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{S_{xx}}}$$

m = 3:

$$s_{\hat{x}_0} = \frac{s_{y/x}}{\hat{\beta}_1}\sqrt{\frac{1}{3} + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{S_{xx}}}$$

Moreover, given a certain number of replicates, the uncertainty on x0 is minimized when x0 = x̄ and when Sxx is increased.

(53)

Considerations about the Sxx term and the experimental design of calibration

The SXX term depends on the number of experimental points but also on their distribution along the x axis.

Sxx values as a function of the distribution of calibration points are reported in the figure on the right:

As apparent, they are higher for designs including clusters of points close to the ends of the x interval. However, such designs would not provide good guarantees on the adequacy of the adopted linear regression model.

Consequently, calibration designs including evenly-spaced points, that lead to intermediate values for Sxx but, additionally, provide information on important aspects like homoscedasticity and linearity, are usually preferred.

(54)

It is also worth noting that, as shown in the following figure, the distribution of the experimental points also has a clear influence on the centroid, indicated by an arrow in each set of points:

[Figure: four distributions of calibration points over the 0–20 range of x, with Sxx = 311.7, 298.2, 514.9 and 312.0, respectively.]

The centroid displacement has, in turn, a direct influence on the position of confidence and prediction bands, and, consequently, on the width of confidence intervals for extrapolated concentrations.
