Inferential Statistics: Hypothesis tests, Confidence intervals
Eva Riccomagno, Maria Piera Rogantin
DIMA – Università di Genova
riccomagno@dima.unige.it rogantin@dima.unige.it
Part G. Multiple tests
Part H. Confidence intervals
1. Introduction
2. Confidence interval for the mean
(a) of a Normal variable – known variance
(b) of a Normal variable – unknown variance
(c) of a variable with unknown distribution (approximate)
3. Different levels 1 − α
4. Confidence intervals and tests
Part G. Multiple tests
We may need to conduct many hypothesis tests concurrently. Suppose each test is conducted at level α.
For any one test, the chance of a false rejection of the null is α.
But the chance of at least one false rejection among all the tests is much higher.
Examples:
• Measuring the state of anxiety by questionnaire in two groups of subjects. Various questions help define the level of anxiety.
As more questions are compared, it becomes more likely that the two groups will appear to differ on at least one topic by random chance alone.
• Efficacy of a drug in terms of the reduction of any one of a number of disease symptoms. It becomes more likely that the drug will appear to be an improvement over existing drugs in terms of at least one symptom.
Consider m hypothesis tests:
H0i and H1i for i = 1, . . . , m
Example
For α = 0.05 and m = 2
Probability to retain both H01 and H02 when true: (1 − α)^2 = 0.95^2 ≈ 0.90
Probability to reject at least one true hypothesis: 1 − (1 − α)^2 = 1 − 0.95^2 ≈ 0.10
For α = 0.05 and m = 20
Probability to reject at least one true hypothesis:
1 − (1 − α)^20 = 1 − 0.95^20 ≈ 0.64
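The computation above generalizes to any α and m; a minimal R sketch (the function name fwer is ours):

```r
# Probability of at least one false rejection among m independent
# tests, each at level alpha, when all null hypotheses are true
fwer <- function(alpha, m) 1 - (1 - alpha)^m

fwer(0.05, 2)    # 0.0975 (about 0.10)
fwer(0.05, 20)   # about 0.64
```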
There are many ways to deal with this problem. Here we discuss two methods:
Bonferroni (B) Method
Let p1, . . . , pm denote the m p-values for these tests. Reject the null hypothesis H0i if
pi ≤ α/m
The probability of falsely rejecting any true null hypothesis is less than or equal to α
Example (continued)
m = 2: 1 − (1 − 0.05/2)^2 = 0.0494
m = 20: 1 − (1 − 0.05/20)^20 = 0.0488
Benjamini-Hochberg (BH) Method
1. Let p(1) < · · · < p(m) denote the ordered p-values
2. Reject all null hypotheses H0(i) for which p(i) < (i/m) α
If the tests are not independent, the threshold to which p(i) is compared is appropriately adjusted
Example
Consider the following 10 (sorted) p-values. Fix α = 0.05

p = c(0.00017, 0.00448, 0.00671, 0.00907, 0.01220, 0.33626, 0.39341,
      0.53882, 0.58125, 0.98617)
alpha = 0.05; m = length(p); i = seq(1, m)
b = i*alpha/m         # Benjamini-Hochberg thresholds
BH = (p < b)          # Benjamini-Hochberg decisions
B = (p < alpha/m)     # Bonferroni decisions
cbind(p, b, BH, B)

            p     b BH B
 [1,] 0.00017 0.005  1 1
 [2,] 0.00448 0.010  1 1
 [3,] 0.00671 0.015  1 0
 [4,] 0.00907 0.020  1 0
 [5,] 0.01220 0.025  1 0
 [6,] 0.33626 0.030  0 0
 [7,] 0.39341 0.035  0 0
 [8,] 0.53882 0.040  0 0
 [9,] 0.58125 0.045  0 0
[10,] 0.98617 0.050  0 0
Reject H0i for
- i = 1, 2 with Bonferroni method
- i = 1, 2, 3, 4, 5 with Benjamini-Hochberg method
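The same decisions can be obtained with R's built-in p.adjust, which returns adjusted p-values to be compared directly with α:

```r
p <- c(0.00017, 0.00448, 0.00671, 0.00907, 0.01220,
       0.33626, 0.39341, 0.53882, 0.58125, 0.98617)
alpha <- 0.05

# Reject H0(i) when the adjusted p-value is <= alpha
which(p.adjust(p, method = "bonferroni") <= alpha)   # 1 2
which(p.adjust(p, method = "BH") <= alpha)           # 1 2 3 4 5
```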
Abuse of test
Warning! There is a tendency to use hypothesis testing methods even when they are not appropriate. Often, estimation and confidence intervals are better tools. Use hypothesis testing only when you want to test a well-defined hypothesis
(from Wasserman)
A summary of the paper: Regina Nuzzo (2014). Statistical Errors – P values, the “gold standard” of statistical validity, are not as reliable as many scientists assume. Nature, vol. 506, pp. 150–152
• Ronald Fisher 1920s
– intended p-values as an informal way to judge whether evidence was significant for a second look
– one part of a fluid, non-numerical process that blended data and background knowledge to lead to scientific conclusions
• Interpretation
– the p-value summarizes the data assuming a specific null hypothesis
• Caveats
– tendency to deflect attention from the actual size of an effect
– P-hacking or significance-chasing, including making assumptions, monitoring data while it is being collected, excluding data points, . . .
• Measures that can help
– look for replicability
– do not ignore exploratory studies nor prior knowledge
– report effect sizes and confidence intervals
– take advantage of Bayes’ rule (not part of this course unfortunately)
– try multiple methods on the same data set
– adopt a two-stage analysis, or “preregistered replication”
Part H. Confidence intervals
1. Introduction
2. Confidence interval for the mean
(a) of a Normal variable – known variance
(b) of a Normal variable – unknown variance
(c) of a variable with unknown distribution (approximate)
3. Different levels 1 − α
4. Confidence intervals and tests
1. Introduction
Let θ be a real valued parameter and L and U two real valued functions of the random sample X = (X1, . . . , Xn) such that
L(x) ≤ U(x) for all instances x of the random sample.
Then (L, U) is called an interval estimator for θ
If P(θ ∈ (L, U)) ≥ 1 − α
then (L, U) is a 1 − α confidence interval
1 − α is called the coverage of the confidence interval. Usually 1 − α = 0.95
We have at least a 1 − α chance of covering the unknown parameter with the interval estimator (from Casella and Berger)
From Wasserman
Warning! (L, U ) is random and θ is fixed
Warning! There is much confusion about how to interpret a confidence interval. A confidence interval is not a probability statement about θ since θ is a fixed quantity [. . . ]
Warning! Some texts interpret confidence intervals as follows: if I repeat the experiment over and over, the interval will contain the parameter (1 − α) · 100% of the time, e.g. 95% of the time. This is correct but useless since we rarely repeat the same experiment over and over. [. . . ] Rather
- day 1: θ1 ⇒ collect data ⇒ construct a 95% CI for θ1
- day 2: θ2 ⇒ collect data ⇒ construct a 95% CI for θ2
- day 3: θ3 ⇒ collect data ⇒ construct a 95% CI for θ3
- . . .
Then 95 percent of your intervals will trap the true parameter value. There is no need to introduce the idea of repeating the same experiment over and over
2. Confidence interval for the mean of a random variable
X1, . . . , Xn i.i.d. random sample. Parameter of interest µ
• Point estimator: X̄;
point estimate x̄ (sample value of X̄ at the observed data points)
• Confidence interval or interval estimator with coverage 1 − α:
(X̄ − δ, X̄ + δ)  with δ such that  P(X̄ − δ < µ < X̄ + δ) = 1 − α
The limits of the interval, X̄ − δ and X̄ + δ, are random variables
The sample confidence interval is:
(x̄ − δ, x̄ + δ)
How to compute δ? Using the (exact or approximate) distribution of the point estimator X̄
2. (a) Confidence interval for the mean of a Normal variable – known variance
For X1, . . . , Xn i.i.d. sample random variables with X1 ∼ N(µ, σ²):
X̄ ∼ N(µ, σ²/n)   or   Z = (X̄ − µ)/(σ/√n) ∼ N(0, 1)
1 − α = P(X̄ − δ < µ < X̄ + δ) = P(µ − δ < X̄ < µ + δ)
Example
X1 ∼ N(µ, 4), n = 9, 1 − α = 0.95
X̄ ∼ N(µ, 4/9)
[Figure: density of X̄ with the interval (µ − δ, µ + δ) marked around µ]
Computation of δ
1 − α = P(µ − δ < X̄ < µ + δ)
      = P( (µ − δ − µ)/(σ/√n) < (X̄ − µ)/(σ/√n) < (µ + δ − µ)/(σ/√n) )
      = P( −δ/(σ/√n) < Z < δ/(σ/√n) )
⇒ δ/(σ/√n) = z1−α/2  ⇒  δ = z1−α/2 · σ/√n
[Figure: density functions of Z ∼ N(0, 1) and of X̄ ∼ N(µ, 4/9); z1−0.05/2 = 1.96, δ = 1.31]
Confidence interval for µ:
( X̄ − z1−α/2 σ/√n , X̄ + z1−α/2 σ/√n )
Sample confidence interval for µ:
( x̄ − z1−α/2 σ/√n , x̄ + z1−α/2 σ/√n )
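For the example X̄ ∼ N(µ, 4/9) above, δ and the sample interval can be computed in R; a minimal sketch, where the sample mean xbar = 5 is an arbitrary illustrative value:

```r
# 95% confidence interval for the mean of a Normal variable
# with known variance: sigma^2 = 4, n = 9
alpha <- 0.05; sigma <- 2; n <- 9
xbar  <- 5                      # hypothetical sample mean

delta <- qnorm(1 - alpha/2) * sigma / sqrt(n)
round(delta, 2)                 # 1.31
c(xbar - delta, xbar + delta)   # sample confidence interval
```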
- we do not know whether µ belongs to this sample interval, whose limits are computed using the sample value x̄
- for another sample value x̄ the interval would be different
Among all possible confidence intervals constructed as before, 95% contain µ and 5% do not
Simulation for 100 samples: n = 80, σ² = 4, 1 − α = 95%
( x̄ − 1.96 · 2/√80 , x̄ + 1.96 · 2/√80 )
6 intervals do not contain µ
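A simulation of this kind can be reproduced in R. This is a sketch under arbitrary assumptions (true mean µ = 0 and a fixed seed); the number of intervals missing µ will vary around 5:

```r
set.seed(1)
mu <- 0; sigma <- 2; n <- 80; alpha <- 0.05
delta <- qnorm(1 - alpha/2) * sigma / sqrt(n)

# For each of 100 samples, does the interval (xbar - delta, xbar + delta)
# contain the true mean mu?
covers <- replicate(100, {
  xbar <- mean(rnorm(n, mean = mu, sd = sigma))
  (xbar - delta < mu) && (mu < xbar + delta)
})
sum(!covers)   # number of intervals that do not contain mu
```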
2. (b) Confidence interval for the mean of a Normal variable – unknown variance
For X1, . . . , Xn random sample with X1 ∼ N(µ, σ²), as point estimators of µ and σ² take X̄ and S² respectively
Consider the random variable
T = (X̄ − µ)/(S/√n) ∼ t[n−1]
The computation of the confidence interval for µ is similar to the Normal case:
( X̄ − t1−α/2 S/√n , X̄ + t1−α/2 S/√n )
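In R this is the interval reported by t.test; a minimal sketch on simulated data (the sample x and its parameters are arbitrary illustrative choices):

```r
set.seed(42)
x <- rnorm(25, mean = 10, sd = 3)   # hypothetical sample
n <- length(x); alpha <- 0.05

delta <- qt(1 - alpha/2, df = n - 1) * sd(x) / sqrt(n)
c(mean(x) - delta, mean(x) + delta) # 95% confidence interval

# t.test returns the same interval
t.test(x, conf.level = 1 - alpha)$conf.int
```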
2. (c) Confidence interval for the mean of a random variable with unknown distribution
If the sample size is “large” we can use the approximate distribution of X̄ via the CLT:
( X̄ − z1−α/2 S/√n , X̄ + z1−α/2 S/√n )
3. Different coverage coefficients 1 − α
A 95%-confidence interval or a 99%-confidence interval?
Values of the 1 − α/2 quantile of a standard normal random variable N(0, 1):
z0.950 = 1.64 (coverage 0.90)   z0.975 = 1.96 (coverage 0.95)   z0.995 = 2.58 (coverage 0.99)
What is gained in precision (a narrower interval) is lost in coverage, and vice versa
Example X̄ ∼ N(µ, 4/80) and assume x̄ = 2.5:
- at 90%: δ = 0.37, sample confidence interval (2.13, 2.87)
- at 95%: δ = 0.44, sample confidence interval (2.06, 2.94)
- at 99%: δ = 0.58, sample confidence interval (1.92, 3.08)
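These values of δ can be checked directly in R:

```r
sigma <- 2; n <- 80; xbar <- 2.5

# delta and sample interval for three coverage levels
for (conf in c(0.90, 0.95, 0.99)) {
  delta <- qnorm(1 - (1 - conf)/2) * sigma / sqrt(n)
  cat(sprintf("%.0f%%: delta = %.2f, interval (%.2f, %.2f)\n",
              100 * conf, delta, xbar - delta, xbar + delta))
}
```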
4. Confidence intervals and tests
Parameter of interest: µ of a N(µ, σ²) with σ² known
Two-sided 1 − α confidence interval for µ and two-sided test at level α (H0 : µ = µ0 and H1 : µ ≠ µ0)
H0 is retained for x̄ ∈ (µ0 − δ, µ0 + δ). The sample confidence interval is (x̄ − δ, x̄ + δ)
[Figure: the acceptance region (µ0 − δ, µ0 + δ) centered at µ0, with two sample confidence intervals centered at x̄A and x̄B]
where δ = z1−α/2 σ/√n in both cases
The interval where H0 is retained is centered at µ0, while the confidence interval is centered at x̄
If the sample confidence interval contains µ0 then H0 is retained, and vice versa
Compare tests and confidence intervals in R output
Example. Chicago Tribune (continued)

> np = 750*0.2347; prop.test(np, 750, 0.25)

        1-sample proportions test with continuity correction

data:  np out of 750, null probability 0.25
X-squared = 0.85654, df = 1, p-value = 0.3547
alternative hypothesis: true p is not equal to 0.25
95 percent confidence interval:
 0.2051343 0.2670288
sample estimates:
     p
0.2347

> prop.test(np, 750, 0.25, "less")

        1-sample proportions test with continuity correction

data:  np out of 750, null probability 0.25
X-squared = 0.85654, df = 1, p-value = 0.1774
alternative hypothesis: true p is less than 0.25
95 percent confidence interval:
 0.0000000 0.2617696
sample estimates:
     p
0.2347
Remark. If the parameter of interest is a proportion p, then δ is different for confidence intervals and tests, because it depends on the standard deviation:
in the first case it is calculated using the sample value p̂, in the second one using p0
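A minimal sketch of this difference (ignoring the continuity correction that prop.test applies):

```r
phat <- 0.2347; p0 <- 0.25; n <- 750
z <- qnorm(0.975)

# Standard error based on the sample proportion (confidence interval)
delta_ci   <- z * sqrt(phat * (1 - phat) / n)
# Standard error based on the null value p0 (acceptance region of the test)
delta_test <- z * sqrt(p0 * (1 - p0) / n)

c(delta_ci, delta_test)   # the two half-widths differ
```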