One Sample Tests

We focus on one sample tests.

Tests on a Gaussian sample

Consider a random sample (X1, …, Xn) from a distribution with mean μ and standard deviation σ. Recall that the empirical mean and the empirical variance are
$$\bar X = \frac{1}{n}\sum_{i=1}^n X_i, \qquad S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$$

Test of the mean

A first test is on the mean of the sample.

Example. For an adult, the logarithm of the D-dimer concentration, denoted by X, is modeled by a normal random variable with mean μ and standard deviation σ. The variable X is an indicator for the risk of thrombosis: it is considered that for healthy individuals, μ is −1, whereas for individuals at risk μ is 0. The influence of olive oil on thrombosis risk must be evaluated. A group of 13 patients, previously considered as being at risk, had an olive oil enriched diet. After the diet, their value of X was measured, and this gave an empirical mean of −0.15.

The doctor would like to decide if the olive oil diet has improved the D-dimer concentration.

The test on the mean of the sample compares the hypothesis H0 : μ = μ0 with a two-sided alternative H1 : μ ≠ μ0 or a one-sided alternative H1 : μ > μ0 or H1 : μ < μ0.

When the variance σ2 is known, the test statistic is
$$T = \sqrt{n} \left(\frac{\bar X - \mu_0}{\sigma}\right)$$

When H0 is true, the statistic T follows a 𝒩(0, 1) distribution.

The decision rule depends on H1. It consists of computing the bounds beyond which we reject H0. The bounds also depend on the risk of the test (the Type I error risk).

The test on the mean of the sample compares the hypothesis H0 : μ = μ0 with one of the three alternatives:

  • For H1 : μ ≠ μ0: we reject H0 when T takes too small or too large values. At a risk of α = 5%, the two bounds are
R> alpha <- 0.05
R> qnorm(alpha/2,0,1)
[1] -1.959964
R> qnorm(1-alpha/2,0,1)
[1] 1.959964

We reject H0 when T < −1.959964 or when T > 1.959964.

  • For H1 : μ > μ0: we reject H0 when T takes too large values. At a risk of α = 5%, the bound is
R> alpha <- 0.05
R> qnorm(alpha,0,1, lower.tail=FALSE)
[1] 1.644854

We reject H0 when T > 1.644854.

  • For H1 : μ < μ0: we reject H0 when T takes too small values. At a risk of α = 5%, the bound is
R> alpha <- 0.05
R> qnorm(alpha,0,1)
[1] -1.644854

We reject H0 when T < −1.644854.

Back to the example. We assume that the sample of 13 patients is a Gaussian sample. The standard deviation σ is supposed to be known and equal to 0.3. We want to test
H0 : μ = 0 versus H1 : μ = −1

The test statistic is
$$ T = \sqrt{13} \left(\frac{\bar X - 0}{0.3}\right)$$
Under the null hypothesis H0, T follows the normal distribution 𝒩(0, 1). The hypothesis H0 is rejected when T takes low values. At risk 5%, the bound is

R> qnorm(0.05,0,1)
[1] -1.644854

The decision rule is: reject H0 if T < −1.6449.

For $\bar X= -0.15$, the test statistic takes the value

R> n <- 13
R> Xbar <- -0.15
R> sig <- 0.3
R> mu0 <- 0
R> t <- sqrt(n)*(Xbar-mu0)/sig
R> t
[1] -1.802776

Decision and interpretation: At risk 5%, the hypothesis H0 is rejected. The decision is that there has been a significant improvement.
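Equivalently, the decision can be based on the p-value, here the left-tail probability of the observed statistic under 𝒩(0, 1); using the object t computed above, it falls below the 5% risk, so H0 is rejected:

R> pnorm(t)    # left-tail p-value, about 0.036 < 0.05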

The previous case assumes that the standard deviation σ is known, which is usually not the case in practice. The test of the mean adapts as follows when the variance is unknown.

The test on the mean of the sample compares the hypothesis H0 : μ = μ0 versus a two-sided alternative H1 : μ ≠ μ0 or a one-sided alternative H1 : μ > μ0 or H1 : μ < μ0.

When the variance σ2 is unknown, the test statistic is
$$T = \sqrt{n} \left(\frac{\bar X - \mu_0}{S}\right)$$

When H0 is true, the statistic T follows a Student distribution with n − 1 degrees of freedom T(n − 1).

The decision rules are as before, but the bounds are computed from the Student distribution instead of the normal distribution.

  • For H1 : μ ≠ μ0: we reject H0 when T takes too small or too large values. At a risk of α = 5%, the two bounds are
R> alpha <- 0.05
R> n<-13
R> qt(alpha/2,n-1)
[1] -2.178813
R> qt(1-alpha/2,n-1)
[1] 2.178813

We reject H0 when T is outside the two bounds.

  • For H1 : μ > μ0: we reject H0 when T takes too large values. At a risk of α = 5%, the bound is
R> alpha <- 0.05
R> n <- 13
R> qt(alpha, n-1, lower.tail=FALSE)
[1] 1.782288

We reject H0 when T is larger than the bound.

  • For H1 : μ < μ0: we reject H0 when T takes too small values. At a risk of α = 5%, the bound is
R> alpha <- 0.05
R> n <- 13
R> qt(alpha, n-1)
[1] -1.782288

We reject H0 when T is lower than the bound.

Back to the example. We now assume that the standard deviation σ is unknown and estimated by S = 0.3. We want to test
H0 : μ = 0 versus H1 : μ = −1

The test statistic is
$$ T = \sqrt{13} \left(\frac{\bar X - 0}{0.3}\right)$$
Under the null hypothesis H0, T follows a Student distribution with 12 degrees of freedom. The hypothesis H0 is rejected when T takes low values. At risk 5%, the bound is

R> qt(0.05,12)
[1] -1.782288

The decision rule is: reject H0 if T < −1.7823.

For $\bar X= -0.15$, the test statistic takes the value

R> n <- 13
R> Xbar <- -0.15
R> s <- 0.3
R> mu0 <- 0
R> t <- sqrt(n)*(Xbar-mu0)/s
R> t
[1] -1.802776

Decision and interpretation: At risk 5%, the hypothesis H0 is rejected. The decision is that there has been a significant improvement.
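The same decision follows from the p-value, now computed from the Student distribution with n − 1 = 12 degrees of freedom, using the objects t and n from above; it is just below the 5% risk:

R> pt(t, n-1)    # left-tail p-value, about 0.048 < 0.05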

The previous example uses the estimates of the mean and the standard deviation together with the sample size. In practice, all the values of the sample are usually available. In that case, the user can compute the mean and the standard deviation and apply the previous instructions, or directly use the function t.test.

R code for the test on a mean. The mean of a sample can be tested using the function t.test.

t.test(X, mu=mu0, alternative)

The function computes the test statistic of Student’s t-test comparing mean(X) to mu0, and the corresponding p-value according to the alternative; the arguments mu and alternative should be passed by name, as in the examples below. The null hypothesis H0 is “the mean is equal to mu0”.
The alternative is one of "two.sided" (default), "less", "greater"; they are understood as:

  • "two.sided": the true mean is different from mu0;
  • "less": the true mean is less than mu0;
  • "greater": the true mean is greater than mu0.

Example To test whether the mean of the age in the LenzI sample is equal to 60, we run the following code

R> LenzI <- readRDS("data/LenzI.rds")
R> A<-LenzI$age
R> t.test(A, mu=60)

	One Sample t-test

data:  A
t = 1.5019, df = 413, p-value = 0.1339
alternative hypothesis: true mean is not equal to 60
95 percent confidence interval:
 59.6479 62.6323
sample estimates:
mean of x 
  61.1401

The two hypotheses of this t-test are H0 : μ = 60 and H1 : μ ≠ 60. The output gives the test statistic t, the degrees of freedom df, the p-value, a 95 percent confidence interval for the mean, and the empirical mean.

Interpretation of the t.test output: the p-value is 0.1339. Therefore, at a risk of 5%, we do not reject H0. The mean age is not significantly different from 60 years.
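The components of the result can also be extracted programmatically, for instance the p-value (res is a hypothetical name for the stored result):

R> res <- t.test(A, mu=60)
R> res$p.value    # 0.1339, as printed above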

We can also apply a one-sided test by changing the alternative. To test the alternative H1 : μ > 60, run

R> A<-LenzI$age
R> t.test(A, mu=60, alternative= "greater")

	One Sample t-test

data:  A
t = 1.5019, df = 413, p-value = 0.06695
alternative hypothesis: true mean is greater than 60
95 percent confidence interval:
 59.88867      Inf
sample estimates:
mean of x 
  61.1401

Interpretation of the t.test output: the p-value is 0.06695. Therefore, at a risk of 5%, we do not reject H0. The mean age is not significantly greater than 60 years.

Note that even though the empirical mean (61.1401) is greater than 60 years, the difference is not significant, and we cannot conclude, at a risk of 5%, that the mean is larger than 60. Several reasons could be involved: the variability in the sample may be too large (the standard error of the empirical mean is large), or the sample size may be too small (the empirical mean is not estimated with enough precision).

Test of the standard deviation or the variance

One can also test the value of the standard deviation or the variance of a Gaussian sample.

The test on the variance of the sample compares the hypothesis H0 : σ2 = σ02 versus a two-sided alternative H1 : σ2 ≠ σ02 or a one-sided alternative H1 : σ2 > σ02 or H1 : σ2 < σ02.

The test statistic is
$$T = (n-1) \left(\frac{S^2}{\sigma_0^2}\right)$$

When H0 is true, the statistic T follows a chi-square distribution with n − 1 degrees of freedom χ2(n − 1).
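Base R has no dedicated one-sample variance test, but the test is easy to carry out by hand. A minimal sketch for the two-sided alternative at risk 5%, assuming a Gaussian sample X and a tested variance sigma0sq (both names are hypothetical):

R> # X: Gaussian sample, sigma0sq: tested value of the variance (hypothetical)
R> n <- length(X)
R> T <- (n-1) * var(X) / sigma0sq      # test statistic
R> alpha <- 0.05
R> qchisq(alpha/2, n-1)                # lower bound (about 4.40 for n = 13)
R> qchisq(1-alpha/2, n-1)              # upper bound (about 23.34 for n = 13)

We reject H0 when T is outside the two bounds.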

Test of the mean for large samples

Finally, a test of the mean exists for large samples; thanks to the Central Limit Theorem, the assumption that the sample is Gaussian is not needed.

The test on the mean of a large sample compares the hypothesis H0 : μ = μ0 versus a two-sided alternative H1 : μ ≠ μ0 or a one-sided alternative H1 : μ > μ0 or H1 : μ < μ0.

The test statistic is
$$T = \sqrt{n} \left(\frac{\bar X - \mu_0}{S}\right)$$

When H0 is true, the statistic T follows a normal distribution 𝒩(0, 1).

With R, the test of the mean for a large sample can be applied with the function t.test (as above), since a Student distribution with many degrees of freedom is very close to the normal distribution.
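For illustration, a minimal by-hand sketch of the two-sided large-sample test, assuming a numeric sample X and a tested mean mu0 (hypothetical names):

R> # X: large sample, mu0: tested mean (hypothetical)
R> T <- sqrt(length(X)) * (mean(X) - mu0) / sd(X)
R> 2 * pnorm(abs(T), lower.tail=FALSE)   # two-sided p-value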

Test of a proportion

The previous tests apply to Gaussian samples. When the variable of interest is binary (two possible modalities), we are interested in comparing the proportion of the first modality with a theoretical proportion p0.

Example For a certain disease, there exists a treatment that cures 70% of the cases. A laboratory proposes a new treatment claiming that it is better than the previous one. Out of 100 patients having received the new treatment, 74 of them have been cured. The expert would like to decide whether the new treatment should be authorized.


The test on the proportion of a binary sample compares the hypothesis H0 : p = p0 versus a two-sided alternative H1 : p ≠ p0 or a one-sided alternative H1 : p < p0 or H1 : p > p0.

Back to the example The hypotheses we want to test are H0 : p = 0.7 versus H1 : p > 0.7.


A value of a proportion can be tested using the function prop.test.

prop.test(x,n,p,alternative)

The null hypothesis H0 is: “the proportion of x out of n is equal to p”. The alternative is one of "two.sided" (default), "less", "greater"; they are understood as:

  • "two.sided": the true proportion is different from p;
  • "less": the true proportion is less than p;
  • "greater": the true proportion is greater than p.

Back to the example The one-sided test is applied by running

R> prop.test(x=74, n=100, p=0.7, alternative="greater")

Interpretation The p-value is 0.2225. At risk 5%, we do not reject H0. The new treatment is not significantly better than the standard treatment. It should not be authorized.

Note that when the whole binary sample X is available (and not only the count of "successes"), the instruction is prop.test(sum(X), length(X), p, alternative).
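For instance, the example above can be reconstructed from a hypothetical 0/1 vector of cure indicators:

R> X <- c(rep(1, 74), rep(0, 26))   # 74 cured out of 100 (illustration)
R> prop.test(sum(X), length(X), p=0.7, alternative="greater")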


Goodness-of-fit tests

A goodness-of-fit test answers the question: could the sample have been drawn at random from a particular distribution?

Example Consider a diploid population with allele frequencies 0.4 for A and 0.6 for a. At Hardy-Weinberg equilibrium, the probabilities of the three genotypes AA, Aa, aa are (0.16, 0.48, 0.36). The observed frequency table of the three genotypes is (1600, 4900, 3500). Is the theoretical model plausible?


Chi-squared test

For a discrete variable, the goodness-of-fit is measured by a distance between the relative frequencies of the variable, and the probabilities of the target distribution.
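Concretely, if Nk denotes the observed count of the k-th value (so Nk/n is its relative frequency), n the sample size, and pk the corresponding theoretical probability, the distance is the chi-squared statistic
$$T = \sum_{k} \frac{(N_k - n\,p_k)^2}{n\,p_k}$$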

For a discrete variable, the chi-squared test tests the null hypothesis H0: “the observed frequencies fit the theoretical probabilities” against the alternative H1: “the observed frequencies do not fit the theoretical probabilities”.

Under H0, the distance follows a chi-squared distribution. The parameter df of that chi-squared distribution is the number of different values minus 1, minus the number of estimated parameters, if there are any.

Under the alternative, the distance should be large, so that the p-value is computed as the right-tail probability of the chi-squared distribution at the distance.

Goodness-of-fit for a discrete variable can be tested using the function chisq.test, provided no parameters have been estimated. If X is the sample and p is the vector of theoretical probabilities, the result is obtained by:

chisq.test(table(X), p=p)

In that command, table(X) contains the observed frequencies of the values of X, and the argument p, which must be passed by name, contains the theoretical probabilities.

The answer is “the fit is good”, if the p-value is large (above the risk).

If some frequencies are too small, a warning message may be issued. If one or more parameters have been estimated, the test statistic should be extracted and the p-value recomputed as its right-tail probability under the chi-squared distribution with correspondingly fewer degrees of freedom.
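A minimal sketch of that correction, assuming hypothetical objects obs (observed counts) and probs (fitted probabilities), with one estimated parameter:

R> res <- chisq.test(obs, p=probs)
R> stat <- unname(res$statistic)
R> df <- length(obs) - 1 - 1             # one estimated parameter
R> pchisq(stat, df, lower.tail=FALSE)    # corrected right-tail p-value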

Back to the example The frequency table of the three genotypes AA, Aa, aa is (1600, 4900, 3500). The theoretical probabilities are (0.16, 0.48, 0.36). The chi-squared test is applied running

R> chisq.test(c(1600, 4900, 3500),p=c(0.16, 0.48, 0.36))

	Chi-squared test for given probabilities

data:  c(1600, 4900, 3500)
X-squared = 4.8611, df = 2, p-value = 0.08799

The output gives the test statistic X-squared, the degrees of freedom df, and the p-value.

The p-value is 0.08799. At risk 5%, we do not reject H0. The theoretical probabilities are acceptable.
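As a check, the statistic and the p-value can be recomputed from the chi-squared distance directly:

R> obs <- c(1600, 4900, 3500)
R> expec <- sum(obs) * c(0.16, 0.48, 0.36)   # expected counts
R> sum((obs - expec)^2 / expec)
[1] 4.861111
R> pchisq(4.8611, 2, lower.tail=FALSE)       # about 0.088, as above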

Kolmogorov-Smirnov test

For a continuous variable, the goodness-of-fit is measured by a distance between the empirical cumulative distribution function (ecdf) of the variable, and the cdf of the target distribution.

For a continuous variable, the Kolmogorov-Smirnov test compares the null hypothesis H0: “the empirical distribution of the data fits the theoretical distribution” with the alternative H1: “the empirical distribution does not fit the theoretical distribution”.

If X is the sample, dist the name of the theoretical cdf (for instance "pnorm"), and param1, param2, … the parameters of that distribution, passed as separate arguments (not as a vector), the result is obtained by:

ks.test(X, dist, param1, param2, alternative)

The answer is “the fit is good”, if the p-value is large (above the risk). The variable X should not have ties (equal values). If some values are equal, a warning message is issued, indicating that the p-value is not quite as precise. This does not affect the validity of the result.

The null hypothesis H0 is: “the distribution of the sample is the theoretical cdf”. The alternative is one of "two.sided" (default), "less", "greater"; they are understood as:

  • "two.sided": the true cdf of the sample differs from the theoretical cdf;
  • "less": the true cdf of the sample lies below the theoretical cdf;
  • "greater": the true cdf of the sample lies above the theoretical cdf.

Example The hypoxy level is given in the data set HY. Let us plot the ecdf of Level together with the cdf of a fitted normal distribution.

R> HY <- read.table("data/hypoxy.csv", header=TRUE, dec=",")
R> L<-HY$Level
R> plot(ecdf(L))
R> curve(pnorm(x,mean(L), sd(L)), col="red", add=TRUE)

[Figure: ecdf of Level (black curve) with the fitted normal cdf overlaid (red curve)]

The red curve is the cdf of a normal distribution with parameters μ = mean(L) ≈ 1.2 and σ = sd(L) ≈ 1. The ecdf (black curve) is quite far from this theoretical cdf. The Level variable is probably not normally distributed.

To test whether Level follows a normal distribution with parameters μ = 1.2 and σ = 1 or not, run

R> ks.test(L, "pnorm", 1.2, 1)

	One-sample Kolmogorov-Smirnov test

data:  L
D = 0.1782, p-value = 2.501e-06
alternative hypothesis: two-sided

The output gives the test statistic D (the maximal vertical distance between the ecdf and the theoretical cdf) and the p-value.

The p-value is 2.501e-06. At risk 5%, we reject H0. The sample level does not follow a normal distribution 𝒩(1.2, 1).

The histogram of the variable Level reveals a right-skewed distribution, close to a log-normal distribution. Let us log-transform the data

R> LL<-log(L)

and plot the ecdf (black curve) of the log-Level together with the cdf (red curve) of a normal distribution with parameters log(1.2) ≈ 0.2 and 1.

[Figure: ecdf of log-Level (black curve) with the normal cdf 𝒩(0.2, 1) overlaid (red curve)]
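Following the pattern used above for Level, the plot can be produced with:

R> plot(ecdf(LL))
R> curve(pnorm(x, 0.2, 1), col="red", add=TRUE)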

The two curves are quite close. Let us test whether LL follows a normal distribution with parameters log(1.2) ≈ 0.2 and 1:

R> ks.test(LL, "pnorm", 0.2, 1)

	One-sample Kolmogorov-Smirnov test

data:  LL
D = 0.51048, p-value < 2.2e-16
alternative hypothesis: two-sided

The p-value is still very small. At risk 5%, we reject H0. The sample log-level does not follow a normal distribution 𝒩(0.2, 1).

Remark: the difference between the two curves (ecdf and cdf) on the left is large enough to reject the null hypothesis.

Normality test

Testing whether a variable is normally distributed is different from testing whether a particular normal distribution with given parameters fits the variable.

The normality of a continuous variable is tested with the Shapiro-Wilk test.

The null hypothesis H0 is: “the variable is normally distributed”. The alternative is “the variable is not normally distributed”.

If X is the sample, the result is obtained by:

shapiro.test(X)

Back to the example Test of the normality of the log-level of hypoxy:

R> shapiro.test(LL)

	Shapiro-Wilk normality test

data:  LL
W = 0.95329, p-value = 1.936e-06

The output gives the test statistic W and the p-value.

The p-value is 1.936e-06. At risk 5%, we reject H0. The sample log-level is not normally distributed.
