
Hypothesis tests to compare means of two populations - independent samples

The tests described in the previous section involve a comparison between a population mean and a standard. More commonly occurring situations involve a comparison of the means of two populations. Suppose, for example, that we would like to compare the mean salaries of female and male financial analysts. This comparison can be expressed as a test of the hypotheses

\begin{eqnarray*}
H_0:&\ \mu_1 = \mu_2\\
H_1:&\ \mu_1 \ne \mu_2,
\end{eqnarray*}

where $\mu_1$ represents the population mean salary of all male financial analysts and $\mu_2$ represents the population mean salary of all female financial analysts. The problem is stated as a two-sided hypothesis so that we can detect an increase as well as a decrease in female salaries compared to male salaries. We will assume that these populations have approximately normal distributions or that the sample sizes are large enough for the central limit theorem to apply. The simplest way to make this comparison is to select a random sample from each group separately. This sampling method produces independent samples. Let $\sigma_1$ and $\sigma_2$ denote the population standard deviations of male and female salaries, respectively, and let $n_1,\bar{X}_1,s_1$ and $n_2,\bar{X}_2,s_2$ denote the sample sizes, sample means, and sample standard deviations of the respective samples. It is reasonable to base our decision on $\bar{X}_1-\bar{X}_2$, the difference between the sample means. To construct a test statistic based on this difference, we need to determine its sampling distribution, that is, the distribution of $\bar{X}_1-\bar{X}_2$ over all possible samples of size $n_1$ for males and $n_2$ for females. Let

\begin{displaymath}
V_1 = \frac{s_1^2}{n_1},\ \ V_2 = \frac{s_2^2}{n_2}.
\end{displaymath}

Statistical theory shows that if the populations are approximately normal or if the sample sizes are large, then the distribution of

\begin{displaymath}
\frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\sqrt{V_1 + V_2}}
\end{displaymath}

has approximately a t-distribution with degrees of freedom given by

\begin{displaymath}
\nu = \frac{(V_1 + V_2)^2}{\frac{V_1^2}{n_1-1} + \frac{V_2^2}{n_2-1}}.
\end{displaymath}

If the null hypothesis is true, then

\begin{displaymath}
\frac{\bar{X}_1 - \bar{X}_2}{\sqrt{V_1 + V_2}}
\end{displaymath}

has approximately a t-distribution with $\nu$ degrees of freedom. Strong evidence for this two-sided alternative would be sample means that are far apart. Therefore, the p-value is $P(\vert T\vert \ge \vert T_0\vert) = 2P(T \ge \vert T_0\vert)$, where

\begin{displaymath}
T_0 = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{V_1 + V_2}}.
\end{displaymath}

This test is referred to as Welch's approximation to the two-sample t-test.
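
As an illustration of these computations, here is a short R sketch that carries out Welch's test from summary statistics alone; the function name welch.summary and its arguments are placeholders introduced here for illustration, not standard R objects.

welch.summary = function(n1, xbar1, s1, n2, xbar2, s2) {
  # estimated variances of the two sample means
  V1 = s1^2/n1
  V2 = s2^2/n2
  # test statistic and Welch degrees of freedom
  T0 = (xbar1 - xbar2)/sqrt(V1 + V2)
  nu = (V1 + V2)^2/(V1^2/(n1 - 1) + V2^2/(n2 - 1))
  # two-sided p-value
  c(T0 = T0, df = nu, p.value = 2*pt(-abs(T0), nu))
}

When the raw observations are available, R's t.test(x, y) performs this test by default, since its var.equal argument defaults to FALSE.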

Care should be taken with one-sided alternatives, since only one direction indicates strong evidence for the alternative. If the hypotheses are

\begin{eqnarray*}
H_0:&\ \mu_1 \le \mu_2\\
H_1:&\ \mu_1 > \mu_2,
\end{eqnarray*}

then strong evidence for the alternative hypothesis would be a value of $\bar{X}_1 - \bar{X}_2$ that is a large positive number. If $\bar{X}_2$ is much larger than $\bar{X}_1$, then the decision should be to not reject the null hypothesis even though the sample means are far apart. Likewise, if the hypotheses are

\begin{eqnarray*}
H_0:&\ \mu_1 \ge \mu_2\\
H_1:&\ \mu_1 < \mu_2,
\end{eqnarray*}

then strong evidence for the alternative hypothesis would be a value of $\bar{X}_1 - \bar{X}_2$ that is a large negative number. If $\bar{X}_1$ is much larger than $\bar{X}_2$, then the decision should be to not reject the null hypothesis. The easiest way to handle these one-sided hypotheses is to form the test statistic according to the alternative hypothesis. If the hypotheses are

\begin{eqnarray*}
H_0:&\ \mu_1 \le \mu_2\\
H_1:&\ \mu_1 > \mu_2,
\end{eqnarray*}

then let

\begin{displaymath}
T_0 = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{V_1 + V_2}}.
\end{displaymath}

The p-value is $P(T>T_0)$. If $\bar{X}_2$ is larger than $\bar{X}_1$, then $T_0$ would be negative and so this p-value would be greater than 0.5 and we would not reject the null hypothesis. If the hypotheses are

\begin{eqnarray*}
H_0:&\ \mu_1 \ge \mu_2\\
H_1:&\ \mu_1 < \mu_2,
\end{eqnarray*}

then let

\begin{displaymath}
T_0 = \frac{\bar{X}_2 - \bar{X}_1}{\sqrt{V_1 + V_2}}.
\end{displaymath}

The p-value in this case is $P(T>T_0)$. If $\bar{X}_1$ is larger than $\bar{X}_2$, then $T_0$ would be negative and so this p-value would be greater than 0.5 and we would not reject the null hypothesis.
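
To make the direction explicit, here is a hedged R sketch of the one-sided test; the function name welch.onesided and its arguments are again placeholders. The statistic $T_0$ is formed according to the alternative hypothesis, and the p-value is always the upper tail $P(T>T_0)$.

welch.onesided = function(n1, xbar1, s1, n2, xbar2, s2,
                          alternative = c("greater", "less")) {
  alternative = match.arg(alternative)
  V1 = s1^2/n1
  V2 = s2^2/n2
  nu = (V1 + V2)^2/(V1^2/(n1 - 1) + V2^2/(n2 - 1))
  # numerator chosen so that a large positive T0 favors the alternative
  num = if (alternative == "greater") xbar1 - xbar2 else xbar2 - xbar1
  T0 = num/sqrt(V1 + V2)
  c(T0 = T0, df = nu, p.value = 1 - pt(T0, nu))
}

With raw data, the same one-sided tests are available as t.test(x, y, alternative = "greater") for $H_1:\ \mu_1 > \mu_2$ and t.test(x, y, alternative = "less") for $H_1:\ \mu_1 < \mu_2$.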

The validity of this two-sample test depends on the assumption of normality of the populations. If the populations are not normally distributed and the sample sizes are not sufficiently large for the Central Limit Theorem to compensate for this non-normality, then the p-values obtained as described above will not be valid. There is a non-parametric test, the Wilcoxon-Mann-Whitney rank sum test, that can be used in place of the two-sample t-test. Most statistical computer packages include this test among their two-sample test methods, but it will not be discussed here.

Example. Suppose we wish to test the hypotheses

\begin{eqnarray*}
H_0:&\ \mu_1 = \mu_2\\
H_1:&\ \mu_1 \ne \mu_2,
\end{eqnarray*}

based on a random sample of 25 male financial analysts and a random sample of 18 female financial analysts, using a 5% level of significance. Suppose that the salaries in these samples give $\bar{X}_1=77500$, $s_1=6000$, $\bar{X}_2=72000$, $s_2=9000$. It is easier to express the salaries in units of 1000 dollars rather than dollars, so the data become $\bar{X}_1=77.5$, $s_1=6$, $\bar{X}_2=72$, $s_2=9$. Then

\begin{displaymath}
V_1 = 6^2/25 = 1.44,\ \ V_2 = 9^2/18 = 4.5,
\end{displaymath}


\begin{displaymath}
T_0 = \frac{77.5 - 72}{\sqrt{1.44 + 4.5}} = 2.257.
\end{displaymath}

The degrees of freedom are

\begin{displaymath}
\nu = \frac{(1.44+4.5)^2}{\frac{1.44^2}{24} + \frac{4.5^2}{17}} = 27.6.
\end{displaymath}

The degrees of freedom are rounded to 28 to obtain the p-value for this two-sided test. This p-value can be computed in R as
2*pt(-2.257,28)
which gives 0.032. So our decision is to reject the null hypothesis at the 5% level of significance. Since we now believe that there is a difference between the means, we can ask how large that difference is. This can be accomplished with a confidence interval for the difference between the population means. This confidence interval has the form

\begin{displaymath}
(\bar{X}_1 - \bar{X}_2) \pm t_\nu s_{ind},
\end{displaymath}

where the degrees of freedom for the t-value are the same as for the test statistic, and the standard deviation $s_{ind}$ is the denominator of the test statistic,

\begin{displaymath}
s_{ind} = \sqrt{V_1 + V_2}.
\end{displaymath}

A 95% confidence interval for the difference between the mean salaries for males and females is

\begin{displaymath}
(77.5 - 72) \pm 2.052\sqrt{1.44 + 4.5}\ \Longleftrightarrow\ 5.5 \pm 5.00 \Longleftrightarrow\ [0.5,10.5].
\end{displaymath}

This confidence interval expressed in dollars is [$500,$10,500]. That is, we are 95% confident that the difference between the means is within this interval. Note that all of these values are positive, indicating that the mean for males is greater than the mean for females.
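
As a check on the arithmetic, the following R sketch reproduces this example from the summary statistics (the object names are placeholders); the results agree with the values above up to rounding.

n1 = 25; xbar1 = 77.5; s1 = 6
n2 = 18; xbar2 = 72;   s2 = 9
V1 = s1^2/n1; V2 = s2^2/n2                         # 1.44 and 4.5
T0 = (xbar1 - xbar2)/sqrt(V1 + V2)                 # about 2.257
nu = (V1 + V2)^2/(V1^2/(n1 - 1) + V2^2/(n2 - 1))   # about 27.6
2*pt(-abs(T0), round(nu))                          # two-sided p-value, about 0.032
(xbar1 - xbar2) + c(-1, 1)*qt(0.975, nu)*sqrt(V1 + V2)  # 95% CI, roughly 0.5 to 10.5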

There are situations in which we may wish to compare the variances of two populations. In that case, the test statistic is the ratio of the sample variances, $s_1^2/s_2^2$. Statistical theory shows that if the populations are approximately normally distributed, then, under the assumption that the population variances are equal, the sampling distribution of this ratio is an F-distribution. This distribution has two degrees-of-freedom parameters, $n_1-1$ and $n_2-1$. This implies that a test of the hypotheses

\begin{eqnarray*}
H_0:&\ \sigma_1 = \sigma_2\\
H_1:&\ \sigma_1 \ne \sigma_2,
\end{eqnarray*}

can be constructed based on the ratio of sample variances. Since this test is inherently two-sided, in practice we divide the larger sample variance by the smaller sample variance, and the corresponding p-value is the area to the right of this ratio under the corresponding F-distribution. Note that we do not double this area to obtain the p-value for this test. For example, the data given above for the comparison of male and female financial analysts have sample standard deviations $s_1=6000$, $s_2=9000$ based on sample sizes of 25 and 18. Since the female sample has the larger variance, the test statistic is

\begin{displaymath}
\frac{9000^2}{6000^2} = 2.25,
\end{displaymath}

and the p-value is taken from the F-distribution with 17 and 24 degrees of freedom. This can be obtained using R as follows:
pv = 1 - pf(2.25,17,24)
which gives pv = 0.034. Therefore, we would reject the null hypothesis at the 5% level of significance. This conclusion is based on the assumption that the populations are approximately normal, so that assumption should be checked.
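
For completeness, here is the same computation as a short R sketch from the summary statistics (object names are placeholders; expressing salaries in 1000 dollar units would not change the ratio).

s1 = 6000;  n1 = 25    # males
s2 = 9000;  n2 = 18    # females
F0 = s2^2/s1^2                  # larger sample variance over smaller: 2.25
1 - pf(F0, n2 - 1, n1 - 1)      # upper tail of F with 17, 24 degrees of freedom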


Larry Ammann
2014-12-08