The details of some of the steps differ: The method of determining the critical region depends on which one-sample test we are using, and, of course, the way we calculate the (estimate of the) standard error differs for T and Z.
This step is the same for both one-sample tests. We actually state two hypotheses: the null hypothesis (H0), which states that there is no effect, and the alternative hypothesis (H1), which states that there is an effect.
Z-Test: We use the alpha-level to find the critical Z value in the Z table.
T-Test: We use the alpha-level and the degrees of freedom to find the critical T value in the T table.
Notice that there is a new complication in using T: There isn't just one T-distribution from which we determine the critical value of T. There is a whole family of distributions, and which member of the family we use depends on the "degrees of freedom".
For the one-sample T-test, the degrees of freedom is simply one less than the sample size. That is: df = n - 1, where n is the sample size.
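For readers who want to check the tables, the same critical values can be computed in Python with scipy.stats. This is a sketch added here as an illustration; the alpha-level and sample size are hypothetical:

```python
from scipy.stats import norm, t

alpha = 0.05   # hypothetical alpha-level
n = 5          # hypothetical sample size
df = n - 1     # degrees of freedom for the one-sample T-test

# Critical Z for a one-tailed (greater-than) test: the value that
# cuts off the upper alpha proportion of the normal distribution.
z_crit = norm.ppf(1 - alpha)

# Critical T for the same test: depends on alpha AND the df.
t_crit = t.ppf(1 - alpha, df)

print(z_crit)  # about 1.645
print(t_crit)  # about 2.132
```

Note that the critical T is larger than the critical Z when the sample is small; as the degrees of freedom grow, the T-distribution approaches the normal distribution.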
Chapter 11 presents T-Tests for the situation where there are two related samples of scores. This situation commonly occurs in the repeated-measures experimental study. It can also occur in the matched-subject experimental study. These designs are sometimes called dependent sample studies or within-subject designs.
[Figures: critical regions for "One Tail, Less Than 0", "Two Tail", and "One Tail, Greater Than 0" alternative hypotheses.]
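The three critical-region configurations named above can be computed the same way. Again, this is an added sketch with a hypothetical alpha-level and degrees of freedom:

```python
from scipy.stats import t

alpha, df = 0.05, 4  # hypothetical alpha-level and degrees of freedom

# One tail, less than 0: all of alpha in the lower tail.
lower = t.ppf(alpha, df)             # about -2.132

# Two tail: alpha split evenly between the two tails.
two_tail = t.ppf(1 - alpha / 2, df)  # about +/- 2.776

# One tail, greater than 0: all of alpha in the upper tail.
upper = t.ppf(1 - alpha, df)         # about +2.132

print(lower, two_tail, upper)
```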
A sample of 5 patients is selected. During the week before treatment, the investigator records the severity of their symptoms by counting how many doses of medication are needed for asthma attacks. Then the patients receive relaxation training. For the week following the training, the researcher once again records the number of doses used by each patient.
Hand Calculations: The classical hypothesis testing steps are: (1) state the null and alternative hypotheses; (2) set the alpha-level and locate the critical region; (3) compute the T-statistic from the difference scores; and (4) compare the statistic to the critical value and make a decision about the null hypothesis.
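The following Python sketch walks through these steps for a related-samples test. It is an added illustration; the dose counts are hypothetical stand-ins, not the textbook's data:

```python
import numpy as np
from scipy.stats import t

# Hypothetical dose counts for 5 patients (NOT the textbook's data).
before = np.array([9, 4, 5, 4, 5])
after  = np.array([4, 1, 5, 0, 1])

# Steps 1-2: directional H1 (fewer doses after), alpha = .05,
# critical region in the upper tail of the T-distribution.
alpha = 0.05
d = before - after            # difference scores
n = len(d)
df = n - 1
t_crit = t.ppf(1 - alpha, df)

# Step 3: T = mean difference / estimated standard error.
mean_d = d.mean()
s_d = d.std(ddof=1)           # sample standard deviation of differences
se = s_d / np.sqrt(n)
t_stat = mean_d / se

# Step 4: reject H0 if the statistic falls in the critical region.
print(t_stat, t_crit, t_stat > t_crit)
```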
Computer Calculations: ViSta can be used to analyze these data, as specified in the ViSta Applet. We specified a directional T-Test: There will be fewer medication doses used after relaxation than before. Note that this differs from the book, where they use a non-directional test. We selected the "before" variable as the first variable and the "after" variable as the second one (the second variable is subtracted from the first).
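The same directional paired test can be reproduced with scipy's ttest_rel. This is an added illustration using the hypothetical dose counts again, not ViSta output:

```python
import numpy as np
from scipy.stats import ttest_rel

before = np.array([9, 4, 5, 4, 5])  # hypothetical dose counts
after  = np.array([4, 1, 5, 0, 1])

# Directional test: mean(before) > mean(after), i.e., fewer
# medication doses after relaxation training than before.
result = ttest_rel(before, after, alternative='greater')
print(result.statistic, result.pvalue)
```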
We obtained the following workmap:
[Figure: ViSta workmap.]
The analysis produces the following report, which corresponds with the hand calculations:
[Figure: ViSta T-Test report.]
The analysis also produces the following visualization. The plots suggest that the data are not normally distributed, since the jagged lines don't follow the straight line in the quantile plots and quantile-quantile plot, and since the boxes in the box and diamond plot are not symmetric.
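For readers working outside ViSta, a comparable normal quantile plot can be drawn with scipy and matplotlib. This is a sketch, again using the hypothetical difference scores:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import probplot

d = np.array([5, 3, 0, 4, 4])  # hypothetical difference scores

# Normal quantile plot: points near the straight line suggest
# normality; jagged departures from the line suggest otherwise.
probplot(d, dist="norm", plot=plt)
plt.title("Normal quantile plot of difference scores")
plt.show()
```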
This situation is probably the most common experimental design in Psychology. These designs are sometimes called between-subjects or between-groups designs.
T-Test for Two Independent Samples
Example: We use data concerning reading ability. (These data are from page 543 of Moore and McCabe.) The data come from a study in which an educator tested whether a new directed reading activity helped elementary school pupils improve their reading ability. The two groups are a classroom of 21 students who got the activity (the "Treatment" group) and another classroom of 23 students who didn't (the "Control" group). All students were given the Degree of Reading Power test.
The data report for the ViSta Data is:
The ViSta Applet for these data yields the following workmap:
We analyze these data using a one-tailed test based on a directional hypothesis that the directed reading activity will improve reading ability scores (that the "Treatment" group will have higher scores than the "Control" group).
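As an added illustration, the same one-tailed, two-independent-samples test in Python would look like the following; the score arrays are stand-ins, not the Moore and McCabe data:

```python
import numpy as np
from scipy.stats import ttest_ind

# Stand-in scores (NOT the actual Degree of Reading Power data).
treatment = np.array([58, 61, 49, 55, 67, 52, 60])
control   = np.array([42, 55, 43, 48, 51, 46, 39, 50])

# Directional hypothesis: Treatment scores are higher than Control.
# The default ttest_ind pools the two sample variances.
result = ttest_ind(treatment, control, alternative='greater')
print(result.statistic, result.pvalue)
```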
The analysis of these data produces the following model report:

From this report we observe that p = .0129. Thus, we reject the null hypothesis that the reading activity did not improve reading ability scores, and conclude that the activity had a statistically significant effect on the reading ability scores.
As pointed out in the chapter, the significance test requires that the data come from populations that are normally distributed with equal variance. The visualization helps us see whether these assumptions are met.
Normality: Interpreting the plot features discussed above, we conclude that the data are reasonably normal.
Equal Variance: The box-plot, however, reveals that there may be more variation in the control group than in the treatment group (the box for the control group is taller than the box for the treatment group, and the observation dots cover a wider range for the control group). This may mean that the reported p value (.0129) is too optimistic. We also note one outlying control-group value; perhaps we should remove it and reanalyze the data.
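When the equal-variance assumption is in doubt, one common check (not discussed in the chapter) is Welch's T-test, which does not pool the two sample variances; scipy supports it through equal_var=False. A sketch with the stand-in scores:

```python
import numpy as np
from scipy.stats import ttest_ind

treatment = np.array([58, 61, 49, 55, 67, 52, 60])  # stand-in scores
control   = np.array([42, 55, 43, 48, 51, 46, 39, 50])

# Welch's T-test: no pooled variance, so it is more trustworthy
# when the two groups have unequal spread.
result = ttest_ind(treatment, control, equal_var=False,
                   alternative='greater')
print(result.statistic, result.pvalue)
```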
Once again, the generic formula for the T-Statistic is:

T = (sample statistic - population parameter) / (estimated standard error)
For the Independent Samples T-Statistic:

T = [(M1 - M2) - (μ1 - μ2)] / s(M1 - M2)

where M1 and M2 are the two sample means, and μ1 - μ2 is the population mean difference under the null hypothesis (usually 0).
This formula for the estimated standard error uses the "pooled" (combined) errors for the two sample means. The formula for this is:

s(M1 - M2) = sqrt( sp² / n1 + sp² / n2 ),   where   sp² = (SS1 + SS2) / (df1 + df2)

Here SS1 and SS2 are the sums of squared deviations for the two samples, and df1 = n1 - 1 and df2 = n2 - 1.
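These formulas are straightforward to verify in Python. This sketch uses the stand-in scores from above and mirrors what a pooled-variance T-test computes:

```python
import numpy as np

treatment = np.array([58, 61, 49, 55, 67, 52, 60])  # stand-in scores
control   = np.array([42, 55, 43, 48, 51, 46, 39, 50])

n1, n2 = len(treatment), len(control)
df1, df2 = n1 - 1, n2 - 1

# Sums of squared deviations for each sample.
ss1 = ((treatment - treatment.mean()) ** 2).sum()
ss2 = ((control - control.mean()) ** 2).sum()

# Pooled variance: combine the two SS values over the combined df.
sp2 = (ss1 + ss2) / (df1 + df2)

# Estimated standard error of the difference between the means.
se = np.sqrt(sp2 / n1 + sp2 / n2)

# Independent-samples T-statistic (null hypothesis: mu1 - mu2 = 0).
t_stat = (treatment.mean() - control.mean()) / se
print(t_stat)
```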