Statistical Concepts: T-Distribution, ANOVA, and Hypothesis Testing
Chapter 12 Key Terms
The t distribution is similar to the z distribution in that both are symmetrical, bell-shaped sampling distributions. However, the shape of the t distribution depends on the sample size used to generate it (through its degrees of freedom). For very large samples, the t distribution approaches the z distribution, but for smaller samples it is flatter, with more area in the tails.
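One way to see this convergence is to compare tail areas; a minimal sketch using SciPy (the cutoff of 2.0 and the degrees-of-freedom values are arbitrary choices for illustration):

```python
from scipy.stats import norm, t

# Tail probability P(T > 2.0) shrinks toward the normal tail P(Z > 2.0)
# as degrees of freedom increase; small-df t distributions have fatter tails.
for df in (5, 30, 1000):
    print(f"t, df={df:>4}: P(T > 2) = {t.sf(2.0, df):.4f}")
print(f"z        : P(Z > 2) = {norm.sf(2.0):.4f}")
```

With df = 5 the tail area is noticeably larger than the normal tail; by df = 1000 the two are nearly identical.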
A t test is a test of the null and research hypotheses used when the research design involves one or two samples. It tests the difference between the means of a sample and the population, of two independent samples, or of two correlated samples.
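These three designs map onto three SciPy functions; a sketch with invented scores (both samples and the population mean of 30 are hypothetical):

```python
from scipy import stats

sample = [24, 27, 29, 31, 25, 28]   # hypothetical scores
other  = [30, 33, 35, 32, 36, 34]   # a second hypothetical sample

# Sample mean vs. a known population mean
t1, p1 = stats.ttest_1samp(sample, popmean=30)
# Two independent samples
t2, p2 = stats.ttest_ind(sample, other)
# Two correlated (paired) samples
t3, p3 = stats.ttest_rel(sample, other)
print(t1, t2, t3)
```

Each call returns the t statistic and its two-tailed p value for the corresponding design.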
Degrees of freedom is a statistical term denoting the number of scores within a distribution that are free to vary without restriction in a sample with a fixed mean.
The sampling distribution of differences between means is a distribution generated by taking two random samples and computing the difference between their means. Repeating this a great number of times forms a distribution. If all the samples are selected from the same population, or from populations with equal means, the mean of this distribution is zero. If, on the other hand, the two independent samples come from populations with different means, then the distribution of differences will have a mean equal to the difference between the two population means.
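This distribution can be built by simulation; a sketch drawing repeated sample pairs from a single normal population (the population parameters and sample sizes are arbitrary):

```python
import random

random.seed(1)

def sample_mean(mu, sigma, n):
    # mean of one random sample of size n from a normal population
    return sum(random.gauss(mu, sigma) for _ in range(n)) / n

# Both samples come from the same population, so the distribution of
# differences between means should center on zero.
diffs = [sample_mean(50, 10, 30) - sample_mean(50, 10, 30)
         for _ in range(5000)]
mean_of_diffs = sum(diffs) / len(diffs)
print(round(mean_of_diffs, 3))
```

Drawing the second set of samples from a population with a different mean would instead center the distribution on that difference in population means.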
The standard error of the difference between independent sample means is the standard deviation of the distribution of differences between sample means drawn from two independent populations.
The standard error of the difference between correlated sample means is the standard deviation of the distribution of differences between paired sample means; its computation uses the samples' standard deviations and the correlation coefficient between the paired scores.
The difference method is a way of estimating the standard error of the difference between correlated sample means that uses the differences between paired scores rather than the correlation coefficient and standard deviations.
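The difference method can be sketched directly: compute the paired differences, then the standard error from their standard deviation (the pre/post scores below are invented for illustration):

```python
import math

pre  = [12, 15, 11, 14, 13, 16, 12, 15]   # hypothetical paired scores
post = [14, 17, 13, 15, 16, 18, 13, 17]

d = [b - a for a, b in zip(pre, post)]    # paired differences
n = len(d)
mean_d = sum(d) / n
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
se_d = sd_d / math.sqrt(n)                # standard error of the mean difference
t_stat = mean_d / se_d                    # t with n - 1 degrees of freedom
print(round(t_stat, 2))
```

Because it works only with the differences, the correlation between the paired scores never has to be computed explicitly.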
Chapter 13 Key Terms
One-way analysis of variance is a test of the null and research hypotheses when the research design involves a comparison of three or more levels of one independent variable.
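A minimal sketch of a one-way ANOVA with three hypothetical groups, using SciPy (the group scores are invented):

```python
from scipy import stats

g1 = [4, 5, 6, 5, 4]     # hypothetical scores for three levels
g2 = [7, 8, 6, 7, 8]     # of one independent variable
g3 = [10, 9, 11, 10, 9]

f_stat, p_val = stats.f_oneway(g1, g2, g3)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```

A small p value indicates that at least two of the group means differ, but not which ones.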
A mean square is a variance estimate: a sum of squares divided by its degrees of freedom, in effect the average of the squared deviation scores.
The F ratio is the ratio of the mean square between groups estimate to the mean square within groups estimate. If the null hypothesis is true, this ratio is expected to be close to 1.
The F test indicates if there is a difference between at least two and possibly more of the groups such that the null hypothesis can be rejected. It does not indicate where the difference lies.
A sum of squares is the sum of squared deviation scores about a mean.
A source table displays the source of variation, the sums of squares, degrees of freedom, mean squares, the F ratio, and the p value.
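The entries of a one-way ANOVA source table can be computed by hand; a sketch with invented group data:

```python
# Hand-compute the entries of a one-way ANOVA source table
groups = [[4, 5, 6, 5, 4], [7, 8, 6, 7, 8], [10, 9, 11, 10, 9]]
all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Sums of squares: between-group and within-group variation
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

# Degrees of freedom for each source
df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)

# Mean squares and the F ratio
ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_ratio = ms_between / ms_within
print(f"SSb={ss_between:.2f} SSw={ss_within:.2f} F={f_ratio:.2f}")
```

Each printed quantity corresponds to one cell of the source table: a source of variation, its sum of squares, its degrees of freedom, its mean square, and finally the F ratio.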
A post hoc test is a test performed when the F ratio is significant, to determine where the significant difference(s) lie. Post hoc is Latin for “after the fact.”
Tukey HSD is a post hoc test; HSD stands for Honestly Significant Difference. It is used to compare sample means when an analysis of variance yields a significant F, and it reveals how far apart the sample means must be in order to be significantly different.
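A sketch of the test using scipy.stats.tukey_hsd (available in SciPy 1.7 and later; the group data are invented):

```python
from scipy import stats

g1 = [4, 5, 6, 5, 4]
g2 = [7, 8, 6, 7, 8]
g3 = [10, 9, 11, 10, 9]

res = stats.tukey_hsd(g1, g2, g3)
# res.pvalue[i, j] is the adjusted p-value for comparing group i with group j
print(res.pvalue)
```

Unlike the overall F test, each adjusted p value tells us specifically which pair of group means differs significantly.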
