
Computing t-tests and F ratios using summary measures in between subjects designs

It is sometimes handy to be able to work out t-tests 'by hand'. For example, you may not have access to the raw data (e.g. when comparing your data with published norms in a test manual). Don't forget that when performing parametric tests we are assuming within-group normality and equal group variances. For two independent groups with given means, variances and sample sizes:

(group mean 1 - group mean 2) / Sqrt{pooled var x (1/n1 + 1/n2)}

where pooled var = [(n1-1) var1 + (n2-1) var2] / (n1+n2-2). This will have a t distribution on n1+n2-2 degrees of freedom if the group means are equal.
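
As a quick illustration, here is a minimal Python sketch of this calculation (the function name and the example numbers are made up, and SciPy is only used to obtain a p-value):

{{{
from math import sqrt
from scipy import stats  # used only for the p-value

def summary_t_independent(m1, var1, n1, m2, var2, n2):
    """Two independent groups: t-test from means, variances and sample
    sizes, pooling the variances (equal group variance assumption)."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / df
    t = (m1 - m2) / sqrt(pooled_var * (1 / n1 + 1 / n2))
    p = 2 * stats.t.sf(abs(t), df)  # two-sided p-value
    return t, df, p

# hypothetical example: a sample of 25 compared with norms based on 40 people
print(summary_t_independent(m1=52.0, var1=90.0, n1=25, m2=48.0, var2=100.0, n2=40))
}}}

SciPy's scipy.stats.ttest_ind_from_stats gives the same result directly from the summary statistics (it expects standard deviations rather than variances).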

For two groups which constitute a within subjects factor (so the same subjects provide both sets of scores) we use a similar statistic, but the denominator needs an additional term, corr, representing the correlation between the two sets of scores:

(group mean 1 - group mean 2) / Sqrt{(var1 + var2 - 2 corr sd1 sd2)/n}

which will have a t distribution on n-1 degrees of freedom, where n is the number of subjects, if the group means are equal.
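
A similar Python sketch for this within subjects (paired) case, again with hypothetical numbers, is:

{{{
from math import sqrt
from scipy import stats  # used only for the p-value

def summary_t_paired(m1, var1, m2, var2, corr, n):
    """Within subjects factor with two levels: paired t-test from the two
    condition means, variances, their correlation and the number of subjects."""
    var_diff = var1 + var2 - 2 * corr * sqrt(var1) * sqrt(var2)  # variance of the differences
    t = (m1 - m2) / sqrt(var_diff / n)
    df = n - 1
    p = 2 * stats.t.sf(abs(t), df)  # two-sided p-value
    return t, df, p

# hypothetical example: 20 subjects measured in two conditions
print(summary_t_paired(m1=10.4, var1=9.0, m2=9.1, var2=8.5, corr=0.6, n=20))
}}}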

We can look at the adjusted sums of squares (SS) for comparing group differences on Factor A which are not due to Factor B, and vice-versa (assuming there is no AxB interaction), in unbalanced between subjects designs using this spreadsheet. This sheet evaluates Type III sums of squares (where the A and B main effects are adjusted for the AxB interaction using the harmonic mean of the sample sizes, in formulae illustrated by Cohen (2002)). Type III sums of squares are the default in SPSS but these are frowned upon (see the ANOVA grad talk), so the recommended Type II sums of squares (where each main effect is adjusted only for the other main effect) are also given. The Type II sums of squares are evaluated in the spreadsheet using the change in the regression sums of squares (of form $$B^\text{T} X^\text{T} Y$$, where B are the regression estimates, X is the matrix of predictor variables and Y the vector of response values) between a model containing both predictors and a model containing only one. This spreadsheet is also reproduced at the end of the next section.
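
As a rough illustration of the Type II change-in-sums-of-squares idea (a sketch only, not a reproduction of the spreadsheet), the following Python fragment fits the A-only and A+B regression models by least squares and takes the difference in their regression sums of squares; the data and variable names are invented:

{{{
import numpy as np

def regression_ss(X, y):
    """Regression sum of squares B'X'y from a least-squares fit of y on X
    (X includes an intercept column)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(b @ X.T @ y)

# hypothetical unbalanced 2x2 between subjects data: a and b are the two
# factors coded 0/1 and y is the response
a = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1])
b = np.array([0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1])
y = np.array([4.0, 5.0, 7.0, 6.0, 8.0, 3.0, 4.0, 2.0, 9.0, 8.0, 7.0, 9.0])

ones = np.ones_like(y)
X_a = np.column_stack([ones, a])      # intercept + A only
X_ab = np.column_stack([ones, a, b])  # intercept + A + B (no interaction)

# Type II SS for B: extra regression SS gained by adding B to the model
# that already contains A
ss_b_type2 = regression_ss(X_ab, y) - regression_ss(X_a, y)
print(ss_b_type2)
}}}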

2x2 interactions

If the means being compared are themselves differences then these t-tests correspond to tests of a two-way 2x2 interaction: a t-test on differences is equivalent to the interaction test in a 2x2 ANOVA where at least one of the factors is within subjects (see here for examples).

If you wish to test for an interaction where both factors are within subjects you can compare the two possible sets of within-subject differences with a paired t-test, with an additional term, corr, representing the correlation between these two sets of differences:

(mean difference 1 - mean difference 2) / Sqrt{(var1 + var2 - 2 corr sd1 sd2)/n}

which again will have a t distribution, this time on n-1 degrees of freedom, if the mean differences are equal, where var1, var2, sd1 and sd2 now describe the two sets of differences and n is the number of subjects. The above are also useful for incorporating partially complete cases if you can assume that the missing data would not influence the observed complete-case means, sds and correlations (i.e. missing completely at random). This approach can also be used to test an interaction involving one between and one within subjects factor, by comparing the within subjects differences across the two groups of the between subjects factor.
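
A sketch of this within-within interaction test in Python, working only from the summary statistics of the two sets of differences (function name and numbers again hypothetical):

{{{
from math import sqrt
from scipy import stats  # used only for the p-value

def ww_interaction_t(mdiff1, vdiff1, mdiff2, vdiff2, corr, n):
    """2x2 within-within interaction: paired comparison of the two sets of
    within-subject differences using only their summary statistics."""
    var_dd = vdiff1 + vdiff2 - 2 * corr * sqrt(vdiff1) * sqrt(vdiff2)
    t = (mdiff1 - mdiff2) / sqrt(var_dd / n)
    df = n - 1
    return t, df, 2 * stats.t.sf(abs(t), df)

# hypothetical summaries of the two sets of differences from 18 subjects
print(ww_interaction_t(mdiff1=2.1, vdiff1=4.0, mdiff2=0.8, vdiff2=5.2, corr=0.3, n=18))
}}}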

In the case where both factors are between subjects you can perform the interaction test as described below (incorporated into this spreadsheet) for balanced data (i.e. where each of the four combinations of the between subjects factors occurs equally often):

  1. Sum over the four cells ij of (n_ij - 1) times the cell variance, divided by N - 4 (the error degrees of freedom, where N is the total number of subjects) = MSE
  2. Sum over the four cells ij of n (bar(AB_ij) - bar(A_i) - bar(B_j) + overall mean)^2 = INT

where bar(AB_ij) is the mean for the ij-th combination of factors A and B, bar(A_i) and bar(B_j) are the corresponding factor level means, and n is the number of subjects in that combination. If the numbers in the four combinations differ then the harmonic mean of the cell sizes is used for n. Fisher and van Belle (1993) suggest this approach for balanced or nearly balanced data.

Finally,

Compare INT/MSE to an F statistic with 1 and N - 4 degrees of freedom.
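
A Python sketch of steps 1 and 2 and the resulting F ratio from the four cell summaries, using the harmonic mean of the cell sizes so that it also covers nearly balanced data (the function name and example values are hypothetical):

{{{
from statistics import harmonic_mean
from scipy import stats  # used only for the p-value

def bb_interaction_F(means, variances, ns):
    """2x2 between-between interaction F from the four cell means, variances
    and sample sizes, supplied as dicts keyed by (i, j) with i, j in {0, 1}."""
    cells = [(0, 0), (0, 1), (1, 0), (1, 1)]
    N = sum(ns[c] for c in cells)
    # pooled error mean square from the within-cell variances (MSE)
    mse = sum((ns[c] - 1) * variances[c] for c in cells) / (N - 4)
    # harmonic mean of the cell sizes (equal to the common n when balanced)
    n_h = harmonic_mean([ns[c] for c in cells])
    grand = sum(means[c] for c in cells) / 4  # unweighted grand mean
    a_mean = {i: (means[(i, 0)] + means[(i, 1)]) / 2 for i in (0, 1)}
    b_mean = {j: (means[(0, j)] + means[(1, j)]) / 2 for j in (0, 1)}
    # interaction sum of squares (INT); with 1 df this is also its mean square
    ss_int = sum(n_h * (means[(i, j)] - a_mean[i] - b_mean[j] + grand) ** 2
                 for (i, j) in cells)
    F = ss_int / mse
    p = stats.f.sf(F, 1, N - 4)
    return F, (1, N - 4), p

# hypothetical cell summaries for a balanced 2x2 design with 12 per cell
means = {(0, 0): 10.0, (0, 1): 12.0, (1, 0): 11.0, (1, 1): 16.0}
variances = {(0, 0): 9.0, (0, 1): 8.0, (1, 0): 10.0, (1, 1): 9.5}
ns = {(0, 0): 12, (0, 1): 12, (1, 0): 12, (1, 1): 12}
print(bb_interaction_F(means, variances, ns))
}}}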

The more general unbalanced case, where there are unequal numbers of observations across the four combinations of the factors, uses formulae from Cohen (2002) which use harmonic means of the sample sizes to compute Type III sums of squares. These are incorporated into this spreadsheet, which reproduces the one mentioned at the end of the section immediately before this discussion of 2x2 interactions.

Following Boniface (1995) we refer to the three types of ANOVA considered here as BB (both factors between subjects), BW (one factor between subjects and one within subjects) and WW (both factors within subjects). Following Cohen (2002) we only consider factors with two levels.

References

Cohen BH (2002) Calculating a Factorial ANOVA from means and standard deviations. Understanding Statistics 1(3) 191-203. This paper illustrates evaluating Type III sums of squares in a BB design. A PDF copy of this paper is here.

Fisher LD and van Belle G (1993) Biostatistics: A Methodology for the Health Sciences. John Wiley and Sons, New York.

Howell DC (2002) Statistical Methods for Psychology. Fifth Edition. Duxbury Press: Belmont, CA.