FAQ/ancrm - CBU statistics Wiki


ANCOVA versus Repeated Measures ANOVA

Consider a design with two time points corresponding to a pre-test score (T1) and a post-test score (T2), and two groups (e.g. L(earning) D(isabled) and General Population).

Two common hypotheses of interest can be tested using ANCOVA and Repeated Measures ANOVA, and can be formulated as multiple regression equations as below.

ANCOVA : T2 = intercept + A*group + B*T1

ANCOVA asks "How do the T2 means differ between the two groups over and above what is predicted by the T1 score?"

RM ANOVA: T2 - T1 = intercept + C*group + D*T1

RM ANOVA (with a covariate) asks "How does the mean difference between time points differ between groups over and above what you would expect from the T1 score?" The group and T1 regression terms are reported in the RM ANOVA respectively as the group x time interaction and the T1 x time interaction.
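The algebraic link between the two equations above can be checked numerically. The sketch below (our own illustration with simulated data, not part of the original text) fits both regressions by ordinary least squares with NumPy and confirms that, when T1 is kept as a covariate in both models, the group coefficients coincide (C = A) and the T1 slopes differ by exactly one (D = B - 1), because the change-score equation is just the ANCOVA equation with T1 subtracted from both sides.

```python
import numpy as np

# Simulated pre/post scores for two groups (illustrative values only)
rng = np.random.default_rng(42)
n = 40
group = np.repeat([0, 1], n // 2)                 # 0 = General Population, 1 = LD
t1 = rng.normal(50.0, 10.0, n)                    # pre-test score
t2 = 5.0 + 3.0 * group + 0.8 * t1 + rng.normal(0.0, 2.0, n)  # post-test score

X = np.column_stack([np.ones(n), group, t1])      # intercept, group, T1

# ANCOVA:        T2      = intercept + A*group + B*T1
(_, A, B), *_ = np.linalg.lstsq(X, t2, rcond=None)

# Change score:  T2 - T1 = intercept + C*group + D*T1
(_, C, D), *_ = np.linalg.lstsq(X, t2 - t1, rcond=None)

# Subtracting T1 from both sides of the ANCOVA equation only shifts the
# T1 slope, so the group effects coincide: C = A and D = B - 1.
print(abs(A - C) < 1e-8, abs(D - (B - 1.0)) < 1e-8)
```

Note that this equivalence holds only because T1 appears as a covariate in both models; the classic repeated measures ANOVA without a covariate drops the D*T1 term, which is why the two analyses can answer different questions, as discussed below.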

The discussion of ANCOVA and Repeated Measures ANOVA below is taken from the two sources credited in the section headings.

Analyzing Pre-Post Data with Repeated Measures or ANCOVA (by Karen Grace-Martin)

Not too long ago, I received a call from a distressed client. Let’s call her Nancy.

Nancy had asked for advice about how to run a repeated measures analysis. The advisor told Nancy that actually, a repeated measures analysis was inappropriate for her data.

Nancy was sure repeated measures was appropriate and the response led her to fear that she had grossly misunderstood a very basic tenet in her statistical training.

The Design

Nancy had measured a response variable at two time points for two groups: an intervention group, who received a treatment, and a control group, who did not.

Both groups were measured before and after the intervention.

The Analysis

Nancy was sure that this was a classic repeated measures experiment with one between subjects factor (treatment group) and one within-subjects factor (time).

The advisor insisted that this was a classic pre-post design, and that the way to analyze pre-post designs is not with a repeated measures ANOVA, but with an ANCOVA.

In ANCOVA, the dependent variable is the post-test measure. The pre-test measure is not an outcome, but a covariate. This model assesses the differences in the post-test means after accounting for pre-test values.

The advisor said repeated measures ANOVA is only appropriate if the outcome is measured multiple times after the intervention. The more she insisted repeated measures didn’t work in Nancy’s design, the more confused Nancy got.

The Research Question

This kind of situation happens all the time, in which a colleague, a reviewer, or a statistical consultant insists that you need to do the analysis differently. Sometimes they’re right, but sometimes, as was true here, the two analyses answer different research questions.

Nancy’s research question was whether the mean change in the outcome from pre to post differed in the two groups.

This is directly measured by the time*group interaction term in the repeated measures ANOVA.

The ANCOVA approach answers a different research question: whether the post-test means, adjusted for pre-test scores, differ between the two groups.

In the ANCOVA approach, the whole focus is on whether one group has a higher mean after the treatment. It’s appropriate when the research question is not about gains, growth, or changes.

The adjustment for the pre-test score in ANCOVA has two benefits. One is to make sure that any post-test differences truly result from the treatment, and aren’t some left-over effect of (usually random) pre-test differences between the groups.

The other is to account for variation around the post-test means that comes from the variation in where the patients started at pretest.

So when the research question is about the difference in means at post-test, this is a great option. It’s very common in medical studies because the focus there is about the size of the effect of the treatment.

The Resolution

As it turned out, the right analysis to accommodate Nancy's design and answer her research question was the Repeated Measures ANOVA. (For the record, linear mixed models also work and have some advantages, but in this design the results are identical.)

The person she’d asked for advice was in a medical field, and had been trained on the ANCOVA approach.

Either approach works well in specific situations. The one thing that doesn't work is to combine the two approaches.

I’ve started to see situations, particularly when there is more than one post-test measurement, where data analysts attempt to use the baseline pre-test score as both a covariate and the first outcome measure in a repeated measures analysis.

That doesn’t work, because both approaches remove subject-specific variation, so it tries to remove that variation twice.

Reference

Sweet SA and Grace-Martin K (2011) Data Analysis with SPSS (4th edition). Pearson Education: London.

Analyzing Pre-Post Data with Repeated Measures or ANCOVA (by Thom Baguley)

(Taken from a reply to a query on the psych-postgrads e-mail list)

The ANCOVA versus multilevel model argument can get confusing. First, you can have covariates in either multiple regression or multilevel models, so including baseline as a covariate is a legitimate option in either scenario. The main alternative is an ANOVA-style model with 2 groups and three repeated measures. This could be run as a regression or a multilevel model, so the two issues, 1) baseline as covariate versus an ANOVA-style model, and 2) single-level regression versus multilevel regression, are not tied together.

1) This issue comes up in the literature as Lord's paradox. In essence the two approaches test different hypotheses, and usually the covariate approach is more appropriate as it relies on less stringent assumptions about the impact of the baseline score on the outcome Y. Adding baseline as a covariate will generally be a better approach and have greater statistical power. I summarise the arguments in my book Serious Stats (pp 652-4), but other good summaries exist, notably by Dan Wright and Stephen Senn. If adding baseline as a covariate, it helps interpretation to centre the covariate and to analyse the change scores (post test minus baseline, and follow-up minus baseline).
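The centring advice can be sketched numerically. The illustration below (our own simulated data, not from Baguley's reply) shows that centring the baseline covariate leaves the group effect untouched and only moves the intercept, which becomes the predicted outcome for the reference group at the mean baseline, a more interpretable quantity than the prediction at baseline zero.

```python
import numpy as np

# Simulated baseline and outcome scores for two groups (illustrative only)
rng = np.random.default_rng(7)
n = 30
group = np.repeat([0, 1], n // 2)
baseline = rng.normal(100.0, 15.0, n)
y = 10.0 + 4.0 * group + 0.6 * baseline + rng.normal(0.0, 3.0, n)

# Same model fitted with the raw and with the mean-centred covariate
X_raw = np.column_stack([np.ones(n), group, baseline])
X_cen = np.column_stack([np.ones(n), group, baseline - baseline.mean()])

b_raw, *_ = np.linalg.lstsq(X_raw, y, rcond=None)
b_cen, *_ = np.linalg.lstsq(X_cen, y, rcond=None)

# The group effect is identical; the centred intercept equals the raw
# intercept plus slope times the mean baseline.
print(abs(b_raw[1] - b_cen[1]) < 1e-8)
print(abs(b_cen[0] - (b_raw[0] + b_raw[2] * baseline.mean())) < 1e-8)
```

The same reparameterisation argument applies whether the outcome is the raw follow-up score or a change score.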

More generally, adding a between-subjects covariate to a repeated measures model doesn't usually help much because, if omitted, it just gets absorbed by the subjects term (which is separate from error). However, it can make a difference if you add covariate by condition interactions or if you have a time-varying covariate.

2) If the design is balanced with fixed factors and the assumptions of an ANOVA are met, there is no advantage to running a multilevel model. In more complex designs a multilevel model has advantages in dealing with imbalance, handling additional random effects and time-varying covariates, and relaxing assumptions about the form of the covariance matrix (sphericity or multisample sphericity). You can also generalise the model to handle discrete data.

With two time points and just one covariate (baseline) I can't see much reason to use a multilevel model, unless you have missing observations or some other complicating circumstance; sphericity can't be violated in a model with only two repeated measures. However, you haven't provided much information.

For example, if you are planning to analyse totals or means from multiple trials in the ANCOVA, then the multilevel model would allow you to model the trials nested within participants. This sort of model would be quite a bit more complex but might be attractive in allowing covariates that vary between trials (and most likely offering higher statistical power).

References

Baguley T (2012) Serious Stats: A Guide to Advanced Statistics for the Behavioral Sciences. Palgrave: London.

Senn S (2006) Change from baseline and analysis of covariance revisited. Statistics in Medicine 25: 4334-4344.