Effect Size
The purpose of the various measures of effect size is to provide a statistically valid reflection of the size of the effect of some feature of an experiment. As such it is a rather loose concept. However there is an underlying assumption that this is taking place in some parametric design, and that the effect of the feature of interest (or manipulation) can be measured by some estimable function of the parameters.
This is certainly the case in the paradigmatic setting for the evaluation of effect size: the two-conditions, two-groups design. Suppose that a test $$\mathbf{T}$$ is administered to two groups of sizes $$n_A$$ and $$n_B$$ in two conditions $$A$$ and $$B$$.
The samples are assumed to be independently and normally distributed with the same variance:
- $$\{a_i : i=1,\ldots,n_A\} \;\sim\; N(\mu_A,\sigma^2) \quad \textrm{(i.i.d.)}$$
and
- $$\{b_i : i=1,\ldots,n_B\} \;\sim\; N(\mu_B,\sigma^2) \quad \textrm{(i.i.d.)}$$.
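As a concrete illustration of this design, the following minimal sketch simulates two such samples with NumPy. The parameter values and variable names are purely hypothetical choices for illustration; none of them come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, illustrative parameter values (not from the article).
mu_A, mu_B = 10.0, 9.0   # condition means
sigma = 2.0              # common standard deviation
n_A, n_B = 30, 40        # group sizes

# Two independent i.i.d. normal samples with equal variance,
# matching the distributional assumptions stated above.
a = rng.normal(mu_A, sigma, size=n_A)
b = rng.normal(mu_B, sigma, size=n_B)
```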
Effect Size $$d$$ was defined by Cohen (1988; Statistical power analysis for the behavioral sciences, 2nd ed., New York: Academic Press) as the difference between the two condition means divided by the common standard deviation:
- $$d = \frac{\mu_A - \mu_B}{\sigma}.$$
That is to say, it is the signal-to-noise ratio. There are obvious connections with the definition of the classical Signal Detection Theory parameter $$d'$$.
Let $$\bar{a}=\frac{\sum_{i=1}^{n_A} a_i}{n_A}$$, $$\bar{b}=\frac{\sum_{i=1}^{n_B} b_i}{n_B}$$, and $$SS_A=\sum_{i=1}^{n_A}(a_i-\bar{a})^2$$, $$SS_B=\sum_{i=1}^{n_B}(b_i-\bar{b})^2$$. Then $$\hat{\sigma}^2 = \frac{SS_A + SS_B}{n_A+n_B-2}$$ is the conventional pooled estimate of $$\sigma^2$$, and $$\hat{d}=\frac{\bar{a}-\bar{b}}{\hat{\sigma}}$$ serves as an estimator for $$d$$.
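Under those definitions, $$\hat{d}$$ can be computed directly from the two samples. The sketch below is one straightforward way to do it, continuing the hypothetical simulated samples `a` and `b` from the earlier sketch; the function name `cohens_d` is an assumption for illustration.

```python
import numpy as np

def cohens_d(a, b):
    """Estimate Cohen's d from two independent samples using the pooled SD."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n_A, n_B = a.size, b.size
    ss_A = np.sum((a - a.mean()) ** 2)                     # SS_A
    ss_B = np.sum((b - b.mean()) ** 2)                     # SS_B
    sigma_hat = np.sqrt((ss_A + ss_B) / (n_A + n_B - 2))   # pooled sigma-hat
    return (a.mean() - b.mean()) / sigma_hat               # d-hat

# With the hypothetical simulated samples above:
# cohens_d(a, b) should lie near (mu_A - mu_B) / sigma = 0.5.
```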