The Anatomy of Statistics: Models, Hypotheses, Significance and Power
- Experiments, Data, Models and Parameters
- Probability vs. Statistics
- Hypotheses and Inference
- The Likelihood Function
- Estimation and Inference
- Maximum Likelihood Estimate (MLE)
- Schools of Statistical Inference
- Ronald Aylmer FISHER
- Jerzy NEYMAN and Egon PEARSON
- Rev. Thomas BAYES
- R A Fisher: P values and Significance Tests
- Neyman and Pearson: Hypothesis Tests
- Type I & Type II Errors
- Size and Power
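A minimal sketch of the likelihood, MLE and significance-test ideas listed above, using hypothetical data (32 successes in 50 trials) and a binomial model; the numbers and the choice of model are illustrative only, not part of the course materials.

```python
# A minimal sketch (hypothetical data): the likelihood function, its MLE,
# and a significance test for a binomial proportion.
from scipy.stats import binom, binomtest

k, n = 32, 50            # hypothetical: 32 successes out of 50 trials
p_hat = k / n            # maximum likelihood estimate of the success probability

# Likelihood of the observed data at the MLE and at the null value p = 0.5
print("MLE:", p_hat)
print("L(p_hat):", binom.pmf(k, n, p_hat))
print("L(0.5):  ", binom.pmf(k, n, 0.5))

# Fisher-style significance test: P value against H0: p = 0.5
print("p value:", binomtest(k, n, p=0.5).pvalue)
```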
Exploratory Data Analysis (EDA)
- What is it?
- Skew and kurtosis: definitions and magnitude rules of thumb
- Pictorial representations: in particular histograms, boxplots and stem-and-leaf displays
- Effect of outliers
- Power transformations
- Rank transformations
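A minimal sketch of the EDA measures and transformations above, on simulated (hypothetical) right-skewed data; the lognormal generator and sample size are illustrative assumptions.

```python
# Skewness/kurtosis, a log (power) transformation, and a rank transformation
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # strongly right-skewed data

print("skew:", stats.skew(x), "excess kurtosis:", stats.kurtosis(x))

x_log = np.log(x)                                   # power/log transform tames the skew
print("skew after log:", stats.skew(x_log))

ranks = stats.rankdata(x)                           # rank transformation
print("first five ranks:", ranks[:5])
```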
Categorical Data Analysis
- The Naming of Parts
- Categorical Data
- Frequency Tables
- The Chi-Squared Goodness-of-Fit Test
- The Chi-squared Distribution
- The Binomial Test
- The Chi-squared test for association
- Simpson, Cohen and McNemar
- SPSS procedures that help
- Frequencies
- Crosstabs
- Chi-square
- Binomial
- Types of Data
- Quantitative
- Qualitative
- Nominal
- Ordinal
- Frequency Table
- Bar chart
- Cross-classification or Contingency Table
- Simple use of SPSS Crosstabs
- Goodness of Fit Chi-squared Test
- Chance performance and the Binomial Test
- Confidence Intervals for Binomial Proportions
- Pearson’s Chi-squared
- Yates’ Continuity Correction
- Fisher’s Exact Test
- Odds and Odds Ratios
- Log Odds and Log Odds ratios
- Sensitivity and Specificity
- Signal Detection Theory
- Simpson’s Paradox
- Measures of agreement: Cohen's Kappa
- Measures of change: McNemar’s Test
- Association or Independence: Chi-squared test of association
- Comparing two or more classified samples
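A minimal sketch of some of the categorical-data tools above: Pearson's chi-squared test of association, Fisher's exact test, and the odds ratio with its log, on a hypothetical 2x2 table (the counts are made up for illustration).

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[30, 10],     # hypothetical counts: rows = group, cols = outcome
                  [18, 22]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)  # Pearson
print("chi-squared:", chi2, "df:", dof, "p:", p)

odds_ratio, p_exact = fisher_exact(table)            # Fisher's exact test
print("odds ratio:", odds_ratio,
      "log odds ratio:", np.log(odds_ratio),
      "exact p:", p_exact)
```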
Regression
- What is it?
- Expressing correlations (simple regression) in vector form
- Scatterplots
- Assumptions in regression
- Restriction of range of a correlation
- Comparing pairs of correlations
- Multiple regression
- Least squares
- Residual plots
- Stepwise methods
- Synergy
- Collinearity
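A minimal sketch of least-squares multiple regression on hypothetical data; the predictors, coefficients and noise level are illustrative assumptions, and the residuals are what a residual plot would examine.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.8, size=100)

X = sm.add_constant(np.column_stack([x1, x2]))   # design matrix with intercept
fit = sm.OLS(y, X).fit()                         # least-squares multiple regression
print(fit.params)                                # intercept and two slopes
print("R-squared:", fit.rsquared)
resid = fit.resid                                # inspect these in a residual plot
```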
Between subjects analysis of variance
- What is it used for?
- Main effects
- Interactions
- Simple effects
- Plotting effects
- Implementation in SPSS
- Effect size
- Model specification
- Latin squares
- Balance
- Venn diagram depiction of sources of variation
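A minimal sketch of a two-factor between-subjects ANOVA (main effects plus the interaction) on hypothetical data; the factor names A and B, cell sizes and effect sizes are illustrative, and the example uses a statsmodels formula rather than SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "A": np.repeat(["a1", "a2"], 40),
    "B": np.tile(np.repeat(["b1", "b2"], 20), 2),
})
df["y"] = rng.normal(size=80) + (df["A"] == "a2") * 1.0 + (df["B"] == "b2") * 0.5

model = ols("y ~ C(A) * C(B)", data=df).fit()      # main effects plus interaction
print(sm.stats.anova_lm(model, typ=2))             # Type II sums of squares
```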
The General Linear Model and complex designs including Analysis of Covariance
- GLM and Simple Linear Regression
- The Design Matrix
- Least Squares
- ANOVA and GLM
- Types of Sums of Squares
- Multiple Regression as GLM
- Multiple Regression as a sequence of GLMs in SPSS
- The two Groups t-test as a GLM
- One-way ANOVA as GLM
- Multi-factor Model
- Additive (no interaction)
- Non-additive (interaction)
- Analysis of Covariance
- Simple regression
- 1 intercept
- 1 slope
- Parallel regressions
- multiple intercepts
- 1 slope
- Non-parallel regressions
- multiple intercepts
- multiple slopes
- Sequences of GLMs in ANCOVA
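A minimal sketch of ANCOVA as a sequence of GLMs on hypothetical data: a parallel-regressions model (multiple intercepts, one slope) compared with a non-parallel model (multiple intercepts and slopes); the group labels, covariate and effect sizes are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(4)
df = pd.DataFrame({"group": np.repeat(["g1", "g2"], 30),
                   "x": rng.normal(size=60)})
df["y"] = 2.0 * df["x"] + (df["group"] == "g2") * 1.5 + rng.normal(size=60)

parallel = ols("y ~ C(group) + x", data=df).fit()       # multiple intercepts, 1 slope
nonparallel = ols("y ~ C(group) * x", data=df).fit()    # multiple intercepts and slopes
print(sm.stats.anova_lm(parallel, nonparallel))          # does the extra slope help?
```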
Power analysis
- Hypothesis testing
- Boosting power
- Effect sizes: definitions, magnitudes
- Power evaluation methods: description and implementation using examples
- nomogram
- power calculators
- SPSS macros
- spreadsheets
- power curves
- tables
- quick formula
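A minimal sketch of a power calculation for a two-group t-test, one of the evaluation methods listed above, using a statsmodels power calculator; the chosen effect size (Cohen's d = 0.5), alpha and sample size are illustrative.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of n = 64 per group to detect a medium effect at alpha = .05
print("power:", analysis.power(effect_size=0.5, nobs1=64, alpha=0.05))

# Sample size per group needed for 80% power at the same effect size
print("n per group:", analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05))
```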
Repeated Measures and Mixed Model ANOVA
- Two sample t-Test vs. Paired t-Test
- Repeated Measures as an extension of paired measures
- Single factor Within-Subject design
- Sphericity
- Two (or more) factors Within-Subject design
- Mixed designs combining Within- and Between-Subject factors
- Mixed Models, e.g. both Subjects & Items as Random Effects factors
- The ‘Language as Fixed Effects’ Controversy
- Testing for Normality
- Single degree of freedom approach
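A minimal sketch contrasting the two-sample and paired t-tests on hypothetical within-subject data, followed by the same comparison as a mixed model with subjects as a random-effects factor; the data-generating numbers and variable names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 30
subject_effect = rng.normal(scale=1.0, size=n)       # within-subject correlation
cond_a = 5.0 + subject_effect + rng.normal(scale=0.5, size=n)
cond_b = 5.4 + subject_effect + rng.normal(scale=0.5, size=n)

print("two-sample:", stats.ttest_ind(cond_a, cond_b).pvalue)   # ignores pairing
print("paired:    ", stats.ttest_rel(cond_a, cond_b).pvalue)   # uses it

# Same idea as a mixed model: condition is fixed, subject is a random intercept
long = pd.DataFrame({"y": np.concatenate([cond_a, cond_b]),
                     "cond": ["a"] * n + ["b"] * n,
                     "subj": list(range(n)) * 2})
mixed = smf.mixedlm("y ~ cond", data=long, groups=long["subj"]).fit()
print(mixed.params)
```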
Latent variable modelling – factor analysis and all that!
- Path diagrams – a regression example
- Comparing correlations
- Exploratory factor analysis
- Assumptions of factor analysis
- Reliability testing (Cronbach’s alpha)
- Fit criteria in exploratory factor analysis
- Rotations
- Interpreting factor loadings
- Confirmatory factor models
- Fit criteria in confirmatory factor analysis
- Equivalence of correlated and uncorrelated models
- Cross validation as a means of assessing fit for different models
- Parsimony: determining the most important items in a factor analysis
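A minimal sketch of the reliability-testing topic above: Cronbach's alpha computed directly from its definition (k items, sum of item variances, variance of the total score) on hypothetical item scores generated from a single common factor.

```python
import numpy as np

rng = np.random.default_rng(6)
latent = rng.normal(size=(100, 1))                       # one common factor
items = latent + rng.normal(scale=0.7, size=(100, 4))    # 4 items loading on it

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)   # Cronbach's alpha
print("Cronbach's alpha:", alpha)
```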
What to do following an ANOVA
- Why do we use follow-up tests?
- Different ways to follow up an ANOVA
- Planned vs. Post Hoc Tests
- Choosing and Coding Contrasts
- Handling Interactions
- Standard Errors of Differences
- Multiple t-tests
- Post Hoc Tests
- Trend Analysis
- Unpacking interactions
- Multiple Comparisons: Watch your Error Rate!
- Post Hoc vs. A Priori Hypotheses
- Comparisons and Contrasts
- Family-wise (FW) error rate
- Experimentwise error rate
- Orthogonal Contrasts or Comparisons
- Planned Comparisons vs. Post Hoc Comparisons
- Orthogonal Contrasts/Comparisons
- Planned Comparisons or Contrasts
- Contrasts in GLM
- Post Hoc Tests
- Control of False Discovery Rate (FDR)
- Simple Main Effects
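A minimal sketch of error-rate control for a family of follow-up tests: Bonferroni correction of the family-wise error rate and Benjamini-Hochberg control of the false discovery rate, applied to a hypothetical set of p values from pairwise comparisons.

```python
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.030, 0.044, 0.210]   # hypothetical follow-up p values

reject_fw, p_bonf, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")
reject_fdr, p_bh, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

print("Bonferroni (family-wise) rejections:", reject_fw, p_bonf)
print("Benjamini-Hochberg (FDR) rejections:", reject_fdr, p_bh)
```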