A between-subjects factor refers to independent groups that vary along some dimension. Put another way, a between-subjects factor assumes that each level of the factor represents an independent (i.e., uncorrelated) group of observations. For example, an experimental factor could represent two independent groups of participants who were randomly assigned to either a control or a treatment condition. In this case, the between-subjects experimental factor assumes that measurements from the two groups of participants are not correlated – they are independent.

Broadly, contrasts test focused research hypotheses. A contrast comprises a set of weights – numeric values that represent some comparison. For example, when comparing two experimental group means (i.e., control vs. treatment), you can multiply each group mean by its weight and then sum the results. With weights of 1 and -1, this is the same as subtracting one group’s mean from the other’s.

```
# group means
control <- 5
treatment <- 3
# apply contrast weights and sum up the results
sum(c(control, treatment) * c(1, -1))
```

`## [1] 2`
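Contrast weights generalize beyond two groups: a contrast can compare one group with the average of several others, as long as the weights sum to zero. A minimal sketch with four made-up group means:

```
# made-up group means for illustration
g1 <- 10
g2 <- 6
g3 <- 7
g4 <- 8
# compare group 1 against the average of groups 2-4;
# the weights c(1, -1/3, -1/3, -1/3) sum to zero
sum(c(g1, g2, g3, g4) * c(1, -1/3, -1/3, -1/3))
```

`## [1] 3`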

**Correct functional form.** Your model variables share linear relationships.

**No omitted influences.** This one is hard: your model accounts for all relevant influences on the variables included. All models are wrong, but how wrong is yours?

**Accurate measurement.** Your measurements are valid and reliable. Note that unreliable measures can’t be valid, and reliable measures don’t necessarily measure just one construct – or even your construct.

**Well-behaved residuals.** Residuals (i.e., prediction errors) aren’t correlated with predictor variables or each other, and residuals have constant variance across values of your predictor variables.

**Homogeneous group variances.** Group variances are equal. In this case, you can think of a group’s variance as the “average” squared difference from the group mean (squaring makes all differences positive). Link this to the well-behaved residuals assumption above: residuals (i.e., prediction errors) should have equal variance across groups; remember that, in ANOVA, groups are the predictors.

**Normally distributed group observations.** Group observations come from normal distributions. Also link this to the well-behaved residuals assumption above: residuals should come from a normal distribution too.

```
library(tidyverse)
library(knitr)
library(AMCP)
```

From Chapter 5, Table 4 in Maxwell, Delaney, & Kelley (2018)

From `help("C5T4")`:

: “The following data consist of blood pressure measurements for six individuals randomly assigned to one of four groups. Our purpose here is to perform four planned contrasts in order to discern if group differences exist for the selected contrasts of interest.”

```
data("C5T4")
# add labels
C5T4 <- C5T4 %>%
  mutate(group_lbl = group %>%
           recode(`1` = "Drug Therapy",
                  `2` = "Biofeedback",
                  `3` = "Diet",
                  `4` = "Combination"))
```


```
C5T4 %>%
  kable()
```

group | sbp | group_lbl
---|---|---
1 | 84 | Drug Therapy
1 | 95 | Drug Therapy
1 | 93 | Drug Therapy
1 | 104 | Drug Therapy
1 | 99 | Drug Therapy
1 | 106 | Drug Therapy
2 | 81 | Biofeedback
2 | 84 | Biofeedback
2 | 92 | Biofeedback
2 | 101 | Biofeedback
2 | 80 | Biofeedback
2 | 108 | Biofeedback
3 | 98 | Diet
3 | 95 | Diet
3 | 86 | Diet
3 | 87 | Diet
3 | 94 | Diet
3 | 101 | Diet
4 | 91 | Combination
4 | 78 | Combination
4 | 85 | Combination
4 | 80 | Combination
4 | 81 | Combination
4 | 76 | Combination

It’s always a good idea to look at your data. Check some assumptions.

Do variances look equal?

```
C5T4 %>%
  ggplot(mapping = aes(x = group_lbl, y = sbp, fill = group_lbl)) +
  geom_boxplot() +
  theme_bw() +
  theme(legend.position = "top")
```
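The boxplot gives a visual impression; you can also compute each group’s variance directly. A minimal sketch, assuming the `C5T4` data and labels from above:

```
# numeric check of group variances
C5T4 %>%
  group_by(group_lbl) %>%
  summarise(n = n(), var_sbp = var(sbp))
```

If the group variances differ wildly (say, by an order of magnitude), the homogeneity assumption deserves a closer look.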

Do observations look normal?

```
C5T4 %>%
  ggplot(mapping = aes(sample = sbp)) +
  geom_qq() +
  facet_wrap(facets = ~ group_lbl, scales = "free") +
  theme_bw()
```
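With the data screened, here is one way to compute a planned contrast by hand from the group means. The weights below – Combination vs. the average of the three single-treatment groups – are illustrative and not necessarily one of the four contrasts Maxwell, Delaney, & Kelley test; the sketch assumes `C5T4` is loaded as above:

```
# group means, in group order 1-4
group_means <- C5T4 %>%
  group_by(group) %>%
  summarise(m = mean(sbp)) %>%
  pull(m)
# Combination (group 4) vs. the average of the other three groups;
# the weights sum to zero
sum(group_means * c(-1/3, -1/3, -1/3, 1))
```

`## [1] -11.94444`

A negative value here means the Combination group’s mean blood pressure is lower than the average of the other three group means.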