When a one-way ANOVA involves more than two groups, we typically want to assess the differences between each pair of groups. Whereas the omnibus ANOVA tests whether any significant difference exists at all amongst the groups, pairwise comparisons determine which specific group differences are statistically significant.
Before we begin, you may want to download the sample data (.csv) used in this tutorial. Be sure to right-click and save the file to your R working directory. The dataset contains a hypothetical sample of 30 participants divided among three stress reduction treatment programs: one using mental methods, one using physical training, and one using medication. The values, recorded on a scale from 1 to 5, represent how effective each program was at reducing participants' stress levels, with higher numbers indicating greater effectiveness.
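If you cannot download the file, a structurally equivalent data frame can be built by hand. The column names (Treatment, StressReduction) and group labels match those used in this tutorial, but the individual scores below are simulated placeholders, not the tutorial's actual data:

```r
# Hypothetical stand-in for the downloaded CSV: 30 participants,
# 10 per treatment group, with stress reduction scored on a 1-5 scale.
# The individual values are simulated, so results will differ from the tutorial's.
set.seed(1)
dataOneWayComparisons <- data.frame(
  Treatment = factor(rep(c("mental", "physical", "medical"), each = 10)),
  StressReduction = sample(1:5, 30, replace = TRUE)
)

#inspect the structure of the data frame
str(dataOneWayComparisons)
```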
To begin, we need to read our dataset into R and store its contents in a variable.
- > #read the dataset into an R variable using the read.csv(file) function
- > dataOneWayComparisons <- read.csv("dataset_ANOVA_OneWayComparisons.csv")
- > #display the data
- > dataOneWayComparisons
One way to begin an ANOVA is to run the general omnibus test. The advantage to starting here is that if the omnibus test is nonsignificant, you can stop your analysis and deem all pairwise comparisons nonsignificant. If the omnibus test is significant, you should continue with pairwise comparisons.
- > #use anova(object) to test the omnibus hypothesis
- > #Is there a significant difference amongst the treatment means?
- > anova(lm(StressReduction ~ Treatment, dataOneWayComparisons))
Since the omnibus test was significant, we are safe to continue with our pairwise comparisons. To make pairwise comparisons between the treatment groups, we will use the pairwise.t.test() function, which has the following major arguments.
- x: the dependent variable
- g: the independent variable
- p.adj: the p-value adjustment method used to control the family-wise Type I error rate across the comparisons; one of "none", "bonferroni", "holm", "hochberg", "hommel", "BH", or "BY"
The pairwise.t.test() function can be implemented as follows.
- > #use pairwise.t.test(x, g, p.adj) to test the pairwise comparisons between the treatment group means
- > #What significant differences are present amongst the treatment means?
- > pairwise.t.test(dataOneWayComparisons$StressReduction, dataOneWayComparisons$Treatment, p.adj = "none")
Note that the appropriate p-value adjustment method will vary by researcher and study. Here, we assume an alpha level of .05 for all tests and make no adjustment for the family-wise Type I error rate.
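If you do wish to control the family-wise error rate, the same call can be made with one of the adjustment methods listed above. For example, a Bonferroni correction, which multiplies each raw p-value by the number of comparisons (capped at 1), can be requested as follows (this assumes the data frame loaded earlier in the tutorial):

```r
# Same pairwise comparisons, but with Bonferroni-adjusted p-values;
# with three comparisons, each raw p-value is multiplied by 3 (capped at 1)
pairwise.t.test(dataOneWayComparisons$StressReduction,
                dataOneWayComparisons$Treatment,
                p.adj = "bonferroni")
```

With an alpha level of .05, the same conclusions follow from the adjusted p-values as from the unadjusted ones in this example, but that will not always be the case with marginal results.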
These results indicate that there is a statistically significant difference between the mental and medical (p = .004) and the physical and medical (p = .045) treatments, but not between the mental and physical treatments (p = .302). The treatment means are 3.5 for mental, 3 for physical, and 2 for medical. Consequently, we are inclined to conclude from this study that the mental and physical treatments lead to greater stress reduction than the medical treatment, but that there is insufficient statistical support to determine whether the mental or physical treatment is superior.
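The treatment means cited above can be computed directly from the data, for instance with tapply() (again assuming the data frame loaded earlier):

```r
# Mean stress reduction for each treatment group; with the tutorial's data
# this yields 2 for medical, 3.5 for mental, and 3 for physical
tapply(dataOneWayComparisons$StressReduction,
       dataOneWayComparisons$Treatment,
       mean)
```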
Complete One-Way ANOVA with Pairwise Comparisons Example
To see a complete example of how one-way ANOVA pairwise comparisons can be conducted in R, please download the one-way ANOVA comparisons example (.txt) file.