MANOVA in R – How To Implement and Interpret One-Way MANOVA


The R programming language packs a rich set of statistical functions. It makes it easy to do any kind of statistical test, including the analysis of variance. Today you’ll learn all about MANOVA in R, and apply it to a real dataset.

We’ll start with the theory and discuss use-cases in which you should consider MANOVA instead of regular, univariate ANOVA.

Want to learn the basics? Read our guide to One-way ANOVA in R from scratch. 

We strongly recommend reading the above article first, as it lays the foundation for analysis of variance in R.

Table of contents:

  • The Theory of MANOVA in R
  • MANOVA in R: Implementation
  • Interpret MANOVA in R With a Post-Hoc Test
  • Conclusion

The Theory of MANOVA in R

MANOVA stands for Multivariate ANOVA or Multivariate Analysis Of Variance. It’s an extension of regular ANOVA. The general idea is the same, but a MANOVA test must include at least two dependent variables in order to analyze differences between the groups (levels) of the independent variable.

If you only have a single dependent variable, there’s no point in using MANOVA – regular ANOVA will suit you fine. For example, if you want to see whether petal length differs across Iris species, MANOVA is unnecessary, as you only have a single dependent variable (petal length). In contrast, if you have data on both petal length and petal width, then using MANOVA would be a wise thing to do.

MANOVA in R uses Pillai’s Trace as the default test statistic, which is converted to an approximate F-statistic to check the significance of the group mean differences. You can use other statistics, such as Wilks’ Lambda, Roy’s Largest Root, or the Hotelling-Lawley trace, but Pillai’s Trace is generally considered the most robust of the four.

Errors, hypotheses, and assumptions

A regular ANOVA approach often suffers from an inflated Type I error rate, which happens when you run a separate ANOVA test for each dependent variable. MANOVA provides a solution, as it captures group differences based on the combined information from all of the dependent variables in a single test.

Because MANOVA uses more than one dependent variable, the null and the alternative hypotheses are slightly changed:

  • H0: Group mean vectors are the same for all groups or they don’t differ significantly.
  • H1: At least one of the group mean vectors is different from the rest.

MANOVA in R won’t tell you which group differs from the rest, but that’s easy to determine via a post-hoc test. We’ll use Linear Discriminant Analysis (LDA) to answer this question later.

As you would expect, the MANOVA test has many strict assumptions. The ones from ANOVA carry over – independence of observations and homogeneity of variances – and some new ones are introduced:

  • Multivariate normality – The dependent variables should jointly follow a multivariate normal distribution within each group of the independent variable. A multivariate extension of the Shapiro-Wilk test can verify this (see the sketch after this list).
  • Linearity – Dependent variables should have a linear relationship with each group (factor) of the independent variable.
  • No multicollinearity – The dependent variables should not be too highly correlated with one another.
  • No outliers – There shouldn’t be any outliers in the dependent variables.
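
The original article doesn’t include assumption-checking code, but if you do want to verify multivariate normality, here is a minimal sketch. It assumes the mvnormtest package and uses the iris dataset we work with in the next section; mshapiro.test() expects variables in rows, hence the transpose.

    library(mvnormtest)

    # Run a multivariate Shapiro-Wilk test separately for each species
    for (species in levels(iris$Species)) {
      group_matrix <- t(as.matrix(iris[iris$Species == species, 1:4]))
      print(mshapiro.test(group_matrix))
    }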

Checking all of these is time-consuming and dataset-specific. To keep things simple, we’ll assume all of the requirements are met and explore only how to use MANOVA in R in the following section.

MANOVA in R: Implementation

As with most things in R, performing a MANOVA test boils down to a single function call. But we’ll need a dataset first. The Iris dataset is well-known among the data science crowd, and it’s built into R:
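
The snippet behind the screenshot isn’t shown, so here’s a minimal sketch of how you might inspect the data with base R; the screenshot itself likely comes from an interactive viewer:

    # Preview the built-in Iris dataset
    head(iris)

    # Confirm the group sizes of the independent variable (Species)
    table(iris$Species)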

Image 1 – The Iris dataset in R

It doesn’t matter if you use the same dataset as us, as long as one critical condition is met – the dataset must have more observations (rows) per group of the independent variable than the number of dependent variables. For example, the Iris dataset has 3 groups and 4 dependent variables, which means we need more than 4 observations for each flower species. We have 50 for each, so we’re good to go.

Dependent variables

Since MANOVA is concerned with differences in group means, let’s visualize a boxplot for every dependent variable. There will be 4 plots in total, arranged in a 2×2 grid, each showing a separate boxplot for every flower species:
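
The plotting code isn’t included here, so here’s one way you could reproduce a 2×2 grid of boxplots; the use of ggplot2 and tidyr is an assumption on our part:

    library(ggplot2)
    library(tidyr)

    # Reshape to long format: one row per measurement per flower
    iris_long <- pivot_longer(
      iris,
      cols = -Species,
      names_to = "Measurement",
      values_to = "Value"
    )

    # One facet per dependent variable, one boxplot per species
    ggplot(iris_long, aes(x = Species, y = Value, fill = Species)) +
      geom_boxplot() +
      facet_wrap(~Measurement, ncol = 2) +
      theme_minimal()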

Image 2 – Boxplots for all dependent variables and all factors of the independent variable

Do you want to learn more about boxplots? Check our complete guide to stunning boxplots with R.

It seems like the setosa species is more separated than the other two, but let’s not jump to conclusions.

One-way MANOVA in R

We can now perform a one-way MANOVA in R. The best practice is to separate the dependent variables from the independent variable before calling the manova() function. Once the test is done, you can print its summary:
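
Since the code isn’t shown, here is a sketch of that step; the variable names (dependent_vars, independent_var, manova_model) are ours:

    # Bind the four dependent variables into a matrix, keep the factor separate
    dependent_vars <- cbind(iris$Sepal.Length, iris$Sepal.Width,
                            iris$Petal.Length, iris$Petal.Width)
    independent_var <- iris$Species

    # Fit the one-way MANOVA and inspect the summary
    manova_model <- manova(dependent_vars ~ independent_var)
    summary(manova_model)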

Image 3 – MANOVA in R test summary

By default, MANOVA in R uses Pillai’s Trace as the test statistic. The P-value is practically zero, which means we can safely reject the null hypothesis in favor of the alternative – at least one group mean vector differs from the rest.
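
If you prefer a different test statistic, summary() accepts a test argument; this assumes the manova_model object from the sketch above:

    summary(manova_model, test = "Wilks")
    summary(manova_model, test = "Hotelling-Lawley")
    summary(manova_model, test = "Roy")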

While we’re here, we could also measure the effect size. One metric often used with MANOVA is Partial Eta Squared. It measures the effect the independent variable has on the dependent variables. If the value is 0.14 or greater, we can say the effect size is large. Here’s how to calculate it in R:
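
The article doesn’t say which package produced the value below; one option is the effectsize package, sketched here with the manova_model object from above:

    library(effectsize)

    # Partial Eta Squared for each term of the MANOVA model
    eta_squared(manova_model, partial = TRUE)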

Image 4 – Partial Eta Squared value for the MANOVA test

The value is 0.6, which means the effect size is large. It’s a great way to double-check the summary results of a MANOVA test, but how can we actually know which group mean vector differs from the rest? That’s where a post-hoc test comes into play.

Interpret MANOVA in R With a Post-Hoc Test

The P-Value is practically zero, and the Partial Eta Squared suggests a large effect size – but which group or groups are different from the rest? There’s no way to tell without a post-hoc test. We’ll use
Linear Discriminant Analysis (LDA), which finds a linear combination of features that best separates two or more groups.

By doing so, we’ll be able to visualize a scatter plot showing the two linear discriminants on the X and Y axes, and color code them to match our independent variable – the flower species.

You can implement Linear Discriminant Analysis in R using the lda() function from the MASS package:
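
The original snippet isn’t shown, so here’s a minimal sketch; the model name lda_model is an assumption:

    library(MASS)

    # Fit LDA with Species as the grouping variable and all four measurements as predictors
    lda_model <- lda(Species ~ ., data = iris)
    lda_model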

Image 5 – Linear Discriminant Analysis results

Take a look at the coefficients to see how the dependent variables are used to form the LDA decision rule. Rounded to two decimal places, LD1 = 0.83 * Sepal.Length + 1.53 * Sepal.Width - 2.20 * Petal.Length - 2.81 * Petal.Width.

The snippet below uses the predict() function to get the linear discriminants and combines them with our independent variable:
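
A sketch of that step, assuming the lda_model object from above; lda_df mirrors the data frame referenced below (the names are ours):

    # predict() on an LDA fit returns the discriminant scores in the x component
    lda_predictions <- predict(lda_model)

    lda_df <- data.frame(
      LD1 = lda_predictions$x[, 1],
      LD2 = lda_predictions$x[, 2],
      Species = iris$Species
    )
    head(lda_df)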

Image 6 – LDA dataset

The final step in this post-hoc test is to visualize the above lda_df as a scatter plot. Ideally, we should see one or multiple groups stand out:
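
Here’s a ggplot2 sketch of that scatter plot, assuming the lda_df data frame from the previous step:

    # Two linear discriminants on the axes, color-coded by species
    ggplot(lda_df, aes(x = LD1, y = LD2, color = Species)) +
      geom_point(size = 3, alpha = 0.7) +
      theme_minimal()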

Image 7 – LDA dataset as a scatter plot

The setosa species is clearly separated from virginica and versicolor, which overlap far more with each other. To summarize – the group mean vector of the setosa class differs markedly from the other two group mean vectors, so it’s safe to assume setosa was the crucial factor in rejecting the null hypothesis.


Conclusion

And there you have it – your go-to guide to MANOVA in R. You now know what MANOVA is, when you should use it, and how to implement and interpret it with the R programming language. For additional practice, we recommend you apply the above code to a dataset of your choice. Just make sure to satisfy all MANOVA prerequisites. You could also remove the setosa class from the Iris dataset and repeat the test. Any ideas on what would happen then?

If you want to learn more about inferential statistics in R, stay tuned to Appsilon’s blog. The best way to do so is by subscribing to our newsletter via the contact form below.

Article MANOVA in R – How To Implement and Interpret One-Way MANOVA comes from Appsilon | Enterprise R Shiny Dashboards.
