[This article was first published on **R Tutorial Series**, and kindly contributed to R-bloggers.]


Exploratory factor analysis (EFA) is a common technique in the social sciences for explaining the covariation among several measured variables in terms of a smaller set of latent variables. EFA is often used to consolidate survey data by revealing the groupings (factors) that underlie individual questions. This will be the context for the demonstration in this tutorial.

### Tutorial Files

Before we begin, you may want to download the dataset (.csv) used in this tutorial. Be sure to right-click and save the file to your R working directory. This dataset contains a hypothetical sample of 300 responses on 6 items from a survey of college students’ favorite subject matter. The items range in value from 1 to 5, which represent a scale from Strongly Dislike to Strongly Like. Our 6 items asked students to rate their liking of different college subject matter areas, including biology (BIO), geology (GEO), chemistry (CHEM), algebra (ALG), calculus (CALC), and statistics (STAT). This is where our tutorial ends, because all students rated all of these content areas as Strongly Dislike, thereby rendering insufficient variance for conducting EFA (just kidding).

### Beginning Steps

To begin, we need to read our dataset into R and store its contents in a variable.

- > #read the dataset into an R variable using the read.csv(file) function
- > data <- read.csv("dataset_EFA.csv")
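If the .csv file is not at hand, a simulated stand-in with the same structure lets the rest of the tutorial run end to end. The two-factor generating model below is purely an illustrative assumption, not the real survey data:

```r
# Hypothetical stand-in for dataset_EFA.csv: 300 responses, 6 items, 1-5 scale
set.seed(42)
n <- 300
science <- rnorm(n)   # assumed latent interest in science
math    <- rnorm(n)   # assumed latent interest in math
clip <- function(x) pmin(pmax(round(x), 1), 5)   # force responses into the 1-5 range
data <- data.frame(
  BIO  = clip(3 + science + rnorm(n, sd = 0.5)),
  GEO  = clip(3 + science + rnorm(n, sd = 0.5)),
  CHEM = clip(3 + science + rnorm(n, sd = 0.5)),
  ALG  = clip(3 + math    + rnorm(n, sd = 0.5)),
  CALC = clip(3 + math    + rnorm(n, sd = 0.5)),
  STAT = clip(3 + 0.6 * math + 0.2 * science + rnorm(n, sd = 0.8))
)
str(data)
```

The cross-loading built into STAT is deliberate: it mimics the pattern discussed later, where statistics relates to both factors.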

### Psych Package

Next, we need to install and load the *psych* package, which I prefer to use when conducting EFA. In this tutorial, we will make use of the package’s fa() function.

- > #install the package
- > install.packages("psych")
- > #load the package
- > library(psych)

### Number of Factors

For this tutorial, we will assume that the appropriate number of factors has already been determined to be 2, such as through eigenvalues, scree tests, and a priori considerations. Most often, you will want to test solutions above and below the determined amount to ensure the optimal number of factors was selected.
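One quick, base-R sanity check on that choice is to inspect the eigenvalues of the correlation matrix; the Kaiser criterion retains factors whose eigenvalue exceeds 1. The snippet below simulates a hypothetical two-factor dataset in place of read.csv so it runs on its own:

```r
# Simulated stand-in for the tutorial data (assumed two-factor structure)
set.seed(1)
f1 <- rnorm(300); f2 <- rnorm(300)
noisy <- function(f, sd = 0.5) f + rnorm(300, sd = sd)
data <- data.frame(BIO = noisy(f1), GEO = noisy(f1), CHEM = noisy(f1),
                   ALG = noisy(f2), CALC = noisy(f2), STAT = noisy(0.6 * f2, 0.8))

ev <- eigen(cor(data))$values   # eigenvalues of the correlation matrix
ev
sum(ev > 1)                     # factors suggested by the Kaiser criterion
```

This is only one heuristic; scree plots, parallel analysis, and theory should all weigh in, as noted above.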

### Factor Solution

To derive the factor solution, we will use the fa() function from the psych package, which receives the following primary arguments.

- r: the correlation matrix
- nfactors: number of factors to be extracted (default = 1)
- rotate: one of several matrix rotation methods, such as “varimax” or “oblimin”
- fm: one of several factoring methods, such as “pa” (principal axis) or “ml” (maximum likelihood)

Note that several rotation and factoring methods are available when conducting EFA. Rotation methods can be described as *orthogonal*, which do not allow the resulting factors to be correlated, or *oblique*, which do allow the resulting factors to be correlated. Factoring methods can be described as *common*, used when the goal is to describe the structure underlying the data, or *component*, used when the goal is to reduce the data to a smaller set of composite variables. The fa() function is used for common factoring; for component analysis, see princomp(). The best methods vary by circumstance, so it is recommended that you seek professional counsel in determining the optimal parameters for your future EFAs.
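For contrast, the component approach takes one line with base R's princomp(). The data below are random noise, used only to show the mechanics; the variance shares it reports have no substantive meaning here:

```r
# Component analysis mechanics on a hypothetical 6-item dataset
set.seed(3)
x <- matrix(rnorm(300 * 6), ncol = 6)       # 300 responses, 6 items (random noise)
pc <- princomp(x, cor = TRUE)               # cor = TRUE: analyze the correlation matrix
round(pc$sdev^2 / sum(pc$sdev^2), 2)        # variance share of each component
```

Unlike common factoring, this redistributes *all* of the variance across components rather than modeling only the shared variance.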

In this tutorial, we will use oblique rotation (rotate = “oblimin”), which recognizes that there is likely to be some correlation between students’ latent subject matter preference factors in the real world. We will use principal axis factoring (fm = “pa”), because we are most interested in identifying the underlying constructs in the data.

- > #calculate the correlation matrix
- > corMat <- cor(data)
- > #display the correlation matrix
- > corMat

- > #use fa() to conduct an oblique principal-axis exploratory factor analysis
- > #save the solution to an R variable
- > solution <- fa(r = corMat, nfactors = 2, rotate = "oblimin", fm = "pa")
- > #display the solution output
- > solution
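A useful cross-check that needs no extra packages is base R's factanal(), which fits a maximum-likelihood factor model. It uses a different factoring method than fm = "pa", so the numbers will differ somewhat, but it should group the same items together. Simulated stand-in data (a hypothetical two-factor structure) are used so the snippet runs on its own:

```r
# Stand-in data with an assumed two-factor structure
set.seed(7)
f1 <- rnorm(300); f2 <- rnorm(300)
noisy <- function(f, sd = 0.5) f + rnorm(300, sd = sd)
data <- data.frame(BIO = noisy(f1), GEO = noisy(f1), CHEM = noisy(f1),
                   ALG = noisy(f2), CALC = noisy(f2), STAT = noisy(0.6 * f2, 0.8))

# Two-factor ML solution; "promax" is an oblique rotation, like "oblimin"
ml <- factanal(data, factors = 2, rotation = "promax")
print(ml$loadings, cutoff = 0.3)   # suppress small loadings for readability
```

The cutoff argument to print() only hides small loadings from the display; it does not change the solution.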

By looking at our factor loadings, we can begin to assess our factor solution. We can see that BIO, GEO, and CHEM all have high factor loadings around 0.8 on the first factor (PA1). Therefore, we might call this factor *Science* and consider it representative of a student’s interest in science subject matter. Similarly, ALG, CALC, and STAT load highly on the second factor (PA2), which we might call *Math*. Note that STAT has a much lower loading on PA2 than ALG or CALC and that it has a slight loading on factor PA1. This suggests that statistics is less related to the concept of *Math* than algebra and calculus. Just below the loadings table, we can see that each factor accounted for around 30% of the variance in responses, leading to a factor solution that accounted for 66% of the total variance in students’ subject matter preference. Lastly, notice that our factors are correlated at 0.21 and recall that our choice of oblique rotation allowed for the recognition of this relationship.
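The variance figures in that output can be recomputed by hand: for each factor, sum the squared loadings and divide by the number of items. (This formula is exact for orthogonal solutions; for oblique rotations such as ours, psych reports the analogous SS-loadings quantities.) Shown with a hypothetical loadings matrix resembling the tutorial's solution:

```r
# Hypothetical loadings matrix in the spirit of the tutorial's output
L <- matrix(c(0.85, 0.80, 0.82, 0.05, 0.02, 0.20,    # PA1 loadings
              0.02, 0.05, 0.01, 0.85, 0.88, 0.55),   # PA2 loadings
            nrow = 6,
            dimnames = list(c("BIO", "GEO", "CHEM", "ALG", "CALC", "STAT"),
                            c("PA1", "PA2")))
prop_var <- colSums(L^2) / nrow(L)   # sum of squared loadings per factor / no. of items
round(prop_var, 2)                   # per-factor proportion of variance
round(sum(prop_var), 2)              # cumulative proportion explained
```

With these made-up loadings, each factor accounts for roughly 30-35% of the variance and the solution for roughly two thirds overall, mirroring the pattern described above.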

Of course, there are many other considerations to be made in developing and assessing an EFA that will not be presented here. The intent with this tutorial was simply to demonstrate the basic execution of EFA in R. For a detailed and digestible overview of EFA, I recommend the Factor Analysis chapter of *Multivariate Data Analysis* by Hair, Black, Babin, and Anderson.

### Complete EFA Example

To see a complete example of how EFA data can be organized using the *psych* package in R, please download the EFA example (.txt) file. For the code used in this tutorial, download the EFA Example (.R) file.

### References

Revelle, W. (2011). psych: Procedures for Personality and Psychological Research. http://personality-project.org/r/psych.manual.pdf
