Some Common Approaches for Analyzing Likert Scales and Other Categorical Data


Analyzing Likert scale responses really comes down to what you want to accomplish (e.g. are you trying to provide a formal report with probabilities, or are you simply trying to understand the data better?). Sometimes a couple of graphs are sufficient and a formal statistical test isn’t even necessary. However, given how easy it is to run some of these statistical tests, it is often best to just formalize the analysis. There are several approaches that can be used. Here are just a few of them.

The code to set up the data for some testing is as follows.  Note that this is the same code used in Plotting Likert Scales:


set.seed(1234)
library(e1071)  # provides rdiscrete() for sampling from a discrete distribution

# Each column holds the response probabilities (categories 1-5) for one group
probs <- cbind(c(.4,.2/3,.2/3,.2/3,.4), c(.1/4,.1/4,.9,.1/4,.1/4), c(.2,.2,.2,.2,.2))
my.n <- 100

# Draw my.n responses per group and stack them into a two-column matrix
raw <- NULL
for(i in 1:ncol(probs)){
  raw <- rbind(raw, cbind(i, rdiscrete(my.n, probs=probs[,i], values=1:5)))
}
raw <- data.frame(raw)
names(raw) <- c("group","value")
raw$group <- as.factor(raw$group)

# Keep only groups 1 and 2 for the two-sample tests
raw.1.2 <- subset(raw, raw$group %in% c(1,2))
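
If you want a quick sanity check on what the simulated data look like before running any tests, a minimal sketch (not part of the original post) is:

head(raw)                    # first few simulated responses
table(raw$group, raw$value)  # counts of each response (1-5) by group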

T-TEST

I might as well get this one out of the way. It sure is easy to take this approach, which helps explain why it is probably one of the more controversial ones. Even something like Excel will spit this out without much thought. You have to stretch the assumptions of the t-test to their outer limits, so if you take this approach you must very carefully verify those assumptions. Most notably:

-Z follows a standard normal distribution
-The variance S^2 follows a chi square distribution
-The variances of the two populations should be equal (unless using Welch’s test)
-The two populations should be independent

The two independent populations assumption gets a little sticky unless you truly are looking at two different populations in your data (e.g. male/female or Hispanic/non-Hispanic). In other words, just because you’re comparing two different questions from your questionnaire doesn’t necessarily mean you have two independent populations.


> t.test(raw.1.2$value ~ raw.1.2$group, var.equal=TRUE)

Two Sample t-test

data: raw.1.2$value by raw.1.2$group
t = 2.5622, df = 198, p-value = 0.01114
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
0.1151745 0.8848255
sample estimates:
mean in group 1 mean in group 2
3.4 2.9
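
One of the assumptions listed above, equal variances, can be checked directly before committing to var.equal=TRUE. A minimal sketch using var.test() from base R (not part of the original analysis):

# F test comparing the two group variances; a small p-value would suggest
# using Welch's test instead (var.equal=FALSE, the default in t.test)
var.test(raw.1.2$value[raw.1.2$group==1], raw.1.2$value[raw.1.2$group==2])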

CHI SQUARE / FISHER EXACT TEST

This ends up being the better approach, and it’s relatively easy to understand. The chi square test is designed to handle categorical frequency data and tests the association between two variables.

When the sample size is too small and the assumptions of the chi square test are no longer satisfied, an alternative option is to use Fisher’s Exact Test. The classical example of this is Fisher’s Lady Tasting Tea problem. Though it is designed for a 2×2 table, there are ways to generalize it to larger tables, and R makes this quite simple.


( c.test <- chisq.test(raw$group, raw$value) )

Pearson's Chi-squared test

data: raw$group and raw$value
X-squared = 195.1726, df = 8, p-value < 2.2e-16
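
One quick way to judge whether the chi square approximation is trustworthy (a sketch, using the c.test object created above) is to inspect the expected cell counts; the usual rule of thumb is to be wary when expected counts fall below 5:

c.test$expected  # expected counts under independence; very small cells cast doubt on the approximation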

 

In the above example some of the cells are quite small, which could mean that the chi square approach may not work. So Fisher’s 2×2 test can be expanded and we can test this data. However, keep in mind the assumption that the margins are fixed. Due to limitations on workspace size in R, I have found it easiest to simulate the p-value and achieve the desired result that way.


sim.table <- table(raw$group, raw$value)

fisher.test(sim.table, simulate.p.value=TRUE, B=1e6) # Simulate due to workspace limitations

Fisher's Exact Test for Count Data with simulated p-value (based on 1e+06
replicates)

data: sim.table
p-value = 1e-06
alternative hypothesis: two.sided

WILCOXON SIGNED-RANK TEST

This is used when the data come from a related sample and are from the same population; in other words, it works well on a matched pairs sample. So, assuming group 1 and group 2 come from the same population and are simply different measurements, we can take this approach.


wilcox.test(raw.1.2$value[raw.1.2$group==1], raw.1.2$value[raw.1.2$group==2], paired=TRUE)

MANN-WHITNEY

This tests whether two independent samples come from the same distribution. In this case the only difference between the Mann-Whitney test and the Wilcoxon signed-rank test is that the paired sample is specified in the signed-rank version (paired=TRUE).


wilcox.test(raw.1.2$value[raw.1.2$group==1], raw.1.2$value[raw.1.2$group==2])
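
The same test can also be written with the formula interface, which is often more readable (a sketch, equivalent to the call above):

wilcox.test(value ~ group, data = raw.1.2)  # Mann-Whitney / Wilcoxon rank-sum test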

KRUSKAL-WALLIS TEST

This is the Analysis of Variance equivalent for categorical data. I feel that it is probably very underused, which is likely due to ANOVA being beyond the scope of most casual analysts, and throwing in categorical data makes it that much more obscure. Like the ANOVA, it also assumes independent populations. But once you understand exactly what you’re testing and what type of data you’re dealing with, the implementation of the test is quite simple:


kruskal.test(raw$value ~ raw$group)

Kruskal-Wallis rank sum test

data: raw$value by raw$group
Kruskal-Wallis chi-squared = 13.9105, df = 2, p-value = 0.0009536
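
If the Kruskal-Wallis test comes back significant and you want to know which groups differ, one common follow-up (not part of the original post, just a sketch) is pairwise Wilcoxon rank-sum tests with a multiplicity correction:

# Pairwise group comparisons with Holm-adjusted p-values; warnings about
# ties are expected with Likert-type responses
pairwise.wilcox.test(raw$value, raw$group, p.adjust.method = "holm")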

This can be compared to the parametric ANOVA:


fit <- lm(raw$value ~ raw$group)
anova(fit)

Analysis of Variance Table

Response: raw$value
Df Sum Sq Mean Sq F value Pr(>F)
raw$group 2 14.91 7.4533 4.0289 0.01878 *
Residuals 297 549.44 1.8500
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
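
As a parametric analogue of the pairwise follow-up shown for the Kruskal-Wallis test, Tukey’s HSD can be run on a fitted aov object (a sketch; aov() and TukeyHSD() are in base R, and fit.aov is just an illustrative name):

fit.aov <- aov(value ~ group, data = raw)  # refit the same model as an aov object
TukeyHSD(fit.aov)                          # pairwise group differences with adjusted confidence intervals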
