Anchoring estimation or the perfect excuse to become "Bayesian"

Anchoring is a common procedure when estimating abilities in test equating, which is about analyzing standardized tests while maintaining a predefined scale. For example, assume that each form of your test contains 60 items, and that two test forms (named Form A and Form B) are given to the students at two different times. In order to guarantee comparability, you decide that both Forms A and B must contain 50 unique items and 10 anchor (common) items. At the end, you use a linking procedure to place both forms on a common scale.
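
To make the design concrete, here is a minimal sketch in R; the item labels are hypothetical, and only the overlap structure matters:

# Hypothetical pool of 110 distinct items
anchor  <- paste0("item_", 1:10)    # 10 common (anchor) items
uniqueA <- paste0("item_", 11:60)   # 50 items unique to Form A
uniqueB <- paste0("item_", 61:110)  # 50 items unique to Form B

formA <- c(anchor, uniqueA)         # 60 items in Form A
formB <- c(anchor, uniqueB)         # 60 items in Form B
intersect(formA, formB)             # the 10 anchors that link both forms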

The methodology behind this kind of technique focuses on fixing the parameters of the 10 anchor items at their Form A values in order to obtain estimates of the item parameters in Form B. However, you have to realize that you are anchoring on estimates of the unknown item parameters. So, if you are anchoring on estimates, you have to consider two sources of error: the measurement error and the linking error.
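
As a back-of-the-envelope illustration in R (assuming, naively, that the two errors are independent and that the linking error is known; both figures below are hypothetical), the uncertainty attached to an anchored difficulty is larger than its reported standard error alone:

se.measurement <- 0.10  # standard error of the Form A difficulty estimate
se.linking     <- 0.05  # hypothetical standard error of the linking step
sqrt(se.measurement^2 + se.linking^2)  # combined standard error, about 0.112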

Fortunately, Bayesian statistics offers a natural and logical way to incorporate the uncertainty about the unknown parameters through a prior distribution on the anchor items. This way, I would analyze Form B as follows: for the 50 unique items, I would define uninformative priors; for the 10 common items, I would define informative priors based on the behavior of the estimated parameters in Form A. This way, I am not anchoring on point estimates; I am anchoring on the uncertainty of the item parameters. Two points of view that (philosophically) differ.
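
A quick way to see the difference between the two kinds of priors is to plot them; this is only a sketch, mirroring the informative normal(-7, 0.1) anchor prior and the diffuse normal(0, 1000) hyperprior that appear in the Stan model below:

# Informative prior for an anchor item, built from its Form A estimate
curve(dnorm(x, mean = -7, sd = 0.1), from = -10, to = 0,
      xlab = "difficulty", ylab = "density")
# Practically flat (uninformative) prior, essentially zero density everywhere
curve(dnorm(x, mean = 0, sd = 1000), add = TRUE, lty = 2)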

Consider this example, which I have discussed before. Let's assume that the first two items are anchored from a previous application: we obtained a difficulty estimate of -7 with a standard error of 0.1 for the first item, and a difficulty estimate of -5 with a standard error of 0.2 for the second one. Then, using Bayesian statistics, we obtain a posterior difficulty estimate of -6.44 with a standard error of 0.09 for the first item, and a posterior difficulty estimate of -3.1 with a standard error of 0.1 for the second one. Note that the estimates of the remaining items are (almost) invariant to this kind of anchoring. You may use this piece of code (Stan and R) to follow the example.

rm(list = ls(all = TRUE))   # clear the workspace
setwd("/Users/psirusteam/Desktop/Dropbox/Cursos/USTA/IRT/R")
load("SimuRasch.rda")       # simulated Rasch response matrix (persons x items)
 
library(rstan)
library(reshape2)
library(dplyr)
 
Rasch.stan <- "
data {
  int<lower=1> J;              // number of persons
  int<lower=1> K;              // number of items
  int<lower=1> N;              // number of observations (J * K)
  int<lower=1,upper=J> jj[N];  // person index for observation n
  int<lower=1,upper=K> kk[N];  // item index for observation n
  int<lower=0,upper=1> y[N];   // response (0/1) for observation n
}

parameters {
  real theta[J];               // person abilities
  real b[K];                   // item difficulties
  real mu;                     // hyper-mean of the non-anchored difficulties
  real<lower=0> sigma;         // hyper-sd of the non-anchored difficulties
}

model {
  theta ~ normal(0, 1);
  // Informative priors for the two anchor items, built from Form A
  b[1] ~ normal(-7, 0.1);
  b[2] ~ normal(-5, 0.2);
  // Hierarchical prior for the remaining (unique) items
  for (k in 3:K)
    b[k] ~ normal(mu, sigma);

  mu ~ normal(0, 1000);
  sigma ~ inv_gamma(0.001, 0.001);

  for (n in 1:N)
    y[n] ~ bernoulli_logit(theta[jj[n]] - b[kk[n]]);
}
"
 
# Reshape the response matrix into long format: one row per person-item pair
MeltRasch <- melt(SimuRasch)
colnames(MeltRasch) <- c("Persona", "Item", "Respuesta")
MeltRasch <- arrange(MeltRasch, Persona)
 
y <- MeltRasch$Respuesta   # responses (0/1)
J <- nrow(SimuRasch)       # number of persons
jj <- MeltRasch$Persona    # person index per observation
K <- ncol(SimuRasch)       # number of items
kk <- MeltRasch$Item       # item index per observation
N <- J * K                 # total observations, i.e. nrow(MeltRasch)
 
Rasch.data <- list(J = J, K = K, N = N, jj = jj, kk = kk, y = y)
Nchains <- 2
Niter <- 100   # a short run, for illustration; use more iterations in practice
Rasch.fit <- stan(model_code = Rasch.stan, data = Rasch.data,
                  chains = Nchains, iter = Niter)
 
print(Rasch.fit)
plot(Rasch.fit)
traceplot(Rasch.fit, pars = 'b')
 
# Extract posterior draws and compute the posterior means of the difficulties
Rasch.mcmc <- extract(Rasch.fit)
(b.est <- colMeans(Rasch.mcmc$b))
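
If you also want to compare the posterior standard deviations against the standard errors quoted above, a one-liner over the same draws does it (keep in mind that a 100-iteration run is only a sanity check):

# Posterior standard deviations of the item difficulties
(b.se <- apply(Rasch.mcmc$b, 2, sd))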

Did you get that? Anchoring is the perfect excuse to realize the power of Bayesian statistics.
