The likelihood of the three observations, evaluated at each draw of (θ1,θ2), is computed by direct convolution:

# assumes the data y, n1, n2 (each of length 3) and the N x 2 matrix theta
# of draws of (theta1,theta2) are already defined
l <- rep(1, N)
for (i in 1:N)
  for (k in 1:3){
    llh <- 0
    # convolution of x1 ~ Bin(n1[k],theta1) and y[k]-x1 ~ Bin(n2[k],theta2);
    # the lower bound is max(0, y[k]-n2[k]), not max(0, n2[k]-y[k])
    for (j in max(0, y[k]-n2[k]):min(y[k], n1[k]))
      llh <- llh + choose(n1[k], j)*choose(n2[k], y[k]-j)*
        theta[i,1]^j*(1-theta[i,1])^(n1[k]-j)*theta[i,2]^(y[k]-j)*
        (1-theta[i,2])^(n2[k]-y[k]+j)
    l[i] <- l[i]*llh}
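For the record, the inner sum is the convolution giving the probability of the k-th observation y_k = x_{1k} + x_{2k}, when x_{1k} ~ B(n_{1k},θ_1) and x_{2k} ~ B(n_{2k},θ_2) are independent:

\[ P(Y_k = y_k \mid \theta_1,\theta_2) = \sum_{j=\max(0,\,y_k-n_{2k})}^{\min(y_k,\,n_{1k})} \binom{n_{1k}}{j}\binom{n_{2k}}{y_k-j}\, \theta_1^{j}(1-\theta_1)^{n_{1k}-j}\, \theta_2^{y_k-j}(1-\theta_2)^{n_{2k}-y_k+j} \]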
To double-check, I also wrote a Gibbs version:
# Gibbs sampler completing y[j] with the latent first-binomial count x1[j];
# T iterations, only the first row of theta (the starting value) matters
theta <- matrix(runif(2), nrow=T, ncol=2)
x1 <- rep(NA, 3)
for (t in 1:(T-1)){
  for (j in 1:3){
    # support of x1[j], with the corrected lower bound max(0, y[j]-n2[j])
    a <- max(0, y[j]-n2[j]):min(y[j], n1[j])
    # sample.int avoids sample()'s scalar special case when a has length one
    x1[j] <- a[sample.int(length(a), 1,
      prob=choose(n1[j],a)*choose(n2[j],y[j]-a)*
        theta[t,1]^a*(1-theta[t,1])^(n1[j]-a)*
        theta[t,2]^(y[j]-a)*(1-theta[t,2])^(n2[j]-y[j]+a))]}
  # Beta full conditionals under uniform priors on theta
  theta[t+1,1] <- rbeta(1, sum(x1)+1, sum(n1)-sum(x1)+1)
  theta[t+1,2] <- rbeta(1, sum(y)-sum(x1)+1, sum(n2)-sum(y)+sum(x1)+1)}
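Spelled out, the full conditionals this Gibbs sampler exploits (under independent U(0,1) priors on θ_1 and θ_2, which is what the +1 terms in the rbeta calls correspond to) are

\[ \theta_1 \mid x_1 \sim \mathcal{B}e\Big(1+\sum_k x_{1k},\, 1+\sum_k (n_{1k}-x_{1k})\Big), \qquad \theta_2 \mid x_1 \sim \mathcal{B}e\Big(1+\sum_k (y_k-x_{1k}),\, 1+\sum_k (n_{2k}-y_k+x_{1k})\Big), \]

while each latent count x_{1k} is drawn from the discrete distribution proportional to the summand of the convolution above.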
The Gibbs output did not show any difference from the first version above, nor from the likelihood surface.
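For that last check, here is a minimal sketch of how the surface can be drawn, reusing the same y, n1, n2 and with a grid resolution of my own choosing (the helper lik is hypothetical):

# evaluate the convolution likelihood on a grid of (theta1, theta2) values
lik <- function(t1, t2){
  L <- 1
  for (k in 1:3){
    j <- max(0, y[k]-n2[k]):min(y[k], n1[k])
    L <- L*sum(choose(n1[k], j)*choose(n2[k], y[k]-j)*
      t1^j*(1-t1)^(n1[k]-j)*t2^(y[k]-j)*(1-t2)^(n2[k]-y[k]+j))}
  L}
g <- seq(.01, .99, length.out=99)
contour(g, g, outer(g, g, Vectorize(lik)),
        xlab=expression(theta[1]), ylab=expression(theta[2]))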
