Blog Archives

Post 10: Multicore parallelism in MCMC

September 24, 2014

MCMC is by its very nature a serial algorithm -- each iteration depends on the results of the last. It is, therefore, rather difficult to parallelize MCMC code so that a single chain will run more quickly by splitting …
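Although a single chain cannot easily be split across cores, independent chains are embarrassingly parallel: one chain per core. A minimal sketch using `parallel::mclapply()` (which forks on Unix-alikes; Windows would need `parLapply()`), where `run_chain` is a stand-in for whatever sampler function is being run, not the post's actual code:

```r
library(parallel)

# Placeholder "chain" for illustration; a real MCMC sampler would go here.
run_chain <- function(seed) {
  set.seed(seed)
  cumsum(rnorm(1000))  # stand-in for 1000 draws
}

# Run 4 independent chains across 2 cores.
chains <- mclapply(1:4, run_chain, mc.cores = 2)
length(chains)  # 4
```

Each list element holds one chain's draws, which can then be pooled or compared for convergence diagnostics.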

Post 9: Tuning the complete sampler

October 25, 2013

This post demonstrates how to tune the sampler for optimal acceptance probabilities and verifies that the whole sampler works. Tuning the sampler's acceptance rates consists of running the sampler several times while tweaking …
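The tuning loop needs an empirical acceptance rate to aim at. A simple sketch (an assumption about the bookkeeping, not the post's actual code): with a continuous proposal, a rejected step leaves the chain value unchanged, so the fraction of iterations where the value changed estimates the acceptance rate. Commonly cited targets for random-walk proposals are roughly 0.23 to 0.44.

```r
# Fraction of iterations where the draw changed; with continuous
# proposals, consecutive identical values almost surely mean rejection.
acceptance_rate <- function(chain) {
  mean(diff(chain) != 0)
}

# Example: values 1,1,2,3,3 -> two of four transitions changed.
acceptance_rate(c(1, 1, 2, 3, 3))  # 0.5
```

One would rerun the sampler with different proposal standard deviations until each parameter's rate lands in the target range.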

Post 8: Sampling the variance of person ability with a Gibbs step

October 10, 2013

The Bayesian 2PL IRT model we defined in Post 1 sets a hyper-prior on the variance of the person ability parameters. This post implements the sampler for this parameter as a Gibbs step. We will check that the Gibbs step is …
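With person abilities modeled as theta_p ~ N(0, sigma2) and an Inverse-Gamma hyper-prior on sigma2, the complete conditional is again Inverse-Gamma, so the variance can be drawn exactly. A hedged sketch (the hyper-prior values alpha and beta below are placeholders, not necessarily the ones chosen in Post 1):

```r
# Gibbs step for sigma2: with theta_p ~ N(0, sigma2) and
# sigma2 ~ Inverse-Gamma(alpha, beta), the complete conditional is
# Inverse-Gamma(alpha + P/2, beta + sum(theta^2)/2).
sample_sigma2 <- function(theta, alpha = 1, beta = 1) {
  P <- length(theta)
  1 / rgamma(1, shape = alpha + P / 2, rate = beta + sum(theta^2) / 2)
}

set.seed(42)
sample_sigma2(rnorm(100))  # one positive draw near 1
```

Note R's `rgamma()` is parameterized by shape and rate; inverting a Gamma draw gives the Inverse-Gamma variate.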

Post 7: Sampling the item parameters with generic functions

October 10, 2013

In this post, we will build the samplers for the item parameters using the generic functions developed in Post 5 and Post 6. We will check that the samplers work by running them on the fake data from Post 2, …

Post 6: Refactoring Part II: a generic proposal function

October 10, 2013

In this post we refactor the proposal function from the previous post into a generic normal proposal function. This allows us to implement normal proposals for the (yet to be developed) samplers for the item parameters without duplicating code. Our …
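One common way to make a proposal generic is a closure factory: fix the proposal standard deviation once and return a proposal function, so each parameter's sampler gets its own tuned scale without duplicated code. A sketch of that idea (the interface is an assumption based on the teaser, not the post's actual code):

```r
# Returns a random-walk normal proposal function with a fixed sd.
make_normal_proposal <- function(sd) {
  function(current) rnorm(1, mean = current, sd = sd)
}

# Hypothetical use: separate tuned proposals per parameter type.
propose_difficulty     <- make_normal_proposal(sd = 0.3)
propose_discrimination <- make_normal_proposal(sd = 0.1)

propose_difficulty(0)  # a candidate near 0
```

Because each returned function captures its own `sd`, tuning one parameter's proposal never touches another's.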

Post 5: Refactoring Part I: a generic Metropolis-Hastings sampler

October 10, 2013

A best practice in software engineering is to re-use code instead of duplicating it. One reason for this is that it makes finding and fixing bugs easier. You only have to find the bug once instead of finding it every …
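The re-use idea applied to MCMC: write the Metropolis-Hastings accept/reject logic once, parameterized by a log-posterior and a proposal function, and share it across all parameters. A minimal sketch under that assumption (the signatures are illustrative, not the post's actual code):

```r
# One generic MH step. log_prop_ratio defaults to 0, i.e. a symmetric
# proposal, in which case this reduces to random-walk Metropolis.
mh_step <- function(current, log_post, propose,
                    log_prop_ratio = function(cur, cand) 0) {
  cand      <- propose(current)
  log_alpha <- log_post(cand) - log_post(current) +
               log_prop_ratio(current, cand)
  if (log(runif(1)) < log_alpha) cand else current
}

# Sanity check: sample from a standard normal target.
set.seed(1)
draws <- numeric(1000)
x <- 0
for (i in seq_along(draws)) {
  x <- mh_step(x,
               log_post = function(z) dnorm(z, log = TRUE),
               propose  = function(z) rnorm(1, z, 1))
  draws[i] <- x
}
mean(draws)  # should be near 0
```

Once this step exists, each parameter's sampler only has to supply its own log-posterior and proposal.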

Post 4: Sampling the person ability parameters

October 8, 2013

The previous post outlined the general strategy of writing an MH within Gibbs sampler by breaking the code into two levels: a high-level shell and a series of lower-level samplers which do the actual work. This post discusses the …
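For the 2PL model, one person's ability can be updated with a random-walk MH step against the conditional posterior: the Bernoulli likelihood of that person's responses plus the normal prior on ability. A hedged sketch (prior variance and proposal sd are placeholder values, not necessarily the post's choices):

```r
# Log of theta_p's complete conditional: item-response likelihood
# under the 2PL model plus a N(0, sigma2) prior on ability.
log_post_theta <- function(theta, resp_p, a, b, sigma2 = 1) {
  p <- plogis(a * (theta - b))
  sum(dbinom(resp_p, 1, p, log = TRUE)) +
    dnorm(theta, 0, sqrt(sigma2), log = TRUE)
}

# One symmetric random-walk MH update for a single person's theta.
sample_theta_p <- function(theta, resp_p, a, b, prop_sd = 0.5) {
  cand <- rnorm(1, theta, prop_sd)
  log_ratio <- log_post_theta(cand, resp_p, a, b) -
               log_post_theta(theta, resp_p, a, b)
  if (log(runif(1)) < log_ratio) cand else theta
}
```

In the full sampler this update would run once per person per iteration, conditional on the current item parameters.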

Post 3: Setting up the sampler and visualizing its output

October 8, 2013

In the chapter, we argue that a useful way to develop a Metropolis-Hastings (MH) within Gibbs sampler is to split the code into two levels. The top level is the "shell" of the MH within Gibbs algorithm, which sets up …
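The two-level structure can be sketched as a top-level shell that loops over iterations and delegates each parameter block to a lower-level sampler function. The interface below is an assumption for illustration (the toy samplers are placeholders, not the chapter's actual code):

```r
# Top-level shell: loops over iterations, calls each block sampler in
# turn, and records the full state after every sweep.
run_chain <- function(n_iter, state, samplers) {
  draws <- matrix(NA_real_, nrow = n_iter, ncol = length(state),
                  dimnames = list(NULL, names(state)))
  for (iter in seq_len(n_iter)) {
    for (s in samplers) {
      state <- s(state)  # each sampler updates part of the state
    }
    draws[iter, ] <- state
  }
  draws
}

# Toy usage: two "samplers" that each perturb their own component.
set.seed(5)
toy <- run_chain(
  n_iter = 5,
  state  = c(mu = 0, sigma = 1),
  samplers = list(
    function(st) { st["mu"]    <- st["mu"] + rnorm(1, 0, 0.1); st },
    function(st) { st["sigma"] <- abs(st["sigma"] + rnorm(1, 0, 0.1)); st }
  )
)
```

Keeping the shell ignorant of what each sampler does is what lets the lower-level pieces be developed and tested one at a time.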

Post 2: Generating fake data

October 6, 2013

In order to check that an estimation algorithm is working properly, it is useful to see if the algorithm can recover the true parameter values in one or more simulated "test" data sets. This post explains how to build such …
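A minimal sketch of simulating a 2PL test data set in R; the sample sizes and the distributions for the true parameters below are illustrative assumptions, not necessarily the post's choices:

```r
# Simulate fake 2PL data with known true parameters so the sampler's
# estimates can later be checked against them.
set.seed(314159)
N <- 100  # persons
J <- 10   # items

theta <- rnorm(N, 0, 1)     # true person abilities
a     <- rlnorm(J, 0, 0.3)  # true item discriminations (kept positive)
b     <- rnorm(J, 0, 1)     # true item difficulties

# N x J matrix of P(correct) = logistic(a_j * (theta_i - b_j)),
# then 0/1 responses drawn from those probabilities.
prob <- plogis(sweep(outer(theta, b, "-"), 2, a, "*"))
resp <- matrix(rbinom(N * J, 1, prob), nrow = N, ncol = J)
```

If the sampler is correct, posterior summaries computed from `resp` should recover `theta`, `a`, and `b` up to Monte Carlo error.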

Post 1: A Bayesian 2PL IRT model

October 4, 2013

In this post, we define the Two-Parameter Logistic (2PL) IRT model, derive the complete conditionals that will form the basis of the sampler, and discuss our choice of prior specification. We can find the appropriate values numerically in R …
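The heart of the 2PL model is its item response function: the probability that person p answers item j correctly is the logistic function of a_j * (theta_p - b_j), where theta is ability, a is discrimination, and b is difficulty. A one-line R sketch (the function name is mine, not the post's):

```r
# P(y = 1 | theta, a, b) under the 2PL IRT model.
p_2pl <- function(theta, a, b) {
  plogis(a * (theta - b))
}

# A person of average ability on an item of average difficulty
# answers correctly half the time.
p_2pl(theta = 0, a = 1, b = 0)  # 0.5
```

Everything later in the series (the complete conditionals, the fake data, the samplers) is built on this probability.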
