(This article was first published on **BMS Add-ons » BMS Blog**, and kindly contributed to R-bloggers)

Capturing the dependence between explanatory variables in the posterior distribution is a crucial part of a Bayesian analysis. Taking such dependence into account reveals how sensitive the posterior distributions of the parameters are to dependency across regressors. For instance, if two variables are complementary over the model space, we expect relatively higher weights for models that include both variables together than for models where only one of them is present. Statistically proper measures of jointness and dependence among variables in a Bayesian framework were first introduced by Doppelhofer & Weeks (2009) and Ley & Steel (2009).

According to their findings, although the marginal posterior inclusion probability of a variable has interesting implications for policy and prediction, it provides little information on the nature of dependencies among regressors. A positive value of the jointness statistic indicates that variables x and z are complements: posterior probabilities of models that jointly include or jointly exclude both variables are larger than those of models including only one of them. A negative value, on the other hand, indicates that the variables capture a similar underlying effect and are called substitutes. I refer the reader to Doppelhofer & Weeks (2009) and Ley & Steel (2009) for more details.

Reference:

Doppelhofer, G. & Weeks, M. (2009), ‘Jointness of growth determinants’, Journal of Applied Econometrics 24(2), 209–244.
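To make the three measures concrete, here is a small numerical sketch with made-up posterior model probabilities (the variable names x and z and the probability values are illustrative only, not taken from the growth dataset):

```r
## Toy posterior model probabilities over the four inclusion patterns
## for a pair of variables x and z (hypothetical numbers):
tl <- 0.4  # mass on models including both x and z
tr <- 0.3  # mass on models excluding both
bl <- 0.2  # mass on models including only z
br <- 0.1  # mass on models including only x

dw  <- log((tl * tr) / (bl * br))  # Doppelhofer-Weeks cross-product ratio
ls1 <- log(tl / (bl + br + tl))    # Ley-Steel jointness
ls2 <- log(tl / (bl + br))         # Ley-Steel alternative

round(c(DW = dw, LS1 = ls1, LS2 = ls2), 3)
```

Here the Doppelhofer-Weeks statistic is positive (log 6 ≈ 1.79), so under that measure x and z act as complements. Note that the measures need not agree in sign, since they weight the four inclusion patterns differently.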

Below I show how the user can employ the BMS package to compute jointness scores and the degree of dependency among the variables of interest. As an empirical example, I replicate the jointness results presented in Doppelhofer & Weeks (2009) using BMS. To start, download their dataset from http://qed.econ.queensu.ca/jae/2009-v24.2/doppelhofer-weeks/ and put it in the working directory where you have saved your R code. The dataset is named "jointness_data.txt". The next section provides a jointness function, called "jointness.score", that generates the jointness scores.

## Calling the required libraries

```r
library(BMS)
```

## Reading & cleaning data

```r
j.data <- read.table(file = "jointness_data.txt", na.strings = ".", header = FALSE)
j.data <- na.omit(j.data)
```

# Jointness function

```r
###############################################################################
## method = "DW":  Doppelhofer & Weeks' (2009) jointness statistic, presented
##                 in equation (25), page 218 of their paper.
## method = "LS1": Ley & Steel's (2009) jointness statistic, page 249.
## method = "LS2": an alternative jointness proposed by Ley & Steel (2009),
##                 page 249.
## See the BMS manual for the other arguments of this function, including
## iter, burn, nmodel, and mprior.
###############################################################################
jointness.score <- function(jdata, method = c("DW", "LS1", "LS2"), burn = 1000,
                            iter = 200000, nmodel = 10000, mprior = "uniform",
                            force.full.ols = TRUE) {

  method <- match.arg(method)

  jointness.bma <- bms(jdata, burn = burn, iter = iter, nmodel = nmodel,
                       mprior = mprior, force.full.ols = force.full.ols)

  j.pips <- coef(jointness.bma)[, 1:3]          # posterior inclusion probabilities
  top.includes <- topmodels.bma(jointness.bma)  # inclusion matrix of the top models
  cov.nam <- head(row.names(top.includes), -2)  # drop the two bottom PMP rows
  cov.no <- length(cov.nam)

  pmps <- pmp.bma(jointness.bma)[, 2]
  pmps <- pmps / sum(pmps)                      # renormalise the model probabilities

  jointness <- matrix(0, cov.no, cov.no)

  for (i in 1:cov.no) {
    for (j in 1:cov.no) {
      if (i == j) next
      ## inclusion indicators of variables i and j across the top models
      selected.includes <- top.includes[c(i, j), ]
      tl <- sum(pmps[colSums(selected.includes) == 2])  # both included
      tr <- sum(pmps[colSums(selected.includes) == 0])  # both excluded
      bl <- sum(pmps[selected.includes[1, ] == 0 & selected.includes[2, ] == 1])  # only j
      br <- sum(pmps[selected.includes[1, ] == 1 & selected.includes[2, ] == 0])  # only i

      if (method == "DW") {
        ## Doppelhofer-Weeks jointness statistic
        jointness[i, j] <- log((tl * tr) / (bl * br))
      } else if (method == "LS1") {
        ## Ley-Steel jointness statistic
        jointness[i, j] <- log(tl / (bl + br + tl))
      } else if (method == "LS2") {
        ## Ley-Steel jointness statistic, alternative version
        jointness[i, j] <- log(tl / (bl + br))
      }
    }
  }

  colnames(jointness) <- 1:cov.no
  rownames(jointness) <- cov.nam
  jointness <- round(jointness, 1)
  diag(jointness) <- "."
  jointness <- as.data.frame(jointness)
  return(jointness)
}
```

## Replicating Doppelhofer & Weeks (2009) results using BMS:

```r
jointness.score(j.data, method = "DW", burn = 1000, iter = 200000,
                nmodel = 10000, mprior = "uniform", force.full.ols = TRUE)
```
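The function returns a data frame with one row and one column per regressor and the chosen log jointness score in each off-diagonal cell. As a sketch of how such a matrix can be read off (using a toy numeric matrix in place of the real output, since the actual scores depend on the MCMC run):

```r
## Toy stand-in for a jointness matrix: positive entries mark complements,
## negative entries mark substitutes (values are illustrative only).
jm <- matrix(c(  NA,  2.1, -1.3,
                2.1,   NA,  0.4,
               -1.3,  0.4,   NA),
             nrow = 3, byrow = TRUE,
             dimnames = list(c("x1", "x2", "x3"), c("x1", "x2", "x3")))

pairs  <- which(upper.tri(jm), arr.ind = TRUE)  # each unordered pair once
scores <- jm[pairs]
labels <- paste(rownames(jm)[pairs[, 1]], colnames(jm)[pairs[, 2]], sep = " & ")

complements <- labels[scores > 0]  # pairs that tend to enter models together
substitutes <- labels[scores < 0]  # pairs proxying a similar underlying effect
```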
