I’m anticipating presenting research of mine based on Bayesian graphical models to an audience that might not be familiar with them. When presenting ordinary regression results, there are already the usual statistical sniper questions, along the lines of “What if the effect is actually being driven by this other correlate?” or “That effect might result from assumptions a, b, and c of the test.” Sometimes these questions are useful, but sometimes they detract from the substantive issues at hand. And frequently, I see presenters get way too bogged down in anticipating questions like this, cramming so much statistical detail into the talk that too little time is left to do justice to the theoretical importance of their results.
Add to this the customizability of graphical models, the number of possible distributions and parameter settings, and the notion that “Bayesian” = “subjective”, and I’m really feeling stressed out by the presentational task ahead of me.
So, I’m trying to figure out a good way to make the model I’ve built fully available and accessible to someone who can’t read JAGS code, add a little bit of presentational pizzazz, and still be able to focus in on the parameters of specific interest. I started off trying to use Graphviz to produce directed graphs, and wound up with this (an actual level in the model I’m hoping to present).
It’s all a ton of spaghetti: it’s difficult to highlight the particular parameters of interest, and it doesn’t represent some important distinctions (like that between stochastic and deterministic nodes).
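For what it’s worth, Graphviz’s node attributes can carry some of that missing information. Here’s a minimal DOT sketch of the idea, with invented node names standing in for my actual parameters: stochastic nodes as solid ellipses, deterministic nodes with dashed borders (one common convention in graphical-model doodles), and a grey fill calling out the parameter of interest.

```dot
digraph token_level {
    rankdir = TB;

    // stochastic nodes: solid ellipses (node names are hypothetical)
    node [shape = ellipse, style = solid];
    mu_speaker; beta_context; y_token;

    // deterministic nodes: dashed borders to mark the distinction
    node [shape = ellipse, style = dashed];
    mu_token;

    // shade the parameter of specific interest
    beta_context [style = filled, fillcolor = lightgrey];

    mu_speaker   -> mu_token;
    beta_context -> mu_token;
    mu_token     -> y_token;
}
```

The same attribute tricks (fill color, pen width, edge style) could probably de-spaghettify the real graph too, at least enough to guide the audience’s eye.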
It’s getting there, but I’m not convinced yet that it’ll do the job of making the whole model digestible. For one, I’m modeling effects at a few different levels. The token level is represented in this visualization, but I’m also looking at speaker-level effects, treating the linguistic context as a within-speaker variable, and at word-level effects. The way I’m setting things up now, that’s going to call for two more trees like this one.
Maybe the lesson here is that I should just fit and present a simpler model, but remember those sniper questions? I’m worried that if I leave out someone’s favorite correlate, 1) I’ll have to deal with it in the questions, and 2) they’ll leave unconvinced, or rather, they’ll leave convinced that it was their favorite correlate doing the work all along. But these are really research anxieties that no visualization toolkit on earth could assuage.