Here are the slides from the Introduction to R session Danny Kaye and I ran at the BPS Mathematics, Statistics & Computing section CPS Workshop (13 December 2010, Nottingham Trent University).

I was recently reading a bit about logistic regression in a book on hierarchical/multilevel modeling when I first learned about the "divide by 4 rule" for quickly interpreting coefficients in a logistic regression model in terms of the predicted probabilities of the outcome. The idea is pretty simple. The logistic curve (predicted probabilities) is steepest at the center where...
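The rule can be checked numerically: for a logistic curve p(x) = 1/(1 + exp(-(a + b·x))), the slope of the predicted probability is b·p·(1 − p), which is largest where p = 0.5 and there equals exactly b/4. A minimal sketch in R (the coefficient values here are made up for illustration):

```r
# Divide-by-4 rule: the logistic curve is steepest at its centre,
# where its slope equals beta/4. So beta/4 is an upper bound on the
# change in predicted probability per unit change in x.

invlogit <- function(x) 1 / (1 + exp(-x))

a <- -1.0  # hypothetical intercept
b <-  0.8  # hypothetical slope coefficient

# Slope of the predicted probability at a given x: b * p * (1 - p)
slope <- function(x) {
  p <- invlogit(a + b * x)
  b * p * (1 - p)
}

# The slope peaks where p = 0.5, i.e. at x = -a/b, and equals b/4 there
x_mid <- -a / b
slope(x_mid)  # same value as b / 4
```

Away from the centre the true change in probability is smaller than b/4, which is why the rule gives an upper bound rather than an exact conversion.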

One of the current best tools in the machine learning toolbox is the 1930s statistical technique called logistic regression. We explain how to add professional-quality logistic regression to your analytic repertoire and describe a bit beyond that. A statistical analyst working on data tends to deliberately start simple and move cautiously to more complicated methods. Related posts:
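As a minimal illustration of the technique the post covers, here is a logistic regression fitted with base R's glm(); the data are simulated for the sketch, not taken from the post:

```r
# Minimal logistic regression in R; simulated data for illustration only.
set.seed(1)
n <- 200
x <- rnorm(n)
p <- 1 / (1 + exp(-(0.5 + 1.2 * x)))  # assumed true model: logit(p) = 0.5 + 1.2x
y <- rbinom(n, size = 1, prob = p)

# Fit via maximum likelihood with the binomial family (logit link by default)
fit <- glm(y ~ x, family = binomial)
coef(fit)  # estimates of the intercept and slope

# Predicted probability of the outcome at x = 0, on the response scale
predict(fit, newdata = data.frame(x = 0), type = "response")
```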

Please note - I’ve spotted a problem with the approach taken in this post: it seems to underestimate power in certain circumstances. I’ll post again with a correction or a fuller explanation once I’ve sorted it out. So, I posted an answer on Cross Validated regarding logistic regression. I thought I’d post it

Previously, I calculated a bunch of ad-hoc power curves from GISTEMP data. Power is essentially a reframing of the p-value, used here to assess the significance of the trend lines in the global temperatures. However, power calculations are inherently very noisy; hence my ad-hoc way of aggregating the data. Another method is to bootstrap through the responses
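A bootstrap power estimate along those lines can be sketched as follows: resample the residuals of a fitted trend, re-impose the fitted line, and count how often the refitted slope is significant. The series below is simulated (it is not the GISTEMP data), and the trend size and noise level are assumptions for the sketch:

```r
# Sketch: estimating power to detect a linear trend by bootstrapping
# the residuals of a fitted regression. All data are simulated.
set.seed(42)
n     <- 30
x     <- 1:n
trend <- 0.02                    # assumed trend per time step
y     <- trend * x + rnorm(n, sd = 0.25)

fit   <- lm(y ~ x)
res   <- residuals(fit)

n_boot <- 500
detect <- replicate(n_boot, {
  # Resample residuals with replacement and add them back to the fitted line
  y_star <- fitted(fit) + sample(res, n, replace = TRUE)
  f_star <- lm(y_star ~ x)
  # Was the slope significant at the 5% level in this resample?
  summary(f_star)$coefficients["x", "Pr(>|t|)"] < 0.05
})

mean(detect)  # estimated power: proportion of resamples with p < 0.05
```

Because each bootstrap replicate is a noisy yes/no outcome, the estimate itself is noisy, which is the point the post makes about aggregating.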
