I've found bootstrapping useful in several settings:
- when the statistic I'm interested in is a little unusual: for example, the average R-squared across five separate regressions, or the difference between two groups in the average correlation among a set of variables
- non-parametric statistics, such as the median
- when assumptions such as normality or homoscedasticity are not satisfied
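To make the first point concrete, here is a minimal sketch of bootstrapping one of those "unusual" statistics: the difference between two groups in the average pairwise correlation among a set of variables. The data and variable names are entirely made up for illustration.

```r
library(boot)

# Illustrative data: two groups, three variables
set.seed(1)
dat <- data.frame(
  group = rep(c("A", "B"), each = 50),
  v1 = rnorm(100), v2 = rnorm(100), v3 = rnorm(100)
)

# The statistic: difference in the average off-diagonal correlation
# between the two groups, computed on the resampled rows
avg_cor_diff <- function(data, indices) {
  d <- data[indices, ]
  mean_cor <- function(g) {
    m <- cor(d[d$group == g, c("v1", "v2", "v3")])
    mean(m[lower.tri(m)])
  }
  mean_cor("A") - mean_cor("B")
}

# strata keeps the group sizes fixed across resamples
b <- boot(dat, avg_cor_diff, R = 1000, strata = factor(dat$group))
boot.ci(b, type = "perc")  # percentile confidence interval
```

There is nothing special about this statistic from boot's point of view: any function that takes the data plus an index vector and returns a number can be bootstrapped the same way.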
R is very cool for bootstrapping. I’ve mainly used the boot package and found it very good. In fact, it is a classic example of something that R makes easy. It's easy to run loops in R, and R is excellent at taking output from one function and using it as input to another. This is the essence of bootstrapping: taking different samples of your data, getting a statistic for each sample (e.g., the mean, median, correlation, regression coefficient, etc.), and using the variability in the statistic across samples to indicate something about the standard error and confidence intervals for the statistic.
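The pattern described above can be sketched with the boot package in a few lines. Here I bootstrap the median of some made-up skewed data; the statistic function's job is simply to take the data and an index vector (the resample) and return the statistic.

```r
library(boot)

# boot calls this with the data and an index vector for each resample
median_fun <- function(data, indices) {
  median(data[indices])
}

set.seed(123)
x <- rexp(100)  # illustrative skewed data

# 2000 bootstrap resamples of the median
b <- boot(data = x, statistic = median_fun, R = 2000)

b                                     # bootstrap estimate of bias and standard error
boot.ci(b, type = c("perc", "bca"))   # percentile and BCa confidence intervals
```

Swapping in a different statistic means changing only `median_fun`; the resampling and confidence-interval machinery stays the same.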
- Quick-R has a good introduction to the boot package
- Here's another introduction to the boot package
- And another
- For bootstrapping in the context of regression, see John Fox's article
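In the regression setting, the same boot pattern applies: refit the model on each resample of the cases and return the coefficient of interest. A minimal sketch, using the built-in mtcars data purely for illustration:

```r
library(boot)

# Refit the regression on each case resample and return the slope
coef_fun <- function(data, indices) {
  fit <- lm(mpg ~ wt, data = data[indices, ])
  coef(fit)["wt"]
}

set.seed(42)
b <- boot(data = mtcars, statistic = coef_fun, R = 2000)

boot.ci(b, type = "perc")  # percentile CI for the slope
```

This is case (row) resampling; resampling residuals instead is another common option, and Fox's article discusses the trade-offs between the two.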
You can also do bootstrapping in SPSS. I seem to remember a Python add-on package designed to make bootstrapping in SPSS easier. I've never used it, and I doubt it would be as easy as R, given how difficult it is in SPSS to take output and process it further programmatically (even if OMS is trying to make this easier). For certain specific tests you may be able to find ready-made macros (e.g., for indirect effects).