If you’ve read any of my previous posts, you know

What is Random? As previously discussed, there’s no universal measure of randomness. Randomness implies the lack of pattern and the inability to predict future outcomes. However, the lack of an obvious model doesn’t imply randomness any more than a curve-fit one implies order. So what actually constitutes randomness, how can we quantify it, and why do we care? Randomness $\neq$ Volatility, and Predictability $\neq$ Profit First...
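One common way to put a number on (one narrow aspect of) randomness is Shannon entropy of the symbol frequencies in a sequence. The sketch below is purely illustrative and not from the post; the function name is my own.

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy (in bits) of the empirical symbol distribution of seq."""
    n = len(seq)
    counts = Counter(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

low = shannon_entropy("AAAAAAAA")   # 0.0 bits: one symbol, fully predictable
high = shannon_entropy("ABABABAB")  # 1.0 bit: two equally frequent symbols

# Note the catch: this first-order measure looks only at symbol frequencies,
# so the perfectly predictable "ABABABAB" still scores a full bit -- one
# reason no single universal measure of randomness exists.
```

The caveat in the final comment is exactly the teaser's point: a high score on one randomness metric does not rule out structure that a different model would capture.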


Partial least squares (PLS) is a versatile algorithm that can be used to predict either continuous or discrete/categorical variables. Classification with PLS is termed PLS-DA, where the DA stands for discriminant analysis. The PLS-DA algorithm has many favorable properties for dealing with multivariate data; one of the most important of which is how variable collinearity is
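To make the collinearity point concrete, here is a minimal PLS-DA sketch (not the post's own code; the function name and the NIPALS variant used are my assumptions): class labels are one-hot encoded and regressed on a few latent components, so heavily correlated predictors are compressed rather than inverted directly.

```python
import numpy as np

def pls_da(X, y, n_components=2, n_iter=100):
    """Minimal PLS-DA sketch: NIPALS PLS2 regression on one-hot class labels.

    Illustrative only -- no convergence tolerance or rank checks, so keep
    n_components well below the rank of X.
    """
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)   # one-hot targets
    x_mean, y_mean = X.mean(0), Y.mean(0)
    Xc, Yc = X - x_mean, Y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        u = Yc[:, 0]
        for _ in range(n_iter):
            w = Xc.T @ u
            w /= np.linalg.norm(w)        # X weights (unit length)
            t = Xc @ w                    # X scores (latent component)
            q = Yc.T @ t / (t @ t)        # Y loadings
            u = Yc @ q / (q @ q)
        p = Xc.T @ t / (t @ t)            # X loadings
        Xc = Xc - np.outer(t, p)          # deflate before next component
        Yc = Yc - np.outer(t, q)
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = map(np.column_stack, (W, P, Q))
    B = W @ np.linalg.solve(P.T @ W, Q.T)  # regression coefficients
    def predict(X_new):
        scores = (X_new - x_mean) @ B + y_mean
        return classes[np.argmax(scores, axis=1)]
    return predict
```

Because the regression runs on a handful of scores `t` rather than on the raw (possibly collinear) columns of `X`, no ill-conditioned covariance matrix ever has to be inverted, which is the property the excerpt alludes to.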

Any instrumental variables (IV) estimator relies on two key assumptions in order to identify causal effects: that the excluded instrument or instruments only affect the dependent variable through their effect on the endogenous explanatory variable or variables (the exclusion restriction), and that the correlation between the excluded instruments and the endogenous explanatory variables is strong enough
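A small simulation makes the two assumptions tangible (a sketch with made-up numbers, not from the post): the instrument `z` moves `x` but touches `y` only through `x`, while an unobserved confounder `u` biases naive OLS.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
z = rng.normal(size=n)                  # excluded instrument
u = rng.normal(size=n)                  # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)    # endogenous regressor (driven by z and u)
y = 2.0 * x + u + rng.normal(size=n)    # outcome; true causal effect of x is 2.0

# Naive OLS is biased upward because x and the error term share u.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# 2SLS: first stage regresses x on z; second stage uses the fitted values,
# which carry only the z-induced (exogenous) variation in x.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
X_hat = np.column_stack([np.ones(n), x_hat])
beta_iv = np.linalg.lstsq(X_hat, y, rcond=None)[0]
```

The exclusion restriction holds here by construction (`z` appears in `y` only via `x`), and instrument strength corresponds to the 0.8 first-stage coefficient; shrink that toward zero and the IV estimate degrades — the weak-instrument problem the excerpt goes on to discuss.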

The title of this book Informative Hypotheses somehow put me off from the start: the author, Herbert Hoijtink, seems to distinguish between informative and uninformative (deformative? disinformative?) hypotheses. Namely, something like H0: μ1=μ2=μ3=μ4 is “very informative” and the alternative Ha is completely uninformative, while the “alternative null” H1: μ1<μ2=μ3<μ4 is informative. (Hence the < signs on

The ideas for most of my blogs usually come from half-baked attempts to create some neat or useful feature that hasn’t been implemented in R. These ideas might come from some analysis I’ve used in my own research or from some other creation meant to save time. More often than not, my blogs are motivated

Today a new version (0.23.1) of the WRS package (Wilcox’ Robust Statistics) has been released. This package is the companion to his rather exhaustive book on robust statistics, “Introduction to Robust Estimation and Hypothesis Testing” (Amazon Link de/us). For a fail-safe installation of the package, follow these instructions. As a guest post, Rand Wilcox describes