
A Thumbnail History of Ensemble Methods

March 25, 2014

By Mike Bowles. Ensemble methods are among the backbone techniques of machine learning. The subject can be daunting for someone approaching it for the first time, however, so we asked Mike Bowles, machine learning expert and serial entrepreneur, to provide some context. Ensemble Methods are among the most powerful and easiest to use of predictive analytics algorithms and R...

Read more »
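To make the "easiest to use" claim concrete before clicking through, here is a minimal hedged sketch of one popular ensemble method in R; the randomForest package and the MASS::Boston data are our own illustrative choices, not anything taken from Bowles' post.

```r
# Minimal ensemble example: a random forest (an ensemble of bagged trees).
# Assumes install.packages(c("randomForest", "MASS")) has been run.
library(randomForest)

set.seed(42)
fit <- randomForest(medv ~ ., data = MASS::Boston, ntree = 500)
print(fit)              # out-of-bag MSE estimates generalization error
head(importance(fit))   # variable importance aggregated over the trees
```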

Wright Map Tutorial – Part 3

March 25, 2014

In this part of the tutorial, we’ll show how to load ConQuest output to make a CQmodel object and then WrightMaps. We’ll also show how to turn deltas into thresholds. All the example files here are available in the /inst/extdata folder of the GitHub repository. If you download the latest version of the package, they should be in a folder...

Read more »
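As a companion to the excerpt, here is a hedged sketch of the workflow it describes, assuming the WrightMap package; the ConQuest output file names below are hypothetical placeholders, not files from the tutorial.

```r
library(WrightMap)

# Parse ConQuest output into a CQmodel object; "ex.eap" (person estimates)
# and "ex.shw" (show file) are hypothetical file names.
model <- CQmodel(p.est = "ex.eap", show = "ex.shw")

# Turn the item deltas into thresholds, then draw the person/item map.
thresholds <- make.thresholds(model)
wrightMap(model)
```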

Stochastic search variable selection in JAGS

March 22, 2014

Stochastic search variable selection (SSVS) identifies promising subsets of multiple regression covariates via Gibbs sampling (George and McCulloch 1993). Here’s a short SSVS demo with JAGS and R. Assume we have a multiple regression problem: We suspect only a subset of the elements of $\boldsymbol{\beta}$ are non-zero, i.e., some of the covariates have no effect. Assume $\boldsymbol{\beta}$ arises from...

Read more »
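For readers who want something runnable before clicking through, here is a hedged sketch of indicator-based variable selection in JAGS via rjags. It uses the Kuo–Mallick indicator formulation rather than George and McCulloch's exact mixture prior, and all data, names, and priors are illustrative.

```r
library(rjags)  # assumes JAGS itself is installed

model_string <- "
model {
  for (i in 1:n) {
    y[i] ~ dnorm(inprod(X[i, ], beta), tau)
  }
  for (j in 1:p) {
    z[j] ~ dbern(0.5)        # inclusion indicator for covariate j
    b[j] ~ dnorm(0, 0.01)    # slab prior on the raw coefficient
    beta[j] <- z[j] * b[j]   # z[j] = 0 forces beta[j] to zero
  }
  tau ~ dgamma(0.1, 0.1)
}"

# Simulated data in which only the first two covariates matter.
set.seed(1)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- as.vector(X %*% c(2, -1, 0, 0, 0) + rnorm(n))

jm <- jags.model(textConnection(model_string),
                 data = list(y = y, X = X, n = n, p = p), quiet = TRUE)
post <- coda.samples(jm, c("beta", "z"), n.iter = 5000)
summary(post)  # posterior mean of z[j] estimates its inclusion probability
```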

A simple array class with specialized linear algebra routines

March 20, 2014

Currie, Durban and Eilers write: Data with an array structure are common in statistics, and the design or regression matrix for analysis of such data can often be written as a Kronecker product. Factorial designs, contingency tables and smoothing of data on multidimensional grids are three such general classes of data and models. In such a setting, we develop an arithmetic...

Read more »
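The key identity behind such array methods is easy to demonstrate. Here is a small sketch, with made-up matrices, of how a Kronecker-structured design matrix can be applied without ever forming the full product.

```r
# vec(B1 %*% Theta %*% t(B2)) equals (B2 %x% B1) %*% vec(Theta), so the
# grid design matrix B2 %x% B1 never has to be built explicitly.
set.seed(1)
B1 <- matrix(rnorm(4 * 3), 4, 3)   # row-direction basis (4 grid rows)
B2 <- matrix(rnorm(5 * 2), 5, 2)   # column-direction basis (5 grid cols)
Theta <- matrix(rnorm(3 * 2), 3, 2)

naive <- (B2 %x% B1) %*% as.vector(Theta)   # build 20 x 6 matrix, multiply
fast  <- as.vector(B1 %*% Theta %*% t(B2))  # same result, far less work
all.equal(as.vector(naive), fast)           # TRUE
```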

Why multiple imputation?

March 20, 2014

Background: In the forthcoming week, I will be giving a presentation on the fundamentals of imputation to my colleagues. One of the most important ideas I would like to present is multiple imputation. In my last post, I have...

Read more »
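To make the idea concrete, here is a minimal hedged sketch of multiple imputation in R using the mice package and its built-in nhanes example data; the tooling is our choice for illustration, not necessarily the author's.

```r
library(mice)

# Create m = 5 completed datasets, fit the same model to each,
# and pool the results with Rubin's rules.
imp <- mice(nhanes, m = 5, seed = 1, printFlag = FALSE)
fits <- with(imp, lm(bmi ~ age + chl))
summary(pool(fits))
```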

Secrets of Teaching R

March 18, 2014

by James Paul Peruvankal, Senior Program Manager at Revolution Analytics. At Revolution Analytics, we are always interested in how people teach and learn R, and what makes R so popular yet ‘quirky’ to learn. To get some insight from a real pro, we interviewed Bob Muenchen. Bob is the author of R for SAS and SPSS Users and, with...

Read more »

Bayesian First Aid: Pearson Correlation Test

March 17, 2014

Correlation does not imply causation, sure, but as Edward Tufte writes, “it sure is a hint.” The Pearson product-moment correlation coefficient is perhaps one of the most common ways of looking for such hints, and this post describes the Bayesian First Aid alternative to the classical Pearson correlation test. Except for being based on Bayesian estimation (a good...

Read more »
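Here is a hedged sketch of the drop-in usage pattern the post describes, assuming the BayesianFirstAid package installed from GitHub (devtools::install_github("rasmusab/bayesian_first_aid"), which also requires JAGS); the data are simulated for illustration.

```r
library(BayesianFirstAid)

set.seed(7)
x <- rnorm(30)
y <- 0.5 * x + rnorm(30, sd = 0.8)

cor.test(x, y)                 # classical Pearson correlation test
fit <- bayes.cor.test(x, y)    # the Bayesian First Aid alternative
fit
plot(fit)                      # posterior distribution of the correlation
```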

Fast computation of cross-validation in linear models

March 17, 2014

The leave-one-out cross-validation statistic is given by $\text{CV} = \frac{1}{N} \sum_{i=1}^{N} e_{[i]}^2$, where $e_{[i]} = y_i - \hat{y}_{[i]}$, the $y_i$ are the observations, and $\hat{y}_{[i]}$ is the predicted value obtained when the model is estimated with the $i$th case deleted. This is also sometimes known as the PRESS (Prediction Residual Sum of Squares) statistic. It turns out that for linear models, we do not actually have to estimate the...

Read more »
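The shortcut the post refers to can be sketched in a few lines: for a linear model, each leave-one-out residual equals the ordinary residual scaled by its leverage, so a single fit suffices. The dataset below is our own illustrative choice.

```r
# For lm fits, e_[i] = e_i / (1 - h_i), where h_i are the hat values,
# so CV can be computed from one model fit instead of n refits.
fit <- lm(dist ~ speed, data = cars)
h <- hatvalues(fit)
loo_resid <- residuals(fit) / (1 - h)
cv <- mean(loo_resid^2)   # leave-one-out cross-validation statistic
cv

# Check against the brute-force approach of refitting n times.
n <- nrow(cars)
brute <- mean(sapply(seq_len(n), function(i) {
  f <- lm(dist ~ speed, data = cars[-i, ])
  (cars$dist[i] - predict(f, cars[i, ]))^2
}))
all.equal(cv, brute)  # TRUE
```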
