I like running fortune every time a terminal is started. A screenshot is shown below. At the end of the post “trick things in R”, I introduced the fortunes package of R.
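One way to reproduce this trick (a minimal sketch, assuming the fortunes package is installed) is to call fortunes::fortune() from ~/.Rprofile, which R sources at the start of every interactive session:

```r
# ~/.Rprofile — sourced automatically when an interactive R session starts.
# Assumes the fortunes package was installed with install.packages("fortunes").
if (interactive() && requireNamespace("fortunes", quietly = TRUE)) {
  print(fortunes::fortune())  # print one random R fortune at startup
}
```

The requireNamespace() guard keeps startup from failing on machines where the package is missing.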

I’m currently visiting Taiwan and I’m giving two seminars while I’m here — one at the National Tsing Hua University in Hsinchu, and the other at Academia Sinica in Taipei. Details are below for those who might be nearby. Automatic Time Series Forecasting College of Technology Management, Institute of Service Science, National Tsing Hua University,

by Joseph Rickert In a recent post, where I presented some R-related highlights of November's H2O World conference, I singled out and described talks by Trevor Hastie and John Chambers and remarked that it would be nice if the videos were made available. Well, thanks to the generosity of the folks at H2O, I got my wish....

Logistic Regression Continued I'm finally getting back to tackling the Titanic competition. In my last entry, I had started with some basic models (only females live, only 1st- and 2nd-class females live, etc.), and then moved on to logistic regression. My logistic regression model at the time was not performing that well, but I was also only using four...
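For context, a minimal glm() sketch of such a model — the predictor choice here is hypothetical, and it assumes the standard Kaggle train.csv with Survived, Pclass, Sex, Age, and Fare columns:

```r
# Logistic regression sketch on the Kaggle Titanic training data.
# Assumes train.csv is in the working directory with the standard columns.
train <- read.csv("train.csv")
train$Sex    <- factor(train$Sex)
train$Pclass <- factor(train$Pclass)

# family = binomial gives logistic regression; glm() drops rows with NA Age.
fit <- glm(Survived ~ Sex + Pclass + Age + Fare,
           data = train, family = binomial)
summary(fit)

# In-sample predicted survival probabilities for the rows glm() kept:
head(fitted(fit))
```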

What I like most about the R and Python developer and user communities is their incredible openness and generosity. One of the finest examples in the past year was the online course “Statistical Learning” taught by Stanford professors Trevor Hastie and Rob Tibshirani. In this MOOC they explain very accessibly (even

I’ve posted a new release of ggRandomForests: Visually Exploring Random Forests to CRAN (http://cran.r-project.org/package=ggRandomForests). The biggest news is the inclusion of some holiday reading: a ggRandomForests package vignette! ggRandomForests: Visually Exploring a Random Forest for Regression The vignette…

I've been doing some classification with logistic regression in brain imaging recently. I have been using the ROCR package, which is helpful for estimating performance measures and plotting them over a range of cutoffs. The prediction and performance functions are the workhorses of most of the ROCR analyses I've been doing. For those
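The two workhorse calls can be sketched on toy data — the labels and scores below are simulated stand-ins for a real classifier's output, not from the imaging analysis itself:

```r
library(ROCR)

# Simulated two-class problem: scores loosely track the true labels.
set.seed(1)
labels <- rbinom(200, 1, 0.5)
scores <- labels + rnorm(200)

pred <- prediction(scores, labels)       # bundle scores and labels
perf <- performance(pred, "tpr", "fpr")  # TPR vs FPR over all cutoffs (ROC curve)
plot(perf)

# A scalar summary from the same prediction object:
performance(pred, "auc")@y.values[[1]]
```

Swapping the measure strings (e.g. "prec", "rec", "acc") gives the other cutoff-dependent curves ROCR supports.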

As John mentioned in his last post, we have been quite interested in the recent study by Fernandez-Delgado et al., “Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?” (the “DWN study” for short), which evaluated 179 popular implementations of common classification algorithms over 120 or so data sets, mostly from the UCI …

