My first competition at Kaggle
For me, Kaggle has become a social network for data scientists, just as stackoverflow.com or github.com are for programmers. If you are a data scientist, machine learner or statistician, you are better off having a profile there; otherwise you do not exist.
Nevertheless, I won’t bet on the rosy future for data scientists that journalists suggest (the sexy job of the next decade). For sure, the demand for such specialists is on the rise. However, I see one big threat for data scientists: Kaggle and similar service providers. You see, such services allow companies to tap high-end data scientists (think of a PhD in hard science) at a minuscule fraction of the real price. Think of the Hollywood business model: the top players get the majority of the pool and the rest are starving. If you tried the same service model on IT projects, you would most likely get burned. My reasoning may be wrong, but I suspect that project timespan is the issue: IT projects can take a while to finish (1-10 years), but a mainstream ML project won’t take that long.
Notwithstanding these obstacles, machine learning, information retrieval, data mining and the like are a must, together with the ability to write production code, deal with streaming big data and cope with the performance of an intelligent system. Then, in programmers’ parlance, you will become a “data scientist ninja” and every company will die for you. There is a good post on the subject on the mikiobraun blog, but mind you, it is a bit controversial.
Although I have often worked on financial models and time series over the last 4 years, this competition gave me new experience and a hunger for knowledge. During the competition I found this book very practical and full of ideas about what to do next: Data Mining: Practical Machine Learning Tools and Techniques. As a complementary book I used Data Mining: Concepts and Techniques, though most of the information can be found in either one of them. I will try to summarize some chapters in my own story.
Understanding the data. The “Online Product Sales” competition metadata (data about data) is sparse: there are three types of fields (date, categorical and quantitative) plus the response data for the next 12 months. However, metadata is the most important element in any ML project; understanding it well can save you a lot of time, and it leads to a much better forecast if you have “domain knowledge”.
Cleaning data. There is a famous phrase, “garbage in, garbage out”, meaning that before any further action you have to detect and fix incorrect, incomplete or missing data. You have many options for dealing with missing data: remove all rows where data is missing, or replace it with the mean, a regressed value, the nearest value, and so on. If your data is plentiful and values are missing at random (meaning that NA values do not carry any information), just get rid of those rows. Otherwise you need to impute new values based on the mean or another technique. Mean-based replacement worked best for me in this competition.
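As a minimal sketch of mean-based replacement (the data frame and column name here are made up, not the competition files):

```r
# Toy data: a numeric column with missing values
train <- data.frame(Quan_1 = c(3.2, NA, 5.1, 4.8, NA, 6.0))

# Replace NA values with the mean of the observed values
train$Quan_1[is.na(train$Quan_1)] <- mean(train$Quan_1, na.rm = TRUE)
train
```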
Outliers are another source of trouble. Suppose a variable is normally distributed, but a few values lie far away from the center. The easiest solution would be to remove such values, as many do in finance by removing the “crisis period”. Then, when the next crisis hits, journalists rush to learn a new buzzword: black swan. It turns out that outliers can’t be ignored, because their impact is huge. So be cautious when dealing with outliers.
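A simple way to at least flag suspicious values before deciding what to do with them (a rough sketch using the usual 1.5 * IQR rule, not what I used in the competition):

```r
# Mostly normal data plus two extreme points
set.seed(1)
x <- c(rnorm(100), 8, -7)

# Flag values outside the 1.5 * IQR "whiskers" as potential outliers
q   <- quantile(x, c(0.25, 0.75))
iqr <- q[2] - q[1]
outliers <- x < q[1] - 1.5 * iqr | x > q[2] + 1.5 * iqr
sum(outliers)   # how many points were flagged
```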
Feature selection. It was surprising to me that too many features or variables can pollute a forecast, therefore you need to do feature selection. This task can be done manually by checking the correlation matrix, covariance and so on. However, random forests or generalized boosted models can lead to a better selection. In R you just call randomForest() or gbm() and the job is done, as sketched below.
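For example, a minimal sketch of ranking variables by importance with randomForest (the formula and built-in dataset here are placeholders, not the competition data):

```r
library(randomForest)

# Fit a random forest and ask it to track variable importance
set.seed(1)
fit <- randomForest(Ozone ~ ., data = na.omit(airquality), importance = TRUE)

importance(fit)   # %IncMSE and IncNodePurity per variable
varImpPlot(fit)   # visual ranking of the features
```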
Variable transformation is a way to get superior performance. The “Online Product Sales” competition has two date fields, but they are encoded as integers. Transforming these variables into dates and extracting the year and month led to better model performance. In most cases taking the logarithm of numeric fields gives a performance boost. Scaling (from 0 to 1 or from -1 to 1) and centering might be considered when linear models are in use. It is worth transforming categorical variables as well, where 1 means that an observation belongs to the group and 0 otherwise. Check the model.matrix function in R for the latter transformation and the preProcess function in the caret package for numerical variables; a small sketch follows below.
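A short sketch of these transformations on a hypothetical data frame (the column names are invented for illustration):

```r
library(caret)

df <- data.frame(group = factor(c("A", "B", "A", "C")),
                 value = c(10, 200, 35, 70))

# 0/1 (dummy) encoding of the categorical variable
model.matrix(~ group - 1, data = df)

# Centering and scaling of the numeric variable
pp <- preProcess(df["value"], method = c("center", "scale"))
predict(pp, df["value"])

# Log transform often helps with skewed positive values
log1p(df$value)
```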
Validation stage: it helps you measure the performance of the model. If you have a huge dataset to build a model on, you can divide it into two or three parts (for training, testing and cross validation) and you are ready to start. However, if you are not so lucky, other methods come into play. The most popular method is to divide the set into two groups, “training” and “test”, and rotate them, say, 10 times. For example, if you have 100 rows, you take the first 75 for training and the last 25 for testing and check the performance ratio. In the next step you take rows 26 to 100 for training and use the first 25 for testing. Once you repeat this procedure 10 times, you have 10 performance ratios and you take their average.
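A minimal sketch of 10-fold cross validation in base R (a simple linear model on a built-in dataset, just to show the bookkeeping, not my competition setup):

```r
# Assign each row of mtcars to one of 10 folds at random
set.seed(1)
k <- 10
folds <- sample(rep(1:k, length.out = nrow(mtcars)))

# For each fold: train on the rest, test on the fold, record RMSE
rmse <- sapply(1:k, function(i) {
  train <- mtcars[folds != i, ]
  test  <- mtcars[folds == i, ]
  fit   <- lm(mpg ~ wt + hp, data = train)
  sqrt(mean((predict(fit, test) - test$mpg)^2))
})

mean(rmse)   # average performance over the 10 folds
```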
Stratified sampling is a buzzword you should know when you do any sampling: each sample keeps the class proportions of the full dataset, as in the example below.
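For instance, caret’s createDataPartition does a stratified split (the 75/25 ratio and the iris dataset are just illustrative):

```r
library(caret)

# Stratified 75/25 split: class proportions of Species are preserved
set.seed(1)
idx   <- createDataPartition(iris$Species, p = 0.75, list = FALSE)
train <- iris[idx, ]
test  <- iris[-idx, ]
table(train$Species)   # roughly equal counts per class
```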
Keeping all this information in mind, I still wasn’t able to implement accurate cross validation, and my results varied within a 0.05 range.
Model selection and ensembling. Intuitively you want to choose the best performing algorithm, however a mix of them can lead to superior performance. For this regression problem I trained four models (two random forest variants, gbm, svm), made the predictions, averaged the results, and that led to a better prediction. A rough sketch of the idea follows.
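This sketch averages two of the model types mentioned above; the dataset and tuning parameters are stand-ins, not my competition code:

```r
library(randomForest)
library(gbm)

# Two different models on the same regression problem
set.seed(1)
data <- na.omit(airquality)

rf_fit  <- randomForest(Ozone ~ ., data = data)
gbm_fit <- gbm(Ozone ~ ., data = data, distribution = "gaussian",
               n.trees = 500, interaction.depth = 3)

pred_rf  <- predict(rf_fit, data)
pred_gbm <- predict(gbm_fit, data, n.trees = 500)

# Simple average of the two predictions forms the ensemble
pred_ensemble <- (pred_rf + pred_gbm) / 2
```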