The Data Mining Process: Modeling

[This article was first published on ThinkToStart » R Tutorials, and kindly contributed to R-bloggers.]

Happy new year, everyone! Continuing this series on the data mining process, which has previously examined understanding business problems and their associated data as well as data preparation, this post focuses on modeling.

Developing models means applying specific algorithms to explore, recognize, and ultimately surface patterns or themes in your data. Modeling has two broad goals: classification and prediction. Some algorithms specialize in one or the other, while others can be applied to both. Choosing which algorithms to employ depends on the goals of the business, the nature of the data (structured versus unstructured), and the quantity as well as the quality of the data.

Several popular algorithms are commonly used to develop models for specific types of business problems; a short, hedged R sketch of each follows the list:

  • Predicting binary outcomes (yes/no) often relies on logistic regression, which estimates the probability that each person or thing will or will not do something. I use this algorithm frequently when building models for colleges and universities that predict prospective students’ likelihoods of enrolling at the institution.
  • FP-growth is often used to develop association rules. A grocery store might use this algorithm to determine where to place specific items that are frequently bought together, such as milk and cookies. This type of analysis is also referred to as market basket analysis.
  • K-means clustering is typically a good algorithm to try when looking for connections among attributes that make it easier to create groupings of people or things. For example, health professionals might develop models based on observations of patients’ weight, cholesterol, and habits related to eating, smoking, and exercise to create buckets of patients at high, moderate, or low risk for heart disease.
  • Linear regression is popular when companies attempt to predict consumption of their products. For example, a home electricity provider might look at the number of people in a household, past consumption, outdoor temperature patterns, and other variables to predict how much electricity a homeowner is likely to use.
  • Naive Bayes and support vector machine (SVM) algorithms are commonly used for fraud detection and text analytics.
  • Decision trees can be used for many of the tasks mentioned above and are one of the most popular and flexible algorithms for predicting and/or classifying. Decision trees are particularly beneficial for reporting purposes because their visual nature makes it easy for everyone in an organization to understand the relationships among variables and which variables matter most when analyzing various types of business problems.
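
To make the logistic regression bullet concrete, here is a minimal sketch using base R's glm(). The prospects data frame (visits, distance, enrolled) is invented for illustration and is not the enrollment data described above.

# Logistic regression: predict a yes/no outcome as a probability.
# The prospects data frame below is simulated for illustration only.
set.seed(1)
prospects <- data.frame(
  visits   = rpois(200, 2),          # number of campus visits
  distance = runif(200, 5, 500),     # miles from campus
  enrolled = rbinom(200, 1, 0.3)     # 1 = enrolled, 0 = did not enroll
)

fit <- glm(enrolled ~ visits + distance,
           data = prospects, family = binomial)

# Predicted probability of enrolling for each prospective student
prospects$p_enroll <- predict(fit, type = "response")
head(prospects)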
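
For association rules, a common starting point in R is the arules package and its bundled Groceries transactions. Note that arules implements Apriori rather than FP-growth, so Apriori is used here as a stand-in; the output is the same kind of "bought together" rule.

# Association rules (market basket analysis) with the arules package.
# Apriori stands in for FP-growth, which arules does not provide.
library(arules)
data("Groceries")   # transactions bundled with the package

rules <- apriori(Groceries,
                 parameter = list(supp = 0.001, conf = 0.5))

# Strongest "if X then Y" item associations, ranked by lift
inspect(head(sort(rules, by = "lift"), 5))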
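
For the clustering example, the sketch below runs base R's kmeans() on simulated patient-style measurements (weight, cholesterol, exercise); the values are made up, and three clusters stand in for the high, moderate, and low risk buckets.

# K-means clustering: group observations into buckets.
# The patient measurements are simulated for illustration.
set.seed(42)
patients <- data.frame(
  weight      = rnorm(150, 80, 15),    # kg
  cholesterol = rnorm(150, 200, 35),   # mg/dL
  exercise    = rpois(150, 3)          # sessions per week
)

# Scale the variables so no single measurement dominates the distance
km <- kmeans(scale(patients), centers = 3, nstart = 25)

patients$risk_group <- factor(km$cluster)
aggregate(. ~ risk_group, data = patients, FUN = mean)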
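
For linear regression, the sketch below predicts simulated monthly electricity use with base R's lm(); the household variables (residents, prior consumption, average temperature) are invented to mirror the example above.

# Linear regression: predict a continuous quantity (simulated kWh).
set.seed(7)
homes <- data.frame(
  residents  = sample(1:6, 300, replace = TRUE),
  prior_kwh  = rnorm(300, 900, 200),
  avg_temp_f = rnorm(300, 55, 15)
)
homes$kwh <- 200 + 120 * homes$residents + 0.5 * homes$prior_kwh -
             3 * homes$avg_temp_f + rnorm(300, 0, 80)

fit_lm <- lm(kwh ~ residents + prior_kwh + avg_temp_f, data = homes)
summary(fit_lm)

# Predicted usage for a hypothetical three-person household
predict(fit_lm, newdata = data.frame(residents = 3, prior_kwh = 950,
                                     avg_temp_f = 40))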
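
For Naive Bayes and support vector machines, the e1071 package provides both. The sketch below classifies the built-in iris data as a stand-in task, since no fraud or text data set is part of this post.

# Naive Bayes and an SVM from the e1071 package, on the built-in iris data
# (a stand-in classification task, not an actual fraud or text corpus).
library(e1071)

nb      <- naiveBayes(Species ~ ., data = iris)
svm_fit <- svm(Species ~ ., data = iris)

# Compare how each classifier labels the training data
table(predicted = predict(nb, iris), actual = iris$Species)
table(predicted = predict(svm_fit, iris), actual = iris$Species)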
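
Finally, for decision trees, the rpart package (which ships with R) fits and plots a classification tree; again the built-in iris data stands in for a real business problem.

# A classification tree with rpart, using the built-in iris data.
library(rpart)

tree <- rpart(Species ~ ., data = iris, method = "class")

# The printed splits show which variables drive the classification;
# plotting the tree is what makes it easy to explain to non-analysts.
print(tree)
plot(tree); text(tree, use.n = TRUE)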

It is important to note that, while certain algorithms are conventionally paired with certain types of models, there are usually several candidate algorithms that could turn out to be the best option for a given problem. Choosing among them requires evaluation, which will be covered in the next post.
