
Naive Bayes


Introduction

The naive Bayes model is based on the strong assumption that the features are conditionally independent given the class label. Since this assumption is rarely true in practice, the model is termed naive. However, even when the assumption is not satisfied, the model still works very well (Kevin P. Murphy 2012). Using this assumption, we can define the class-conditional density as the product of one-dimensional densities:

\[p(X|y=c,\theta)=\prod_{j=1}^Dp(x_j|y=c,\theta_{jc})\]
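
By Bayes' rule, the class posterior used for prediction is then proportional to the class prior multiplied by this product of one-dimensional densities:

\[p(y=c|X,\theta)\propto p(y=c|\theta)\prod_{j=1}^Dp(x_j|y=c,\theta_{jc})\]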

The appropriate one-dimensional density for each feature depends on the type of that feature:

  • For real-valued features we can use the Gaussian distribution:

\[p(X|y=c,\theta)=\prod_{j=1}^D\mathcal N(x_j|\mu_{jc},\sigma_{jc}^2)\]

  • For binary features we can use the Bernoulli distribution:

\[p(X|y=c,\theta)=\prod_{j=1}^DBer(x_j|\mu_{jc})\]

  • For categorical features we can use the multinoulli (categorical) distribution:

\[p(X|y=c,\theta)=\prod_{j=1}^DCat(x_j|\mu_{jc})\]

For data with features of different types, we can use a product that mixes the above distributions, and this is what we will do in this article.
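
As a small illustrative sketch (with made-up parameter values, not estimated from the data used below), the resulting prediction rule simply multiplies the class prior by each feature's one-dimensional density and then normalizes:

prior   <- c("0" = 0.5, "1" = 0.5)                 # class priors p(y = c)
lik_num <- c("0" = dnorm(55, mean = 56, sd = 8),   # Gaussian density of a numeric feature at x = 55
             "1" = dnorm(55, mean = 52, sd = 10))
lik_bin <- c("0" = 0.8, "1" = 0.6)                 # Bernoulli probability that a binary feature equals 1
unnorm  <- prior * lik_num * lik_bin               # prior times the product of one-dimensional densities
unnorm / sum(unnorm)                               # normalized class posterior p(y = c | x)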

Data preparation

The data that we will use here was downloaded from the Kaggle website and concerns heart disease. Let us start by loading the required packages and the data, then give an appropriate name to the first column.

library(tidyverse)
library(caret)
mydata<-read.csv("heart.csv",header = TRUE)
names(mydata)[1]<-"age" # give the first column an appropriate name
glimpse(mydata)
## Observations: 303
## Variables: 14
## $ age      <int> 63, 37, 41, 56, 57, 57, 56, 44, 52, 57, 54, 48, 49, 64, 58...
## $ sex      <int> 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0...
## $ cp       <int> 3, 2, 1, 1, 0, 0, 1, 1, 2, 2, 0, 2, 1, 3, 3, 2, 2, 3, 0, 3...
## $ trestbps <int> 145, 130, 130, 120, 120, 140, 140, 120, 172, 150, 140, 130...
## $ chol     <int> 233, 250, 204, 236, 354, 192, 294, 263, 199, 168, 239, 275...
## $ fbs      <int> 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0...
## $ restecg  <int> 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1...
## $ thalach  <int> 150, 187, 172, 178, 163, 148, 153, 173, 162, 174, 160, 139...
## $ exang    <int> 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0...
## $ oldpeak  <dbl> 2.3, 3.5, 1.4, 0.8, 0.6, 0.4, 1.3, 0.0, 0.5, 1.6, 1.2, 0.2...
## $ slope    <int> 0, 0, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 1, 2, 0, 2, 2...
## $ ca       <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2...
## $ thal     <int> 1, 2, 2, 2, 2, 1, 2, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2...
## $ target   <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...

The target variable indicates whether a patient has the disease or not, based on the following features:

  • age.
  • sex : 1 = male, 0 = female.
  • cp : chest pain type.
  • trestbps : resting blood pressure.
  • chol : serum cholesterol.
  • fbs : fasting blood sugar.
  • restecg : resting electrocardiographic results.
  • thalach : maximum heart rate achieved.
  • exang : exercise-induced angina.
  • oldpeak : ST depression induced by exercise relative to rest.
  • slope : the slope of the peak exercise ST segment.
  • ca : number of major vessels colored by fluoroscopy.
  • thal : not well defined in the data source.
  • target : has heart disease or not.

The most natural way to start our analysis is to get a summary of the data, in order to check the range, the quartiles, and the presence or absence of missing values for each feature.

summary(mydata)
##       age             sex               cp           trestbps    
##  Min.   :29.00   Min.   :0.0000   Min.   :0.000   Min.   : 94.0  
##  1st Qu.:47.50   1st Qu.:0.0000   1st Qu.:0.000   1st Qu.:120.0  
##  Median :55.00   Median :1.0000   Median :1.000   Median :130.0  
##  Mean   :54.37   Mean   :0.6832   Mean   :0.967   Mean   :131.6  
##  3rd Qu.:61.00   3rd Qu.:1.0000   3rd Qu.:2.000   3rd Qu.:140.0  
##  Max.   :77.00   Max.   :1.0000   Max.   :3.000   Max.   :200.0  
##       chol            fbs            restecg          thalach     
##  Min.   :126.0   Min.   :0.0000   Min.   :0.0000   Min.   : 71.0  
##  1st Qu.:211.0   1st Qu.:0.0000   1st Qu.:0.0000   1st Qu.:133.5  
##  Median :240.0   Median :0.0000   Median :1.0000   Median :153.0  
##  Mean   :246.3   Mean   :0.1485   Mean   :0.5281   Mean   :149.6  
##  3rd Qu.:274.5   3rd Qu.:0.0000   3rd Qu.:1.0000   3rd Qu.:166.0  
##  Max.   :564.0   Max.   :1.0000   Max.   :2.0000   Max.   :202.0  
##      exang           oldpeak         slope             ca        
##  Min.   :0.0000   Min.   :0.00   Min.   :0.000   Min.   :0.0000  
##  1st Qu.:0.0000   1st Qu.:0.00   1st Qu.:1.000   1st Qu.:0.0000  
##  Median :0.0000   Median :0.80   Median :1.000   Median :0.0000  
##  Mean   :0.3267   Mean   :1.04   Mean   :1.399   Mean   :0.7294  
##  3rd Qu.:1.0000   3rd Qu.:1.60   3rd Qu.:2.000   3rd Qu.:1.0000  
##  Max.   :1.0000   Max.   :6.20   Max.   :2.000   Max.   :4.0000  
##       thal           target      
##  Min.   :0.000   Min.   :0.0000  
##  1st Qu.:2.000   1st Qu.:0.0000  
##  Median :2.000   Median :1.0000  
##  Mean   :2.314   Mean   :0.5446  
##  3rd Qu.:3.000   3rd Qu.:1.0000  
##  Max.   :3.000   Max.   :1.0000

After inspecting the features, we see that some variables should be treated as factors rather than numerics, namely sex, cp, fbs, restecg, exang, slope, ca, thal, and the target variable; hence they will be converted to the factor type as follows:

mydata<-mydata %>%
  mutate_at(c(2,3,6,7,9,11,12,13,14),funs(as.factor)) # sex, cp, fbs, restecg, exang, slope, ca, thal, target
summary(mydata)
##       age        sex     cp         trestbps          chol       fbs    
##  Min.   :29.00   0: 96   0:143   Min.   : 94.0   Min.   :126.0   0:258  
##  1st Qu.:47.50   1:207   1: 50   1st Qu.:120.0   1st Qu.:211.0   1: 45  
##  Median :55.00           2: 87   Median :130.0   Median :240.0          
##  Mean   :54.37           3: 23   Mean   :131.6   Mean   :246.3          
##  3rd Qu.:61.00                   3rd Qu.:140.0   3rd Qu.:274.5          
##  Max.   :77.00                   Max.   :200.0   Max.   :564.0          
##  restecg    thalach      exang      oldpeak     slope   ca      thal    target 
##  0:147   Min.   : 71.0   0:204   Min.   :0.00   0: 21   0:175   0:  2   0:138  
##  1:152   1st Qu.:133.5   1: 99   1st Qu.:0.00   1:140   1: 65   1: 18   1:165  
##  2:  4   Median :153.0           Median :0.80   2:142   2: 38   2:166          
##          Mean   :149.6           Mean   :1.04           3: 20   3:117          
##          3rd Qu.:166.0           3rd Qu.:1.60           4:  5                  
##          Max.   :202.0           Max.   :6.20
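
As a side note, funs() has since been deprecated in dplyr; if you run a recent version of the package (dplyr >= 1.0.0 is assumed here), an equivalent conversion can be written with across() and the column names:

mydata <- mydata %>%
  mutate(across(c(sex, cp, fbs, restecg, exang, slope, ca, thal, target), as.factor))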

In practice it is very useful to inspect (with traditional statistical tests such as the chi-squared test or a correlation coefficient) the relationship between the target variable and each potential explanatory variable before building any model. Doing so lets us tell the relevant variables apart from the irrelevant ones, and hence decide which of them to include in our model. Another important issue with factors is that, when splitting the data into a training set and a testing set, a factor level can be missing from one of the sets if the number of cases for that level is too small.
Let's check whether all the factor levels are represented at each level of the target variable.

xtabs(~target+sex,data=mydata)
##       sex
## target   0   1
##      0  24 114
##      1  72  93
xtabs(~target+cp,data=mydata)
##       cp
## target   0   1   2   3
##      0 104   9  18   7
##      1  39  41  69  16
xtabs(~target+fbs,data=mydata)
##       fbs
## target   0   1
##      0 116  22
##      1 142  23
xtabs(~target+restecg,data=mydata)
##       restecg
## target  0  1  2
##      0 79 56  3
##      1 68 96  1
xtabs(~target+exang,data=mydata)
##       exang
## target   0   1
##      0  62  76
##      1 142  23
xtabs(~target+slope,data=mydata)
##       slope
## target   0   1   2
##      0  12  91  35
##      1   9  49 107
xtabs(~target+ca,data=mydata)
##       ca
## target   0   1   2   3   4
##      0  45  44  31  17   1
##      1 130  21   7   3   4
xtabs(~target+thal,data=mydata)
##       thal
## target   0   1   2   3
##      0   1  12  36  89
##      1   1   6 130  28

As we can see, the restecg, ca and thal variables have levels with fewer than the threshold of 5 cases. For instance, level 2 of the restecg variable occurs only once in the target = 1 group, so if we split the data into a training set and a test set this level may be missing from one of them. Therefore we remove these variables from the model.

mydata<-mydata[,-c(7,12,13)] # drop restecg, ca and thal
glimpse(mydata)
## Observations: 303
## Variables: 11
## $ age      <int> 63, 37, 41, 56, 57, 57, 56, 44, 52, 57, 54, 48, 49, 64, 58...
## $ sex      <fct> 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0...
## $ cp       <fct> 3, 2, 1, 1, 0, 0, 1, 1, 2, 2, 0, 2, 1, 3, 3, 2, 2, 3, 0, 3...
## $ trestbps <int> 145, 130, 130, 120, 120, 140, 140, 120, 172, 150, 140, 130...
## $ chol     <int> 233, 250, 204, 236, 354, 192, 294, 263, 199, 168, 239, 275...
## $ fbs      <fct> 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0...
## $ thalach  <int> 150, 187, 172, 178, 163, 148, 153, 173, 162, 174, 160, 139...
## $ exang    <fct> 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0...
## $ oldpeak  <dbl> 2.3, 3.5, 1.4, 0.8, 0.6, 0.4, 1.3, 0.0, 0.5, 1.6, 1.2, 0.2...
## $ slope    <fct> 0, 0, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 1, 2, 0, 2, 2...
## $ target   <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...

Before training our model, we can get a rough idea of which predictors are likely to matter for predicting the dependent variable.

Let's plot the relationship between the target variable and the other features.

ggplot(mydata,aes(sex,target,color=target))+
  geom_jitter()

If we look only at the red points (healthy patients) we might wrongly conclude that females are less healthy than males. This is because we are not taking into account the imbalanced number of cases at each sex level (96 females, 207 males). In contrast, if we look only at females, we can say that a particular female is more likely to have the disease than not.
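
To make the imbalance point concrete, we can look at the within-sex proportions instead of the raw counts (a quick sketch added here, not one of the original plots):

prop.table(table(sex = mydata$sex, target = mydata$target), margin = 1) # row-wise share of each target level within each sex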

ggplot(mydata,aes(cp,fill= target))+
  geom_bar(stat = "count",position = "dodge")

From this plot we can conclude that a patient without any chest pain is highly unlikely to have the disease, whereas for any chest pain type the patient is more likely to be affected by it. We can therefore expect this predictor to have a significant importance in the trained model.

ggplot(mydata, aes(age,fill=target))+
  geom_density(alpha=.5)

Even though there is a large amount of overlap between the two densities, some difference still remains, and these curves are estimated from the sample rather than from the true distributions. In any case, we do not worry much about this here, since we will evaluate the resulting model on the testing set.
We can also check the independence assumption with the correlation matrix.

library(psych)
pairs.panels(mydata[,-11])

As we can see, all the correlations are below 50%, so we can go ahead and train our model.
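
For the numeric predictors, the same check can also be done numerically with cor() (a small sketch using the numeric columns that remain in the data):

num_vars <- c("age", "trestbps", "chol", "thalach", "oldpeak") # numeric predictors still in mydata
round(cor(mydata[, num_vars]), 2)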

Data partition

We take out 80% of the data to use as the training set, and the rest is put aside to evaluate the model's performance.

set.seed(1234)
index<-createDataPartition(mydata$target, p=.8,list=FALSE)
train<-mydata[index,]
test<-mydata[-index,]

Train the model

Note: for this model we do not need to set a seed, because the model fits known densities to the predictors and does not involve any random procedure.

library(naivebayes)
modelnv<-naive_bayes(target~.,data=train)
modelnv
## 
## ================================== Naive Bayes ================================== 
##  
##  Call: 
## naive_bayes.formula(formula = target ~ ., data = train)
## 
## --------------------------------------------------------------------------------- 
##  
## Laplace smoothing: 0
## 
## --------------------------------------------------------------------------------- 
##  
##  A priori probabilities: 
## 
##         0         1 
## 0.4567901 0.5432099 
## 
## --------------------------------------------------------------------------------- 
##  
##  Tables: 
## 
## --------------------------------------------------------------------------------- 
##  ::: age (Gaussian) 
## --------------------------------------------------------------------------------- 
##       
## age            0         1
##   mean 56.432432 52.378788
##   sd    8.410623  9.896819
## 
## --------------------------------------------------------------------------------- 
##  ::: sex (Bernoulli) 
## --------------------------------------------------------------------------------- 
##    
## sex         0         1
##   0 0.1891892 0.3939394
##   1 0.8108108 0.6060606
## 
## --------------------------------------------------------------------------------- 
##  ::: cp (Categorical) 
## --------------------------------------------------------------------------------- 
##    
## cp           0          1
##   0 0.75675676 0.22727273
##   1 0.07207207 0.25000000
##   2 0.12612613 0.42424242
##   3 0.04504505 0.09848485
## 
## --------------------------------------------------------------------------------- 
##  ::: trestbps (Gaussian) 
## --------------------------------------------------------------------------------- 
##         
## trestbps         0         1
##     mean 133.82883 128.75758
##     sd    18.26267  15.21857
## 
## --------------------------------------------------------------------------------- 
##  ::: chol (Gaussian) 
## --------------------------------------------------------------------------------- 
##       
## chol           0         1
##   mean 248.52252 240.80303
##   sd    51.07194  53.55705
## 
## ---------------------------------------------------------------------------------
## 
## # ... and 5 more tables
## 
## ---------------------------------------------------------------------------------

As we can see, each predictor is treated according to its type: a Gaussian distribution for numeric variables, a Bernoulli distribution for binary variables and a multinoulli (categorical) distribution for categorical variables.

All the information about this model can be extracted using the function attributes.

attributes(modelnv)
## $names
## [1] "data"       "levels"     "laplace"    "tables"     "prior"     
## [6] "usekernel"  "usepoisson" "call"      
## 
## $class
## [1] "naive_bayes"
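
For example, the prior probabilities and any single conditional table can be pulled directly out of the fitted object (a short sketch based on the element names listed above):

modelnv$prior # a priori class probabilities
modelnv$tables$age # Gaussian parameters estimated for age within each class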

We can visualize the above results with the function plot, which displays the distribution of each feature: densities for numeric features and bar charts for factors.

plot(modelnv)

Evaluate the model

We can check the accuracy of this model on the training data using the confusion matrix.

pred<-predict(modelnv,train)
confusionMatrix(pred,train$target)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction   0   1
##          0  86  24
##          1  25 108
##                                           
##                Accuracy : 0.7984          
##                  95% CI : (0.7423, 0.8469)
##     No Information Rate : 0.5432          
##     P-Value [Acc > NIR] : <2e-16          
##                                           
##                   Kappa : 0.5934          
##                                           
##  Mcnemar's Test P-Value : 1               
##                                           
##             Sensitivity : 0.7748          
##             Specificity : 0.8182          
##          Pos Pred Value : 0.7818          
##          Neg Pred Value : 0.8120          
##              Prevalence : 0.4568          
##          Detection Rate : 0.3539          
##    Detection Prevalence : 0.4527          
##       Balanced Accuracy : 0.7965          
##                                           
##        'Positive' Class : 0               
## 

The accuracy rate on the training set is about 79.84%. As expected, the specificity rate (81.82%), which concerns class 1, is larger than the sensitivity rate (77.48%), which concerns class 0. This reflects the fact that we have more cases of class 1 than of class 0.

print(prop.table(table(train$target)),digits = 2)
## 
##    0    1 
## 0.46 0.54

A more reliable evaluation is one based on the unseen testing data rather than the training data.

pred<-predict(modelnv,test)
confusionMatrix(pred,test$target)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  0  1
##          0 18  6
##          1  9 27
##                                           
##                Accuracy : 0.75            
##                  95% CI : (0.6214, 0.8528)
##     No Information Rate : 0.55            
##     P-Value [Acc > NIR] : 0.001116        
##                                           
##                   Kappa : 0.4898          
##                                           
##  Mcnemar's Test P-Value : 0.605577        
##                                           
##             Sensitivity : 0.6667          
##             Specificity : 0.8182          
##          Pos Pred Value : 0.7500          
##          Neg Pred Value : 0.7500          
##              Prevalence : 0.4500          
##          Detection Rate : 0.3000          
##    Detection Prevalence : 0.4000          
##       Balanced Accuracy : 0.7424          
##                                           
##        'Positive' Class : 0               
## 

The accuracy rate on the test set is now only about 75%, perhaps because of overfitting, or perhaps because this kind of model is not well suited to this data.

Fine-tune the model

In order to increase the model's performance we can try another set of hyperparameters. The naive_bayes function supports different density estimates: by default the usekernel argument is set to FALSE, which means the Gaussian distribution is used for the numeric variables; if it is TRUE, a kernel density estimate is used instead. Let's set it to TRUE and see what happens to the test accuracy rate.

modelnv1<-naive_bayes(target~.,data=train,
                      usekernel = TRUE)
pred<-predict(modelnv1,test)
confusionMatrix(pred,test$target)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  0  1
##          0 19  6
##          1  8 27
##                                           
##                Accuracy : 0.7667          
##                  95% CI : (0.6396, 0.8662)
##     No Information Rate : 0.55            
##     P-Value [Acc > NIR] : 0.0004231       
##                                           
##                   Kappa : 0.5254          
##                                           
##  Mcnemar's Test P-Value : 0.7892680       
##                                           
##             Sensitivity : 0.7037          
##             Specificity : 0.8182          
##          Pos Pred Value : 0.7600          
##          Neg Pred Value : 0.7714          
##              Prevalence : 0.4500          
##          Detection Rate : 0.3167          
##    Detection Prevalence : 0.4167          
##       Balanced Accuracy : 0.7609          
##                                           
##        'Positive' Class : 0               
## 

After using the kernel estimation we obtain a slight improvement in the accuracy rate, which is now about 76.67%.
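
Another hyperparameter visible in the model printout is the Laplace smoothing constant, which is 0 by default; it mainly matters when some factor level has zero counts within a class. As a sketch (not tried in the original analysis), it can be combined with the kernel estimate and evaluated in the same way:

modelnv_lap <- naive_bayes(target ~ ., data = train,
                           usekernel = TRUE, laplace = 1)
pred_lap <- predict(modelnv_lap, test)
confusionMatrix(pred_lap, test$target)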

Another way to try to improve the model is to preprocess the data, especially the numeric variables, by centering and scaling them before fitting the Gaussian densities.

modelnv2<-train(target~., data=train,
                method="naive_bayes",
                preProc=c("center","scale"))
modelnv2
## Naive Bayes 
## 
## 243 samples
##  10 predictor
##   2 classes: '0', '1' 
## 
## Pre-processing: centered (13), scaled (13) 
## Resampling: Bootstrapped (25 reps) 
## Summary of sample sizes: 243, 243, 243, 243, 243, 243, ... 
## Resampling results across tuning parameters:
## 
##   usekernel  Accuracy   Kappa    
##   FALSE      0.7775205  0.5511328
##    TRUE      0.7490468  0.4988034
## 
## Tuning parameter 'laplace' was held constant at a value of 0
## Tuning
##  parameter 'adjust' was held constant at a value of 1
## Accuracy was used to select the optimal model using the largest value.
## The final values used for the model were laplace = 0, usekernel = FALSE
##  and adjust = 1.

As we can see, we get a better resampling accuracy rate with the Gaussian distribution, about 77.75% (when usekernel = FALSE), than with the kernel estimation, about 74.90%.

Let’s use the test set:

pred<-predict(modelnv2,test)
confusionMatrix(pred,test$target)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  0  1
##          0 19  5
##          1  8 28
##                                          
##                Accuracy : 0.7833         
##                  95% CI : (0.658, 0.8793)
##     No Information Rate : 0.55           
##     P-Value [Acc > NIR] : 0.0001472      
##                                          
##                   Kappa : 0.5578         
##                                          
##  Mcnemar's Test P-Value : 0.5790997      
##                                          
##             Sensitivity : 0.7037         
##             Specificity : 0.8485         
##          Pos Pred Value : 0.7917         
##          Neg Pred Value : 0.7778         
##              Prevalence : 0.4500         
##          Detection Rate : 0.3167         
##    Detection Prevalence : 0.4000         
##       Balanced Accuracy : 0.7761         
##                                          
##        'Positive' Class : 0              
## 

We obtain another slight improvement, with an accuracy rate of about 78.33% after centering and scaling the data.
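
If we wanted to push the tuning further, caret also lets us search an explicit grid over the three parameters reported above (laplace, usekernel and adjust) and use cross-validation instead of the default bootstrap; the grid values below are only illustrative:

grid <- expand.grid(laplace = c(0, 1),
                    usekernel = c(FALSE, TRUE),
                    adjust = c(0.5, 1, 2))
modelnv3 <- train(target ~ ., data = train,
                  method = "naive_bayes",
                  preProc = c("center", "scale"),
                  tuneGrid = grid,
                  trControl = trainControl(method = "cv", number = 10))
modelnv3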

Conclusion

The naive Bayes model is among the most widely used classical machine learning models, especially with features that are normally distributed, either originally or after transformation. However, compared to bagged or boosted models such as random forests and XGBoost, or to deep learning models, it is rather less attractive.
