Consider a scenario where we need to predict a medical condition of a patient (HBP), HAVE HIGH BP or NO HIGH BP, based on some observed symptoms – Age, Weight, IsSmoking, Systolic value, Diastolic value, Race, etc. In this scenario we have to build a model that takes the above-mentioned symptoms as input values and HBP as the response variable. Note that the response variable (HBP) takes a value from a fixed set of classes: HAVE HIGH BP or NO HIGH BP.
Logistic regression – a classification problem, not a prediction problem:
In my previous blog I said that we use linear regression for scenarios that involve prediction. But there is a catch: linear regression cannot be applied where the response variable is not continuous. In our case the response variable is not continuous but takes a value from a fixed set of classes. We call such scenarios classification problems rather than prediction problems. Where the response variable is qualitative rather than continuous, we have to apply a more suitable model, namely logistic regression, for classification.
Assume we have a binary categorical output variable Y and a vector of p input variables X. Rather than modeling the response Y directly, logistic regression models the conditional probability that Y belongs to a particular category, Pr(Y = 1 | X = x), as a function of x.
Mathematically, logistic regression is expressed as:

Pr(Y = 1 | X = x) = e^(β0 + β1x) / (1 + e^(β0 + β1x))
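As a minimal sketch in plain R (the coefficient values here are made up for illustration, not fitted from any data), the logistic function squeezes the linear predictor β0 + β1x into a probability between 0 and 1:

```r
# Logistic (sigmoid) function: maps any real-valued linear
# predictor b0 + b1*x into a probability in (0, 1)
logistic <- function(b0, b1, x) {
  exp(b0 + b1 * x) / (1 + exp(b0 + b1 * x))
}

# Hypothetical coefficients, just to show the shape of the curve
logistic(-4, 0.1, 30)  # ≈ 0.27
logistic(-4, 0.1, 60)  # ≈ 0.88 (larger x gives a higher probability)
```

Note that the output is always strictly between 0 and 1, which is what lets us interpret it as Pr(Y = 1 | X = x).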
Estimating Coefficients – Maximum Likelihood function:
The unknown parameters β0 and β1 in the function are estimated by the maximum likelihood method using the available training data. The likelihood function expresses the probability of the observed data as a function of the unknown parameters. The maximum likelihood estimates of these parameters are the values that maximize this function. Thus, the estimates are those that agree most closely with the observed data.
For now, assume that maximizing this likelihood function gives us the estimates of the unknown parameters.
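To make "maximize the likelihood" concrete, here is a sketch that maximizes the log-likelihood of the logistic model numerically with optim() on simulated data. The data, true coefficients, and starting values are all invented for illustration:

```r
set.seed(42)
# Simulate a binary outcome from known coefficients (-6, 0.1)
x <- runif(200, 20, 80)              # e.g. a predictor like age
p <- 1 / (1 + exp(-(-6 + 0.1 * x)))  # true probabilities
y <- rbinom(200, 1, p)               # observed 0/1 outcomes

# Negative log-likelihood of the logistic model:
# sum over observations of y*eta - log(1 + exp(eta))
negloglik <- function(beta) {
  eta <- beta[1] + beta[2] * x
  -sum(y * eta - log(1 + exp(eta)))
}

# Maximize the likelihood by minimizing its negative
fit <- optim(c(0, 0), negloglik)
fit$par  # estimates should land near the true values (-6, 0.1)
```

In practice we never code this by hand; glm() performs the same maximization internally (via iteratively reweighted least squares) and agrees with this direct optimization.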
In R, we use glm(), which takes training data as input and gives us the fitted model with estimated parameters as output; we will see this in a later section.
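As a quick sketch of the glm() call, using the built-in mtcars data as a stand-in for our HBP data (classifying automatic vs. manual transmission from weight and horsepower):

```r
# family = binomial tells glm() to fit a logistic regression
fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)

summary(fit)  # estimated coefficients, standard errors, p-values
coef(fit)     # just the fitted beta values
```

The formula `am ~ wt + hp` plays the role of `HBP ~ AGE + ISSMOKING + ...` in our use case.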
Once the coefficients have been estimated, it is a simple matter to compute the probability of the response variable for any given input by plugging the values of β0, β1, and X into the equation above.
Note: R provides predict(), which takes the fitted model and new input values and predicts the response variable.
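Continuing with the built-in mtcars data as a stand-in, predict() with `type = "response"` returns Pr(Y = 1 | X = x) directly (by default it returns the log-odds instead):

```r
fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)

# Two new observations to classify (hypothetical cars)
newdata <- data.frame(wt = c(2.0, 4.0), hp = c(110, 110))

# type = "response" gives probabilities on the 0-1 scale
probs <- predict(fit, newdata, type = "response")
ifelse(probs > 0.5, "class 1", "class 0")  # threshold at 0.5
```

The 0.5 threshold is the usual default for turning probabilities into class labels, but it can be moved when the two kinds of misclassification have different costs.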
Use case – Classify if a person has HBP or not:
Let us take a use case and implement logistic regression in R: classify/predict whether a person suffers from High Blood Pressure (HBP) given the input predictors AGE, SEX, IsSmoking, average systolic BP, average diastolic BP, RACE, body weight, height, etc. Access the dataset from here; it is the NHANES III dataset. To implement logistic regression, we need to follow the steps below:
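Since the NHANES III extract itself is not reproduced here, the end-to-end workflow can be sketched on simulated data; the column names (AGE, ISSMOKING, HBP) and coefficient values are hypothetical stand-ins for the real dataset:

```r
set.seed(1)
n <- 500
# Simulated stand-in for the NHANES III variables
df <- data.frame(
  AGE       = runif(n, 20, 80),
  ISSMOKING = rbinom(n, 1, 0.3)
)
lin <- -5 + 0.06 * df$AGE + 0.8 * df$ISSMOKING
df$HBP <- rbinom(n, 1, 1 / (1 + exp(-lin)))

# 1. Split into training and test sets
train_idx <- sample(n, 0.7 * n)
train <- df[train_idx, ]
test  <- df[-train_idx, ]

# 2. Fit the logistic regression model on the training data
fit <- glm(HBP ~ AGE + ISSMOKING, data = train, family = binomial)

# 3. Predict probabilities on the test set, threshold at 0.5
pred <- as.integer(predict(fit, test, type = "response") > 0.5)

# 4. Confusion matrix: predicted vs. actual classes
table(Predicted = pred, Actual = test$HBP)
```

With the real dataset, step 1 would instead start by reading the downloaded file (e.g. with read.csv()) and cleaning the columns of interest.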
     0    1
0 6576 1702
1 5817 1548
The model summary shows the estimated coefficients and their significance. In my next post we will evaluate the logistic regression model, consider a few other models, and choose the better one among them.