# How does logistic regression work?

This article was first published on **Learning Data Science**, and kindly contributed to R-bloggers.


While discussing with a non-statistician colleague, it became clear that logistic regression is not very intuitive. Some basic questions came up:

– Why not use the linear model?

– What is the logistic function?

– How can we compute it by hand, step by step, to see what the glm function is doing?

This post aims to answer these questions, and maybe it helps.

Suppose that we have this data: http://www.info.univ-angers.fr/~gh/wstat/pg.dar

```
   ID TAILLE GROUPE
1 A01    130      0
2 A02    140      0
3 C01    162      0
4 C02    160      1
5 A03    136      0
6 C03    165      1
```

and we want to predict the group according to the height (TAILLE). The same kind of problem arises when predicting a level of risk according to age, or a customer segment according to transaction amounts, etc.

A reminder first. When we fit a linear model (let's assume just one predictor, i.e. simple linear regression), we have E(y) = a0 + a1\*x1. Linear regression, like all regressions, focuses on the conditional distribution of Y given X. The first thing one generally does is to plot groupe = f(taille), which gives:

The idea of the Generalized Linear Model (logistic regression is a particular case) is to replace E(Y) by something else.

For our example, we are interested in the probability that a person belongs to group 0 or group 1.

So, instead of E(y) = a0 + a1\*x1, we seek P(Groupe = 1) = a0 + a1\*Taille. But there is a problem: the left-hand side is a probability, confined to [0, 1], while the right-hand side can take any real value. We therefore have to transform the left-hand side using a bijection between the interval [0, 1] and R. In other words, we need a "link" function that lets us work on the whole real line.
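To see the problem concretely, here is a minimal Python sketch using the six rows shown above: an ordinary least-squares line fitted to a 0/1 outcome predicts values outside [0, 1].

```python
# Least-squares fit of GROUPE on TAILLE, using the six rows shown above.
taille = [130, 140, 162, 160, 136, 165]
groupe = [0, 0, 0, 1, 0, 1]

n = len(taille)
mx = sum(taille) / n
my = sum(groupe) / n
# Closed-form simple linear regression: slope = Sxy / Sxx.
a1 = sum((x - mx) * (y - my) for x, y in zip(taille, groupe)) / \
     sum((x - mx) ** 2 for x in taille)
a0 = my - a1 * mx

def linear_pred(x):
    return a0 + a1 * x

print(linear_pred(200))  # greater than 1: not a valid probability
print(linear_pred(120))  # negative: not a valid probability either
```

This is exactly why the left-hand side has to be transformed before we can use a linear predictor.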

The most common link function in logistic regression is the logit: logit(p) = log(p / (1 - p)). One can also use the inverse of the normal cumulative distribution function (probit) or the complementary log-log link.
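The logit and its inverse (the logistic function, which gives the method its name) are one-liners; a quick Python sketch:

```python
import math

def logit(p):
    """Map a probability p in (0, 1) to the whole real line."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """The logistic (sigmoid) function: map any real x back to (0, 1)."""
    return 1 / (1 + math.exp(-x))

print(logit(0.5))      # 0.0: a 50/50 probability maps to 0
print(inv_logit(0.0))  # 0.5
```

inv_logit is what turns the linear predictor a0 + a1\*Taille back into a probability.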

The method used to fit a logistic regression is maximum-likelihood estimation (MLE).

Read this post.

To sum up:

– Suppose that in the population from which we are sampling, each individual i has a probability p_i (depending on its height) of being in groupe 1, and 1 - p_i of being in groupe 0

– The likelihood is the joint probability of the data: L = Π_i p_i^(y_i) \* (1 - p_i)^(1 - y_i), where y_i = 1 if individual i is in groupe 1 and y_i = 0 otherwise (the exponent selects which factor each individual contributes)

In practice, we work with the log-likelihood, which turns the product into a sum and is easier to maximize numerically.
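Concretely, the log-likelihood for candidate coefficients (a0, a1) can be computed in a few lines; a Python sketch on the six rows shown above (the full data set has 30):

```python
import math

taille = [130, 140, 162, 160, 136, 165]
groupe = [0, 0, 0, 1, 0, 1]

def log_likelihood(a0, a1):
    ll = 0.0
    for x, y in zip(taille, groupe):
        p = 1 / (1 + math.exp(-(a0 + a1 * x)))  # P(Groupe = 1 | Taille = x)
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

# An uninformative model (a0 = a1 = 0, i.e. p = 0.5 for everyone):
print(log_likelihood(0, 0))  # 6 * log(0.5), about -4.159
# Coefficients close to the fitted ones give a higher log-likelihood:
print(log_likelihood(-27.19, 0.181))
```

MLE is simply the search for the (a0, a1) that make this quantity as large as possible.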

How do we interpret the likelihood?

When we try to assign a group to a new individual, it is natural to pick the group with the highest probability given the height.
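That rule is just a threshold at 0.5 on the fitted probability; a small Python sketch, using the coefficients reported further down (a ≈ -27.19, b ≈ 0.181):

```python
import math

a, b = -27.19, 0.181  # coefficients from the fit reported in this post

def predicted_group(taille):
    p = 1 / (1 + math.exp(-(a + b * taille)))  # P(Groupe = 1 | taille)
    return 1 if p > 0.5 else 0                 # pick the most probable group

print(predicted_group(140))  # 0
print(predicted_group(165))  # 1
```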

Applying MLE to perform the logistic regression is done by:

```
> Test = fit.logis(y = don$GROUPE, x = don$TAILLE)
> Test
  coef.est std.err
a  -27.190   8.885
b    0.181   0.058
```
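fit.logis is a helper defined in the original post (its code is not shown here); under the hood it maximizes the log-likelihood with optim. The same idea can be sketched in plain Python with gradient ascent, here on the six rows shown above only, so the numbers will not match the 30-row fit exactly:

```python
import math

taille = [130, 140, 162, 160, 136, 165]
groupe = [0, 0, 0, 1, 0, 1]
xs = [t - 150 for t in taille]  # center the predictor for numerical stability

def log_likelihood(a0, a1):
    ll = 0.0
    for x, y in zip(xs, groupe):
        p = 1 / (1 + math.exp(-(a0 + a1 * x)))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

# Plain gradient ascent on the log-likelihood, starting from (0, 0).
a0, a1 = 0.0, 0.0
lr = 0.001
for _ in range(50000):
    g0 = g1 = 0.0
    for x, y in zip(xs, groupe):
        p = 1 / (1 + math.exp(-(a0 + a1 * x)))
        g0 += y - p        # d(log-lik)/d(a0)
        g1 += (y - p) * x  # d(log-lik)/d(a1)
    a0 += lr * g0
    a1 += lr * g1

# Slope, and intercept expressed on the original (uncentered) scale.
print(round(a1, 3), round(a0 - a1 * 150, 3))
```

glm performs the same maximization, but with the faster iteratively reweighted least squares algorithm rather than a generic optimizer.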

We can get the same output using the glm function with the "binomial" family:

```
> viaglm

Call:  glm(formula = don$GROUPE ~ don$TAILLE, family = "binomial", data = don)

Coefficients:
(Intercept)   don$TAILLE
   -27.2103       0.1812

Degrees of Freedom: 29 Total (i.e. Null);  28 Residual
Null Deviance:      38.19
Residual Deviance: 10.89    AIC: 14.89
```

So we can see that our optimisation via the optim function gives results equivalent to the glm function.

Just have a look

Maybe this helps to understand how it works!

