Art of Statistical Inference

[This article was first published on MATHEMATICS IN MEDICINE, and kindly contributed to R-bloggers.]

I wrote this post a few years ago, when I was starting to learn the art and science of data analysis. It should be a good starting point for amateur data analysts.

Introduction

What is statistics? There are about a dozen definitions of statistics given by different authors, of which the following seems adequate: “A field of study concerned with (1) collection, organization, summarization and analysis of data, and (2) drawing inferences about a body of data when only a part of the data is observed.” All of us clinicians subconsciously perform statistical analyses many times a day in our practice, especially Bayesian analysis, in which we combine our prior experience with the parameters in front of us to work out the probability of various outcomes. The aim of this article is to convey the beauty of statistics without complicating the issues too much.

Scope of statistics in biology

Statistics forms the backbone of any research activity. In biology and medical science, the commonly used statistical methods are the following:

  1. Descriptive statistics.

  2. Comparing measures of central tendencies between two or more populations.

    a. Parametric tests, e.g. t test, ANOVA, etc.

    b. Non-parametric tests, e.g. Kruskal-Wallis test, Mann-Whitney test, etc.

  3. Analysis of proportions (Chi square test, 2×2 table analysis).

  4. Regression analyses

    a. Univariate vs multivariate regression analyses.

    b. Linear, polynomial, non-linear regression analyses.

    c. Generalised Linear Models (logistic regression).

  5. Survival analysis (life tables, log-rank analysis, Kaplan-Meier curves, Cox proportional hazards regression analysis).

  6. Multivariate data analysis (MANOVA, Cluster analysis, etc.).

  7. Time series analysis and Markov chain models.

  8. Computer simulation studies (Monte Carlo simulations).

The first five of the above-mentioned branches are used in day-to-day clinical research, depending on the characteristics of the data and the research question asked. This article does not aim to delve into the details of each of these areas, but will try to explain the stepping stones of statistical inference, from which we will be able to climb up to our specific area of interest after a bit of further reading.

The mathematics of probability forms the backbone of statistics (e.g., what is the probability that … this patient is going to improve, this person is suffering from tuberculosis, the null hypothesis is acceptable, etc.?). In the next few paragraphs, we will try to understand it.

Basics of probability

Definitions

We carry out an experiment or trial and note the outcomes. An experiment is called random when the outcome is not known beforehand with certainty, but is known to come from a finite or infinite set of possible, mutually exclusive outcomes (occurrence of one outcome rules out occurrence of any other in that experiment). A number between 0 and 1, inclusive, is assigned to each outcome and is known as the probability of occurrence of that particular outcome. We can add up the probabilities of mutually exclusive outcomes to get the probabilities of composite outcomes (events), and the probabilities of all mutually exclusive outcomes add up to 1. These are the requirements for a number to be called a probability.

If we assign a number (possibly a decimal) to each outcome, it becomes a random variable (RV), a central concept in probability.

For example, measuring BP in a patient (flipping a coin) is a random experiment; all possible BPs (heads or tails) are the set of mutually exclusive outcomes, each associated with a probability, not exactly known for BP (0.5 for heads and 0.5 for tails), which add up to 1. If we assign a number to BP, say the actual BP minus 10 (1 for heads and 0 for tails), the set of outcomes becomes an RV.

Theoretical probability vs Experimental probability

Probability values known from probabilistic mathematics (like those for heads and tails when flipping an unbiased coin) are known as theoretical probabilities. We usually approximate the theoretical probability of an outcome by carrying out many experiments and taking the ratio of the number of occurrences of that outcome to the number of experiments carried out. This is the experimental probability, and it tends to the theoretical probability as the number of experiments tends to infinity.
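This convergence is easy to see by simulation. Below is a minimal sketch in R, the language of this blog; the seed and the numbers of flips are arbitrary choices made only for illustration.

```r
# Experimental probability of heads approaches the theoretical value 0.5
# as the number of coin flips grows.
set.seed(1)                          # arbitrary seed, for reproducibility
n_flips <- c(10, 100, 1000, 100000)
sapply(n_flips, function(n) {
  flips <- sample(c("H", "T"), size = n, replace = TRUE)
  mean(flips == "H")                 # proportion of heads = experimental probability
})
```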

Concept of probability distribution and density function

An RV, together with the probabilities of all of its possible values, is called a probability distribution. The mathematical function that describes this distribution is called the distribution function, and after a mathematical modification (differentiation) we get the probability density function (pdf) for a continuous RV.

Expectation and variance

The expectation of an RV is the value we expect to find on average if we do the experiment many times (ideally infinitely many). It is the same as the mean of a population. It is calculated by adding up the products of each value of the RV with its probability. The expectation of an RV that is the sum of two or more RVs is the sum of the expectations of the individual RVs.

Variance is a measure of dispersion, and is the expected value of the squared difference between the value of the RV and its expectation. Conceptually, we carry out the experiment many times and after each experiment we calculate the squared difference between the value of the RV and the expectation of the RV; the variance is the average of all these values, and is thus itself an expectation. The variance of an RV that is the sum of two or more independent RVs is the sum of the variances of the individual RVs.
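As an illustration, the sketch below computes the expectation and variance of a fair die roll directly from these definitions and checks them by simulation; the number of simulated rolls is an arbitrary choice.

```r
# Expectation and variance of a fair six-sided die.
values <- 1:6
probs  <- rep(1/6, 6)

expectation <- sum(values * probs)                    # E[X] = 3.5
variance    <- sum((values - expectation)^2 * probs)  # E[(X - E[X])^2] = 35/12

# Check by simulation: sample mean and variance approach the values above.
set.seed(2)
rolls <- sample(values, size = 1e5, replace = TRUE)
c(mean(rolls), var(rolls))
```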

Parametric distribution

There are many distribution functions that actually denote families of probability distributions and become a particular probability distribution only for a particular set of input values of so-called parameters. These are known as parametric distributions. Examples include the binomial (parameter: p), normal (parameters: mean, standard deviation), exponential (parameter: lambda) and gamma (parameters: a, b) distributions. In many statistical analyses, our aim is to estimate the parameters (say the mean and standard deviation) from a given sample which we assume to come from a particular parametric distribution (say the normal distribution), as will be described later on.
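In R, each of these families comes as a set of d/p/q/r functions that take the parameters as arguments. The particular parameter values below are arbitrary, chosen only to show where the parameters go.

```r
dbinom(3, size = 10, prob = 0.5)   # binomial: parameter p (plus the number of trials)
dnorm(120, mean = 110, sd = 15)    # normal: parameters mean and standard deviation
dexp(2, rate = 0.5)                # exponential: parameter lambda (rate)
dgamma(2, shape = 3, rate = 1.5)   # gamma: parameters a (shape) and b (rate)
```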

Scope of RV

Any numerical quantity that does not have a fixed value and has a distribution function associated with it is an RV. That means parameter estimators and sample means (as will be discussed later on) are also RVs and have distribution functions, expectations and variances associated with them. Similarly, the definition of an experiment can change with context: when calculating the distribution of sample means, our experiment changes from taking the blood pressure of a single person to randomly selecting a sample of, say, 1000 persons, checking their blood pressures and calculating their mean.

Conditional, marginal and joint probability

Conditional probability is the probability of event 'A' occurring given that event 'B' has occurred after conducting an experiment. For example, after rolling a die, the probability of getting a 2 is 1/6 ({2} out of {1,2,3,4,5,6}). However, given that the roll is even (event 'B'), the probability of it being 2 (event 'A') is 1/3 ({2} out of {2,4,6}). Conversely, given that it is 2 (event 'A'), the probability of it being even (event 'B') is 1. The joint and marginal probabilities of two RVs can be laid out in a table; one such table, for a die roll and its parity, is sketched below.
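The original table is not reproduced in this post, so the following sketch rebuilds one plausible version of it in R, assuming (as the numbers quoted in the next paragraph suggest) that Y is the die face minus 1 and X is an indicator of an even face.

```r
# Joint and marginal probabilities for one roll of a fair die.
# Assumed encoding: Y = face value minus 1 (0..5), X = 1 if the face is even, else 0.
faces <- 1:6
Y <- faces - 1
X <- as.integer(faces %% 2 == 0)

joint <- table(X, Y) / 6            # joint probabilities P(X = x, Y = y)
joint
margin.table(joint, margin = 1)     # marginal distribution of X: 1/2 and 1/2
margin.table(joint, margin = 2)     # marginal distribution of Y: 1/6 each
```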

Association and dependence

Two RVs are independent if the marginal probability distribution of one RV is the same as its conditional probability distribution given any value of the second RV. For example, the marginal probability of Y=0 (previous example) is 1/6. The conditional probability of Y=0 given X=0 is 1/6 divided by 1/2, that is 1/3. Here the conditional probability of Y=0 given X=0 and the marginal probability of Y=0 are different, implying that X and Y are dependent RVs, as expected of course. If the joint probabilities of all possible combinations of values of the RVs are equal to the products of their marginal probabilities, we call the RVs independent.

Covariance and correlation are two measures of linear association (dependence) between two RVs. The covariance is the expected value of the product of (the difference between RV1 and its expected value) and (the difference between RV2 and its expected value). Correlation is a mathematical rescaling of covariance that restricts its values to between -1 and 1. A value of 1 means complete positive linear correlation, -1 means complete negative linear correlation (the values of the RVs move in opposite directions) and 0 means no linear correlation. It is important to note that correlation and covariance only indicate the amount of linear association. Two RVs may be exactly related to each other by some non-linear function (e.g., RV1 is the square of RV2) and still have a correlation of less than 1, as illustrated below.
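A quick simulation makes the last point concrete; the data below are simulated and the sample size is arbitrary.

```r
# Correlation measures linear association only: an exact non-linear
# relationship (RV2 = RV1 squared) gives a correlation well below 1.
set.seed(3)
rv1 <- rnorm(1000)
rv2_linear    <- 2 * rv1 + rnorm(1000, sd = 0.1)   # nearly linear relationship
rv2_nonlinear <- rv1^2                             # exact, but non-linear

cor(rv1, rv2_linear)      # close to 1
cor(rv1, rv2_nonlinear)   # close to 0, despite the exact functional relationship
```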

It is vitally important to note that the statistical measures of association and dependence do not indicate anything about causality.

Statistical inference

Concept of population and samples

After understanding the concept of a random variable, the next most important concept in statistics is that of population and sample. A population denotes all possible values of an RV together with its probability distribution. A population, like an RV, can be finite or infinite, continuous or discrete. If we knew everything about a population, the rest of this article would not be required. In real-life situations, we define the population at the beginning of the experiment, meaning we assign all possible values to the RV denoting the population, but we are not aware of its probability distribution.

We carry out the experiment many times to get a random sample of the underlying population, and aim to infer the population probability distribution (and other parameters) with the help of the sample, assuming that the sample behaves approximately like the population. This is called statistical inference and is the heart of statistics. The assumption becomes more accurate as the sample size increases (ideally, as it reaches infinity).

We call a sample random when (1) every member of the sample is taken from the underlying population and has the same (unknown) probability distribution, and (2) every member of the sample is independent of the others.

It is important to note that sample characteristics change if new random samples are collected. The probability distribution of a sample statistic (e.g. the sample mean, sample variance or sample range) is known as its sampling distribution, and has an associated expectation and variance.
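The sketch below illustrates this with a synthetic "population" of blood pressures (a normal population is assumed purely for illustration): repeated sampling gives a whole distribution of sample means, whose spread matches the standard error of the mean.

```r
# The sample mean is itself a random variable with its own sampling distribution.
set.seed(4)
pop <- rnorm(1e6, mean = 120, sd = 15)                        # notional population of BPs
sample_means <- replicate(1000, mean(sample(pop, size = 50)))

mean(sample_means)   # close to the population mean, 120
sd(sample_means)     # close to 15 / sqrt(50), the standard error of the mean
```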

Data description

Data are the values of the sample. Before we make any inference from the data, we need to describe them, since it is very difficult, and usually impossible, to analyze raw data directly. One of the easiest ways to describe data is by way of some low-dimensional summary, known as a statistic. Data need to be analyzed for centrality, dispersion and shape. Commonly used measures of centrality are the sample mean, median and mode. Commonly used measures of dispersion are the variance, standard deviation, range and inter-quartile range. Commonly used measures of shape are skewness (which measures asymmetry about the mean: zero means symmetry, negative means skew to the left, positive means skew to the right) and kurtosis (which measures peakedness). We are not going to dwell further on these measures.
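For reference, the usual measures are one-liners in R. The sketch below uses a simulated right-skewed sample; skewness() and kurtosis() are assumed to come from the 'moments' add-on package, since base R does not provide them.

```r
library(moments)   # assumed add-on package providing skewness() and kurtosis()

set.seed(5)
y <- rexp(200, rate = 0.1)                           # an illustrative right-skewed sample

c(mean = mean(y), median = median(y))                # centrality
c(var = var(y), sd = sd(y), IQR = IQR(y))            # dispersion
range(y)                                             # minimum and maximum
c(skewness = skewness(y), kurtosis = kurtosis(y))    # shape
```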

Instead of reducing the dataset, it is often helpful to display the whole data graphically. Visual analysis of the data is an extremely useful and easy preliminary step in the overall process. It allows us to assess qualitatively the probability distribution (general structure) of the data, the differences between two or more datasets, the relationship between two datasets, and so on. Much of the time we can obtain the answer from a mere visual inspection of the data. Useful techniques for assessing the structure of data and comparing two or more datasets are histograms, kernel density estimates, strip charts and box plots. Q-Q plots and normality plots are useful for assessing deviation of data from the normal distribution (normally distributed data give a linear plot). Scatter plots are extremely useful for depicting the relationship between two datasets.
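All of these plots are available in base R; a minimal sketch with simulated data follows.

```r
set.seed(6)
y1 <- rexp(200, rate = 0.1)          # a skewed sample
y2 <- rnorm(200, mean = 10, sd = 3)  # a roughly normal sample

hist(y1, main = "Histogram")                   # overall shape of y1
plot(density(y1), main = "Kernel density")     # smoothed shape of y1
boxplot(y1, y2, names = c("y1", "y2"))         # compare the two samples
qqnorm(y2); qqline(y2)                         # normality check: roughly linear for y2
plot(y1, y2, main = "Scatter plot")            # relationship between two datasets
```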

Concept of likelihood function and maximum likelihood estimator

Understanding the likelihood function and the maximum likelihood estimate is central to quantitative statistical inference.

Once we have a sample (y\( _1 \), y\( _2 \), …, y\( _n \)) of size n from a population with a probability distribution (which may not be known exactly, and may only be guessed from theoretical knowledge or from the central limit theorem) with parameter(s) θ, our aim is to estimate the value(s) of θ that explain the data best. We form the likelihood function, l(θ), which is the pdf of the data with θ unknown and the y\( _i \)s known. This function indicates the likelihood of the data being explained by a particular θ. It is mathematically easier to manipulate if we assume the sample to be random (independent). A fundamental principle of statistical inference is that if l(θ1) > l(θ2), then θ1 explains the data better than θ2, and the amount of support is directly proportional to l(θ).

The value of θ that maximizes the likelihood function, found by differentiation or numerically, is known as the maximum likelihood estimator (MLE). This is the most general route to statistical inference. The MLE is not the actual θ, which remains unknown to us; it is just an estimate. It is therefore an RV and has a probability distribution associated with it.
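As a numerical illustration, the sketch below finds the MLE of the rate parameter of an exponential distribution from simulated data by maximizing the log-likelihood (which has the same maximizer as the likelihood and is easier to work with); the true rate of 2 is an arbitrary choice.

```r
set.seed(7)
y <- rexp(100, rate = 2)                       # simulated data, true lambda = 2

# Log-likelihood of the sample as a function of lambda.
loglik <- function(lambda) sum(dexp(y, rate = lambda, log = TRUE))

mle <- optimize(loglik, interval = c(0.01, 10), maximum = TRUE)$maximum
mle          # numerical MLE
1 / mean(y)  # closed-form MLE for the exponential rate, for comparison
```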

Bayesian inference

The essence of Bayesian inference is the continuous updating of the pdf of a parameter θ as new knowledge about it arrives.

We construct p(θ), a pdf reflecting our knowledge, by making p(θ) large for those values of θ that seem most likely and small for those values of θ that seem least likely, according to our state of knowledge. Different people can have different states of knowledge, and hence different pdfs. This is known as the prior distribution.

We then carry out an experiment to obtain a random sample and update our knowledge about the distribution of θ: we form the likelihood function, i.e. the pdf of the random sample given θ. With a bit of mathematical manipulation (Bayes' theorem) we obtain the pdf of θ given that particular random sample. This new, updated pdf of θ is known as the posterior distribution.
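Below is a minimal sketch of this update, using a conjugate Beta prior for a proportion θ (say, the probability that a treatment lowers BP) and hypothetical data of 14 "successes" in 20 patients; both the prior parameters and the data are invented for illustration.

```r
a_prior <- 2; b_prior <- 2            # prior: Beta(2, 2), loosely centred on 0.5
successes <- 14; failures <- 6        # hypothetical trial results

a_post <- a_prior + successes         # posterior: Beta(16, 8)
b_post <- b_prior + failures

curve(dbeta(x, a_prior, b_prior), from = 0, to = 1, lty = 2, ylab = "density")  # prior
curve(dbeta(x, a_post,  b_post),  add = TRUE)                                   # posterior
qbeta(c(0.025, 0.975), a_post, b_post)   # 95% credible interval for theta
```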

Few assumptions for statistical inference (theories of large samples)

It is intuitively obvious that large samples are better than small ones and, less obviously, that with enough data one is eventually led to the right answer. A few theorems for large samples are:

  1. If we get a random sample of size n from a distribution with mean μ, the expected value of the sample mean is μ.

  2. If we get a random sample of size n from a distribution with variance \( \sigma^2 \), the variance of the sample mean (the square of the standard error of the mean) is \( \sigma^2/n \). This means that as n increases, our estimate of the population mean becomes more accurate.

  3. As the sample size increases to infinity, it is certain that the sample mean will converge to the population mean.

  4. As the sample size increases to infinity, the probability distribution of the sample mean approaches a normal distribution with mean equal to the population mean (μ) and variance \( \sigma^2/n \); μ can be estimated by the sample mean and \( \sigma^2 \) (approximately) by the sample variance. This holds irrespective of the probability distribution of the population. This theorem is the Central Limit Theorem, and is the backbone of parametric statistical inference.

The next obvious question is how big the sample size should be for the central limit theorem to be a good approximation. There is no definite answer: the more asymmetric the population distribution, the larger the sample size needs to be.
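The following simulation sketches the theorem for a markedly skewed (exponential) population; the sample size of 30 per draw and the number of replications are arbitrary choices.

```r
# Means of samples from a skewed population are approximately normally distributed.
set.seed(8)
sample_means <- replicate(5000, mean(rexp(30, rate = 1)))   # n = 30 per sample

hist(sample_means, breaks = 40, main = "Sampling distribution of the mean")
qqnorm(sample_means); qqline(sample_means)           # roughly linear => roughly normal
c(mean(sample_means), sd(sample_means), 1/sqrt(30))  # sd is close to sigma/sqrt(n)
```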

Concept of confidence intervals/likelihood set

As described previously, calculating the MLE is the most general way to perform statistical inference, and the MLE is only an estimate of the actual, unknown parameter θ. Therefore, we need to quantify the accuracy of the MLE as an estimate of θ. In other words, we want to know which other values of θ, in addition to the MLE, provide a reasonably good explanation of the data. Here comes the concept of the likelihood set (LS). LS\( _{\alpha} \) is the set of θs for which l(\( \theta \))/l(MLE) \( \geq \alpha \), where α is a number between 0 and 1, exclusive, usually taken as 0.1 by convention; α quantifies “reasonably well”. LS\( _{\alpha} \) is usually an interval.

As already discussed, the MLE is a random variable and so has a probability distribution, F\( _{MLE} \), associated with it. In addition to the LS, F\( _{MLE} \) can be used to assess the accuracy of the MLE as an estimator of θ. If F\( _{MLE} \) is tightly concentrated around θ, the MLE is a highly accurate estimate of θ. If the MLE of a parameter is the same as the sample mean, as it is for the binomial, normal, Poisson and exponential distributions, the central limit theorem applies. It implies that, in large samples, the MLE can be described as normally distributed, with a mean and variance that can be estimated by the sample mean and sample variance respectively.

For any normal distribution, about 95% of the mass lies within ±2 standard deviations (the square root of the variance) of the mean. So the interval MLE ± 2 standard deviations is a reasonable estimation interval for the population parameter θ. This interval is the 95% confidence interval (95% CI). It can be shown that LS\( _{0.1} \) and the 95% CI are essentially the same in large samples; in smaller samples the LS gives a more accurate measure of θ.
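As a sketch, the interval below is built from a simulated sample of blood pressures as estimate ± 2 standard errors, and compared with the interval R itself reports; the data are invented for illustration.

```r
set.seed(9)
y <- rnorm(100, mean = 110, sd = 15)            # illustrative BP sample

est <- mean(y)
se  <- sd(y) / sqrt(length(y))                  # estimated standard error of the mean
c(lower = est - 2 * se, upper = est + 2 * se)   # approximate 95% CI

t.test(y)$conf.int                              # base R's interval, for comparison
```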

Concept of hypothesis testing

Hypothesis testing is one of the commonest forms of statistical inference in routine practice. In each instance, there are two mutually exclusive (though not necessarily exhaustive) hypotheses: H\( _0 \), the null hypothesis, and H\( _a \), the alternative hypothesis. By tradition, H\( _0 \) says that the current theory is right and H\( _a \) says that the current theory needs updating. The fundamental idea is to see whether the data are “compatible” with the specific H\( _0 \). If so, there is no reason to doubt H\( _0 \); otherwise there is reason to doubt H\( _0 \) and possibly to consider H\( _a \) in its stead. Typically, there is a four-step process:

  1. Formulate a scientific null hypothesis and translate it into statistical terms.

  2. Choose a low-dimensional statistic, say w = w(y\( _1 \), y\( _2 \), …, y\( _n \)), such that the distribution of w is specified under H\( _0 \) and likely to be different under H\( _a \).

  3. Calculate, or approximate, the distribution of w under H\( _0 \).

  4. Check whether the observed value of w, calculated from y\( _1 \), y\( _2 \), …, y\( _n \), is compatible with its distribution under H\( _0 \).

To illustrate in more detail, let us consider testing a new blood pressure medication. H\( _0 \) is that the new medication is no more effective than the old. We will consider two ways the study may be conducted and see how to test the hypotheses in each.

METHOD 1: A large number of patients are enrolled in a study and their blood pressure is measured (large sample size). Half are randomly chosen to receive the new medication (treatment) and the other half receive the old (control). Blood pressure is measured in all patients at baseline and again after a pre-specified time. Let Y\( _{c,i} \) be the change in blood pressure after receiving control in patient i and Y\( _{t,j} \) the change in blood pressure after receiving treatment in patient j. Y\( _{c,1} \), Y\( _{c,2} \), …, Y\( _{c,n} \) is a random sample from distribution f\( _c \) and Y\( _{t,1} \), Y\( _{t,2} \), …, Y\( _{t,n} \) is a random sample from distribution f\( _t \). Usually f\( _c \) and f\( _t \) are unknown. The expected value of the control population is μ\( _c \) and of the treatment population μ\( _t \); the variance of the control population is \( \sigma^2_c \) and of the treatment population \( \sigma^2_t \). None of these parameters is known. The translation of the hypotheses into statistical terms is H\( _0 \): μ\( _t \) = μ\( _c \) and H\( _a \): μ\( _t \) ≠ μ\( _c \). Because we are testing a difference in means, let the low-dimensional statistic w be the difference between the sample means (\( \bar{Y}_t \) – \( \bar{Y}_c \)). If the sample size is reasonably large, the central limit theorem says that w approximately follows a normal distribution under H\( _0 \) with mean 0 and variance \( \sigma^2_w = (\sigma^2_t + \sigma^2_c)/n \). The mean 0 comes from H\( _0 \), and the variance \( \sigma^2_w \) comes from adding the variances of independent random variables. \( \sigma^2_t \) and \( \sigma^2_c \), and hence \( \sigma^2_w \), can be estimated from the sample variances. So we calculate w from the data and see whether it lies within about 2 or 3 standard deviations of where H\( _0 \) says it should be. If it does not, that is evidence that the sample does not belong to the population described by H\( _0 \). The p value is the probability, computed under H\( _0 \), of obtaining a value of w at least as extreme as the one observed; a small p value is evidence against H\( _0 \).
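A sketch of this calculation in R, with simulated data in place of a real trial (the group size, means and standard deviations are invented, with a built-in treatment effect of -5 mmHg):

```r
set.seed(10)
n <- 200
y_control   <- rnorm(n, mean = -2, sd = 8)    # change in BP on the old drug
y_treatment <- rnorm(n, mean = -7, sd = 8)    # change in BP on the new drug

w  <- mean(y_treatment) - mean(y_control)            # the low-dimensional statistic
se <- sqrt(var(y_treatment)/n + var(y_control)/n)    # estimated sd of w under H0
w / se                                               # how many standard deviations from 0?

t.test(y_treatment, y_control)                       # base R equivalent (Welch two-sample test)
```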

METHOD 2: A large number of patients are enrolled in a study and their blood pressure is measured. They are matched together in pairs according to relevant medical characteristics; the two patients in a pair are chosen to be as similar to each other as possible. In each pair, one patient is randomly chosen to receive the new medication (treatment); the other receives the old (control). After a pre-specified amount of time their blood pressures are measured again. Let Y\( _{t,j} \) and Y\( _{c,i} \) be the changes in blood pressure for the j'th treatment and i'th control patients. The researcher records X\( _i \) = 1 if Y\( _{t,j} \) > Y\( _{c,i} \), and 0 otherwise. X\( _1 \), X\( _2 \), …, X\( _n \) is a sample of size n from a Bernoulli distribution with parameter θ. The translation of the hypotheses into statistical terms is H\( _0 \): θ = 0.5 and H\( _a \): θ ≠ 0.5. Let the low-dimensional statistic w be the mean of all the Xs. Under H\( _0 \), the sum of the Xs follows a binomial distribution with parameters θ = 0.5 and n, so we can find the values of w that correspond to cumulative probabilities of 0.025 and 0.975. If the observed w falls within that range, the data are compatible with H\( _0 \); otherwise this is evidence against H\( _0 \) (though we can still be wrong 5% of the time). If the sample size is reasonably large, the central limit theorem says that w approximately follows a normal distribution with mean 0.5 and variance \( \sigma^2_w = \theta(1-\theta)/n = 0.25/n \) under H\( _0 \). So, as before, we calculate w from the data and see whether it lies within about 2 or 3 standard deviations of where H\( _0 \) says it should be. If it does not, that is evidence against H\( _0 \). This is an alternative way to analyze the same data, but it is the worse of the two methods: converting the difference in BP (continuous data) to dichotomous data (BP decreased more or not) loses information, so the hypothesis test becomes less efficient.
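A sketch of this calculation in R, again with invented data (100 pairs, with the treated patient doing better in about 65% of pairs):

```r
set.seed(11)
n_pairs <- 100
x <- rbinom(n_pairs, size = 1, prob = 0.65)   # 1 = treated patient improved more

w <- mean(x)                                  # observed proportion
qbinom(c(0.025, 0.975), size = n_pairs, prob = 0.5) / n_pairs  # range of w expected under H0

binom.test(sum(x), n_pairs, p = 0.5)          # exact test of H0: theta = 0.5
```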

Conclusion

This article has presented the basic concepts of probability and statistical inference in brief. It has not dealt with more complicated topics such as order statistics and non-parametric analysis. The aim has been to open up a new horizon of statistical thinking among us, the ultimate aim of which is to develop our own custom-made methods of statistical inference for the problem at hand.
