Non-transitivity of correlation for random vectors in dimension 3


Dependence in dimension 2 is difficult. But one has to admit that dimension 2 is much simpler than dimension 3! I recently rediscovered a nice paper, Langford, Schwertman & Owens (2001), on the transitivity of the property of being positively correlated (which inspired the odd title of this post). And more recently, Castro Sotos, Vanhoof, Van Den Noortgate & Onghena (2001) conducted a study confirming that there are strong misconceptions about correlation and association (and, I guess, not only because probabilistic reasoning is extremely weak, as mentioned in Stock & Gross (1989); see also Estepa & Batanero (1996), or Batanero, Estepa, Godino & Green (1996)). My understanding is that it is possible to obtain almost anything… even counterintuitive results. For instance, if we want to mix independence and comonotonicity (i.e. perfect positive dependence), most of the "theorems" you might think of will probably be incorrect. Consider the following claim (based on some old examples I was using in my courses 5 or 6 years ago, see e.g. here),

If X and Y are comonotonic, and if Y and Z are comonotonic, then X and Z are comonotonic

Well, this claim seems intuitive, and probably valid. But it is not. Consider the following triplet,

Projections on the bivariate planes of the three-dimensional vector are

Here, X and Y are comonotonic, so are Y and Z, but X and Z are independent… Weird, isn't it?
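For a concrete construction (a toy example of my own, not necessarily the one pictured above): take X and Z independent, uniform on {0,1}, and set Y = X + Z,

xz=expand.grid(x=0:1,z=0:1)   # four equiprobable points: X and Z independent
xz$y=xz$x+xz$z                # Y = X + Z
# a pair is comonotonic if its support contains no discordant pair of points,
# i.e. (u_i-u_j)(v_i-v_j) >= 0 for all i,j
comonotonic=function(u,v) all(outer(u,u,"-")*outer(v,v,"-")>=0)
comonotonic(xz$x,xz$y)   # TRUE : X and Y are comonotonic
comonotonic(xz$y,xz$z)   # TRUE : Y and Z are comonotonic
comonotonic(xz$x,xz$z)   # FALSE: X and Z are not (here they are independent)

Another one?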

If X and Y are comonotonic, and if Y and Z are independent, then X and Z are independent

Again, even if it sounds intuitive, it is not correct… Consider for instance the following three-dimensional distribution,

Here, X and Y are comonotonic, while Y and Z are independent, but X and Z are counter-comonotonic (perfect negative dependence).
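Such a triplet can be built with the same toy machinery (again a construction of my own, reusing comonotonic() from the sketch above): take Y and Z independent, uniform on {0,1}, and set X = Y - Z,

yz=expand.grid(y=0:1,z=0:1)   # Y and Z independent, uniform on {0,1}
yz$x=yz$y-yz$z                # X = Y - Z
comonotonic(yz$x,yz$y)        # TRUE: X and Y are comonotonic
comonotonic(yz$x,-yz$z)       # TRUE: X and -Z comonotonic, i.e. X and Z counter-comonotonic

It is also possible to consider another distribution,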

that can be visualized below,

In that case, X and Y are comonotonic, while Y and Z are independent, but X and Z are comonotonic (perfect positive dependence).
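For completeness, the same construction with X = Y + Z gives this third case (again an illustration of my own):

yz$x=yz$y+yz$z                # now X = Y + Z
comonotonic(yz$x,yz$y)        # TRUE: X and Y are comonotonic
comonotonic(yz$x,yz$z)        # TRUE: and so are X and Z

So clearly, one should be able to construct a counterexample to almost any result of this kind that sounds intuitive.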

To be honest, the problem with intuition is that it usually comes from the Gaussian case, and from the perception that dependence is related to correlation, i.e. Pearson's linear correlation. Consider the case of a three-dimensional random vector $(X_1,X_2,X_3)$, with correlation matrix

$$R=\begin{pmatrix}1 & a & b\\ a & 1 & c\\ b & c & 1\end{pmatrix},\qquad a=r(X_1,X_2),\quad b=r(X_1,X_3),\quad c=r(X_2,X_3).$$

Given two of the correlations, $a$ and $b$, what can we say about the third one, $c$? For instance, the intuition is that if $a$ and $b$ are positive, then $c$ is likely to be positive too (perhaps). The main property of a correlation matrix is that it has to be positive semi-definite. So, playing with eigenvalues, it should be possible to derive inequalities satisfied by $c$.
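To see how binding that constraint can be, here is a small illustration (my own, not from the original post): with $a=b=0.9$, the value $c=0.5$ is already impossible,

R=matrix(c(1,.9,.9,
.9,1,.5,
.9,.5,1),3,3)
min(eigen(R)$values)   # negative: not a valid correlation matrix
det(R)                 # negative too (here -0.06)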

Langford, Schwertman & Owens (2001) show (in their Theorem 3) that the correlations have to satisfy

$$1+2abc-a^2-b^2-c^2\ge 0,$$

which is simply the condition that the determinant of the correlation matrix has to be non-negative; that property was already mentioned in Kendall (1948), as an exercise.
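Solving that quadratic inequality in $c$ makes the bounds explicit,

$$1+2abc-a^2-b^2-c^2\ge 0 \iff c^2-2ab\,c+(a^2+b^2-1)\le 0,$$

and since the discriminant of this quadratic in $c$ is $4a^2b^2-4(a^2+b^2-1)=4(1-a^2)(1-b^2)$,

$$ab-\sqrt{(1-a^2)(1-b^2)}\;\le\; c\;\le\; ab+\sqrt{(1-a^2)(1-b^2)}.$$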

But is that condition necessary and sufficient? Since I am extremely lazy, let us run some numerical computations to visualize the possible values for $c$, as a function of $a$ and $b$. Consider the following code,

U=seq(-1,1,by=.1)    # grid of values for a and b
V=seq(-1,1,by=.001)  # fine grid of candidate values for c

# smallest eigenvalue of the correlation matrix [1 a b ; a 1 c ; b c 1]
DF=function(a,b,c){
min(eigen(matrix(c(1,a,b,a,1,c,b,c,1),3,3))$values)}
# largest (FSUP) and smallest (FINF) c keeping the matrix positive-definite
FSUP=function(a,b){
V[max(which(sapply(V,function(c) DF(a,b,c))>0))]}
FINF=function(a,b){
V[min(which(sapply(V,function(c) DF(a,b,c))>0))]}
MSUP=outer(U,U,Vectorize(FSUP))
MINF=outer(U,U,Vectorize(FINF))
library(RColorBrewer)
clr=rev(brewer.pal(6,"RdBu"))
U=U[2:20]            # drop the endpoints a,b=-1,1, where the matrix is singular
MSUP=MSUP[2:20,2:20]
MINF=MINF[2:20,2:20]
persp(U,U,MSUP,col="green",shade=TRUE)
image(U,U,MSUP,breaks=((-3):3)/3,col=clr)
persp(U,U,MINF,col="green",shade=TRUE)
image(U,U,MINF,breaks=((-3):3)/3,col=clr)
Here, we can derive the lower and the upper bounds for $c$, as functions of $a$ and $b$.
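As a sanity check (my own addition, not in the original post), the numerical bounds can be compared with the closed-form expression derived above:

a=.5; b=.5
FSUP(a,b)                       # about 0.999: the grid stops just below the true bound
a*b+sqrt((1-a^2)*(1-b^2))       # closed-form upper bound: 1
FINF(a,b)                       # about -0.499
a*b-sqrt((1-a^2)*(1-b^2))       # closed-form lower bound: -0.5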

In the dark blue area, the bound for the correlation can be very low, while in the dark red area it is very high (either the lower bound on the left, or the upper bound on the right). Since the surfaces might be hard to read, it is possible to fix, for instance, $b$, and derive the bounds for $c$ as a function of $a$ only.
# now fix b = -0.7, and look at the bounds on c as a function of a alone
V=seq(-1,1,by=.001)
U=seq(-1,1,by=.1)
U=c(-.9999,U[2:(length(U)-1)],.9999)     # avoid the singular endpoints -1 and 1
V=c(-.99999,V[2:(length(V)-1)],.99999)
FSUP=function(a){
DF=function(c){min(eigen(matrix(
c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)}
V[max(which(Vectorize(DF)(V)>0))]}
FINF=function(a){
DF=function(c){min(eigen(matrix(
c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)}
V[min(which(Vectorize(DF)(V)>0))]}

VS=Vectorize(FSUP)(U)   # upper bound on c
VI=Vectorize(FINF)(U)   # lower bound on c
plot(c(U,U),c(VS,VI),col="white")    # empty frame with the right range
polygon(c(U,rev(U)),c(VS,rev(VI)),
col="yellow",border=NA)              # admissible region for c
lines(U,VS,lwd=2,col="red")
lines(U,VI,lwd=2,col="red")
On the graphs below, we have the bounds for $c$ with a negative correlation $b$ (on the left, $b=-0.7$, as in the code above) and a positive one (on the right, $b=+0.7$),

We do observe extremely nice ellipses… Consider the case of a null correlation $b=0$: the determinant condition reduces to $1-a^2-c^2\ge 0$, so the region of possible values for $(a,c)$ is the unit disk $a^2+c^2\le 1$.

The interpretation is that if $b$ is null, and so is $a$, then $c$ can take any value between $-1$ and $+1$ (assuming the marginal distributions allow such values, e.g. Gaussian marginals). On the other hand, if $a$ is either $-1$ or $+1$ (perfect negative or positive correlation), then $c$ has to be null…
