Recently I had a discussion with a student about the variability of results obtained from the cross-validation procedure. While the subject is well known, there are not many examples on the web showing it, so I have written up a simple presentation.
Results from cross-validation are reported as standard by the rpart procedure (printcp and plotcp), and the optimal cp is selected from them for tree pruning. Many people I have talked to think that, because each time rpart is run on the same data set the same tree is obtained, the printcp and plotcp results also do not change. However, it should be remembered that the x-val relative error they return is based on random sampling and is not constant. Therefore two runs of rpart might indicate different values of the optimal cp.
Here is code that illustrates this situation using the Participation data set from the Ecdat package:
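A minimal sketch of such a simulation (the exact original snippet is not reproduced here; the model formula `lfp ~ .` and the number of repetitions are my assumptions):

```r
library(rpart)
library(Ecdat)

data(Participation)

set.seed(1)
# Rerun rpart many times on the same data. The fitted tree is identical
# every time, but the cross-validated error (xerror) is re-estimated from
# random folds, so the size minimizing it can change between runs.
best.size <- replicate(100, {
  fit <- rpart(lfp ~ ., data = Participation)
  cp <- fit$cptable
  # tree size (number of leaves) = nsplit + 1 at the minimal xerror
  cp[which.min(cp[, "xerror"]), "nsplit"] + 1
})

# frequency with which each tree size wins the x-val criterion
table(best.size)

# one run of plotcp to show the x-val relative error curve
plotcp(rpart(lfp ~ ., data = Participation))
```

Running `table(best.size)` several times (with different seeds) shows that the winning tree size is not fixed across reruns.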
The resulting plot is the following:
We can see that, using the x-val criterion, a tree of size 5 is selected in around two thirds of the cases, and size 6 is found best otherwise.
The other issue is why there is no variability of the x-val error for tree size 1 and almost none at size 2. The answer is that for those tree sizes the split in every cross-validation fold is made on a nominal variable (for example foreign for tree size 1) at the same cut-point, so all resulting trees are identical (the one outlier at tree size 2 is due to a single different split). For tree sizes 5 and 6, continuous variables enter the tree (age and lnnlinc) and the cut-points start moving, so the trees grown in different cross-validation runs differ.
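This per-size behaviour can be checked directly by collecting the xerror column of the cptable over repeated runs and looking at its spread for each tree size (again a sketch, with `lfp ~ .` and 20 repetitions as assumptions):

```r
library(rpart)
library(Ecdat)

data(Participation)

set.seed(1)
# The tree grown on the full data is deterministic, so the cptable has the
# same rows in every run; only the cross-validated columns change.
xerr <- replicate(20, {
  cp <- rpart(lfp ~ ., data = Participation)$cptable
  setNames(cp[, "xerror"], cp[, "nsplit"] + 1)
})

# standard deviation of x-val relative error for each tree size:
# essentially zero for sizes 1-2, clearly positive once continuous
# splits (age, lnnlinc) enter the tree
apply(xerr, 1, sd)

# visual version of the same comparison
boxplot(t(xerr), xlab = "tree size", ylab = "x-val relative error")
```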