
Tuning LaplacesDemon

I was continuing with my Bayesian algorithms in R exercise. For these exercises I port SAS PROC MCMC examples to the various R solutions. However, the next example was a logit model, and that's just too simple, especially after last week's Jacobian for the Box-Cox transformation. Examples for logit and probit models abound on the web. Hence I took the opportunity to experiment a bit with the various algorithms which LaplacesDemon offers, using the logit model. I did nine of them, so I might do some more next week.
One thing I noticed for all algorithms: LaplacesDemon has a built-in calculation which chooses the length of the burn-in and suggests a thinning. This does not work very well. Sometimes it seems to have many samples on the target distribution, yet indicates that all samples are burn-in. Other times it thinks the burn-in is done, yet does not find the expected results. Running several chains and deciding by human judgment seems more efficient than these long chains with unclear decisions.
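For reference, these recommendations can be read off the fitted object directly; a minimal sketch, assuming a demonoid object Fit as returned by LaplacesDemon() (the slot names match the printed output in the listings below):

Fit$Rec.BurnIn.Thinned    # recommended burn-in, in thinned samples
Fit$Rec.BurnIn.UnThinned  # recommended burn-in, in un-thinned samples
Fit$Rec.Thinning          # recommended thinning
Consort(Fit)              # prints diagnostics plus a suggested next call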
It is a bit scary and disappointing that, even though this is a very small and simple model and the number of samples is sometimes large, the posterior sometimes does not cover the frequentist solution. How would I know the answer is wrong for more complex models, where the answer is not so obvious?

Data

Data are from example 3 of SAS PROC MCMC. The PROC MCMC estimates are -11.77 and 0.292 respectively. It is a logistic model. The glm() estimates are thus:
Call:  glm(formula = cbind(y, n - y) ~ x, family = binomial, data = set2)

Coefficients:
(Intercept)            x 
   -11.2736       0.2793 

Degrees of Freedom: 19 Total (i.e. Null);  18 Residual
Null Deviance:        71.8
Residual Deviance: 15.25     AIC: 46.44

Setup

For each algorithm I started with the default number of samples and thinning. Based on the output, the thinning and number of samples were adapted until the thinning was at least as large as the recommended thinning and there was a decent number of samples after burn-in.
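In code, that tuning procedure amounts to something like the loop below. This is a sketch of the idea only (in practice I adapted the calls by hand), assuming the Model, MyData and Initial.Values from the Common Code section at the end, and the slot names shown in the output listings:

# Sketch of the tuning loop: enlarge Thinning and Iterations until the
# run satisfies its own recommendations.
Iterations <- 10000; Thinning <- 10
repeat {
    Fit <- LaplacesDemon(Model=Model, Data=MyData,
        Initial.Values=Initial.Values, Iterations=Iterations,
        Status=2000, Thinning=Thinning)
    if (Thinning >= Fit$Rec.Thinning &&
        Fit$Rec.BurnIn.Thinned < 0.5*Fit$Thinned.Samples) break
    Thinning <- max(Thinning, Fit$Rec.Thinning)
    Iterations <- 2*Iterations
}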

MWG


MWG needed 80000 samples before it gave answers. Notice that beta[2] was not mixing very well. The estimates seem off.
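Poor mixing like this also shows up as a low effective sample size. One quick check, assuming the ESS() helper and the plot method that LaplacesDemon provides for demonoid objects:

# ESS far below the number of thinned samples signals poor mixing,
# as seen for beta[2] here.
ESS(Fit$Posterior1)
plot(Fit, BurnIn=Fit$Rec.BurnIn.Thinned, MyData, PDF=FALSE)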


Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 80000, Status = 2000, Thinning = 30)

Acceptance Rate: 0.24453
Algorithm: Metropolis-within-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
 beta[1]  beta[2]
2.835066 2.835066

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 45.242     45.360
pD    1.966      1.955
DIC  47.208     47.315
Initial Values:
[1] -10   0

Iterations: 80000
Log(Marginal Likelihood): -23.51154
Minutes of run-time: 0.35
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 1862
Recommended Burn-In of Un-thinned Samples: 55860
Recommended Thinning: 34
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 2666
Thinning: 30


Summary of All Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]   -9.0326306 1.04621897 0.220388172  7.322802 -11.2499597  -8.9042795
beta[2]    0.2209152 0.02658052 0.005719566  7.354199   0.1730201   0.2174406
Deviance  45.2421985 1.98273668 0.194161638 22.653492  42.5636767  44.7799650
LP       -31.4080976 0.98541260 0.095926332 23.196296 -33.8514305 -31.1737818
                  UB
beta[1]   -7.1874098
beta[2]    0.2772263
Deviance  50.1501559
LP       -30.0892713


Summary of Stationary Samples
                Mean         SD        MCSE      ESS          LB      Median
beta[1]   -8.9359418 0.99836162 0.349118267 3.269911 -10.6298830  -8.8780160
beta[2]    0.2188631 0.02541074 0.009077136 2.979785   0.1818618   0.2180453
Deviance  45.3601829 1.97741388 0.197527290 5.636751  42.7114169  45.2011226
LP       -31.4661714 0.98254393 0.095926332 5.760722 -33.7158111 -31.3894625
                  UB
beta[1]   -7.4135378
beta[2]    0.2609228
Deviance  49.8800864
LP       -30.1541170


HARM

No specs

I ended with 160000 samples. From the plot I’d say that is somewhat overkill.

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 160000, Status = 2000, Thinning = 36, Algorithm = "HARM")

Acceptance Rate: 0.06213
Algorithm: Hit-And-Run Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
2.863159336 0.001961247

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.449     44.449
pD    1.704      1.704
DIC  46.153     46.153
Initial Values:
[1] -10   0

Iterations: 160000
Log(Marginal Likelihood): -23.13795
Minutes of run-time: 0.38
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 36
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 4444
Thinning: 36


Summary of All Samples
                Mean         SD        MCSE       ESS          LB     Median
beta[1]  -12.5516619 1.69184474 0.251682993  43.36212 -15.7525684 -12.537176
beta[2]    0.3121025 0.04404281 0.006471107  46.72382   0.2306382   0.312301
Deviance  44.4489336 1.84619493 0.166569318 221.64136  42.5029914  43.927156
LP       -31.0503518 0.93299371 0.085590782 211.80050 -33.5115921 -30.783807
                  UB
beta[1]   -9.3734152
beta[2]    0.3968852
Deviance  49.3254872
LP       -30.0617562


Summary of Stationary Samples
                Mean         SD        MCSE       ESS          LB     Median
beta[1]  -12.5516619 1.69184474 0.251682993  43.36212 -15.7525684 -12.537176
beta[2]    0.3121025 0.04404281 0.006471107  46.72382   0.2306382   0.312301
Deviance  44.4489336 1.84619493 0.166569318 221.64136  42.5029914  43.927156
LP       -31.0503518 0.93299371 0.085590782 211.80050 -33.5115921 -30.783807
                  UB
beta[1]   -9.3734152
beta[2]    0.3968852
Deviance  49.3254872
LP       -30.0617562

Specs: list(alpha.star=0.234, B=NULL)

This sometimes got stuck, making all samples the same. Such faulty runs were repeated. In addition, it seems the algorithm is not able to detect a good run: when I got to a whopping 1280000 samples I decided enough is enough.
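A stuck run is easy to detect programmatically; a minimal check of my own (not part of LaplacesDemon): when every thinned sample is identical, the chain variance is zero.

# Hypothetical stuck-chain check: (near-)zero variance in a column of
# the thinned posterior means the sampler never moved.
if (any(apply(Fit$Posterior1, 2, var) < 1e-12))
    warning('chain appears stuck; rerun with other initial values')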



Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 1280000, Status = 8000, Thinning = 42, Algorithm = "HARM",
    Specs = list(alpha.star = 0.234, B = NULL))

Acceptance Rate: 0.23416
Algorithm: Hit-And-Run Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
3.639417089 0.002441388

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.359     44.328
pD    1.597      1.532
DIC  45.956     45.860
Initial Values:
[1] -10   0

Iterations: 1280000
Log(Marginal Likelihood): -23.38909
Minutes of run-time: 2.91
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 6094
Recommended Burn-In of Un-thinned Samples: 255948
Recommended Thinning: 44
Specs: (NOT SHOWN HERE)
Status is displayed every 8000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 30476
Thinning: 42


Summary of All Samples
                Mean         SD        MCSE       ESS         LB      Median
beta[1]  -11.8540232 1.90772737 0.141469541  28.83441 -15.618594 -11.8276283
beta[2]    0.2941959 0.04938246 0.003631577  29.75273   0.206555   0.2931946
Deviance  44.3593200 1.78715332 0.031531123 190.88005  42.500061  43.8415966
LP       -30.9974154 0.90059957 0.016062596 182.03381 -33.375122 -30.7354077
                 UB
beta[1]   -8.489146
beta[2]    0.392589
Deviance  49.076360
LP       -30.058998


Summary of Stationary Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]  -11.7694571 1.87121305 0.153383946  24.00847 -15.4323928 -11.8230837
beta[2]    0.2920393 0.04844789 0.003937189  24.51596   0.2027361   0.2930413
Deviance  44.3276492 1.75044211 0.033218178 182.18810  42.5044563  43.8258628
LP       -30.9805115 0.87993039 0.016062596 176.04438 -33.2870801 -30.7284209
                  UB
beta[1]   -8.3411645
beta[2]    0.3869563
Deviance  48.9102683
LP       -30.0612554

Adaptive Directional Metropolis-within-Gibbs

No specs

This algorithm seemed able to get stuck outside the target region. As with other algorithms, it seems that increasing the thinning only leads to suggestions to increase it further.

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 250000, Status = 2000, Thinning = 35, Algorithm = "ADMG")

Acceptance Rate: 0.0841
Algorithm: Adaptive Directional Metropolis-within-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
3.694667553 0.002472681

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.362     44.362
pD    1.790      1.790
DIC  46.152     46.152
Initial Values:
[1] -10   0

Iterations: 250000
Log(Marginal Likelihood): NA
Minutes of run-time: 1.82
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 38
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 7142
Thinning: 35


Summary of All Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]  -11.5514694 1.92227432 0.252091958  40.70347 -15.7178588 -11.4718511
beta[2]    0.2863682 0.04966254 0.006471524  40.98657   0.1965967   0.2829988
Deviance  44.3618406 1.89210234 0.068523057 227.09714  42.4858150  43.7976307
LP       -30.9951604 0.95027797 0.034622622 221.64249 -33.4547190 -30.7101949
                  UB
beta[1]   -8.0953413
beta[2]    0.3936748
Deviance  49.2894939
LP       -30.0514927


Summary of Stationary Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]  -11.5514694 1.92227432 0.252091958  40.70347 -15.7178588 -11.4718511
beta[2]    0.2863682 0.04966254 0.006471524  40.98657   0.1965967   0.2829988
Deviance  44.3618406 1.89210234 0.068523057 227.09714  42.4858150  43.7976307
LP       -30.9951604 0.95027797 0.034622622 221.64249 -33.4547190 -30.7101949
                  UB
beta[1]   -8.0953413
beta[2]    0.3936748
Deviance  49.2894939
LP       -30.0514927

Specs: list(n = 0, Periodicity = 100)

This seems to be a run which does not mix very well in beta[1]. The posterior is not on the target either. However, I do not see an obvious characteristic from which to notice the latter.
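Since the frequentist answer is known for this model, one blunt way to notice an off-target posterior is to check whether the glm() estimates fall within the posterior bounds; a sketch, using the LB/UB columns of the summary matrix shown in the listings (for complex models no such reference would exist):

# Sanity check against the frequentist fit.
freq <- coef(glm(cbind(y, n-y) ~ x, data=set2, family=binomial))
post <- Fit$Summary1[c('beta[1]','beta[2]'),]
freq >= post[,'LB'] & freq <= post[,'UB']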

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 64000, Status = 2000, Thinning = 33, Algorithm = "ADMG",
    Specs = list(n = 0, Periodicity = 100))

Acceptance Rate: 0.09229
Algorithm: Adaptive Directional Metropolis-within-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
4.718765252 0.003150388

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.625     44.368
pD    1.848      0.914
DIC  46.473     45.282
Initial Values:
[1] -10   0

Iterations: 64000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.36
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 1544
Recommended Burn-In of Un-thinned Samples: 50952
Recommended Thinning: 32
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1939
Thinning: 33


Summary of All Samples
               Mean         SD       MCSE      ESS          LB      Median
beta[1]  -11.314847 2.17303791 0.51002442 11.27064 -15.1041582 -11.4860852
beta[2]    0.279974 0.05601643 0.01310839 10.57305   0.1747337   0.2834938
Deviance  44.625141 1.92250413 0.19216964 59.12247  42.5136669  44.1268753
LP       -31.124617 0.95756642 0.09526603 60.52000 -33.5147916 -30.8800217
                  UB
beta[1]   -7.2420670
beta[2]    0.3795564
Deviance  49.4135552
LP       -30.0677263


Summary of Stationary Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]  -13.3566117 0.72016481 0.299751317  5.473664 -15.0713103 -13.1769958
beta[2]    0.3325182 0.01971691 0.007968755  6.324196   0.3014903   0.3294776
Deviance  44.3682834 1.35175074 0.137999589 30.199161  42.7948298  44.0306538
LP       -31.0192877 0.68081646 0.095266030 28.411038 -32.6277457 -30.8439440
                  UB
beta[1]  -12.2282779
beta[2]    0.3766297
Deviance  47.5762853
LP       -30.2209601

Adaptive Griddy-Gibbs

A thinning of 1000 was proposed.

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 80000, Status = 2000, Thinning = 1000, Algorithm = "AGG",
    Specs = list(Grid = GaussHermiteQuadRule(3)$nodes, dparm = NULL,
        smax = Inf, CPUs = 1, Packages = NULL, Dyn.libs = NULL))

Acceptance Rate: 1
Algorithm: Adaptive Griddy-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
19.42384441  0.01546567

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
          All Stationary
Dbar   53.739     53.739
pD   2320.547   2320.547
DIC  2374.286   2374.286
Initial Values:
[1] -10   0

Iterations: 80000
Log(Marginal Likelihood): NA
Minutes of run-time: 2.73
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 1000
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 80
Thinning: 1000


Summary of All Samples
               Mean         SD       MCSE ESS          LB      Median
beta[1]  -10.922145  4.4338567 0.50560810  80 -16.5329510 -11.2359881
beta[2]    0.269604  0.1214608 0.01367873  80   0.1295182   0.2783504
Deviance  53.738973 68.1255818 7.52694767  80  42.5397709  44.3774010
LP       -35.684516 34.0782560 3.76525972  80 -40.1716354 -31.0031726
                  UB
beta[1]   -5.6863543
beta[2]    0.4200815
Deviance  62.5858557
LP       -30.0729465


Summary of Stationary Samples
               Mean         SD       MCSE ESS          LB      Median
beta[1]  -10.922145  4.4338567 0.50560810  80 -16.5329510 -11.2359881
beta[2]    0.269604  0.1214608 0.01367873  80   0.1295182   0.2783504
Deviance  53.738973 68.1255818 7.52694767  80  42.5397709  44.3774010
LP       -35.684516 34.0782560 3.76525972  80 -40.1716354 -31.0031726
                  UB
beta[1]   -5.6863543
beta[2]    0.4200815
Deviance  62.5858557
LP       -30.0729465

Adaptive Hamiltonian Monte Carlo


Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 150000, Status = 2000, Thinning = 36, Algorithm = "AHMC",
    Specs = list(epsilon = rep(0.02, length(Initial.Values)),
        L = 2, Periodicity = 10))

Acceptance Rate: 0.36918
Algorithm: Adaptive Hamiltonian Monte Carlo
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
     beta[1]      beta[2]
1.1442983280 0.0008202951

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 43.736     43.682
pD    1.214      1.272
DIC  44.950     44.954
Initial Values:
[1] -10   0

Iterations: 150000
Log(Marginal Likelihood): NA
Minutes of run-time: 1.74
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 2912
Recommended Burn-In of Un-thinned Samples: 104832
Recommended Thinning: 36
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 4166
Thinning: 36


Summary of All Samples
                Mean         SD        MCSE        ESS          LB      Median
beta[1]  -11.4418192 1.06961398 0.191704524   7.357239 -14.1113189 -11.4442594
beta[2]    0.2837334 0.02830491 0.004949075   9.071151   0.2346106   0.2835194
Deviance  43.7356469 1.55816164 0.036118007 526.351697  42.4689017  43.1991684
LP       -30.6795260 0.78105728 0.018367981 492.409683 -32.7826226 -30.4108318
                  UB
beta[1]   -9.6350028
beta[2]    0.3515657
Deviance  47.9431739
LP       -30.0443227


Summary of Stationary Samples
                Mean         SD       MCSE        ESS          LB      Median
beta[1]  -11.9854873 0.56317415 0.15499286   8.169076 -12.9867511 -11.9369396
beta[2]    0.2982346 0.01576288 0.00396860  11.739663   0.2675003   0.2982497
Deviance  43.6824281 1.59499752 0.04774315 545.963385  42.4718352  43.1363939
LP       -30.6588754 0.79841066 0.01836798 518.072910 -33.0652295 -30.3900004
                  UB
beta[1]  -10.8925810
beta[2]    0.3269875
Deviance  48.4916661
LP       -30.0465224

Adaptive Metropolis

A high thinning of 800, but nice mixing.

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 80000, Status = 2000, Thinning = 800, Algorithm = "AM",
    Specs = list(Adaptive = 200, Periodicity = 200))

Acceptance Rate: 0.26118
Algorithm: Adaptive Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
     beta[1]      beta[2]
11.850286307  0.008006239

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.437     44.437
pD    1.630      1.630
DIC  46.067     46.067
Initial Values:
[1] -10   0

Iterations: 80000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.36
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 800
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 100
Thinning: 800


Summary of All Samples
                Mean         SD        MCSE ESS          LB      Median
beta[1]  -11.7026349 1.97444818 0.188498237 100 -15.4468354 -11.6906840
beta[2]    0.2905268 0.04989136 0.004767928 100   0.2047257   0.2898906
Deviance  44.4368803 1.80543310 0.166862161 100  42.5326068  43.8582816
LP       -31.0345215 0.90945995 0.084411163 100 -33.4044687 -30.7391583
                  UB
beta[1]   -8.3034612
beta[2]    0.3801087
Deviance  49.1461781
LP       -30.0717992


Summary of Stationary Samples
                Mean         SD        MCSE ESS          LB      Median
beta[1]  -11.7026349 1.97444818 0.188498237 100 -15.4468354 -11.6906840
beta[2]    0.2905268 0.04989136 0.004767928 100   0.2047257   0.2898906
Deviance  44.4368803 1.80543310 0.166862161 100  42.5326068  43.8582816
LP       -31.0345215 0.90945995 0.084411163 100 -33.4044687 -30.7391583
                  UB
beta[1]   -8.3034612
beta[2]    0.3801087
Deviance  49.1461781
LP       -30.0717992

Adaptive Metropolis-within-Gibbs

A nice example where LaplacesDemon first makes me increase the number of samples, then suddenly I get a run where it decides no burn-in was needed. Needless to say, with over a million samples I did not increase the thinning any further.

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 1200000, Status = 2000, Thinning = 41, Algorithm = "AMWG",
    Specs = list(Periodicity = 200))

Acceptance Rate: 0.24915
Algorithm: Adaptive Metropolis-within-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
 beta[1]  beta[2]
2.835066 2.835066

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.407     44.407
pD    1.896      1.896
DIC  46.304     46.304
Initial Values:
[1] -10   0

Iterations: 1200000
Log(Marginal Likelihood): NA
Minutes of run-time: 4.95
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 44
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 29268
Thinning: 41


Summary of All Samples
               Mean         SD        MCSE       ESS          LB     Median
beta[1]  -11.826585 1.96434369 0.146404922  39.11735 -15.8139146 -11.715120
beta[2]    0.293541 0.05072947 0.003808978  38.10170   0.1983134   0.290513
Deviance  44.407436 1.94752859 0.026815581 197.45010  42.4872692  43.822249
LP       -31.021258 0.98017230 0.013578417 192.69216 -33.7006842 -30.727256
                  UB
beta[1]   -8.0854510
beta[2]    0.3961088
Deviance  49.7428566
LP       -30.0525790


Summary of Stationary Samples
               Mean         SD        MCSE       ESS          LB     Median
beta[1]  -11.826585 1.96434369 0.146404922  39.11735 -15.8139146 -11.715120
beta[2]    0.293541 0.05072947 0.003808978  38.10170   0.1983134   0.290513
Deviance  44.407436 1.94752859 0.026815581 197.45010  42.4872692  43.822249
LP       -31.021258 0.98017230 0.013578417 192.69216 -33.7006842 -30.727256
                  UB
beta[1]   -8.0854510
beta[2]    0.3961088
Deviance  49.7428566
LP       -30.0525790

Common Code

All calls use the same data, model and plotting as given below.
s1 <- scan(what=list(integer(),double(),double()),text='
6 0 25.7  8 2 35.9 5 2 32.9 7 7 50.4 6 0 28.3
7 2 32.3  5 1 33.2 8 3 40.9 6 0 36.5 6 1 36.5
6 6 49.6  6 3 39.8 6 4 43.6 6 1 34.1 7 1 37.4
8 2 35.2  6 6 51.3 5 3 42.5 7 0 31.3 3 2 40.6')
set2 <- data.frame(n=s1[[1]],x=s1[[3]],y=s1[[2]])
set2
glm(cbind(y,n-y)~ x,data=set2,family=binomial)
library(LaplacesDemon)
J <- 2 #Number of parameters
mon.names <- "LP"
parm.names <- c("beta[1]","beta[2]")
PGF <- function(Data) return(rnormv(Data$J,0,1000))
MyData <- list(J=J, PGF=PGF, n=set2$n, mon.names=mon.names,
    parm.names=parm.names, x=set2$x, y=set2$y)

Model <- function(parm, Data)
{
    ### Parameters
    beta <- parm[1:Data$J]
    ### Log-Prior
    beta.prior <- sum(dnormv(beta, 0, 1000, log=TRUE))
    ### Log-Likelihood
    mu <- beta[1] + beta[2]*Data$x
    p <- invlogit(mu)
    LL <- sum(dbinom(Data$y, Data$n, p, log=TRUE))
    ### Log-Posterior
    LP <- LL + beta.prior
    Modelout <- list(LP=LP, Dev=-2*LL, Monitor=LP,
        yhat=rbinom(length(p), Data$n, p), parm=parm)
    return(Modelout)
}
Initial.Values <- c(-10,0)


myplot <- function(where='none') {
    if (where!='none') {
        png(paste(where,'.png',sep=''))
    }
    par(mfrow=c(2,2),mar=rep(2,4),oma=c(0,0,2,0))
    plot(ts(Fit$Posterior1[,1]),ylab='')
    plot(ts(Fit$Posterior2[,1]),ylab='')
    plot(ts(Fit$Posterior1[,2]),ylab='')
    plot(ts(Fit$Posterior2[,2]),ylab='')
    title(main=Fit$Algorithm,outer=TRUE)
    if (where!='none') dev.off()
}
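
A typical run then combines these pieces as follows (a usage sketch; the call shown is the MWG one from the first section):

# Fit with one algorithm, then plot the thinned chains.
Fit <- LaplacesDemon(Model=Model, Data=MyData,
    Initial.Values=Initial.Values, Iterations=80000,
    Status=2000, Thinning=30)
Fit
myplot()       # traces to screen
myplot('MWG')  # or to MWG.png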
