PK/PD reserving models


This is a follow-up post on hierarchical compartmental reserving models using PK/PD models. It will show how differential equations can be used with Stan/brms and how correlated group-level terms of the same grouping factor can be modelled.

PK/PD is usually short for pharmacokinetic/pharmacodynamic models, but as Eric Novik of Generable pointed out to me, it could also be short for Payment Kinetics/Payment Dynamics models in an insurance context.

In this post I’d like to discuss an extension of Jake Morris’ hierarchical compartmental reserving model, as described in his original paper (Morris (2016)) and my previous post. The extension allows for a time-varying parameter \(k_{er}(t)\), describing the changing rate of earning and reporting, and for correlation between \(RLR\), the reported loss ratio, and \(RRF\), the reserve robustness factor. A positive correlation would give evidence of a reserving cycle, i.e. in years with higher initial reported loss ratios a higher proportion of reported losses is paid.

I am very grateful for the support Jake Morris and Paul-Christian Bürkner have given me over the last few weeks, answering questions around the model and brms (Bürkner (2017)).

PK/PD model

First of all, let’s take a look at my new set of differential equations, describing the dynamics of exposure (\(EX\)), outstanding claims (\(OS\)) and paid claims (\(PD\)) over time. I replaced the constant parameter \(k_{er}\) with the linear function \(\beta_{er} \cdot t\).

\[ \begin{aligned} dEX/dt & = -\beta_{er} \cdot t \cdot EX \\ dOS/dt & = \beta_{er} \cdot t \cdot RLR \cdot EX - k_p \cdot OS \\ dPD/dt & = k_{p} \cdot RRF \cdot OS \end{aligned} \]

The dynamical system is no longer autonomous, and initially I couldn’t be bothered to solve it analytically. Hence, I use an ODE solver instead, but I will get back to integrating the differential equations later.
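
Before handing the system over to Stan, a quick sanity check in R is straightforward. Here is a minimal sketch using the deSolve package (my addition, not part of the original workflow), with the same illustrative parameters and initial values as in the Stan example below:

library(deSolve)
# Right-hand side of the claims process ODEs
claimsODE <- function(t, y, parms){
  with(as.list(c(y, parms)), {
    dEX <- -Ber * t * EX                 # exposure
    dOS <- Ber * t * RLR * EX - kp * OS  # outstanding claims
    dPD <- kp * RRF * OS                 # paid claims
    list(c(dEX, dOS, dPD))
  })
}
head(ode(y = c(EX = 100, OS = 0, PD = 0),
         times = seq(0, 10, by = 0.1),
         func = claimsODE,
         parms = c(Ber = 5, kp = 0.5, RLR = 0.8, RRF = 0.9)))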

Integrating ODEs with Stan

Fortunately, an ODE solver is part of the Stan language. The following code demonstrates how I can integrate the differential equations in Stan. You will notice that the model section is empty, as all I want to do for now is run the solver.

functions{
  real[] claimsprocess(real t, real[] y, real[] theta, 
                       real[] x_r, int[] x_i){
    real dydt[3];
    dydt[1] = - theta[1] * t * y[1];                            // Exposure
    dydt[2] = theta[1] * t * theta[3] * y[1] - theta[2] * y[2]; // Outstanding
    dydt[3] = theta[2] * theta[4] * y[2];                       // Paid
    
    return dydt;
  }
}
data {
  real theta[4];
  int N_t;
  real times[N_t];  
  real C0[3];
}
parameters{
}
transformed parameters{
  real C[N_t, 3];
  C = integrate_ode_rk45(claimsprocess, C0, 0, times, theta,
                         rep_array(0.0, 0), rep_array(1, 1));
}
model{
}
generated quantities{
  real Exposure[N_t];
  real OS[N_t];
  real Paid[N_t];
  Exposure = C[, 1];
  OS = C[, 2];
  Paid = C[, 3];
}

To run the Stan code I provide as input a list with the parameters, the number of data points I’d like to generate, the timeline and the initial values. Note also that I have to set algorithm = "Fixed_param", as I run a deterministic model; hence one iteration and one chain are sufficient.

library(rstan)
# ODEmodel is the compiled Stan program shown above, e.g. (assuming the
# Stan code has been saved in a file 'ODEmodel.stan'):
# ODEmodel <- stan_model("ODEmodel.stan")
input_data <- list(
  theta=c(Ber=5, kp=0.5, RLR=0.8, RRF=0.9),
  N_t=100, 
  times=seq(from = 0.1, to = 10, length.out = 100),
  C0=c(100, 0, 0))
samples <- sampling(ODEmodel, data=input_data, 
                    algorithm = "Fixed_param", 
                    iter = 1, chains = 1)

Let’s take a look at the output.

X <- extract(samples)
out <- data.frame(
  Time = input_data$times,
  Exposure = c(X[["Exposure"]]),
  Outstandings = c(X[["OS"]]),
  Paid = c(X[["Paid"]])
)
library(latticeExtra)
xyplot(Exposure + Outstandings + Paid ~ Time,
       data = out, as.table=TRUE, t="l",
       ylab="Amounts ($)", auto.key=list(space="top", columns=3),
       par.settings = theEconomist.theme(with.bg = TRUE, box = "transparent"))

The exposure declines faster and the outstanding curve has a more pronounced peak as a result of replacing \(k_{er}\) with \(\beta_{er} \cdot t\).
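
To see why, compare the implied exposure curves directly: with a constant \(k_{er}\) the exposure decays exponentially, \(\Pi \exp(-k_{er} t)\), whereas with \(\beta_{er} \cdot t\) it decays like a Gaussian in \(t\), \(\Pi \exp(-\beta_{er} t^2/2)\). A small sketch (the parameter values here are purely illustrative):

curve(100 * exp(-5 * x^2/2), from = 0, to = 3, lty = 2,
      xlab = "Time", ylab = "Exposure")       # k_er(t) = beta_er * t
curve(100 * exp(-2 * x), add = TRUE, lty = 1) # constant k_er
legend("topright", c("constant k_er", "beta_er * t"), lty = c(1, 2))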

Updated model

My updated model is not much different from the one presented in the earlier post, apart from the fact that I now allow for correlation between \(RLR\) and \(RRF\), and that the mean function \(\tilde{f}\) is the integral of the ODEs above.

\[ \begin{aligned} y(t) & \sim \mathcal{N}(\tilde{f}(t, \Pi, \beta_{er}, k_p, RLR_{[i]}, RRF_{[i]}), \sigma_{y[\delta]}^2) \\ \begin{pmatrix} RLR_{[i]} \\ RRF_{[i]}\end{pmatrix} & \sim \mathcal{N} \left( \begin{pmatrix} \mu_{RLR} \\ \mu_{RRF} \end{pmatrix}, \begin{pmatrix} \sigma_{RLR}^2 & \rho \ \sigma_{RLR} \sigma_{RRF}\\ \rho \ \sigma_{RLR} \sigma_{RRF} & \sigma_{RRF}^2 \end{pmatrix} \right) \end{aligned} \]
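
To get a feel for what the correlated group-level terms imply, here is a minimal sketch that draws one \((RLR_{[i]}, RRF_{[i]})\) pair per accident year from the bivariate normal above with MASS::mvrnorm (the parameter values are illustrative assumptions, not estimates):

library(MASS)
mu <- c(RLR = 0.85, RRF = 0.80)
sdev <- c(0.18, 0.15)
rho <- 0.5
Sigma <- diag(sdev) %*% matrix(c(1, rho, rho, 1), 2, 2) %*% diag(sdev)
set.seed(1)
re <- mvrnorm(10, mu = mu, Sigma = Sigma) # one pair per accident year
cor(re) # sample correlation close to rho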

Implementation with brms

Let’s load the data back into R’s memory:

library(data.table)
lossData0 <- fread("https://raw.githubusercontent.com/mages/diesunddas/master/Data/WorkersComp337.csv")

Jake shows in the appendices of his paper how to implement this model in R with the nlmeODE package (Tornoe (2012)), together with more flexible models in OpenBUGS (Lunn et al. (2000)). However, I will continue with brms and Stan.

Using the ODEs with brms requires a little extra coding, as I have to provide the integration routine as an additional input, just as I did above when I defined the user-defined functions for Stan.

To model the group-level terms (\(RLR\), \(RRF\)) of the same grouping factor (accident year) as correlated, I add a unique identifier to the | operator; here I use p (which looks similar to \(\rho\)), i.e. ~ 1 + (1 | p | accident_year).

Below is the code for the updated model. Again, I use Gamma distributions as priors for the parameters and LKJ(2) as the prior for the correlation coefficient.

myFuns <- "
real[] ode_claimsprocess(real t, real [] y,  real [] theta, 
                         real [] x_r, int[] x_i){
  real dydt[3];
  
  dydt[1] = - theta[1] * t * y[1]; // Exposure
  dydt[2] = theta[1] * t * theta[2] * y[1] - theta[3] * y[2]; // OS
  dydt[3] = theta[3] * theta[4] * y[2]; // Paid
  
  return dydt;
}
real claimsprocess(real t, real premium, real Ber, real kp, 
                   real RLR, real RRF, real delta){
  real y0[3];
  real y[1, 3];
  real theta[4];
  theta[1] = Ber;
  theta[2] = RLR;
  theta[3] = kp;
  theta[4] = RRF;
  y0[1] = premium;
  y0[2] = 0;
  y0[3] = 0;
  y = integrate_ode_rk45(ode_claimsprocess, 
                        y0, 0, rep_array(t, 1), theta,
                        rep_array(0.0, 0), rep_array(1, 1),
                        0.001, 0.001, 100); // tolerances, steps
      return (y[1, 2] * (1 - delta) + y[1, 3] * delta);
}
"
library(brms) 
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
b4 <- brm(
  bf(loss_train ~ claimsprocess(dev, premium, Ber, kp, RLR, RRF, delta),
     RLR ~ 1 + (1 | p | accident_year), # 'p' allows for correlation with RRF 
     RRF ~ 1 + (1 | p | accident_year), # 'p' allows for correlation with RLR
     Ber ~ 1,
     kp ~ 1,
     sigma ~ 0 + deltaf, # different sigma for OS and paid
     nl = TRUE), 
  stan_funs = myFuns, # definition of the 'claimsprocess' function
  data = lossData0[cal <= max(accident_year) & dev > 0], 
  family = brmsfamily("gaussian", link_sigma = "log"),
  prior = c(prior(gamma(4, 5), nlpar = "RLR", lb=0),
            prior(gamma(4, 5), nlpar = "RRF", lb=0),
            prior(gamma(12, 3), nlpar = "Ber", lb=0),
            prior(gamma(3, 4), nlpar = "kp", lb=0),
            set_prior("lkj(2)", class = "cor")),
  control = list(adapt_delta = 0.999, max_treedepth=15),
  seed = 1234, iter = 500)

While the original model with the analytical solution ran in about 3 minutes, this code took about 9 hours per chain for 500 samples. Any suggestions on how to speed this up would be much appreciated. The frequentist model using nlmeODE runs in seconds.

b4
## Warning: There were 235 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help.
## See http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: loss_train ~ claimsprocess(dev, premium, Ber, kp, RLR, RRF, delta) 
##          RLR ~ 1 + (1 | p | accident_year)
##          RRF ~ 1 + (1 | p | accident_year)
##          Ber ~ 1
##          kp ~ 1
##          sigma ~ 0 + deltaf
##    Data: lossData0[cal <= max(accident_year) & dev > 0] (Number of observations: 110) 
## Samples: 4 chains, each with iter = 500; warmup = 250; thin = 1; 
##          total post-warmup samples = 1000
##     ICs: LOO = NA; WAIC = NA; R2 = NA
##  
## Group-Level Effects: 
## ~accident_year (Number of levels: 10) 
##                                  Estimate Est.Error l-95% CI u-95% CI
## sd(RLR_Intercept)                    0.18      0.06     0.11     0.32
## sd(RRF_Intercept)                    0.15      0.05     0.09     0.27
## cor(RLR_Intercept,RRF_Intercept)     0.54      0.24    -0.04     0.88
##                                  Eff.Sample Rhat
## sd(RLR_Intercept)                       197 1.01
## sd(RRF_Intercept)                       397 1.00
## cor(RLR_Intercept,RRF_Intercept)        326 1.01
## 
## Population-Level Effects: 
##                  Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
## RLR_Intercept        0.86      0.06     0.74     0.99        161 1.01
## RRF_Intercept        0.83      0.05     0.73     0.93        179 1.02
## Ber_Intercept        5.66      0.32     4.97     6.21        359 1.01
## kp_Intercept         0.40      0.01     0.38     0.41        446 1.00
## sigma_deltafos      -3.61      0.11    -3.81    -3.40        593 1.00
## sigma_deltafpaid    -4.99      0.11    -5.20    -4.79        587 1.00
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).

The output above looks very similar to the output of nlmeODE. That’s reassuring, as is the fact that Rhat is close to 1 for all parameters. However, there is a warning message of 235 divergent transitions after warm-up, which I will ignore for the time being.
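
Rather than ignoring the divergences completely, they can at least be counted per chain from the sampler diagnostics of the underlying stanfit object; a minimal sketch:

# Count divergent transitions per chain after warm-up
sp <- rstan::get_sampler_params(b4$fit, inc_warmup = FALSE)
sapply(sp, function(x) sum(x[, "divergent__"]))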

The correlation coefficient \(\rho\) between \(RLR_{[i]}\) and \(RRF_{[i]}\) is estimated as 0.54, but with a wide 95% credible interval from -0.04 to 0.88. There is therefore moderate evidence of a positive correlation.
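
One way to quantify this evidence is the posterior probability that \(\rho\) is positive, which can be read straight off the posterior samples (a sketch using the posterior_samples accessor of brms):

rho <- posterior_samples(
  b4, pars = "cor_accident_year__RLR_Intercept__RRF_Intercept")
mean(rho[, 1] > 0) # posterior probability of a positive correlation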

As Jake puts it, a positive correlation between the reported loss ratio and reserve robustness factor parameters by accident year is indicative of a case reserving cycle effect, i.e. more conservative case reserves (low \(RRF_{[i]}\)) in a hard market (low \(RLR_{[i]}\)) to create cushions for the future.

stanplot(b4, 
         pars= c("b_RLR_Intercept", "b_RRF_Intercept", "b_Ber_Intercept",
                 "b_kp_Intercept", "b_sigma_deltafos", "b_sigma_deltafpaid",
                 "sd_accident_year__RLR_Intercept", 
                 "sd_accident_year__RRF_Intercept", 
                 "cor_accident_year__RLR_Intercept__RRF_Intercept"), 
         type="dens_overlay")

The density plots look OK; all chains seem to have behaved similarly. The last chart in the bottom right shows the distribution of the correlation parameter \(\rho\); calling this moderate evidence of a correlation might be an understatement. Note also that \(\sigma_{\delta[OS]}\) and \(\sigma_{\delta[PD]}\) are on a log scale.
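
Since the two sigma parameters are reported on the log scale, converting the point estimates from the summary above back to the natural scale is a one-liner:

exp(c(sigma_os = -3.61, sigma_paid = -4.99)) # approx. 0.027 and 0.0068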

Predict future claims developments

To validate my model against the test data set I can use the predict function in brms. Before I can apply the method, I have to expose the Stan function I wrote above, namely claimsprocess, to R.

expose_functions(b4, vectorize = TRUE) # requires brms >= 2.1.0

With the claimsprocess function now in R’s memory, I can predict and plot the model output in the same way as I did in the previous post.

predClaimsPred <- predict(b4, newdata = lossData0, method="predict")
plotDevBananas(`2.5%ile`/premium*100 + `97.5%ile`/premium*100 + 
                 Estimate/premium*100 + loss_train/premium*100 + 
                 loss_test/premium*100 ~ 
                 dev | factor(accident_year), ylim=c(0, 100),
               data=cbind(lossData0, predClaimsPred)[delta==0], 
               main="Outstanding claims developments", 
               ylab="Outstanding loss ratio (%)")

plotDevBananas(`2.5%ile`/premium*100 + `97.5%ile`/premium*100 + 
                 Estimate/premium*100 + loss_train/premium*100 + 
                 loss_test/premium*100 ~  
                 dev | factor(accident_year), ylim=c(0, 120),
               data=cbind(lossData0, predClaimsPred)[delta==1], 
               main="Paid claims developments", 
               ylab="Paid loss ratio (%)")

The graphs look very similar to the output from the previous model. Yet the prediction for 1997 improved significantly, perhaps because the correlation between RLR and RRF is no longer assumed to be zero. But this model tested my patience: I used a remote 2-core machine on Digital Ocean and waited 18 hours for the results.

Analytical solution

The long run time did in the end motivate me to look for an analytical solution of the ODEs, with initial values \(\mbox{EX}(0) = \Pi\) (premium), \(\mbox{OS}(0) = 0\), \(\mbox{PD}(0) = 0\).

According to Wolfram Alpha the analytical solution is:

\[ \begin{aligned} \mbox{EX}(t) & = \Pi \exp \left(-\frac{\beta_{er} t^2}{2} \right) \\ \mbox{OS}(t) & = - \frac{\Pi \cdot RLR}{2 \sqrt{\beta_{er}}} \exp\left(-\frac{\beta_{er} t^2}{2} - k_p t \right) \left[ \sqrt{2 \pi} k_p \mbox{erf}\left(\frac{k_p}{\sqrt{2 \beta_{er}}}\right) \left(-\exp\left(\frac{k_p^2}{2 \beta_{er}} + \frac{\beta_{er} t^2}{2} \right) \right) \right. - \\ & \left. \qquad \sqrt{2 \pi} k_p \exp\left(\frac{k_p^2}{2 \beta_{er}} + \frac{\beta_{er} t^2}{2}\right) \mbox{erf}\left(\frac{\beta_{er} t - k_p}{\sqrt{2 \beta_{er}}}\right) + 2 \sqrt{\beta_{er}} \exp\left(k_p t\right) - 2 \sqrt{\beta_{er}} \exp\left(\frac{\beta_{er} t^2}{2}\right) \right] \\ \mbox{PD}(t) & = \frac{\Pi \cdot RLR \cdot RRF}{2 \sqrt{\beta_{er}}} \exp\left(-k_p t\right) \left[-\sqrt{2 \pi} k_p \exp\left(\frac{k_p^2}{2 \beta_{er}}\right) \mbox{erf}\left(\frac{\beta_{er} t - k_p}{\sqrt{2 \beta_{er}}}\right) \right. + \\ & \left. \qquad \sqrt{2 \pi} k_p \left(-\exp\left(\frac{k_p^2}{2 \beta_{er}}\right)\right) \mbox{erf}\left(\frac{k_p}{\sqrt{2 \beta_{er}}} \right) + 2 \sqrt{\beta_{er}} \exp\left(k_p t\right) - 2 \sqrt{\beta_{er}} \right] \end{aligned} \]

Now I can write down \(\tilde{f}(t)\) in a closed-form: \[ \tilde{f}(t, \delta,\dots) = (1 - \delta) \cdot \mbox{OS}(t, \dots) + \delta \cdot \mbox{PD}(t, \dots) \]

The error function is part of the Stan language; it is not directly available in R, although it is just erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1. Thus, I create a Stan function for the analytical claims process and feed this into brm.
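
Before wiring the closed form into Stan, it can be cross-checked in R against the earlier ODE output; here is a sketch of the outstanding claims component, using the erf definition above and the parameters of the Fixed_param run:

erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1
# Closed-form OS(t); arguments follow the notation of the equations above
OSt <- function(t, Pi, Ber, kp, RLR){
  -(Pi * RLR)/(2 * sqrt(Ber)) * exp(-Ber * t^2/2 - kp * t) *
    (sqrt(2 * pi) * kp * erf(kp/sqrt(2 * Ber)) *
       (-exp(kp^2/(2 * Ber) + Ber * t^2/2)) -
     sqrt(2 * pi) * kp * exp(kp^2/(2 * Ber) + Ber * t^2/2) *
       erf((Ber * t - kp)/sqrt(2 * Ber)) +
     2 * sqrt(Ber) * exp(kp * t) - 2 * sqrt(Ber) * exp(Ber * t^2/2))
}
OSt(t = 1, Pi = 100, Ber = 5, kp = 0.5, RLR = 0.8) # compare with out$Outstandings[out$Time == 1]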

library(brms)
library(rstan)
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())

myFun <- "
real anaclaimsprocess(real t, real premium, real Ber, real kp, 
                      real RLR, real RRF, real delta){

 real os;
 real paid;

 os = -(RLR * exp(-(Ber * t^2)/2.0 - kp * t) * 
          (sqrt(2.0 * pi()) * kp * erf(kp/(sqrt(2.0) * sqrt(Ber))) * 
             (-exp(kp^2/(2.0 * Ber) + (Ber * t^2)/2.0)) - 
             sqrt(2.0 * pi()) * kp * exp(kp^2/(2.0 * Ber) + (Ber * t^2)/2.0) * 
             erf((Ber * t - kp)/(sqrt(2.0) * sqrt(Ber))) + 
             2.0 * sqrt(Ber) * exp(kp * t) - 2.0 * sqrt(Ber) * 
             exp((Ber * t^2)/2.0)))/(2.0 * sqrt(Ber));
 paid = (RLR * RRF * exp(-kp * t) * 
           (-sqrt(2.0 * pi()) * kp * exp(kp^2/(2.0 * Ber)) * 
              erf((Ber * t - kp)/(sqrt(2.0) * sqrt(Ber))) + 
              sqrt(2.0 * pi()) * kp * (-exp(kp^2/(2.0 * Ber))) * 
              erf(kp/(sqrt(2.0) * sqrt(Ber))) + 2.0 * sqrt(Ber) * exp(kp * t) - 
              2.0 * sqrt(Ber)))/(2.0 * sqrt(Ber));

 return (premium * (os * (1 - delta) + paid * delta)); 
}
"

b5 <- brm(bf(loss_train ~ anaclaimsprocess(dev, premium, Ber, kp, 
                                           RLR, RRF, delta),
             RLR ~ 1 + (1 | p | accident_year),
             RRF ~ 1 + (1 | p | accident_year),
             Ber ~ 1,
             kp ~ 1,
             sigma ~ 0 + deltaf,
             nl = TRUE),
          stan_funs = myFun, # definition of the 'anaclaimsprocess' function
          data = lossData0[cal <= max(accident_year)], 
          family = brmsfamily("gaussian", link_sigma = "log"),
          prior = c(prior(gamma(4, 5), nlpar = "RLR", lb=0),
                    prior(gamma(4, 5), nlpar = "RRF", lb=0),
                    prior(gamma(12, 3), nlpar = "Ber", lb=0),
                    prior(gamma(3, 4), nlpar = "kp", lb=0),
                    set_prior("lkj(2)", class = "cor")),
          control = list(adapt_delta = 0.999, max_treedepth=15),
          seed = 1234, iter = 1000)

Wonderful, the code ran in about 5 minutes, with twice the number of samples and without any warning messages. Let’s take a look at the output:

b5
##  Family: gaussian 
##   Links: mu = identity; sigma = log 
## Formula: loss_train ~ anaclaimsprocess(dev, premium, Ber, kp, RLR, RRF, delta) 
##          RLR ~ 1 + (1 | p | accident_year)
##          RRF ~ 1 + (1 | p | accident_year)
##          Ber ~ 1
##          kp ~ 1
##          sigma ~ 0 + deltaf
##    Data: lossData0[cal <= max(accident_year)] (Number of observations: 110) 
## Samples: 4 chains, each with iter = 1000; warmup = 500; thin = 1; 
##          total post-warmup samples = 2000
##     ICs: LOO = NA; WAIC = NA; R2 = NA
##  
## Group-Level Effects: 
## ~accident_year (Number of levels: 10) 
##                                  Estimate Est.Error l-95% CI u-95% CI
## sd(RLR_Intercept)                    0.18      0.06     0.11     0.32
## sd(RRF_Intercept)                    0.15      0.04     0.09     0.26
## cor(RLR_Intercept,RRF_Intercept)     0.52      0.26    -0.08     0.89
##                                  Eff.Sample Rhat
## sd(RLR_Intercept)                       852 1.01
## sd(RRF_Intercept)                       860 1.00
## cor(RLR_Intercept,RRF_Intercept)        885 1.00
## 
## Population-Level Effects: 
##                  Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
## RLR_Intercept        0.86      0.06     0.73     0.97        480 1.00
## RRF_Intercept        0.83      0.05     0.72     0.93        723 1.00
## Ber_Intercept        5.69      0.35     5.03     6.42       1968 1.00
## kp_Intercept         0.40      0.01     0.38     0.41       1701 1.00
## sigma_deltafos      -3.61      0.11    -3.80    -3.40       1337 1.00
## sigma_deltafpaid    -4.99      0.11    -5.19    -4.75       1547 1.00
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).

Perfect, the estimates are very much the same as those from the ODE model, and the plots haven’t changed much either.

stanplot(b5, 
         pars= c("b_RLR_Intercept", "b_RRF_Intercept", "b_Ber_Intercept",
                 "b_kp_Intercept", "b_sigma_deltafos", "b_sigma_deltafpaid",
                 "sd_accident_year__RLR_Intercept", 
                 "sd_accident_year__RRF_Intercept", 
                 "cor_accident_year__RLR_Intercept__RRF_Intercept"), 
         type="dens_overlay")

The density plots look smoother and more consistent across chains with twice the number of samples compared to the previous model.

expose_functions(b5, vectorize = TRUE) # requires brms >= 2.1.0
predClaimsPred2 <- predict(b5, newdata = lossData0, method="predict")
plotDevBananas(`2.5%ile`/premium*100 + `97.5%ile`/premium*100 + 
                 Estimate/premium*100 + loss_train/premium*100 + 
                 loss_test/premium*100 ~ 
                 dev | factor(accident_year), ylim=c(0, 100),
               data=cbind(lossData0, predClaimsPred2)[delta==0], 
               main="Outstanding claims developments", 
               ylab="Outstanding loss ratio (%)")

plotDevBananas(`2.5%ile`/premium*100 + `97.5%ile`/premium*100 + 
                 Estimate/premium*100 + loss_train/premium*100 + 
                 loss_test/premium*100 ~  
                 dev | factor(accident_year), ylim=c(0, 120),
               data=cbind(lossData0, predClaimsPred2)[delta==1], 
               main="Paid claims developments", 
               ylab="Paid loss ratio (%)")

I am much happier now. The model runs in an acceptable time, allowing me to play around with my assumptions further. I have yet to understand why the integration routine in Stan took so long.

Session Info

sessionInfo()
## R version 3.4.3 (2017-11-30)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS High Sierra 10.13.2
## 
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
## 
## attached base packages:
## [1] methods   stats     graphics  grDevices utils     datasets  base     
## 
## other attached packages:
## [1] brms_2.1.0          Rcpp_0.12.15        data.table_1.10.4-3
## [4] latticeExtra_0.6-28 RColorBrewer_1.1-2  lattice_0.20-35    
## [7] rstan_2.17.3        StanHeaders_2.17.2  ggplot2_2.2.1      
## 
## loaded via a namespace (and not attached):
##  [1] mvtnorm_1.0-7        gtools_3.5.0         zoo_1.8-1           
##  [4] assertthat_0.2.0     rprojroot_1.3-2      digest_0.6.15       
##  [7] mime_0.5             R6_2.2.2             plyr_1.8.4          
## [10] backports_1.1.2      stats4_3.4.3         evaluate_0.10.1     
## [13] coda_0.19-1          colourpicker_1.0     blogdown_0.5        
## [16] pillar_1.1.0         rlang_0.1.6          lazyeval_0.2.1      
## [19] curl_3.1             miniUI_0.1.1         DT_0.3.3            
## [22] Matrix_1.2-12        rmarkdown_1.8        labeling_0.3        
## [25] shinythemes_1.1.1    shinyjs_1.0          stringr_1.2.0       
## [28] htmlwidgets_1.0      loo_1.1.0            igraph_1.1.2        
## [31] munsell_0.4.3        shiny_1.0.5          compiler_3.4.3      
## [34] httpuv_1.3.5         xfun_0.1             pkgconfig_2.0.1     
## [37] base64enc_0.1-3      rstantools_1.4.0     htmltools_0.3.6     
## [40] tibble_1.4.2         gridExtra_2.3        bookdown_0.6        
## [43] codetools_0.2-15     threejs_0.3.1        matrixStats_0.53.0  
## [46] dplyr_0.7.4          grid_3.4.3           nlme_3.1-131        
## [49] xtable_1.8-2         gtable_0.2.0         magrittr_1.5        
## [52] scales_0.5.0         stringi_1.1.6        reshape2_1.4.3      
## [55] bindrcpp_0.2         dygraphs_1.1.1.4     xts_0.10-1          
## [58] tools_3.4.3          glue_1.2.0           markdown_0.8        
## [61] shinystan_2.4.0      crosstalk_1.0.0      rsconnect_0.8.5     
## [64] abind_1.4-5          parallel_3.4.3       yaml_2.1.16         
## [67] inline_0.3.14        colorspace_1.3-2     bridgesampling_0.4-0
## [70] bayesplot_1.4.0      knitr_1.18           bindr_0.1           
## [73] Brobdingnag_1.2-4

References

Bürkner, Paul-Christian. 2017. “brms: An R Package for Bayesian Multilevel Models Using Stan.” Journal of Statistical Software 80 (1): 1–28. doi:10.18637/jss.v080.i01.

Lunn, David J., Andrew Thomas, Nicky Best, and David Spiegelhalter. 2000. “WinBUGS: A Bayesian Modelling Framework: Concepts, Structure, and Extensibility.” Statistics and Computing 10 (4): 325–37. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.471.604&rep=rep1&type=pdf.

Morris, Jake. 2016. “Hierarchical Compartmental Models for Loss Reserving.” Casualty Actuarial Society Summer E-Forum. https://www.casact.org/pubs/forum/16sforum/Morris.pdf.

Tornoe, Christoffer W. 2012. nlmeODE: Non-Linear Mixed-Effects Modelling in nlme Using Differential Equations. https://CRAN.R-project.org/package=nlmeODE.
