Review of IAIS DNS Shock Generating Algorithm Update


This post reviews the update of the IAIS DNS shock generating algorithm in IAIS (2018, 2019). The update modifies the expression for the conditional covariance matrix, but that expression needs to be stated more precisely from a mathematical point of view. This post explains how.



IAIS DNS Shock Generating Algorithm : update


In a previous post, we explained the IAIS DNS shock generating algorithm based on the IAIS (2018, 2019) documents, which are slightly updated versions of IAIS (2017). We now investigate this update in more detail.
Public 2017 Field Testing Technical Specifications (https://www.iaisweb.org/page/supervisory-material/insurance-capital-standard/file/67655/public-2017-field-testing-technical-specifications)
Public 2018 Field Testing Technical Specifications (https://www.iaisweb.org/page/supervisory-material/insurance-capital-standard//file/76130/public-2018-field-testing-technical-specifications)
Public 2019 Field Testing Technical Specifications (https://www.iaisweb.org/page/supervisory-material/insurance-capital-standard//file/82711/public-2019-iais-field-testing-technical-specifications)

From IAIS (2017) to IAIS (2018, 2019), the expression for \(M\) was updated. In IAIS (2017), \(M\) was based on a numerical approximation:
\[\begin{align} M = K^{-1}(I-e^{-K})\Sigma \end{align}\] Since IAIS (2018), \(M\) is defined in the following way.

\[\begin{align} M =\sqrt{ (\Sigma \Sigma^T) \odot \left( \frac{1-e^{-( K_i + K_j )}}{K_i + K_j} \right)_{ij} } \end{align}\]
The purpose of this update is to express \(M\) in terms of a more accurate conditional covariance matrix.
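To see where this expression comes from, note (as a sketch, under the assumption that the DNS factors follow the multivariate Ornstein-Uhlenbeck dynamics \(dX_t = K(\theta - X_t)\,dt + \Sigma\, dW_t\) with diagonal \(K\), as in the previous post) that the conditional covariance over a unit horizon is
\[\begin{align} \text{Var}\left(X_{t+1} \mid X_t\right) = \int_0^1 e^{-Ks}\, \Sigma \Sigma^T e^{-K^T s}\, ds, \qquad \left[ \int_0^1 e^{-Ks}\, \Sigma \Sigma^T e^{-K^T s}\, ds \right]_{ij} = (\Sigma\Sigma^T)_{ij}\, \frac{1-e^{-(K_i+K_j)}}{K_i+K_j} \end{align}\]
so the matrix under the square root in IAIS (2018, 2019) is exactly this conditional covariance, while \(K^{-1}(I-e^{-K})\Sigma\) of IAIS (2017) is only an approximation of its square-root factor.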

Contrary to the intention of IAIS (2018, 2019), however, numerical errors occur when this expression is implemented literally. We think this is not a mathematical problem but a notational one: the expression needs to be more specific for a clear understanding and a correct calculation.

We argue that the expression for \(M\) in IAIS (2018, 2019) needs to be modified as follows.

  1) \( e^{-(K_i + K_j)} \) is replaced by \( {(e^{-K})}_i \times {(e^{-K})}_j \), the product of the \(i\)-th and \(j\)-th diagonal elements of \(e^{-K}\).
  2) Instead of the scalar exponential \(e\), the matrix exponential is used explicitly.
  3) The square root is replaced by the lower triangular matrix of the Cholesky decomposition when calculating \(M\).

In the previous post, which implements the R code for the DNS shock algorithm, expm() in expm(-Kappa) is the matrix exponential, not the scalar exponential. Therefore the correct expression for \(M\) is as follows.

\[\begin{align} &M = \text{lower triangular matrix of } \\ &\textbf{Chol}\left( (\Sigma \Sigma^T) \odot \left[ \frac{1-{(e^{-K})}_i \times {(e^{-K})}_j}{K_i + K_j} \right]_{ij} \right) \end{align}\]
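The role of the Cholesky factor versus an element-wise square root can be seen on a small standalone example (the 2x2 matrix below is made up purely for illustration and is unrelated to the IAIS calibration):

R code
# A toy conditional covariance with a negative off-diagonal entry
V <- matrix(c( 4e-05, -1e-05,
              -1e-05,  9e-05), 2, 2, byrow = TRUE)

M_sqrt <- sqrt(V)          # element-wise square root: NaN where V < 0
M_chol <- t(chol(V))       # lower-triangular Cholesky factor of V

M_sqrt                     # contains NaN in the off-diagonal positions
M_chol %*% t(M_chol) - V   # numerically zero: M_chol reproduces V exactly

Only the Cholesky factor satisfies \(M M^T = V\); the element-wise square root does not, even where it is defined.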



Let’s investigate why our reformulation sidesteps the numerical errors in the calculation process. We consider four cases:

  1) 2017 version
  2) 2019 version – incorrect uses of exponential &  square root
  3) 2019 version – incorrect use of square root
  4) 2019 version – correct

The following R code covers these four cases in turn.
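The code for all four cases reuses the objects Kappa, Sigma, nk, LOT and BB estimated in the previous post. For readers who want to run the snippets standalone, a hypothetical setup along the following lines can be used; the numerical values below are placeholders, not the actual calibration:

R code
library(expm)   # expm() : matrix exponential used below

nk     <- 3                                  # number of DNS factors (level, slope, curvature)
Kappa  <- diag(c(0.02, 0.40, 0.60))          # mean-reversion matrix (assumed diagonal)
Sigma  <- matrix(c( 0.0052,  0.0000, 0.0000,
                   -0.0036,  0.0022, 0.0000,
                    0.0000, -0.0008, 0.0068),
                 nk, nk, byrow = TRUE)       # lower-triangular volatility matrix
lambda <- 0.5                                # Nelson-Siegel decay parameter
tau    <- 1:20                               # hypothetical maturity grid (years)
BB     <- cbind(1,
                (1 - exp(-lambda*tau))/(lambda*tau),
                (1 - exp(-lambda*tau))/(lambda*tau) - exp(-lambda*tau))  # DNS loadings
LOT    <- sum(BB[, 1])                       # level-factor scaling used in diag(c(LOT, ...)); placeholder definition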

1) 2017 version
R code
# 2017 version : M = K^{-1} (I - e^{-K}) Sigma
M  <- solve(Kappa)%*%(diag(nk)-expm(-Kappa))%*%Sigma # 2017 approximation
N  <- diag(c(LOT,sum(BB[,2]), sum(BB[,3])))%*%M      # scale by factor loadings
NN <- t(N)%*%N

print("2017 version")
print(M)
print(N)
M2017 <- M   # keep for the final comparison

Result
> print("2017 version")
[1] "2017 version"
> print(M)
              [,1]          [,2]        [,3]
[1,]  5.192573e-03  0.0021147776 0.000000000
[2,] -3.551370e-03  0.0021147776 0.000000000
[3,] -1.839468e-05 -0.0007767221 0.006660495
> print(N)
              [,1]         [,2]       [,3]
[1,]  1.038515e-01  0.000000000 0.00000000
[2,] -2.343925e-02  0.013957658 0.00000000
[3,] -7.980107e-05 -0.003369629 0.02889502

The 2017 version gives the correct results.


2) 2019 version – incorrect uses of exponential &  square root
R code
# 2019 version – incorrect case 1
m.temp <- matrix(0,nk,nk)
for (i in 1:nk) {
    for (j in 1:nk) {
        # scalar exponential applied to the diagonal of Kappa
        m.temp[i,j] <- (1-exp(-Kappa[i,i]-Kappa[j,j]))/(Kappa[i,i]+Kappa[j,j])
    }
}

M  <- sqrt((Sigma%*%t(Sigma))*m.temp)   # element-wise square root
N  <- diag(c(LOT,sum(BB[,2]), sum(BB[,3])))%*%M
NN <- t(N)%*%N

print("2019 version – incorrect case 1 : kappa, sqrt problem")
print(M)
print(N)

Result
> M  <- sqrt((Sigma%*%t(Sigma))*m.temp)
Warning message:
In sqrt((Sigma %*% t(Sigma)) * m.temp) : NaNs produced
> N  <- diag(c(LOT,sum(BB[,2]), sum(BB[,3])))%*%M
> NN <- t(N)%*%N
> print("2019 version – incorrect case 1 : kappa, sqrt problem")
[1] "2019 version – incorrect case 1 : kappa, sqrt problem"
> print(M)
            [,1]        [,2]        [,3]
[1,] 0.005194699         NaN         NaN
[2,]         NaN 0.004188958         NaN
[3,]         NaN         NaN 0.006817457
> print(N)
     [,1] [,2] [,3]
[1,]  NaN  NaN  NaN
[2,]  NaN  NaN  NaN
[3,]  NaN  NaN  NaN

In this first 2019 case, two problems produce the numerical errors: the scalar exponential exp(-Kappa[i,i]-Kappa[j,j]) is used instead of the diagonal elements of the matrix exponential expm(-Kappa), and the element-wise square root returns NaN for the negative off-diagonal entries.

3) 2019 version – incorrect use of square root
R code
# 2019 version – incorrect case 2
eK <- expm(-Kappa)   # matrix exponential of -Kappa

m.temp <- matrix(0,nk,nk)
for (i in 1:nk) {
    for (j in 1:nk) {
        m.temp[i,j] <- (1-eK[i,i]*eK[j,j])/(Kappa[i,i]+Kappa[j,j])
    }
}

M  <- sqrt((Sigma%*%t(Sigma))*m.temp)   # element-wise square root still used
N  <- diag(c(LOT,sum(BB[,2]), sum(BB[,3])))%*%M
NN <- t(N)%*%N

print("2019 version – incorrect case 2 : sqrt problem")
print(M)
print(N)

Result
> M  <- sqrt((Sigma%*%t(Sigma))*m.temp)
Warning message:
In sqrt((Sigma %*% t(Sigma)) * m.temp) : NaNs produced
> N  <- diag(c(LOT,sum(BB[,2]), sum(BB[,3])))%*%M
> NN <- t(N)%*%N
> print("2019 version – incorrect case 2 : sqrt problem")
[1] "2019 version – incorrect case 2 : sqrt problem"
> print(M)
            [,1]        [,2]        [,3]
[1,] 0.005194699         NaN         NaN
[2,]         NaN 0.004188958         NaN
[3,]         NaN         NaN 0.006817457
> print(N)
     [,1] [,2] [,3]
[1,]  NaN  NaN  NaN
[2,]  NaN  NaN  NaN
[3,]  NaN  NaN  NaN

In this 2019 case, the scalar exponential is replaced by the matrix exponential, but errors remain because the element-wise square root is still used: the off-diagonal entries of the covariance matrix are negative, so sqrt() returns NaN.
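To pinpoint the source of the NaNs, one can inspect the sign of the element-wise product before the square root is taken (a quick check reusing Sigma and m.temp from the block above):

R code
V <- (Sigma %*% t(Sigma)) * m.temp   # matrix whose element-wise square root is taken
V < 0                                # TRUE exactly where sqrt() returns NaN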


4) 2019 version – correct
R code
# 2019 version – correct
eK <- expm(-Kappa)   # matrix exponential of -Kappa

m.temp <- matrix(0,nk,nk)
for (i in 1:nk) {
    for (j in 1:nk) {
        m.temp[i,j] <- (1-eK[i,i]*eK[j,j])/(Kappa[i,i]+Kappa[j,j])
    }
}

M  <- t(chol((Sigma%*%t(Sigma))*m.temp))   # lower-triangular Cholesky factor
N  <- diag(c(LOT,sum(BB[,2]), sum(BB[,3])))%*%M
NN <- t(N)%*%N

print("2019 version – correct")
print(M)
print(N)
M2019 <- M   # keep for the final comparison

Result
> print("2019 version – correct")
[1] "2019 version – correct"
> print(M)
              [,1]          [,2]        [,3]
[1,]  5.194699e-03  0.0000000000 0.000000000
[2,] -3.566610e-03  0.0021969659 0.000000000
[3,] -1.848343e-05 -0.0007696041 0.006773854
> print(N)
              [,1]        [,2]      [,3]
[1,]  1.038940e-01  0.00000000 0.0000000
[2,] -2.353984e-02  0.01450011 0.0000000
[3,] -8.018608e-05 -0.00333875 0.0293868

The above results show that with the two remedies, the matrix exponential and the Cholesky decomposition, we obtain the correct outcome.

In conclusion, although the 2018 and 2019 versions contain some notational ambiguities, the modifications above make the results of the updated and original versions nearly the same. Of course, the updated version is more accurate than the original, because the original is an approximation while the updated version is not.

Result
[1] "2017 version"
> print(M2017)
              [,1]          [,2]        [,3]
[1,]  5.192573e-03  0.0000000000 0.000000000
[2,] -3.551370e-03  0.0021147776 0.000000000
[3,] -1.839468e-05 -0.0007767221 0.006660495
> print("2019 version – correct")
[1] "2019 version – correct"
> print(M2019)
              [,1]          [,2]        [,3]
[1,]  5.194699e-03  0.0000000000 0.000000000
[2,] -3.566610e-03  0.0021969659 0.000000000
[3,] -1.848343e-05 -0.0007696041 0.006773854
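As a quick check of how close the two factors really are (reusing the M2017 and M2019 objects stored in the code above), one can compare them directly:

R code
max(abs(M2019 - M2017))                              # largest element-wise difference between the factors
max(abs(M2019 %*% t(M2019) - M2017 %*% t(M2017)))    # difference between the implied conditional covariances

From the matrices printed above, the element-wise differences are on the order of \(10^{-4}\), which is consistent with the 2017 formula being only an approximation.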

In summary, IAIS (2018, 2019) makes the conditional covariance more accurate, but its expression contains some notational ambiguities. Applying the appropriate remedies, we obtain correct results that are consistent with the intention of IAIS (2018, 2019). \(\blacksquare\)
