GCPM: A Flexible Package to Explore Credit Portfolio Risk

In this article, we introduce the novel GCPM package, which provides a generalized credit portfolio model framework. The package includes two of the most popular modeling approaches in the banking industry, namely the CreditRisk+ and the CreditMetrics model, and allows the user to perform several sensitivity analyses with respect to distributional or functional form assumptions. Therefore, besides the pure quantification of credit portfolio risk, the package can be used to explore certain aspects of model risk for any given credit portfolio. The implementation combines a high level of flexibility and performance with a maximum of usability. Furthermore, the package offers the possibility to apply simple pooling techniques to speed up calculations for large portfolios, as well as the opportunity to combine simulation models with a user-specified importance sampling approach. The article concludes with a comprehensive example demonstrating the flexibility of the package.


Introduction
Banks apply credit portfolio models in order to quantify the amount of economic capital which must be withheld in order to cover unexpected losses caused by credit defaults. As the financial crisis has shown very impressively, the use of quantitative models is always accompanied by a certain amount of model risk, which has to be taken into account whenever decisions or price evaluations are based on them. Nowadays, banks are explicitly requested by supervisors to validate their quantitative models and to quantify model risk (see Board of Governors of the Federal Reserve System 2011). Ignoring model risk can lead to wrong management decisions and an underestimation of the true risk. The GCPM package addresses both of these issues: quantification of credit risk and an analysis of the underlying model risk.
A great advantage of GCPM over other available packages for R (R Core Team 2014), like QRM (Pfaff and McNeil 2014) or CreditMetrics (Wittmann 2007), is that it utilizes an object oriented approach, where one object consists of a specified model together with all portfolio information and risk figures (once the portfolio loss distribution has been estimated). Therefore, it is easy to handle different models (or portfolios) simultaneously without jeopardizing their consistency. As the example in Section 5 will show, performing comparison or sensitivity studies is very simple. In addition, the package is able to deal with large portfolios. On the one hand, portfolios with several thousands of counterparties can be used, whereas in our tests the CreditMetrics package was unable to handle more than one hundred portfolio positions. On the other hand, and in contrast to the QRM package, risk parameters like the probability of default, the loss ratio in case of a default, the exposure and the assignment to a specific industry sector and country affecting the default dependencies can be defined individually for each counterparty. Together with a C++ implementation of the simulation framework, which takes advantage of modern multi-core systems, the package combines flexibility regarding counterparty characteristics and distributional assumptions with good performance, which makes it suitable for practical applications. Moreover, for advanced users, simulation models can be combined with self-defined importance sampling techniques and counterparty pooling approaches in order to stabilize simulation results and to increase performance further.
Please note that we will not address any questions regarding the parametrization of the models. In order to guarantee maximum flexibility regarding the distributional assumptions, we have to leave this task up to the user. However, we will provide several examples and demonstrate how already existing packages and basic R functions can be used to construct a parametrization (i.e., a sample from the multivariate sector distribution). For those who are interested in this topic, we refer to Hamerle and Rösch (2006). Please also note that the package focuses on credit risk only with respect to default events, i.e., migration risk is not considered.
The article is organized as follows. A short overview of credit portfolio models together with common notation is given in Section 2. Afterwards, we present the simulation framework and the derivation of risk contributions. The last section contains a hypothetical example, explaining how the package GCPM can be used to quantify credit and model risk. Here, starting from the basic CreditRisk+ model (see Credit Suisse First Boston International 1997), which is characterized by certain distributional assumptions, we show how risk figures might change if these assumptions are modified. Along with this, the available functions of the package are introduced, including a simple pooling technique which is useful for homogeneous portfolios (e.g. retail portfolios).

Input data, loss distribution and risk figures
The key function of a classical bank is to hand out loans to enterprises or private persons. For reasons of simplicity, let us assume that the loan portfolio consists of M loans given to M different counterparties or obligors. In this situation the bank faces the risk that one or more obligors default, which means that they are not able or willing to pay back the outstanding amounts (principal and interest), which, in turn, leads to financial losses. The main purpose of a credit portfolio model is to forecast the portfolio loss distribution for the underlying loan portfolio and a fixed time interval, usually one year. Regardless of the specific modeling approach (two of them are introduced in the subsequent sections), every model requires the following set of information on each counterparty i: the exposure at the time of default (EAD_i), the probability of default (PD_i) for the given time horizon, usually one year, the loss given default rate (LGD_i) or recovery rate (RR_i = 1 − LGD_i, the amount recovered through foreclosure or bankruptcy procedures in the event of default, expressed as a percentage of EAD_i) and the assignment of the obligor to predefined industry and/or country sectors in order to rebuild the dependence structure of the portfolio.
With this notation, the overall portfolio loss L reads as

L = Σ_{i=1}^M EAD_i · LGD_i · D_i,    (1)

where D_i ~ Ber(PD_i) is the default indicator for obligor i (i.e., PD_i = P(D_i = 1)).
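As a minimal base-R illustration of this loss aggregation (all numbers are hypothetical, and a fixed default realization is used instead of random draws):

```r
# Hypothetical three-obligor portfolio
ead <- c(1e6, 5e5, 2.5e5)   # exposures at default EAD_i
lgd <- c(0.45, 0.60, 0.50)  # loss given default rates LGD_i
d   <- c(1, 0, 1)           # one fixed realization of the default indicators D_i
loss <- sum(ead * lgd * d)  # portfolio loss L
loss                        # 1e6*0.45 + 2.5e5*0.50 = 575000
```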
Under the assumption that the parameters LGD and EAD are deterministic and the loss distribution F_L has already been derived, the following key figures are required for the bank's risk reporting and management information (see also Figure 1 for a graphical representation):

Expected loss EL := E(L).

Value at risk VaR_α := inf{l | F_L(l) ≥ α} for a specified level α ∈ (0, 1).

Economic capital EC_α := VaR_α − EL.

Expected shortfall or expected tail loss ES_α := E(L | L ≥ VaR_α).
Figure 1: General portfolio loss distribution with risk figures.
In practice, VaR_α and EC_α constitute the relevant risk measures. For example, in the regulatory framework of Basel II (see Basel Committee on Banking Supervision 2006), a confidence level of α = 0.999 is used to quantify the economic capital.
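These figures can be computed from any sample of (simulated) portfolio losses with a few lines of base R; the Gamma-distributed losses below are merely a stand-in for an actual loss distribution:

```r
set.seed(1)
losses <- rgamma(1e5, shape = 2, scale = 1e5)   # stand-in simulated portfolio losses
alpha  <- 0.999
el    <- mean(losses)                           # expected loss EL
var.a <- quantile(losses, alpha, type = 1)      # VaR_alpha (empirical quantile)
ec.a  <- var.a - el                             # economic capital EC_alpha
es.a  <- mean(losses[losses >= var.a])          # expected shortfall ES_alpha
```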
Whereas the expected loss can be calculated directly from the raw portfolio data, the calculation of the loss distribution is in general a crucial issue. It requires knowledge of the dependence structure (the so-called "default correlations") between the M default indicators D_1, ..., D_M, where M is typically large. To simplify this problem and reduce the dimension, every counterparty is assigned to one or more out of K ≪ M industry and/or country sectors, such that dependence between obligors can be traced back to their belonging to the same sectors and to the dependence structure between these sectors. The sectors themselves are modeled via a (multivariate) latent variable S which is distributed according to some K-dimensional distribution on R^K.
The GCPM package deals with two of the most popular credit portfolio models, namely CreditRisk+ and CreditMetrics, which are briefly summarized in the following subsections. Whereas CreditRisk+ and its generalizations provide an analytic solution under certain restrictive distributional assumptions, CreditMetrics calculates the portfolio loss distribution within a simulation framework, which is more flexible but also more time-consuming. For further details on these portfolio models we also refer to Crouhy, Galai, and Mark (2000) or Gordy (2000), who provide an excellent comparative analysis of these models.

The CreditRisk + model
The CreditRisk+ model was developed by the Financial Products division of Credit Suisse in 1997, see Credit Suisse First Boston International (1997) for a detailed documentation. It belongs to the class of so-called Poisson mixture models, where the intensity of the Poisson distribution (which approximates the Bernoulli distribution of the default indicator D_i) itself is driven by Gamma-distributed random variables. Relying on these specific stochastic assumptions and a discretization of the exposures, it is possible to express the probability mass function of the portfolio loss (or, equivalently, its probability generating function) in a closed analytical form, which is a great advantage of CreditRisk+ and its major difference to its competitors. Hence, even for larger portfolios the risk figures can be obtained within a reasonable run-time.
More formally, the basic idea of the model can be summarized as follows: In a first step, a discretization parameter L_0, called the loss unit, is introduced. All potential losses PL_i = EAD_i · LGD_i are approximated by an integer multiple of this unit via

ν_i = max([PL_i / L_0], 1),

where [x] denotes the nearest integer value to x. The default probabilities are adjusted such that the discretization does not affect the expected loss. The adjusted PD is given by

PD_i^adj = PD_i · PL_i / (ν_i · L_0).

As for the calculation of the loss distribution, the loss unit represents the width of the exposure bands on which the marginal probabilities are calculated. For more details, please see Credit Suisse First Boston International (1997, para A 3.2).
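A small base-R sketch of this discretization step (hypothetical numbers; the PD adjustment is chosen such that the expected loss is preserved):

```r
L0 <- 50000                        # loss unit
pl <- c(123456, 35000, 810000)     # potential losses PL_i = EAD_i * LGD_i
pd <- c(0.010, 0.020, 0.005)       # original default probabilities
nu <- pmax(round(pl / L0), 1)      # integer multiples of the loss unit
pd.adj <- pd * pl / (nu * L0)      # adjusted PDs
# the discretization leaves the expected loss unchanged:
all.equal(sum(pd * pl), sum(pd.adj * nu * L0))  # TRUE
```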
Secondly, a further key assumption is to replace the default indicator D_i (naturally Bernoulli distributed) with a Poisson distributed random variable D̃_i with intensity parameter λ_i. This assumption is necessary in order to compute the portfolio loss distribution analytically. Because, in most cases, λ_i will be very small, the approximation error is not substantial. But if credit quality decreases, the effect of multiple defaults becomes crucial.
Finally, the intensity parameter of each obligor is mapped onto one or more (economic) sectors in order to introduce dependence between the counterparties belonging to the same sector via sector weights. Given a sector realization s = (s_1, ..., s_K)^T of S, the conditional default intensity reads as

λ_i^S = PD_i · (w_{i,0} + Σ_{k=1}^K w_{i,k} · s_k),    (2)

with the individual adjusted PD_i, individual sector weights w_{i,k} ∈ [0, 1] for obligor i with respect to sector k such that Σ_{k=1}^K w_{i,k} ≤ 1, the idiosyncratic weight w_{i,0} = 1 − Σ_{k=1}^K w_{i,k}, and sector variables S_1, ..., S_K which are assumed to be mutually independent and Gamma distributed with variance σ_k² and E(S_k) = 1, such that E(λ_i^S) = PD_i = λ_i.
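To make the conditional intensity concrete, here is a base-R sketch for a single obligor in a two-sector setting (all numbers hypothetical; a Gamma(1/σ², σ²) variable has mean one and variance σ²):

```r
pd <- 0.02
w  <- c(0.5, 0.3)            # sector weights w_{i,1}, w_{i,2}
w0 <- 1 - sum(w)             # idiosyncratic weight w_{i,0}
sec.var <- c(0.2, 0.3)       # sector variances sigma_k^2
set.seed(42)
s <- rgamma(2, shape = 1/sec.var, scale = sec.var)  # Gamma sectors with E(S_k) = 1
lambda.s <- pd * (w0 + sum(w * s))                  # conditional default intensity

# Monte Carlo check of E(lambda^S) = PD:
S <- cbind(rgamma(1e5, shape = 1/sec.var[1], scale = sec.var[1]),
           rgamma(1e5, shape = 1/sec.var[2], scale = sec.var[2]))
mean(pd * (w0 + S %*% w))    # close to pd = 0.02
```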
Under these assumptions, the default correlation between obligors i and j reads as

Corr(D̃_i, D̃_j) ≈ sqrt(PD_i · PD_j) · Σ_{k=1}^K w_{i,k} · w_{j,k} · σ_k².

In order to calculate the probability mass function (PMF) of the portfolio loss, a modified version of the algorithm given in Haaf, Reiss, and Schoenmakers (2003) is used. The algorithm calculates the marginal probabilities that the portfolio loss is equal to ν · L_0 with ν ∈ N_0. It stops once a desired level of the cumulative distribution function (CDF) has been reached.
In order to keep the notation simple and comparable to the CreditMetrics model, we will denote the adjusted PD simply by PD_i in the remainder of this article. Switching back to the original notation does not imply that this approximation is unimportant. Please bear in mind that, if an inappropriately large loss unit L_0 is used, the discretized PDs and hence also the risk figures may change noticeably.

The CreditMetrics model
The CreditMetrics model, described in Gupton, Finger, and Bhatia (1997), is a typical representative of so-called threshold models. The fundamental idea is based on the firm value model of Merton (1974). For each counterparty i, an asset value variable is defined as

A_i = Σ_{k=1}^K w_{i,k} · S_k + sqrt(1 − w_i^T Σ w_i) · ε_i,    (3)

where w_i = (w_{i,1}, ..., w_{i,K})^T determines the correlation of i's asset value to the systemic factors S ~ N_K(0, Σ). The idiosyncratic risk is expressed by ε_i ~ N(0, 1), which are independent from each other as well as from S. A default occurs if the asset value A_i falls below the default threshold, defined by Φ^{-1}(PD_i), where Φ denotes the distribution function of a standard normal variable. Conditioning on a realization s of the systemic factor S, the probability of default is given by

PD_i(s) = Φ( (Φ^{-1}(PD_i) − Σ_{k=1}^K w_{i,k} · s_k) / sqrt(1 − w_i^T Σ w_i) ).    (4)

Using formula (3), the default correlation between two counterparties reads as

Corr(D_i, D_j) = (Φ_2(Φ^{-1}(PD_i), Φ^{-1}(PD_j), r_{i,j}) − PD_i · PD_j) / sqrt(PD_i (1 − PD_i) · PD_j (1 − PD_j)),

where r_{i,j} = w_i^T Σ w_j denotes the asset correlation of obligors i and j and Φ_2(x_1, x_2, r) denotes the distribution function of a bivariate normal distribution with correlation parameter r ∈ [−1, 1] and standard normal margins. The loss distribution is obtained via a Monte Carlo simulation, as described in the next section.
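A base-R sketch of this conditional PD, using hypothetical factor loadings and an adverse sector realization (an adverse scenario pushes the conditional PD well above the unconditional level):

```r
pd <- 0.01
w  <- c(0.4, 0.2)                         # factor loadings w_i
Sigma <- matrix(c(1, 0.3, 0.3, 1), 2, 2)  # sector correlation matrix
s  <- c(-1.5, -0.8)                       # adverse realization of S
sys.var <- drop(t(w) %*% Sigma %*% w)     # systemic variance w_i' Sigma w_i
c.pd <- pnorm((qnorm(pd) - sum(w * s)) / sqrt(1 - sys.var))
c.pd                                      # conditional PD, above the unconditional 1%
```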

Simulation models
Alternatively to the analytical version of the CreditRisk+ model, one can also use a simulation setting. In this case, several distributional assumptions can be modified in order to analyze model sensitivities. By changing the link function (i.e., replacing (2) by (4)), one can also switch to a CreditMetrics-like model. Consequently, an analysis of the risk figure sensitivities with respect to the specific link function is also possible. Please take care that the sector drawings (argument random.numbers of the init() function, see Table 1) meet the distributional assumptions of the chosen model, defined via the link function described in Sections 2.2 and 2.3. E.g., normally distributed sectors are not compatible with the CreditRisk+ setting. For each counterparty, the distribution of the default indicator D_i can be chosen individually between "Bernoulli" (natural choice) or "Poisson" (CreditRisk+ setting) within the portfolio data (see Table 2). Depending on these three elements (sector distribution, link function and default distribution), the basic idea of the simulation framework is to simulate N different portfolio losses. Given these losses, the portfolio loss distribution and risk figures can be estimated via the empirical loss distribution.

Footnotes:
(5) The variance σ_k² can either be estimated from historical default data or using analytical approximations based on the rating-specific standard deviation of the PD, see Gundlach (2003).
(6) The loop structure of the algorithm has been changed to calculate the CDF and the PMF simultaneously.
(7) N_K(a, Σ) denotes the K-dimensional normal distribution with mean a and correlation matrix Σ.

General simulation framework
Given a set of N ∈ N_{>0} (multivariate) sector drawings s^(1), ..., s^(N) ∈ R^K and a portfolio of M counterparties, the general simulation framework of the GCPM package is as follows:

Algorithm 1: Basic simulation algorithm
For n = 1, ..., N (simulation loop):
    For i = 1, ..., M (counterparty loop):
        Calculate the conditional PD of counterparty i given s^(n) via the link function ((2) or (4)).
        Draw the default indicator D_i^(n) from the chosen default distribution (Bernoulli or Poisson) with this conditional PD.
    Calculate the portfolio loss L^(n) = Σ_{i=1}^M EAD_i · LGD_i · D_i^(n).

After the simulation, the portfolio losses L^(n) are discretized with respect to the loss unit L_0 in order to group losses for the calculation of the probability mass function. The distribution is estimated based on the discretized simulated portfolio losses L̃ = (L̃^(1), ..., L̃^(N))^T, i.e.

P̂(L̃ = ν · L_0) = (1/N) · Σ_{n=1}^N 1_{{L̃^(n) = ν · L_0}},    (5)
where 1_A denotes the indicator function on set A. For reasons of performance, the simulation algorithm is implemented in C++ and linked to the package via the Rcpp package (see Eddelbuettel and François 2011). In order to show the progress status, the RcppProgress package (see Forner 2013) is needed as well. In order to avoid errors during the simulation, please ensure that R can allocate enough memory from your operating system, e.g. by using the R functions memory.size() and memory.limit(). In order to increase performance within simulation models, one can also take advantage of multi-core systems. For this purpose, the parallel package is required (see Section 5.3.4).
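A pure-R toy version of Algorithm 1 (one sector, Poisson default indicators, a hypothetical portfolio) illustrates the interplay of the three building blocks, sector distribution, link function and default distribution:

```r
set.seed(7)
M <- 50; N <- 2000
pd <- runif(M, 0.005, 0.03)     # unconditional PDs
pl <- runif(M, 1e4, 1e6)        # potential losses EAD_i * LGD_i
w  <- runif(M, 0.3, 0.9)        # sector weight per obligor (single sector)
sec.var <- 0.25
s <- rgamma(N, shape = 1/sec.var, scale = sec.var)  # sector drawings, E(S) = 1
loss <- numeric(N)
for (n in 1:N) {
  c.pd <- pd * (1 - w + w * s[n])  # conditional PD via the CreditRisk+ link
  d <- rpois(M, c.pd)              # Poisson default indicators
  loss[n] <- sum(pl * d)           # portfolio loss L^(n)
}
quantile(loss, 0.999, type = 1)    # simulated VaR_0.999
```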

Adaption of importance sampling techniques
In most cases, the risk figures are based on extreme scenarios with a low probability of occurrence. For instance, if ES_0.999 is to be estimated on the basis of 10^3 relevant tail scenarios (in order to achieve a reliable estimation), one has to perform 10^6 simulations. If portfolios include thousands of counterparties, the simulation will be very time-consuming and will need a lot of memory. With the help of importance sampling techniques, one can "manipulate" the simulation such that extreme scenarios occur more often and tail measures can be calculated on a higher number of simulated losses. Mathematically, importance sampling is just a change of the probability measure from P to P^IS. Instead of drawing random numbers from P, one can draw from P^IS, where the probability of relevant scenarios is higher. The only restriction is that supp(f) ∩ A ⊆ supp(f^(IS)), where supp(f) and supp(f^(IS)) denote the supports of the corresponding density functions and A is the set of scenarios the risk measure is calculated on. In order to get an estimator with respect to the original measure P, the standard estimator (e.g. μ̂ = (1/N) Σ_{n=1}^N x^(n) for the mean) has to be adjusted by the so-called likelihood ratio

lhr(x) = f(x) / f^(IS)(x).

In our case, the standard estimator of the density function (5) changes to

P̂(L̃ = ν · L_0) = (1/N) · Σ_{n=1}^N lhr(s^(n)) · 1_{{L̃^(n) = ν · L_0}}.

Since a credit portfolio model in general contains a lot of different distributions, the range of application for an importance sampling algorithm is very wide. For example, one could concentrate on the sector copula. Here, different approaches are possible. For instance, one can simply strengthen the overall level of dependence by increasing the entries of the dispersion matrix of a t-copula or by lowering the degrees of freedom (e.g. see Mai and Scherer 2012). Another approach could be to concentrate on those sector drawings where extreme scenarios (e.g. exceeding the 95%-quantile) occur jointly across different sectors (see Arbenz, Cambou, and Hofert 2014). Additionally, one can also use importance sampling on the marginal distributions by shifting the mean or increasing the variance and higher moments, or use a more sophisticated approach, see Glasserman and Li (2005).
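As a self-contained illustration of this reweighting (not the package's own mechanism, which only consumes user-supplied drawings and likelihood ratios), one can estimate the tail probability of a single Gamma sector by sampling from a variance-inflated proposal:

```r
set.seed(11)
v0 <- 0.25; v1 <- 1.0   # original and (heavier-tailed) proposal variance, both mean 1
N  <- 1e5
s  <- rgamma(N, shape = 1/v1, scale = v1)   # draws from the proposal P^IS
lhr <- dgamma(s, shape = 1/v0, scale = v0) /
       dgamma(s, shape = 1/v1, scale = v1)  # likelihood ratios f/f^IS
q  <- qgamma(0.999, shape = 1/v0, scale = v0)  # true 99.9% quantile under P
p.is <- mean(lhr * (s > q))                 # IS estimate of P(S > q), target 0.001
```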
Please note that, since the sector distribution itself can be defined arbitrarily by the user and the possibilities of importance sampling are manifold, the package does not perform any kind of importance sampling on its own. Instead, the sector drawings (random.numbers, see Table 1) can be simulated with a user-defined importance sampling approach and passed to a portfolio model together with a vector of likelihood ratios, which will be respected when the loss distribution is calculated. In this way, as in case of the random.numbers matrix, the user has maximum flexibility to choose which approach is suitable in his or her situation.
For a more detailed introduction to importance sampling in general we refer to Rubino and Tuffin (2009).

Identification of risk drivers
For a portfolio manager, it is important to know which obligors within the portfolio are riskier than others. In order to identify such risk drivers, we briefly introduce different measures for counterparty risk contributions which are available in the package, i.e. contributions to the standard deviation σ of the portfolio loss, the value at risk, the economic capital and the expected shortfall. For a detailed derivation of the corresponding formulas in case of the analytical CreditRisk+ model, please refer to Credit Suisse First Boston International (1997) and Haaf and Tasche (2002).

Analytical CreditRisk + model
On counterparty level, the following risk contributions (RC) can be calculated:

contributions to the portfolio standard deviation σ, with σ_k denoting the standard deviation of sector k and EL_k := Σ_i w_{i,k} · PD_i · PL_i denoting the expected loss with respect to sector k,

contributions to the value at risk, where L_k denotes the loss in sector k, and

contributions to the expected shortfall, where M* is the maximum portfolio loss for which a probability is calculated (depending on alpha.max, see Table 1).
Please note that, depending on the loss unit L_0 used for exposure discretization and the number of obligors within the portfolio, VaR contributions may be zero for some counterparties because they do not default in the single VaR event. Therefore, it is reasonable to consider contributions to ES rather than VaR. Because ES is based on the upper tail of the loss distribution rather than a single loss level, the mentioned problem does not occur using ES contributions.
Finally, for all these measures it holds that the individual contributions sum up to the measure calculated on portfolio level.Therefore, one can also analyze contributions, for example on sector level (e.g. business lines or countries) by simply aggregating the corresponding counterparty contributions.

Simulation models
Within the simulation framework, expected shortfall contributions can be calculated. For this purpose, one has to define a loss threshold loss.thr > 0, which should be lower than the corresponding VaR but not too low in order not to stress memory usage too much. If the portfolio loss L^(n) in scenario n is above loss.thr, all counterparty losses L_i^(n) are stored. Counterparty risk contributions to ES on level α ∈ (0, 1) are then calculated as

RC_i^{ES_α} = (1 / |N_α|) · Σ_{n ∈ N_α} L_i^(n),

where N_α := {n : L^(n) ≥ VaR_α} denotes the set of tail scenarios. For other tail measures (VaR and EC), the risk contributions are calculated with the same approach but with respect to another level τ ∈ (0, 1) such that ES_τ = VaR_α or ES_τ = EC_α, respectively. Therefore, risk contributions to VaR and EC are approximated by risk contributions to ES but on a lower level τ. Using the ES approach instead of a direct calculation with respect to VaR or EC, risk contributions are much more stable because of the higher number of scenarios used for the calculation.

Since the portfolio loss distribution is not continuous, the level τ for VaR/EC contributions is chosen such that ES_τ is as close as possible to VaR_α or EC_α, respectively. If the deviation is greater than or equal to 0.01%, a corresponding message is displayed.
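A base-R sketch of this tail-averaging scheme on simulated counterparty losses (hypothetical independent exponential losses; a convenient check is that the contributions add up to the portfolio ES):

```r
set.seed(3)
M <- 5; N <- 1e4
cl <- matrix(rexp(M * N, rate = 1/1e5), nrow = N)  # counterparty losses per scenario
L  <- rowSums(cl)                                  # portfolio losses L^(n)
alpha <- 0.99
var.a  <- quantile(L, alpha, type = 1)
tail.n <- L >= var.a                               # the set N_alpha of tail scenarios
rc.es  <- colMeans(cl[tail.n, , drop = FALSE])     # ES contributions per counterparty
all.equal(sum(rc.es), mean(L[tail.n]))             # contributions sum to the ES: TRUE
```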

The GCPM package
The main component of the package is the S4 class GCPM. Besides this class, there are some additional functions, in particular for object creation. The class represents the whole portfolio model framework. It contains all model specifications as well as the portfolio and the loss distribution once it is estimated. In case of a simulation model, losses on counterparty level are also stored, depending on a predefined threshold loss.thr (see Table 1). In the next sections, we give a detailed overview of the most important features. A complete list of all slots is available in the help pages of the package (see ?GCPM).
The following examples are based on the CreditRisk+ framework. Please note that the same analyses can also be performed within a CreditMetrics framework.

General structure
The overall structure of the package is very intuitive. At first, one has to initialize a new model using the init() function. The process of creation is as follows: Passing the input parameters for a new model to the function creates a new object of class GCPM with the specified settings (after some plausibility checks). For example:

library("GCPM")
sec.var <- c(0.2, 0.3, 0.4)            # arbitrary sector variances
names(sec.var) <- c("A", "B", "C")     # assign sector names to variances
CRP.classic <- init(model.type = "CRP", loss.unit = 50000,
                    alpha.max = 0.9999, sec.var = sec.var)

After creating a new portfolio model, one can analyze a credit portfolio using the analyze() method. In case of an analytical CreditRisk+ model, the loss distribution will be calculated using the algorithm described in Haaf et al. (2003). For simulation models, the simulation described in Algorithm 1 is used. If loss levels are provided via the parameter alpha, tail measures are calculated automatically with respect to those levels. Otherwise, one can calculate those measures afterwards with the corresponding methods, as shown in the following examples. The portfolio data frame has to follow the structure described in Table 2.

Analyzing credit risk: A first example
Based on a portfolio distributed with the package (in the package's data folder) consisting of 3000 counterparties and three industrial sectors, we offer an example to show how the package works. We start from the CRP.classic model defined in the previous section. After deriving (or simulating) the loss distribution, risk measures like VaR, EC or ES can be calculated with the help of the corresponding methods. The probability mass function of the loss distribution together with indicators for tail measures can be plotted by using the plot() function. The second argument defines the scale of the horizontal axis.

Modifying distributional assumptions
Besides the pure quantification of credit risk, the package assists in analyzing different aspects of model risk related to distributional or functional form assumptions. Starting from the classic CreditRisk+ model (example of Section 5.2), we will show how the package can be used to build much more flexible models and how to quantify model risk similar to the analyses done by Jakob and Fischer (2014), Fischer and Mertel (2012) or Fischer and Kaufmann (2014).
A key element for this is the random.numbers matrix, which represents the (multivariate) sector distribution. Since the dimension of this matrix depends on the portfolio, i.e. on the number of sectors used, the matrix has to be defined by the user. Additionally, the sector distribution (expressed by random.numbers) also heavily depends on the economic sectors it is associated with. That is, the sector copula and the marginal distributions may be very different across geographical regions and industries. Since this is a very crucial issue, which also has a significant impact on the risk figures, this matrix must be defined by the user (i.e., no default value is provided). Furthermore, in this way, the user has maximum flexibility to define the sector distribution according to his or her needs. However, a few examples are given on the following pages. For more information about the question of sector parametrization, we refer to Hamerle and Rösch (2006) or Dorfleitner, Fischer, and Geidosch (2012).

Checking for simulation error
At first, we check whether the results of the simulation model correspond to the analytical ones. Therefore, we create a matrix of random numbers which are independently Gamma distributed with mean equal to one and variances given by sec.var, and which we can pass to the argument random.numbers of the init() function.
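Such a matrix can be generated with base R alone; a sketch (a Gamma(1/σ², σ²) variable has mean one and variance σ², matching the CreditRisk+ convention):

```r
set.seed(123)
sec.var <- c(A = 0.2, B = 0.3, C = 0.4)   # sector variances, named by sector
N <- 1e5
random.numbers <- sapply(sec.var, function(v) rgamma(N, shape = 1/v, scale = v))
# column names inherit the sector names and must match those used in the portfolio
colMeans(random.numbers)                  # all close to 1
apply(random.numbers, 2, var)             # close to 0.2, 0.3, 0.4
```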
Because we did not provide a vector with likelihood ratios, a corresponding warning is displayed. Similarly, we get another warning because the parameter loss.thr was not set.

Introducing sector dependencies
One of the most crucial assumptions of the classic CreditRisk+ model is the assumption of independent sectors. Within an analytical framework, extensions to correlated sectors are proposed by Fischer and Dietz (2011) and Giese (2003). Here, we use dependent random variables (the random.numbers matrix) to introduce dependence between sectors. Before we continue with our examples, a brief introduction to the concept of copulas is given, which will be used within the following example.
A copula is a multivariate distribution function on the d-dimensional unit hypercube with uniform one-dimensional margins. By using copulas, an arbitrary multivariate distribution can be decomposed into its one-dimensional margins and the dependence structure. Following Sklar's theorem (see Sklar 1959), for any multivariate distribution function F on R^d with univariate margins F_i there exists a unique function C : ×_{i=1}^d Im(F_i) → [0, 1] such that F(x) = C(F_1(x_1), ..., F_d(x_d)) for all x ∈ R^d. Conversely, if the F_i are arbitrary univariate distribution functions and C is a copula function, then F(x) := C(F_1(x_1), ..., F_d(x_d)) defines a valid multivariate distribution function. Famous representatives of copulas are the Gaussian and the t-copula. For further details on this topic, we refer to Joe (1997) and Nelsen (2006).
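Even without the copula package, an exchangeable Gaussian copula with Gamma margins can be sketched in base R via a Cholesky factorization and the probability integral transform (all parameters hypothetical):

```r
set.seed(5)
N <- 1e4; K <- 3; rho <- 0.5
Sigma <- matrix(rho, K, K); diag(Sigma) <- 1      # exchangeable correlation matrix
Z <- matrix(rnorm(N * K), N, K) %*% chol(Sigma)   # correlated standard normals
U <- pnorm(Z)                                     # Gaussian copula sample on (0,1)^K
sec.var <- c(0.2, 0.3, 0.4)
S <- sapply(1:K, function(k)                      # Gamma margins with mean one
  qgamma(U[, k], shape = 1/sec.var[k], scale = sec.var[k]))
```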
Within our next example, the copula package (see Hofert, Yan, Maechler, and Kojadinovic 2014) is used in order to create an exchangeable Gaussian copula for the sector drawings. The margins are again Gamma distributed with parameters equal to those of the former example.
VaR(CRP.bern.gauss, alpha) / VaR(CRP.bern, alpha)  # compare risk figures

A great advantage of the package is that one can use any arbitrary portfolio with any possible dependence structure and quantify the markup in his or her special case.
Exchanging both the sector copula and the margins

In our next example, we demonstrate how the sensitivity of risk figures with respect to distributional assumptions (i.e., sector copula and margins) can be quantified. The possibilities are only restricted by the set of distributions (univariate and multivariate) available in R. In order to increase performance, we use multiple cores (here, 4 cores) for the Monte Carlo simulation. Therefore, the package parallel is required.
VaR(CRP.bern.t, alpha) / VaR(CRP.bern.gauss, alpha)  # compare risk figures

For a more detailed analysis regarding the sector copula within the CreditRisk+ and the CreditMetrics framework, we also refer to Fischer and Jakob (2015). When exchanging sector distributions, please take care of the specific model assumptions, e.g. that the mean equals one within the CreditRisk+ framework, or the quantification of the default threshold Φ^{-1}(PD) in a CreditMetrics-type model.
In the next step, we switch the marginal sector distributions from a Gamma distribution to a log-normal distribution. Please note that the same analysis can be carried out within a CreditMetrics-like default model by using link.function = "CM".
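Matching the unit-mean convention with log-normal margins requires a small parameter transformation; a base-R sketch for a hypothetical variance v:

```r
v <- 0.3                    # desired sector variance
sdlog   <- sqrt(log(1 + v)) # then Var(S) = (exp(sdlog^2) - 1) * exp(2*meanlog + sdlog^2) = v
meanlog <- -sdlog^2 / 2     # ensures E(S) = exp(meanlog + sdlog^2 / 2) = 1
set.seed(9)
s <- rlnorm(1e5, meanlog = meanlog, sdlog = sdlog)
c(mean(s), var(s))          # close to (1, 0.3)
```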
The loss distributions of the examples are shown in Figure 2. The x-axis of both charts represents the loss percentile in the classic CreditRisk+ model. The right chart exhibits the upper tail of all distributions together with vertical lines indicating the value of VaR_0.999 in each model, clearly demonstrating how risk increases if assumptions related to the sector distribution are modified.

Pooling
Finally, we show how a simple pooling approach can be used in order to speed up calculations. For this purpose, the package's data folder contains a prepared portfolio containing three pools (see Table 3). Here, all counterparties within the same sector and with a potential loss (PL = EAD · LGD) below 200,000 are grouped into one pool.
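The aggregation itself (the formulas are given below) is easy to reproduce in base R for a hypothetical pool; a convenient check is that pooling preserves the expected loss:

```r
ead <- c(2e5, 1.5e5, 3e5, 1e5)      # hypothetical pool members
lgd <- c(0.40, 0.50, 0.45, 0.60)
pd  <- c(0.010, 0.020, 0.015, 0.030)
M.pool   <- length(ead)
ead.pool <- mean(ead)                                    # average EAD per counterparty
lgd.pool <- sum(ead * lgd) / (ead.pool * M.pool)         # weighted average LGD
pd.pool  <- sum(ead * lgd * pd) / (ead.pool * lgd.pool)  # expected number of defaults
# the pool reproduces the aggregated expected loss:
all.equal(ead.pool * lgd.pool * pd.pool, sum(ead * lgd * pd))  # TRUE
```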
Let M_Pool denote the number of counterparties within one pool. Then, for each pool, the values for EAD, LGD and PD are determined via the following formulas:

EAD_Pool = (1 / M_Pool) · Σ_{i ∈ Pool} EAD_i    (average EAD per counterparty)

LGD_Pool = Σ_{i ∈ Pool} EAD_i · LGD_i / (EAD_Pool · M_Pool)    (weighted average LGD per counterparty)

PD_Pool = Σ_{i ∈ Pool} EAD_i · LGD_i · PD_i / (EAD_Pool · LGD_Pool)    (average number of defaults within the pool)

Since the pooling criteria (i.e., potential loss threshold, sector membership) depend on the underlying portfolio as well as the desired accuracy, we have to leave this task up to the user. Additionally, in order to achieve good approximation results for the risk figures, advanced users may consider more sophisticated pooling techniques, for example based on certain PD and PL ranges, the pool loss standard deviation or the well-known Herfindahl index regarding the counterparty exposures, as presented in Gordy (2003). Please note that in case of a CreditMetrics-like link function (i.e., if link.function = "CM"), which includes the distribution function of a standard normal distribution, default intensities greater than or equal to one are not supported.

Summary

The GCPM package covers the analytical and the simulative version of the CreditRisk+ model and a CreditMetrics-type model within a default framework. The examples show that, because of its flexible structure, the package helps to analyze the sensitivity of risk figures when distributional assumptions are modified, and therefore to quantify aspects of model risk as well. In order to increase performance further, simulation models can be combined with user-specific importance sampling techniques and pooling approaches. The combination of these possibilities and a fast implementation of the simulation core in C++, together with the capability of parallel computing, makes the package a powerful tool which also allows the user to perform calculations on portfolios with a large number of counterparties.
For more information about the package, especially about the individual methods, please have a look at the help pages provided in the package (e.g.?init).

Figure 2: Loss distributions of example models together with indicators for VaR 0.999.
The above code generates a GCPM model named CRP.classic with the given attributes. For some slots of the GCPM class, default values (e.g. for alpha.max) are provided, but they are not necessarily the best choice. Considering this, one should rather choose them individually for each portfolio according to exposures, number of counterparties, and hardware restrictions. Depending on the model.type, different arguments have to be provided. A summary is given in Table 1 below.

Table 1: Arguments for init() in case of a simulation and an analytical model.

sec.var ... is a named numeric vector defining the sector variances. The names have to correspond to the sector names given in the portfolio.

seed ... is a numeric value used to initialize the random number generator. If seed is not provided, a value based on the current system time will be used. Therefore, the results are truly random in this case.

loss.thr ... is a numeric value specifying a lower bound for portfolio losses to be stored in order to derive counterparties' risk contributions.

random.numbers ... is a matrix with sector drawings. The columns represent the different sectors, whereas the rows represent the scenarios. The column names must correspond to the sector names used in the portfolio.

secLHR ... is a numeric vector of length equal to nrow(random.numbers) defining the likelihood ratio of each scenario. If not provided, all scenarios are assumed to be equally likely.

max.entries ... is the number of scenarios stored to calculate risk contributions. The value should be set in consideration of the amount of available memory.

Table 2 :
Structure of the portfolio data frame.