
Guaranteed links to the life market

This article addresses a frequent problem relevant for dynamically hedging unit-linked life insurance contracts with guarantees: the determination of the hedge sensitivities in a numerically efficient manner. For this purpose we use a replication portfolio approach.

By Tigran Kalberer

The European life insurance market is currently being flooded by unit-linked products with investment guarantees (ULG). While these products are relatively new to the European market, they are very well known in the US and Canada, where they are called 'variable annuities'. The design, pricing and risk-management processes of these products present new challenges to the industry. Unfortunately, the technology currently available does not provide solutions to all of the problems that these products can generate.

A popular risk-management approach for these products is dynamic hedging. This approach is based on a stochastic simulation approach for valuing liabilities, which typically is very time-consuming.

The approach requires the determination of the sensitivity of the option value to small changes in a range of risk factors. These sensitivities are called 'greeks' and have specific names for each risk factor: Delta, the first-order sensitivity towards the assets that are part of the underlying funds; Rho, the first-order interest-rate sensitivity; Vega, the first-order volatility sensitivity; Theta, the first-order time sensitivity; and Gamma, the second-order sensitivity towards the assets that are part of the underlying funds – that is, the sensitivity of Delta.

Additionally, there is a requirement that financial market instruments that show the same dependency on the risk factors as the value of the guarantee are available. It is assumed that these instruments can be sold short. If an amount of these instruments is sold short that exactly offsets the sensitivity of the value of the guarantee, then at the end of a small period the value of the hedge portfolio still matches the value of the guarantee and the hedging can continue.

This paper addresses the task of determining the sensitivities of the value of the guarantee over a whole portfolio of contracts, in an efficient way, at each future point in time and for a large range of possible capital market situations.

Valuation of ULG products

Unit-linked products with investment guarantees

The wide variety of 'typical' unit-linked life insurance contracts is well known throughout the industry. The policyholder pays a single premium or regular premiums. The insurance company deducts expense charges and risk premiums, and invests the remaining part of the premium in fund units – either mutual funds or funds created internally within the insurance company for this purpose. The insurance company regularly charges fees to the funds account of the policyholder. Upon occurrence of a defined event – such as maturity, annuitisation or death – the policyholder receives the value of the funds account. Alternatively, the funds account can be converted into a guaranteed traditional annuity or another guaranteed benefit for the policyholder.

The current best practice approach for valuing ULG products is to use stochastic simulation – that is, to produce a sufficiently large set of market-consistent scenarios X_i, i = 1, …, n, describing all relevant market parameters. If the number of scenarios is sufficiently large, the law of large numbers applies and



$$\text{Value} = E^{Q}\!\left[\left.\frac{Z}{N}\,\right|\,\Im_0\right] \approx \frac{1}{n}\sum_{i=1}^{n}\frac{Z(X_i)}{N(X_i)}$$



Here Z is the contingent cashflow at time T; N is a reference asset, called the numeraire; ℑ0 is the information available at time 0; and Q is the so-called risk-neutral measure (see Kalberer, 2006).
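As an illustration of this estimator, the short Python sketch below averages numeraire-deflated guarantee payoffs over simulated scenarios. It is only a sketch: the lognormal scenario generator, the parameter values and the flat guarantee level are assumptions made for illustration, not taken from the article.

```python
import numpy as np

def mc_value(Z, N):
    """Monte Carlo estimate of E_Q[Z/N]: average of numeraire-deflated payoffs."""
    return np.mean(Z / N)

# Illustrative risk-neutral scenarios for the terminal fund value
# (assumed parameters: r = 3%, sigma = 20%, T = 10 years, S0 = 1).
rng = np.random.default_rng(seed=1)
n, r, sigma, T, S0 = 10_000, 0.03, 0.20, 10.0, 1.0
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * rng.standard_normal(n))

Z = np.maximum(1.0 - ST, 0.0)   # guarantee shortfall at maturity, guarantee level 1
N = np.full(n, np.exp(r * T))   # bank-account numeraire with N(0) = 1
print(mc_value(Z, N))           # estimated value of the maturity guarantee
```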

Numerical problems in a dynamic hedging approach

Dynamic hedging

Dynamic hedging is an approach where a hedge portfolio is purchased and actively managed in such a way that all shortfalls arising from an investment guarantee can be financed by this portfolio under all possible financial market situations.

When determining the above-mentioned sensitivities (the greeks), two problems arise:

Problem 1: The calculation of the greeks usually requires repeated stochastic simulation for a large number of scenarios over a huge portfolio of contracts, potentially condensed into a considerable number of model points, in order to determine their market-consistent value and the sensitivities of this value. This procedure is usually very time-consuming.


Figure 1. Value of the maturity guarantee at maturity (shortfall), plotted against the price of a funds unit

Figure 2. Value of the maturity guarantee at maturity and value of the replication portfolio, plotted against the price of a funds unit

Figure 3. Payoff of liability portfolio compared to payoff of the replication portfolio

Problem 2: The scenarios used for this purpose have to be calibrated such that they are market-consistent – that is, they reflect the market prices of financial instruments that are liquidly traded and for which prices are available. The asset models used for generating such scenarios typically do not adequately allow for the reflection of all features of the market prices of financial instruments. Features such as the dependency of volatility on time and moneyness (the 'smile') are difficult to reflect in the calibration of the scenarios used for valuation. This effect can lead to significant errors in valuing ULG products, sometimes referred to as 'model error'.

A potential solution: The replication portfolio approach

Both problems could be solved by using a replication portfolio approach to the valuation of ULG products. The underlying idea is very simple and the accuracy of this approach can be quantified. In most cases, the replication approach gives very accurate approximations for the required calculation results.

The replication portfolio

The replication portfolio of a set of liabilities is defined as a portfolio consisting of potentially fictitious assets (candidate assets) with certain properties that generates very similar cashflows at all points of time under all possible investment scenarios. For some purposes it is only necessary to replicate the market value and its dynamics at a certain point in time.

Figure 4. Hedge slippage and replication-portfolio hedge error in comparison: hedge slippage, deviation of the replication portfolio from the actual portfolio payoff, and the worst case of slippage and error added, plotted per scenario

The properties these replicating assets should possess are as follows:
Their value should be easy to determine, preferably using closed-form solutions. This allows the value of the guarantees to be determined with the minimum of numerical error, although it preserves the potential calibration error.
If possible, there should be market prices (in contrast to just model prices). If this is possible, the calibration error can also be minimised.

The idea of replicating complex and possibly large portfolios of liabilities using replication portfolios is an old one, but only recently has there been widespread application of this process within the insurance industry. An introduction to this approach, which focuses on traditional life business, is given by Oechslin et al (2007). The approach is based on a list of candidate assets, which have the properties described above, and on finding the linear combination of these assets that replicates the given liability cashflows in an optimal way – for example, by minimising the average deviation between the cashflows of the liabilities and those of the replication portfolio, measured using a convenient metric such as least squares. In most cases the weights of the optimal portfolio can be determined using linear algebra, as also described in Oechslin et al (2007); a short sketch of this step is given below.

Replication portfolios for ULG products

For illustrative purposes, I focus on path-independent guarantees in the following example.
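The linear-algebra step referred to above is small in code. The following Python sketch determines least-squares replication weights from simulated cashflows; the array names and shapes are hypothetical and simply illustrate regressing the liability cashflows on the candidate-asset cashflows scenario by scenario.

```python
import numpy as np

def replication_weights(candidate_cf, liability_cf):
    """
    Least-squares replication weights.

    candidate_cf : array of shape (n_scenarios, n_candidates),
                   cashflow of each candidate asset in each scenario
    liability_cf : array of shape (n_scenarios,),
                   aggregate liability cashflow in each scenario
    Returns the weight vector w minimising ||candidate_cf @ w - liability_cf||^2.
    """
    w, *_ = np.linalg.lstsq(candidate_cf, liability_cf, rcond=None)
    return w
```

Additional requirements, such as the asymptotic behaviour discussed later, would turn this unconstrained least-squares problem into a constrained one.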


Table 1: Optimal linear combination of candidate assets

Strike    Notional
0.2       0
0.4       39.20392
0.6       133.0466
0.8       218.4072
1         282.6702
1.2       204.7578
1.4       82.24168
1.6       34.89139
1.8       10.96966
2         –0.60765

Figure 1 examines ULG contracts that mature at a certain point of time t, where the maturity payment depends only on the value of the same underlying funds at t. This implies that there is no dependency on either the path or any other financial instruments (aside from the underlying ones). The overall pay-off generated by the guarantees of the contracts will be a decreasing function of the funds value. As figure 1 illustrates, this function looks surprisingly smooth.

For this and the following examples I have used a sample portfolio of 1,000 single-premium ULG contracts with varying guarantee levels G_p and volumes V_p, a term of 10 years and an initial unit price of 1. The pay-off function is





$$\sum_{\text{policies } p} V_p \cdot \max\!\left(G_p - FV,\ 0\right)$$

where FV is the funds value in year 10.

The graph contains all the information necessary to describe the pay-off of the guarantee over the entire liability portfolio. If this graph is successfully approximated by a linear combination of simple functions (dependent on the same underlying), it is possible to represent a potentially large portfolio of contracts (1,000 in this case) by a small set of functions.

An obvious candidate to approximate the graph is a set of plain vanilla put options on the underlying with a 10-year term and a range of different strikes. This example chooses strike prices 0.2, 0.4 and so on, up to 2. The strike prices are chosen such that they cover the area where the pay-off function needs to be approximated, here the range between 0 and 2. The optimal linear combination of these candidate assets is shown in table 1, and the approximation given by these instruments can be visualised as shown in figure 2.

The optimal replication portfolio for our purposes is defined as the linear combination of the candidate assets minimising the squared deviations between the pay-off function and the approximating replication portfolio over a sufficiently large number of scenarios. Depending on the purpose, it might be necessary to choose another metric for this optimisation. It is important to note that this approach uses the results already produced for the purpose of pricing – that is, the scenarios used for pricing – building on these results in a natural way.

If the liability cashflows were path-dependent then there would be multidimensional dependencies between the different risk factors at different points of time and the cashflows. This would be slightly harder to visualise but the same principles would apply. In these cases, path-dependent candidate assets would have to be chosen.
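To make the example concrete, the sketch below builds the aggregate guarantee pay-off of a hypothetical portfolio of 1,000 single-premium contracts, evaluates vanilla put pay-offs at the strikes 0.2, 0.4, …, 2.0 over simulated terminal fund values, and fits the weights by least squares. The guarantee levels, volumes and scenario generator are invented for illustration, so the resulting weights will not reproduce table 1.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Hypothetical portfolio: 1,000 contracts with varying guarantee levels and volumes
n_policies = 1_000
G = rng.uniform(0.8, 1.2, n_policies)      # guarantee levels G_p (assumed)
V = rng.uniform(0.5, 1.5, n_policies)      # volumes V_p (assumed)

# Terminal fund values in year 10 under an assumed lognormal model, initial unit price 1
n_scen, r, sigma, T = 5_000, 0.03, 0.20, 10.0
FV = np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * rng.standard_normal(n_scen))

# Aggregate liability pay-off  sum_p V_p * max(G_p - FV, 0)  per scenario
liability = np.maximum(G[None, :] - FV[:, None], 0.0) @ V

# Candidate assets: vanilla puts with strikes 0.2, 0.4, ..., 2.0
strikes = np.arange(0.2, 2.01, 0.2)
candidates = np.maximum(strikes[None, :] - FV[:, None], 0.0)

# Least-squares replication weights (cf. table 1 for the article's own example)
weights, *_ = np.linalg.lstsq(candidates, liability, rcond=None)
print(dict(zip(np.round(strikes, 1), np.round(weights, 2))))
```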

The choice of candidate assets

Determining the candidate assets is the real challenge of applying the replication portfolio approach and requires high levels of skill and experience. The example used here is a very simple one, and in reality the features of the guarantees would be far more complex – for example, they would include path dependency. This does not mean that the approach no longer works. It just implies that the candidate assets used in determining the replication portfolio may also be more complex – that is, they may also include path dependency.

There are several approaches that can be taken in order to determine these candidate assets, including:
Analysing the relative importance of each potential candidate asset from a large list of potential assets in a first step, and then focusing only on the most important ones in a second step, as a means of avoiding overfitting;
Using a priori knowledge of the liabilities to choose appropriate candidate assets – for example, floating-strike look-back options to reflect certain advanced features of ULG products.
Typically these candidate assets are based on total-return indices in order to avoid dividend risk.

The importance of asymptotic behaviour

When estimating sensitivities, it is very important that the replication portfolio has the same asymptotic behaviour in extreme capital market situations as the pay-off of the liability portfolio of guarantees. For the ULG products under consideration, the value of the guarantees as the unit price goes to infinity is easy to determine: it is zero. The value of the guarantees as the unit price goes to zero is just as easy to determine: it is simply the sum of the guarantees. This asymptotic behaviour can be enforced by adding additional constraints to the optimisation process. Using put options as candidate assets automatically ensures that the value of the replication portfolio approaches zero as the unit price increases.

Advantages of the replication portfolio approach

To illustrate this approach, it is assumed that the dynamics of the funds' prices follow a simple geometric Brownian motion with drift µ = 3% and volatility σ = 20% per annum:



$$\ln\!\left(\frac{S_{t+1}}{S_t}\right) = \left(\mu - \frac{1}{2}\sigma^{2}\right)dt + \sigma\, dW_t$$
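Paths from this process can be generated by iterating the log-return step; a minimal Python sketch with weekly steps is given below. The step size, seed and function name are assumptions made for illustration.

```python
import numpy as np

def simulate_paths(n_paths, years, mu=0.03, sigma=0.20, dt=1/52, s0=1.0, seed=3):
    """Simulate fund-unit price paths by iterating the log-return step above."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(years / dt))
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * dW
    return s0 * np.exp(np.cumsum(log_returns, axis=1))

paths = simulate_paths(n_paths=1_000, years=10)   # 1,000 scenarios, as in the article
```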

The closeness of fit of the replication portfolio can be illustrated by plotting the value of the replication portfolio against the pay-off generated by the guarantee for a number of stochastic scenarios based on this process (1,000 in this case), as shown in figure 3. The standard deviation of the difference of the pay-offs (replication portfolio minus liabilities) is about 1.3 – that is, 0.13% of the aggregated premiums.

The value of the cashflows is:
Exact value, calculated contract-by-contract using a closed-form solution: 124.62
Value determined using the above-mentioned market-consistent scenarios: 128.33
Value of the replication portfolio, determined using closed-form solutions: 124.66

An introduction to stochastic simulation in an insurance environment is given in Kalberer (2006).


Table 2: Comparison of sensitivities derived from a range of approaches

Sensitivity                        Exact       Bumping the model   Replication portfolio approach
Base case                          124.62      125.46              124.66
Delta, initial funds price +.01    –214.10     –219.06             –214.08
Delta, initial funds price +.1     –196.49     –201.40             –196.47
Delta, initial funds price +.5     –139.55     –142.90             –139.55
Rho                                –3,370.26   –3,427.90           –3,370.36

The control-variate approach

The difference between the liability portfolio and the replication portfolio can be considered as a new asset, the 'difference asset'. The pay-offs of this asset have a considerably lower volatility than the pay-offs of the liability portfolio itself (in our example, the above-mentioned 1.3 for the 'difference asset' versus 210 for the liability cashflow). Thus, the estimation of the value of this difference asset using stochastic scenarios has a considerably higher degree of accuracy than the direct estimation of the value of the pay-offs. In this example, the estimated value of the new difference asset at time 0 is –0.02 (in other words, the average of the discounted difference of the cashflows of the replication portfolio and the liabilities).

Therefore, the value of the liability portfolio can be determined with very low stochastic error as the value of the replication portfolio (closed-form solution available, thus no stochastic error) plus the value of the 'difference asset' (low stochastic error due to the low variance of its pay-offs) – in this example: 124.66 – 0.02 = 124.64, much closer to the exact value. This effect can be explained by the central limit theorem, which shows that lower variance leads to higher accuracy. This approach is called the control-variate approach and is widely used in determining the value of contingent payments using stochastic simulation. If the replication portfolio approach were not used for anything else but as a control variate, this alone would justify its determination.

Using the replication portfolio as the control variate requires considerably fewer scenarios for estimating the value of the guarantees, while preserving the level of accuracy. Alternatively, the level of accuracy can be considerably increased while using the same number of scenarios as under the naive approach.

If it is possible to determine direct market values for the assets of the replication portfolio then the calibration problem can also be addressed. In fact, the potential model error introduced by inadequate calibration impacts the stochastic valuation of the difference asset only, which, by construction, is small. The valuation of the replication portfolio is usually exposed to far lower model error, as the assets in the replication portfolio can be valued directly in the market. So the impact of both of the above-mentioned problems is substantially reduced.
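In code, the control-variate estimator amounts to one line added to the plain Monte Carlo estimate. The sketch below is generic: the difference asset is taken as liabilities minus replication portfolio, `repl_closed_form_value` stands for the closed-form value of the replication portfolio, and the array names are hypothetical rather than the article's figures.

```python
import numpy as np

def control_variate_value(Z, Z_R, N, repl_closed_form_value):
    """
    Value of the liabilities using the replication portfolio as control variate:
    closed-form value of the replication portfolio plus the Monte Carlo value of
    the low-variance 'difference asset' (liability cashflow Z minus replication
    cashflow Z_R, deflated by the numeraire N).
    """
    difference_asset_value = np.mean((Z - Z_R) / N)   # small stochastic error
    return repl_closed_form_value + difference_asset_value
```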


Determining sensitivities (greeks)

The replication portfolio approach can also be used to determine the sensitivities of the value of the liabilities to changes in the underlying economic variables, the so-called 'greeks'. The most popular method of determining these sensitivities is known as the 'bumping the model' approach, where the value of the liabilities is determined for the current value of the variable and for a 'shocked' value. However, this approach has severe disadvantages:
The sensitivities estimated this way usually have a large estimation error;
Determination of the sensitivities is time-consuming, as the stochastic simulation has to be performed repeatedly for each sensitivity, and for each model point.
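By contrast, once the replication portfolio consists of vanilla puts, its value and delta follow from closed-form formulas and can be evaluated almost instantly. The sketch below uses the standard Black–Scholes put price and delta under an assumed flat rate and volatility; the weights and strikes in the example call are hypothetical, not those of table 1.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put_value_delta(S, K, T, r, sigma):
    """Black-Scholes value and delta of a European put."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    value = K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
    delta = norm_cdf(d1) - 1.0
    return value, delta

def portfolio_value_delta(weights, strikes, S, T, r, sigma):
    """Closed-form value and delta of a put-based replication portfolio."""
    pairs = [bs_put_value_delta(S, K, T, r, sigma) for K in strikes]
    value = sum(w * v for w, (v, _) in zip(weights, pairs))
    delta = sum(w * d for w, (_, d) in zip(weights, pairs))
    return value, delta

# Illustrative call with hypothetical weights and strikes
value, delta = portfolio_value_delta(
    weights=[50.0, 150.0, 250.0], strikes=[0.6, 1.0, 1.4],
    S=1.0, T=10.0, r=0.03, sigma=0.20)
```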

Using the replication portfolio approach to produce sensitivities

It is relatively straightforward to use a replication portfolio approach to determine the sensitivities. After the replication portfolio has been determined, its sensitivity can easily be calculated, as the replication portfolio is based on instruments that have closed-form solutions for their values and usually also for their sensitivities. In the example, the sensitivities derived by the different approaches can be compared; the results are shown in table 2 (where 1,000 scenarios were used). In this instance, the control-variate approach was not used to produce the 'bumping the model' sensitivities. The replication method shows a superior degree of accuracy compared to all other approximation approaches, and it involves considerably less computation time than the bumping the model approach.

Error bounds for the replication portfolio approach

In order to be able to use this method in a reliable manner, error bounds need to be established for the sensitivities that are determined using the replication portfolio approach. To estimate the error potentially introduced by this approach, two methods are discussed below.

Error bounds for estimating sensitivities using the likelihood-ratio approach

The likelihood-ratio approach uses re-weighting of the scenarios in order to determine the value of a cashflow under changed assumptions. Let Q denote the probability measure based on the current calibration and let Q′ denote the probability measure for a calibration where the economic variable has been changed by a small amount. Let dQ and dQ′ denote the associated probability densities, and assume all necessary requirements on dQ and dQ′ are fulfilled, such that:


$$E^{Q'}\!\left[\frac{Z}{N}\right] = E^{Q}\!\left[\frac{Z}{N}\cdot\frac{dQ'}{dQ}\right]$$

This result can be used to estimate the error produced by determining the sensitivities via the replication portfolio approach instead of exactly:

$$\text{exact sensitivity} - \text{estimated sensitivity (via replication portfolio)} = \left(E^{Q'}\!\left[\frac{Z}{N}\right] - E^{Q}\!\left[\frac{Z}{N}\right]\right) - \left(E^{Q'}\!\left[\frac{Z^{R}}{N}\right] - E^{Q}\!\left[\frac{Z^{R}}{N}\right]\right)$$

$$= E^{Q}\!\left[\frac{Z}{N}\left(\frac{dQ'}{dQ} - 1\right)\right] - E^{Q}\!\left[\frac{Z^{R}}{N}\left(\frac{dQ'}{dQ} - 1\right)\right] = E^{Q}\!\left[\frac{Z - Z^{R}}{N}\left(\frac{dQ'}{dQ} - 1\right)\right]$$

where $Z^{R}$ is the cashflow generated by the replication portfolio. This error term is now estimated using the empirical estimator, based on the results already generated for the stochastic scenarios:

$$E^{Q}\!\left[\frac{Z - Z^{R}}{N}\left(\frac{dQ'}{dQ} - 1\right)\right] \approx \frac{1}{n}\sum_{i=1}^{n}\frac{Z(T, X_i) - Z^{R}(T, X_i)}{N(T, X_i)}\left(\frac{dQ'(X_i)}{dQ(X_i)} - 1\right)$$
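As a numerical sketch of this estimator, assume (purely for illustration) that the only risk factor is a lognormal terminal fund value, so that the likelihood ratio for a small bump of the initial price is a ratio of normal densities of the log fund value. All names and parameters below are assumptions, not the article's implementation.

```python
import numpy as np

def replication_error_estimate(Z, Z_R, N, log_FV, mu0, mu_bumped, s):
    """
    Empirical estimate of  E_Q[ (Z - Z_R)/N * (dQ'/dQ - 1) ]
    where dQ'/dQ is the ratio of normal densities of the log terminal fund value,
    with means mu_bumped and mu0 and common standard deviation s.
    """
    lr = np.exp(((log_FV - mu0) ** 2 - (log_FV - mu_bumped) ** 2) / (2.0 * s**2))
    return np.mean((Z - Z_R) / N * (lr - 1.0))

# Example of the shift: bumping the initial price S0 -> S0 + h moves the mean of
# ln(FV) from mu0 = ln(S0) + (r - sigma^2/2)T to mu_bumped = ln(S0 + h) + (r - sigma^2/2)T.
```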

The variance of this estimator can also be approximated. This allows error bounds for any required level of confidence to be estimated using the law of large numbers. If the replication portfolio is not determined using the same scenarios that are used to determine the error bound, the estimator for the error is unbiased.

The likelihood ratios dQ′/dQ should be determined using the analytical formulas for the distributions, if available. The economic scenario generator producing the asset scenarios for the stochastic valuation is typically based on stochastic differential equations, which can be used to determine the likelihood ratios. However, caution must be exercised, as the random variables could be multivariate, meaning that the pay-off could depend on more than one economic variable.

For our example, the error bounds for a 99% level of confidence (for estimating the error as above) are astonishingly small: the error estimate (exact sensitivity minus estimated sensitivity via replication portfolio) is –0.023, and the 1%/99% quantiles of the error estimate lie within ±0.022 around the mean, assuming the law of large numbers is applicable. Considering that the exact delta is 214.1 in magnitude, this is a remarkably small error.

It could be argued that this article could finish here, as the approach seems to work so nicely. However, so far the law of large numbers has been used to estimate the error bounds, and it is not known beforehand whether the convergence of the stochastic-simulation approximation to the real expected value is sufficient for us to assume that the error is really normally distributed with the empirical volatility as its standard deviation. For large movements in the risk factors, such as the funds price in our example, too few scenarios are generated in the relevant region to guarantee that the results of the central limit theorem and the law of large numbers apply. One indication for this is that

$$\frac{1}{n}\sum_{i=1}^{n}\frac{dQ'(X_i)}{dQ(X_i)}$$

is considerably smaller than 1 in such situations. The approach described above for estimating error bounds is not applicable in these situations. This effect is important for one of the potential applications mentioned below. However, the approach is perfectly suited to estimating sensitivities from a given set of scenarios, as only very small shocks are required, which is exactly the case in which the approach to estimating error bounds works well.

Do we need error bounds?

A more robust approach is to estimate the impact the potential error has on the overall hedging process. In the end, this is what matters, and this approach can be used easily for path-dependent options and for options dependent on several economic variables. To do this, a so-called 'hedge assessment' is performed. This is a task that would have to be prepared in order to assess the quality of the dynamic hedge process anyway.

For this purpose, a sufficiently large set of scenarios is generated under the physical measure (that is, using the best-estimate 'real-world' probabilities, including risk premiums) with sufficient granularity, which in most cases means on a daily basis. The aim is to simulate the effect of the hedge operations along these scenarios, measuring the shortfall at maturity. The shortfall is defined as the difference between the amount necessary to fill up the fund's value to a potentially higher guarantee level and the value of the hedge portfolio at maturity. Because the hedging instruments used are typically linear instruments and the change in value of the liabilities is non-linear, a hedge slippage can be observed. The accrued sum of all these hedge slippages up to maturity is the total hedge slippage for each scenario. The hedge slippages for all scenarios can be used to derive an empirical distribution of the hedge slippage.

If an initially fitted replication portfolio is used to determine the sensitivities and greeks at each future point in time for each scenario, instead of 'properly' calculated sensitivities, then the error produced by this approach can be estimated very easily. In fact, it can be assumed that the exposure of the replication portfolio is hedged, and the difference between the pay-off of the hedge portfolio and the pay-off implied by the guarantee is exactly the error introduced by using the replication portfolio. This error can be determined for the scenarios considered, which in turn can be regarded as an empirical estimate of the error.

In addition to this error, the hedging process will not work perfectly and produces various losses and gains, mainly because of hedge slippages and basis risk. Typically the underlying funds are actively managed and cannot be sold short, so for hedging purposes they have to be replicated using market indices. This gives rise to basis risk. In my experience, the error introduced by hedging the replication portfolio will be negligibly small compared to the hedge slippage itself.


If a hedge assessment is performed for the sample portfolio, assuming a weekly delta hedge, the result seen in figure 4 is produced. The graph shows the P&L from the hedging operations and the error implied by using the replication portfolio, per scenario, with the biggest losses to the left. The fact that the replication portfolio was hedged instead of the actual portfolio does not add a significant amount of risk: the hedge slippage itself has a far bigger magnitude than the error introduced by the replication portfolio approach. Owing to tracking error, in most cases the basis risk alone is far bigger than the deviation caused by using the replication portfolio. This justifies the use of the replication portfolio approach.

Of course, in this example the hedge strategy is simplified, in that a weekly delta hedge only is assumed. In reality, a more frequent hedge, potentially using non-linear hedging instruments, would be used, decreasing the hedge slippage considerably. On the other hand, the hedge slippage in this example was calculated without taking any volatility risk into account, which would increase the hedge slippage. The advantage is that the replication portfolio approach makes the sensitivities much simpler to handle: greeks can be determined from closed-form solutions within fractions of a second, compared to the full stochastic 'bumping the model' approach, which needs hours, if not days, of computation time.

This approach to determining the error of the replication portfolio approach has the following advantages:
It measures the impact of a potential deviation in terms of risk, rather than in terms of the sensitivity itself;
It compares the replication-approach error to other sources of risk, which are usually much bigger, and thus prevents overly complex approaches delivering spurious accuracy;
It allows a proper hedge assessment to be performed without the need for nested stochastic calculations, saving an immense amount of computation time.

Appropriate experience with this approach is required to determine the candidate assets and to prevent nasty surprises concerning approximation errors. However, as demonstrated above, this approach is worth considering, as the actual hedging application can be simplified and sped up significantly, allowing for additional checks and risk measurements on a daily basis.
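For completeness, a compact sketch of such a hedge assessment is given below: a weekly delta hedge of the replication portfolio is simulated along real-world paths, and the slippage per scenario is the difference between the final hedge-portfolio value and the guarantee shortfall. Everything here is an assumption-laden toy version (a flat rate, a single underlying, no basis risk); `value_delta_fn` could be the closed-form `portfolio_value_delta` from the earlier sketch with fixed weights, and `paths` could come from the `simulate_paths` sketch above.

```python
import numpy as np

def hedge_slippage(paths, value_delta_fn, payoff_fn, T=10.0, dt=1/52, r=0.03):
    """
    Simulate a weekly delta hedge along real-world price paths.

    value_delta_fn(S, tau) -> (value, delta) of the replication portfolio
    payoff_fn(S_T)         -> guarantee shortfall at maturity
    Returns the hedge slippage per scenario: final hedge-portfolio value minus shortfall.
    """
    n_paths, n_steps = paths.shape
    slippage = np.empty(n_paths)
    for p in range(n_paths):
        S = 1.0                                   # initial unit price, as in the example
        value, delta = value_delta_fn(S, T)
        cash = value - delta * S                  # start with the option value, delta-hedged
        for k in range(n_steps):
            S = paths[p, k]
            tau = max(T - (k + 1) * dt, 1e-8)     # remaining term after this step
            cash *= np.exp(r * dt)                # accrue cash at the flat rate
            _, new_delta = value_delta_fn(S, tau)
            cash -= (new_delta - delta) * S       # self-financing rebalancing
            delta = new_delta
        slippage[p] = cash + delta * S - payoff_fn(S)
    return slippage
```

The empirical distribution of these slippages, and of the additional deviation from hedging the replication portfolio instead of the actual liabilities, is what figure 4 compares.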

Applications of the replication portfolio approach

The approach can be used for three different purposes:

To determine the daily sensitivities and greeks based on one stochastic base-run: by performing the base-run for liabilities and all candidate assets; determining the replication portfolio; determining the sensitivities based on the replication portfolio; and increasing the accuracy of the base-run using the replication portfolio approach. As previously mentioned, this approach should work in most circumstances and good error bounds are available.

To perform stochastic base-runs only if necessary and use the replication portfolio in the meantime. This works as follows: the daily hedging routine is based on the replication portfolio as long as the risk factors do not move too far away from the base case used for the stochastic simulation; large deviations from the base case can lead to unreliable estimations of the error bounds and to convergence problems; and the stochastic simulation and the determination of the replication portfolio have to be repeated, at least monthly, if the risk factors move too far away from the base case or if the liability portfolio has changed.

To perform hedge assessment calculations. Performing a hedge assessment based on a full stochastic approach is not always feasible, as discussed above; the main problem is that this is a nested stochastic approach and needs considerable run-times. Performing the hedge assessment based on the replication portfolio is a very efficient alternative and is sufficient to discuss the main issues.

Whether any of these approaches is applicable depends on the nature of the ULG products. There is no general rule that gives advance indication as to whether the approach works reliably. However, using the techniques presented in this article, it should be possible to decide whether the approaches work and how accurate they are once they have been applied. The advantages of these approaches are so great that the possibility that they could work always justifies exploring whether they are applicable.

Summary

Implementing a dynamic hedging approach for unit-linked life insurance products with investment guarantees presents large technical challenges. The determination of the sensitivities and greeks can be especially time-consuming and involve considerable estimation error. To perform the tasks implied by a dynamic hedging scheme, it is suggested that a replication portfolio approach is used. This not only increases the speed and reliability of the computations but also increases accuracy and reduces calibration error.

Tigran Kalberer is a principal at Towers Perrin. E-mail: [email protected]

I would like to thank Manuel Sales and Carole Bozkurt for their review, and Jo Oechslin for his review, excellent input and the very valuable discussions about the contents of this paper.

References

Kalberer T (2006), Market consistent valuation of insurance liabilities – basic approaches and tool-box, Der Aktuar 12, Heft 1


Oechslin J et al (2007), Replicating embedded options, Life & Pensions, February 2007
Hull J (2000), Options, Futures and Other Derivatives, Prentice-Hall

