If P is the transition matrix for 1 year, how can we find the transition matrix for 4 months?
Assuming time invariance and the Markov property, it is easy to calculate the transition matrix for any whole number of periods as P^n, where P is the given transition matrix for one period and n is the number of time periods for which we need the new transition matrix.
However, when the new time period is shorter than the period for which the matrix is available, the only way to derive a transition matrix for the partial period is to numerically calculate a matrix M such that M^n = P; here, M^3 = P since 4 months is a third of a year. Therefore Choice 'b' is the correct answer. Taking cube roots of the matrix element by element is not a valid operation, dividing by 3 gives a matrix that is meaningless in this context, and P x P x P will give us the transition matrix for 3 years, not 1/3rd of a year.
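By way of illustration (not part of the original answer), here is a minimal sketch of how such a matrix root might be computed numerically. It uses SciPy's fractional_matrix_power, and the two-state matrix P is entirely made up:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# Hypothetical 1-year transition matrix for two rating states (made-up values)
P = np.array([[0.90, 0.10],
              [0.20, 0.80]])

# 4 months is 1/3 of a year, so we need M such that M^3 = P
M = fractional_matrix_power(P, 1 / 3)

# Sanity check: M cubed should recover the 1-year matrix
assert np.allclose(M @ M @ M, P)
print(M)
```

Note that a numerically computed root of a valid transition matrix is not guaranteed to have non-negative entries, which is why the result should be checked rather than assumed.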
Changes in which of the following do not affect the expected default frequencies (EDF) under the KMV Moody's approach to credit risk?
EDFs are derived from the distance to default. The distance to default is the number of standard deviations that expected asset values are away from the default point, which itself is defined as short term debt plus half of the long term debt. Therefore debt levels affect the EDF. Similarly, asset values are estimated using equity prices, so market capitalization affects EDF calculations. Asset volatility appears in the denominator of the distance to default calculation, so asset volatility affects EDF too. The risk free rate is not directly factored into any of these calculations (though one could argue that the level of interest rates may impact equity values or the discounted values of future cash flows, but that is a second order effect). Therefore Choice 'b' is the correct answer.
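For concreteness, a minimal sketch of the distance to default calculation described above; all input values are made up:

```python
# Illustrative KMV-style distance to default (all inputs are made up)
expected_asset_value = 120.0   # expected market value of assets
short_term_debt = 40.0
long_term_debt = 30.0
asset_volatility = 0.25        # annualized, as a fraction of asset value

default_point = short_term_debt + 0.5 * long_term_debt  # = 55.0

# Number of standard deviations between expected assets and the default point
distance_to_default = (expected_asset_value - default_point) / (
    expected_asset_value * asset_volatility
)
print(distance_to_default)  # ~2.17 standard deviations
```

Notice that the risk free rate appears nowhere in the calculation, which is the point of the answer.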
What would be the consequences of a model of economic risk capital calculation that weighs all loans equally regardless of the credit rating of the counterparty?
I. Create an incentive to lend to the riskiest borrowers
II. Create an incentive to lend to the safest borrowers
III. Overstate economic capital requirements
IV. Understate economic capital requirements
If capital calculations are done in a standard way regardless of risk (as reflected by credit ratings), then it creates a perverse incentive for the lenders' employees to lend to the riskiest borrowers that offer the highest expected returns as there is no incentive to 'save' on economic capital requirements that are equal for both safe and unsafe borrowers. Therefore statement I is correct.
Given that the portfolio of such an institution is likely to then comprise poor quality borrowers, while economic capital would be based upon 'average' expected ratings, it is likely to carry lower economic capital than its exposures warrant. Therefore any such economic risk capital model is likely to understate economic capital requirements, and statement IV is correct.
Statements II and III are incorrect and Choice 'b' is the correct answer.
Which of the following is not a risk faced by a bank from holding a portfolio of residential mortgages?
Choice 'd' represents a risk that does not arise from its holdings of mortgages. Therefore Choice 'd' is the correct answer.
All the other risks identified are correct - the bank faces interest rate, default and prepayment risks on its mortgages.
Which of the following statements are true?
I. If no loss data is available, good quality scenarios can be used to model operational risk
II. Scenario data can be mixed with observed loss data for modeling severity and frequency estimates
III. Severity estimates should not be created by fitting models to scenario generated loss data points alone
IV. Scenario assessments should only be used as modifiers to ILD or ELD severity models.
There are multiple ways to incorporate scenario analysis for modeling operational risk capital - and the exact approach used depends upon the quantity of loss data available, and the quality of scenario assessments. Generally:
- If there is no past loss data available, scenarios are the only practical means to model operational risk loss distributions. Both frequency and severity estimates can be modeled based on scenario data.
- If there is plenty of past data available, scenarios can be used as a modifier for estimates that are based solely on data (for example, consider the MAX of the loss estimates at the desired quantile as provided by the data, and as indicated by scenarios)
- If high quality scenario data is available, and there is sufficient past data, one could mix scenario assessments with the loss data and fit the combined data set to create the loss distribution. Alternatively, both could be fitted with severity estimates and then the two severities could be parametrically combined.
In short, there is considerable flexibility in how scenarios can be used.
Statement I is therefore correct, and so is statement II as both indicate valid uses of scenarios.
Statement III is not correct because it may be okay to create severity estimates based on scenario data alone.
Statement IV is not correct because while using scenarios as modifiers to other means of estimation is acceptable, that is not the only use of scenarios.
Which of the following is true for the actuarial approach to credit risk modeling (CreditRisk+):
The actuarial model considers defaults to follow a Poisson distribution with a given mean per period, and these are binary in nature, ie a default happens or it does not happen. The model does not consider the loss of value from credit downgrades, and focuses only on defaults. The model also does not consider default correlations between obligors. Therefore Choice 'c' is the correct answer.
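As a quick illustration of the Poisson default assumption (the mean is made up), simulating default counts per period is a one-liner with NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)

# CreditRisk+ style assumption: defaults per period follow a Poisson
# distribution with a given mean, and each default is a binary event
mean_defaults_per_period = 3.0
print(rng.poisson(mean_defaults_per_period, size=10))  # ten simulated default counts
```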
The other choices are not true statements that would apply to the actuarial approach.
Under the ISDA MA, which of the following terms best describes the netting applied upon the bankruptcy of a party?
Netting is the ability to settle just the net balances when amounts are both owed and due. Netting can take many forms. Payment netting is netting between counterparties that owe moneys to each other in the same currency under the same transaction (or master agreement). Closeout netting is when parties settle a net amount for the value of all outstanding transactions upon the occurrence of an event of default such as bankruptcy. Multilateral netting involves a third party that sets off exposures across counterparties that owe moneys to each other.
Closeout netting under the ISDA master agreement enables a party to terminate transactions early if an Event of Default or Termination Event occurs in respect of the other party. It involves the calculation and netting of the termination values of all transactions to produce a single amount payable between the parties. Closeout netting is therefore the correct answer.
Which of the following is NOT an approach used to allocate economic capital to underlying business units:
Other than Choice 'c', all the others represent valid approaches to allocate economic capital to underlying business units. There is no such thing as a 'fixed ratio economic capital contribution'.
Economic capital under the Earnings Volatility approach is calculated as:
The Earnings Volatility approach to calculating economic capital is a top down approach that considers economic capital as being the capital required to cover the worst case fall in earnings, and calculates EC as equal to the worst case decrease in earnings capitalized at the rate of return expected of the firm. The worst case decrease in earnings, or the earnings-at-risk, can only be stated at a given confidence level, and is equal to the Expected Earnings less the Earnings under the worst case scenario.
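A worked sketch of this calculation, with purely illustrative numbers:

```python
# Earnings Volatility approach to economic capital (illustrative values)
expected_earnings = 500.0          # expected annual earnings, $m
worst_case_earnings = 300.0        # earnings at the chosen confidence level, $m
required_return_on_capital = 0.10  # rate of return expected of the firm

earnings_at_risk = expected_earnings - worst_case_earnings  # = 200.0

# Capitalize the worst case fall in earnings at the firm's required return
economic_capital = earnings_at_risk / required_return_on_capital
print(economic_capital)  # 2000.0, ie $2bn
```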
A bank prices retail credit loans based on median default rates. Over the long run, it can expect:
The key to pricing loans is to make sure that the prices cover expected losses. The correct measure of expected losses is the mean, and not the median. To the extent the median is different from the mean, the loans would be over or underpriced.
The loss curve for credit defaults is a distribution skewed to the right. Therefore its mode is less than its median which is less than its mean. Since the median is less than the mean, the bank is pricing in fewer losses than the mean, which means over the long run it is underestimating risk and underpricing its loans. Therefore Choice 'd' is the correct answer.
If on the other hand for some reason the bank were overpricing risk, its loans would be more expensive than its competitors and it would lose market share. In this case however, this does not apply. Loan pricing decisions are driven by the rate of defaults, and not the other way round; therefore any pricing decisions will not reduce the rate of default.
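The ordering mode < median < mean for a right-skewed loss curve can be verified numerically; a quick sketch using a lognormal distribution as a stand-in for the loss curve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Lognormal as a stand-in for a right-skewed credit loss curve
mu, sigma = 0.0, 1.0
losses = rng.lognormal(mu, sigma, size=1_000_000)

# For a lognormal: mode = exp(mu - sigma^2) < median = exp(mu) < mean = exp(mu + sigma^2 / 2)
print(np.exp(mu - sigma**2), np.median(losses), losses.mean())
# ~0.37 < ~1.0 < ~1.65: pricing at the median understates the mean loss
```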
Which of the following was not a policy response introduced by Basel 2.5 in response to the global financial crisis:
The CCAR is a supervisory mechanism adopted by the US Federal Reserve Bank to assess capital adequacy for bank holding companies it supervises. It was not a concept introduced by the international Basel framework.
The other three were indeed rules introduced by Basel 2.5, which was ultimately subsumed into Basel III.
Stressed VaR is just the standard 99%/10 day VaR, calculated with the assumption that relevant market factors are under stress.
The Incremental Risk Charge (IRC) is an estimate of the default and migration risk of unsecuritized credit products in the trading book. (Though this may sound like a credit risk term, it relates to market risk - for example, a bond rated A being downgraded to BBB. In the old days, the banking book where loans to customers are held was the primary source of credit risk, but with OTC trading and complex products the trading book also now holds a good deal of credit risk. Both IRC and CRM account for these.)
While IRC considers only non-securitized products, the CRM (Comprehensive Risk Model) considers securitized products such as tranches, CDOs, and correlation based instruments.
The IRC, SVaR and CRM complement standard VaR by covering risks that are not included in a standard VaR model. Their results are therefore added to the VaR for capital adequacy determination.
Which of the following statements are true?
I. Risk governance structures distribute rights and responsibilities among stakeholders in the corporation
II. Cybernetics is the multidisciplinary study of cyber risk and control systems underlying information systems in an organization
III. Corporate governance is a subset of the larger subject of risk governance
IV. The Cadbury report was issued in the early 90s and was one of the early frameworks for corporate governance
Governance structures specify the policies, principles and procedures for making decisions about corporate direction. They distribute rights and responsibilities among stakeholders that typically include executive management, employees, the board etc. Statement I is therefore correct.
"Cybernetics is a transdisciplinary approach for exploring regulatory systems, their structures, constraints, and possibilities. In the 21st century, the term is often used in a rather loose way to imply "controlof any system using technology" (Wikipedia). Governance literature has been affected by cybernetics, which is not the same thing as information security or cyber security. Statement II is incorrect.
Corporate governance includes risk governance, and not the other way round. Therefore statement III is incorrect.
The Cadbury Report, titled Financial Aspects of Corporate Governance, was a report issued in the UK in December 1992 by "The Committee on the Financial Aspects of Corporate Governance". The report is eponymous with the chair of the committee, and set out recommendations on the arrangement of company boards and accounting systems to mitigate corporate governance risks and failures. Statement IV is therefore correct.
A bank's detailed portfolio data on positions held in a particular security across the bank does not agree with the aggregate total position for that security for the bank. What data quality attribute is missing in this situation?
The term 'data quality' has multiple elements, ie, data in order to be considered of a high quality must have multiple attributes such as completeness, timeliness, auditability etc. Because this is not an exact science, every expert or text book will have a different view of what goes into data quality. For our purposes however, we will stick to what the PRMIA study material specifies, and according to the study material the following are the elements that can be considered attributes that make for quality data:
I am not going to describe each of these here as that would be repetitive of the study material, but suffice it to say that the break-down of a number into its constituents should tie to the aggregate total. If that is not true, then the data lacks integrity - and therefore Choice 'b' is the correct answer. The other choices address other aspects of data quality but not this, and therefore are not correct.
Financial institutions need to take volatility clustering into account:
I. To avoid taking on an undesirable level of risk
II. To know the right level of capital they need to hold
III. To meet regulatory requirements
IV. To account for mean reversion in returns
Volatility clustering leads to levels of current volatility that can be significantly different from long run averages. When volatility is running high, institutions need to shed risk, and when it is running low, they can afford to increase returns by taking on more risk for a given amount of capital. An institution's response to changes in volatility can be either to adjust risk, or capital, or both. Accounting for volatility clustering helps institutions manage their risk and capital and therefore statements I and II are correct.
Regulatory requirements do not require volatility clustering to be taken into account (at least not yet). Therefore statement III is not correct, and neither is IV which is completely unrelated to volatility clustering.
Which of the following are true:
I. The total of the component VaRs for all components of a portfolio equals the portfolio VaR.
II. The total of the incremental VaRs for each position in a portfolio equals the portfolio VaR.
III. Marginal VaR and incremental VaR are identical for a $1 change in the portfolio.
IV. The VaR for individual components of a portfolio is sub-additive, ie the portfolio VaR is less than (or in extreme cases equal to) the sum of the individual VaRs.
V. The component VaR for individual components of a portfolio is sub-additive, ie the portfolio VaR is less than the sum of the individual component VaRs.
Statement I is true - component VaR for individual assets in the portfolio add up to the total VaR for the portfolio. This property makes component VaR extremely useful for risk disaggregation and allocation.
Statement II is incorrect: the incremental VaRs for the positions in a portfolio do not add up to the portfolio VaR; in fact their sum would be greater.
Statement III is correct. Marginal VaR for an asset or position in the portfolio is by definition the change in the VaR as a result of a $1 change in that position. Incremental VaR is the change in the VaR for a portfolio from a new position added to the portfolio - and if that position is $1, it would be identical to the marginal VaR.
Statement IV is correct; VaR is sub-additive due to the diversification effect. The VaRs of the individual positions in a portfolio add up to more than the VaR for the portfolio as a whole (unless all the positions are 100% correlated, which effectively would mean they are all identical securities and the portfolio has only one asset).
Statement V is incorrect. As explained for Statement I above, the component VaRs add up to the VaR for the portfolio.
The cumulative probability of default for a security for 4 years is 11.47%. The marginal probability of default for the security during year 5 is 5%. What is the cumulative probability of default for the security for 5 years?
The cumulative probability of default for the security over the 5 years is [1 - (1 - probability of default up to year 4)*(1 - probability of default in year 5)]. An easier way to think about this is that the Probability of survival till year 5 = (Probability of survival till year 4 * Probability of survival during year 5). Using the relationship that probability of default = 1 - probability of survival, we can calculate the required probability in all cases.
In this case, the cumulative probability of default for the security for 5 years = 1 - (1 - 11.47%)*(1 - 5%) = 15.8965%, therefore Choice 'c' is the correct answer.
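The same arithmetic as a short script (numbers taken from the question):

```python
# Cumulative PD via survival probabilities (numbers from the question)
pd_4y = 0.1147  # cumulative probability of default over 4 years
pd_y5 = 0.05    # marginal probability of default during year 5

survival_5y = (1 - pd_4y) * (1 - pd_y5)
cumulative_pd_5y = 1 - survival_5y
print(f"{cumulative_pd_5y:.4%}")  # 15.8965%
```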
Under the CreditPortfolio View model of credit risk, the conditional probability of default will be:
When the economy is expanding, firms are less likely to default. Therefore the conditional probability of default, given an economic expansion, is likely to be lower than the unconditional probability of default. Therefore Choice 'a' is the correct answer and the other statements are incorrect.
Identify the correct sequence of events as it unfolded in the credit crisis beginning 2007:
I. Mortgage defaults increased
II. Collapse in prices of unrelated assets as banks tried to create liquidity
III. Banks refused to lend or transact with each other
IV. Asset prices for CDOs collapsed
A paper by the BCBS provides an excellent summary of what happened. Based on this, Choice 'c' is the correct answer.
"At the outset of the crisis, mortgage default shocks played a part in the deterioration of marketprices of collateralised debt obligations (CDOs). Simultaneously, these shocks revealed deficiencies in the models used to manage and price these products. The complexity and resulting lack of transparency led to uncertainty about the value of the underlying investment. Market participants then drastically scaled down their activity in the origination and distribution markets and liquidity disappeared. The standstill in the securitisation markets forced banks to warehouse loans that were intended to be soldin the secondary markets. Given a lack of transparency of the ultimate ownership of troubled investments, funding liquidity concerns were triggered within the banking sector as banks refused to provide sufficient funds to each other. This in turn led to the hoarding of liquidity, exacerbating further the funding pressures within the banking sector. The initial difficulties in subprime mortgages also fed through to a broader range of market instruments since the drying up of market and funding liquidity forced market participants to liquidate those positions which they could trade in order to scale back risk. An increase in risk aversion also led to a general flight to quality, an example of which was the high withdrawals by households from money market funds."
According to the implied capital model, operational risk capital is estimated as:
Operational risk capital estimated using the implied capital model is merely the capital that is not attributable to market or credit risk. Therefore Choice 'b' is the correct answer. All other responses are incorrect.
A stock that follows the Wiener process has its future price determined by:
The change in the price of a security that follows a Wiener process is determined by its standard deviation and expected return. To get the price itself, we need to add this change to the current price. Therefore the future price in a Wiener process is determined by all three: current price, expected return and standard deviation.
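A minimal one-step simulation sketch of the process just described; the parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# One step of a stock following generalized Wiener dynamics:
# dS = mu * S * dt + sigma * S * dW
current_price = 100.0
expected_return = 0.08   # mu, annualized
volatility = 0.20        # sigma, annualized
dt = 1 / 252             # one trading day

dW = rng.normal(0.0, np.sqrt(dt))
price_change = expected_return * current_price * dt + volatility * current_price * dW

# The future price needs all three ingredients: current price, drift and volatility
future_price = current_price + price_change
print(future_price)
```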
Which of the following are valid approaches to leveraging external loss data for modeling operational risks:
I. Both internal and external losses can be fitted with distributions, and a weighted average approach using these distributions is relied upon for capital calculations.
II. External loss data is used to inform scenario modeling.
III. External loss data is combined with internal loss data points, and distributions fitted to the combined data set.
IV. External loss data is used to replace internal loss data points to create a higher quality data set to fit distributions.
Internal loss data is generally of the highest quality as it is relevant, and is 'real' as it has occurred to the organization. External loss data suffers from a significant limitation: the risk profiles of the banks to which the data relates are generally not known due to anonymization, and may well not be applicable to the bank performing the calculations. Therefore, replacing internal loss data with external loss data is not a good idea. Statement IV is therefore incorrect.
All the other approaches described are valid for the risk analyst to consider and implement. Therefore statements I, II and III are correct and IV is not.
An operational loss severity distribution is estimated using 4 data points from a scenario. The management institutes additional controls to reduce the severity of the loss if the risk is realized, and as a result the estimated 1-in-10-year loss is halved. The 1-in-100-year loss estimate however remains the same. What would be the impact on the 99.9th percentile capital required for this risk as a result of the improvement in controls?
This situation represents one of the paradoxes in estimating severity that one needs to be aware of - the improvement in controls reduces the weight of the body/middle of the distribution and moves it towards the tail (as the total probability under the curve must stay at 100%), and the distribution becomes more heavy tailed. As a result, the 99.9th percentile loss actually increases instead of decreasing, creating a counterintuitive result. Therefore the correct answer is that the capital required will increase.
If scenario analysis produces such a result, the analyst must question if the 1 in 100 year loss severity is still accurate. If the new control has reduced the severity in the body of the distribution, the question as to why the more extreme losses have not changed should be raised.
For a back office function processing 15,000 transactions a day with an error rate of 10 basis points, what is the annual expected loss frequency (assume 250 days in a year)?
An error rate of 10 basis points means the number of errors expected in a day will be 15 (recall that 100 basis points = 1%). Therefore the total number of errors expected in a year will be 15 x 250 = 3750. Choice 'a' is the correct answer.
Which of the following formulae describes CVA (Credit Valuation Adjustment)? All acronyms have their usual meanings (LGD=Loss Given Default, ENE=Expected Negative Exposure, EE=Expected Exposure, PD=Probability of Default, EPE=Expected Positive Exposure, PFE=Potential Future Exposure)
The correct definition of CVA is LGD * EPE * PD. All other answers are incorrect.
CVA reflects the adjustment for counterparty default on derivative and other trading book transactions. It represents the credit charge that needs to be deducted from the expected value of the transaction to determine its true value. It is calculated as the product of the loss given default, the probability of default and the weighted average of expected positive exposures across the time horizon of the transaction.
The future exposures need to be discounted to the present, and occasionally the equations for CVA will state that explicitly. Similarly, some more advanced dynamic models also account for the correlation between EPE and PD. The conceptual idea though remains the same: CVA = LGD * EPE * PD.
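A hedged sketch of the basic formula, summing discounted LGD * EPE * PD contributions over yearly buckets; all inputs are illustrative:

```python
# CVA as the sum over time buckets of LGD * EPE(t) * marginal PD(t), discounted
lgd = 0.60                                        # loss given default
epe = [2.0, 2.5, 2.2, 1.8, 1.2]                   # expected positive exposure per year, $m
marginal_pd = [0.010, 0.012, 0.013, 0.014, 0.015] # marginal default probability per year
discount = [0.98, 0.95, 0.92, 0.89, 0.86]         # discount factors to present

cva = sum(lgd * e * p * d for e, p, d in zip(epe, marginal_pd, discount))
print(f"CVA = ${cva:.4f}m")
```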
The largest 10 losses over a 250 day observation period are as follows. Calculate the expected shortfall at a 98% confidence level:
For a dataset with 250 observations, the top 2% of the losses will be the top 5 observations. Expected shortfall is the average of the losses beyond the VaR threshold. Therefore the correct answer is (20 + 19 + 19 + 17 + 16)/5 = $18.2m.
Note that Expected Shortfall is also called conditional VaR (cVaR), Expected Tail Loss and Tail Average.
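The calculation as a short script, using the five tail losses identified above:

```python
import numpy as np

# 2% of 250 observations = top 5 losses; values from the answer above ($m)
tail_losses = np.array([20, 19, 19, 17, 16])

# Expected shortfall = average of the losses beyond the 98% VaR threshold
expected_shortfall = tail_losses.mean()
print(expected_shortfall)  # 18.2
```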
Which of the following should be included when calculating the Gross Income indicator used to calculate operational risk capital under the basic indicator and standardized approaches under Basel II?
Gross income is defined by Basel II (see para 650 of the Basel standard) as net interest income plus net non-interest income. It is intended that this measure should: (i) be gross of any provisions (e.g. for unpaid interest); (ii) be gross of operating expenses, including fees paid to outsourcing service providers; (iii) exclude realised profits/losses from the sale of securities in the banking book; and (iv) exclude extraordinary or irregular items as well as income derived from insurance.
What this means is that gross income is calculated without deducting any provisions or operating expenses from net interest plus non-interest income; and does not include any realised profits or losses from the sale of securities in the banking book, and also does not include any extraordinary or irregular item or insurance income.
Therefore operating expenses are not to be deducted for the purposes of calculating gross income, and neither are any provisions. Profits and losses from the sale of banking book securities are not considered part of gross income, and neither is any income from insurance or extraordinary items.
Of the listed choices, only net non-interest income needs to be included for gross income calculations, and the others are to be excluded. Therefore Choice 'd' is the correct answer. Try to remember the components of gross income from the definition above because in the exam the question may be phrased differently.
Which of the following carry greater counterparty risk: a forward contract on a 10 year note, or a commercial paper carrying a AA credit rating with identical maturity and notional?
The commercial paper has greater credit risk as the entire notional is outstanding. On the forward contract, only the replacement value of the contract, which normally would be a mere fraction of the notional, would be at risk.
Therefore Choice 'd' is the correct answer.
Which of the following can be used to reduce credit exposures to a counterparty:
I. Netting arrangements
II. Collateral requirements
III. Offsetting trades with other counterparties
IV. Credit default swaps
Offsetting trades with other counterparties will not reduce credit exposure to a given counterparty. All other choices represent means of reducing credit risk. Therefore Choice 'c' is the correct answer.
A key problem with return on equity as a measure of comparative performance is:
The major problem with using return on equity as a measure of performance is that return on equity is not adjusted for risk. Therefore, a riskier investment will always come out ahead when compared to a less risky investment when using return on equity as a performance metric.
Return on equity does not ignore the effect of leverage (though return on assets does) because it considers the income attributable to equity, including income from leveraged investments.
Return on equity is generally measured after interest and taxes at the company wide level, though at business unit level it may use earnings before interest and taxes. However this does not create a problem so long as all performance being covered is calculated in the same way.
Cash flows being different from accounting earnings can create liquidity issues, but this does not affect the effectiveness of ROE as a measure of performance.
Which of the following are ordered correctly in the order of debt seniority in a bankruptcy situation?
I. Equity, Subordinate debt, Senior debt
II. Senior debt, Preferred stock, Equity
III. Secured debt, Accounts payable, Preferred stock
IV. Secured debt, DIP financing, Equity
In a bankruptcy, equity ranks last. Preferred equity is one level above equity. Senior debt gets paid out first compared to junior debt, and secured debt is paid out first to the extent of the asset securing it (after which the balance counts as unsecured debt). Accounts payable and other short term liabilities are treated like unsecured creditors. Debtor-in-possession (DIP) financing ranks higher than any other claim as it is financing secured after the bankruptcy to continue the business.
Based on the above, statement I does not represent a correct ordering of seniority as equity is paid last. Similarly, DIP financing receives higher priority than even secured debt, and therefore statement IV is incorrect. Therefore the only correct statements are II and III and Choice 'a' is the correct answer.
What does a middle office do for a trading desk?
The 'middle office' is a term used for the risk management function, therefore Choice 'd' is the correct answer. The other functions describe what the 'back office' does (IT, accounting). The 'front office' includes the traders.
The Options Theoretic approach to calculating economic capital considers the value of capital as being equivalent to a call option with a strike price equal to:
The Options Theoretic approach to calculating economic capital is a top-down approach that considers the value of capital as being equivalent to a call option with a strike price equal to the notional value of the debt - ie, the shareholders have a call option on the assets of the firm which they can acquire by paying the debt holders a value equal to their notional claim (ie the face value of the debt). Therefore Choice 'a' is the correct answer and the other choices are incorrect.
Which of the following steps are required for computing the aggregate distribution for a UoM for operational risk once loss frequency and severity curves have been estimated:
I. Simulate number of losses based on the frequency distribution
II. Simulate the dollar value of the losses from the severity distribution
III. Simulate random numbers from the copula used to model dependence between the UoMs
IV. Compute dependent losses from aggregate distribution curves
A recap would be in order here: calculating operational risk capital is a multi-step process.
First, we fit curves to estimate the parameters of our chosen distribution types for frequency (eg, Poisson) and severity (eg, lognormal). Note that these curves are fitted at the UoM level - which is the lowest level of granularity at which modeling is carried out. Since there are many UoMs, there are many frequency and severity distributions. However, what we are interested in is the loss distribution for the entire bank, from which the 99.9th percentile loss can be calculated. Getting there from the multiple frequency and severity distributions we have estimated is a two step process:
- Step 1: Calculate the aggregate loss distribution for each UoM. Each loss distribution is based upon an underlying frequency and severity distribution.
- Step 2: Combine the multiple loss distributions after considering the dependence between the different UoMs. The 'dependence' recognizes that the various UoMs are not completely independent, ie the loss distributions are not additive, and that there is a sort of diversification benefit in the sense that not all types of losses can occur at once; the joint probabilities of the different losses make the combined loss less than the sum of the parts.
Step 1 requires simulating a number, say n, of losses that occur in a given year from the frequency distribution. Then n losses are picked from the severity distribution, and the total loss for the year is the sum of these losses. This becomes one data point. This process of simulating the number of losses and then drawing that number of losses is carried out a large number of times to get the aggregate loss distribution for the UoM.
Step 2 requires taking the different loss distributions from Step 1 and combining them considering the dependence between the events. The correlations between the losses are described by a 'copula', and combined together mathematically to get a single loss distribution for the entire bank. This allows the 99.9th percentile loss to be calculated.
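A compact sketch of Step 1 for a single UoM, assuming a Poisson frequency and lognormal severity with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed fitted parameters for one UoM (illustrative only)
freq_lambda = 5.0              # Poisson frequency: mean losses per year
sev_mu, sev_sigma = 10.0, 2.0  # lognormal severity parameters

n_years = 100_000
annual_losses = np.empty(n_years)
for i in range(n_years):
    n = rng.poisson(freq_lambda)                     # Step 1a: number of losses in the year
    annual_losses[i] = rng.lognormal(sev_mu, sev_sigma, n).sum()  # Step 1b: sum the severities

# Each simulated year is one data point of the UoM's aggregate loss distribution;
# Step 2's copula aggregation across UoMs then yields the bank-wide 99.9th percentile
print(np.quantile(annual_losses, 0.999))
```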
Which of the following are valid methods for selecting an appropriate model from the model space for severity estimation:
I. Cross-validation method
II. Bootstrap method
III. Complexity penalty method
IV. Maximum likelihood estimation method
Once we have a number of distributions in the model space, the task is to select the "best" distribution that is likely to be a good estimate of true severity. We have a number of distributions to pick from, an empirical dataset (from internal or external losses), and we can estimate the parameters for the different distributions. We then have to decide which distribution to pick, and that generally requires considering both approximation and fitting errors.
There are three methods that are generally used for selecting a model:
1. The cross-validation method: This method divides the available data into two parts - the training set, and the validation set (the validation set is also called the 'testing set'). Parameter estimation for each distribution is done using the training set, and differences are then calculated based on the validation set. Though the temptation may be to use the entire data set to estimate the parameters, that is likely to result in what may appear to be an excellent fit to the data on which it is based, but without any validation. So we estimate the parameters based on one part of the data (the training set), and check the differences we get from the remaining data (the validation set).
2. Complexity penalty method: This is similar to the cross-validation method, but with an additional consideration of the complexity of the model. Because more complex models are likely to produce a more exact fit than simpler models, and this fit may be spurious, a 'penalty' is added to the more complex models so as to favor simplicity over complexity. The 'complexity' of a model may be measured by the number of parameters it has; for example, a log-normal distribution has only two parameters, while a body-tail distribution combining two different distributions may have many more.
3. The bootstrap method: The bootstrap method estimates fitting error by drawing samples from the empirical loss dataset, or the fit already obtained, and then estimating parameters for each draw which are compared using some statistical technique. If the samples are drawn from the loss dataset, the technique is called a non-parametric bootstrap, and if the sample is drawn from an estimated model distribution, it is called a parametric bootstrap.
4. Using goodness of fit statistics: The candidate fits can be compared using MLE based on the KS distance, for example, and the best one selected. Maximum likelihood estimation is a technique that attempts to maximize the likelihood that the estimate is as close as possible to the true value of the parameter. It is a general purpose statistical technique that can be used for parameter estimation, as well as for deciding which distribution to use from the model space.
All the choices listed are valid methods, and therefore all of them together are the correct answer.
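As an illustration of the non-parametric bootstrap described in point 3 above, here is a sketch in which both the 'empirical' loss data and the lognormal candidate are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Pretend empirical loss dataset (illustrative only)
losses = rng.lognormal(mean=10.0, sigma=1.5, size=500)

# Non-parametric bootstrap: resample from the empirical data,
# refit the candidate distribution, and examine parameter stability
boot_params = []
for _ in range(200):
    sample = rng.choice(losses, size=losses.size, replace=True)
    shape, loc, scale = stats.lognorm.fit(sample, floc=0)
    boot_params.append((shape, np.log(scale)))  # (sigma, mu) of the lognormal

sigmas, mus = zip(*boot_params)
print(f"mu: {np.mean(mus):.3f} +/- {np.std(mus):.3f}, "
      f"sigma: {np.mean(sigmas):.3f} +/- {np.std(sigmas):.3f}")
```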
There are two bonds in a portfolio, each with a market value of $50m. The probability of default of the two bonds over a one year horizon are 0.03 and 0.08 respectively. If the default correlation is zero, what is the one year expected loss on this portfolio?
The probabilities of default of the two bonds are independent (as indicated by a zero default correlation). The various possible states of the portfolio are as follows:
First bond defaults, and the second does not: Probability * Loss = 0.03*0.92* $50m = $1.38m
Second bond defaults, and the first does not: Probability * Loss = 0.97*0.08 * $50m = $3.88m
Both bonds default: Probability * Loss = 0.03*0.08 * $100m = $0.24m
Thus total expected loss on this portfolio = $5.5m. Since recovery rates are not provided, those should be assumed to be zero.
There is an easier way to solve this as well: default correlation does not affect expected losses, only their volatility. You can calculate the expected losses of the two bonds and add them up, ie, $50m*0.03 + $50m*0.08 = $5.5m
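Both routes to the answer can be checked in a few lines (numbers from the question; zero recovery assumed):

```python
# Expected loss two ways (numbers from the question; zero recovery assumed)
pd1, pd2, notional = 0.03, 0.08, 50.0  # $m

# Route 1: enumerate the joint outcomes (independence, ie zero default correlation)
el_states = (pd1 * (1 - pd2) * notional
             + (1 - pd1) * pd2 * notional
             + pd1 * pd2 * 2 * notional)

# Route 2: expected losses are unaffected by default correlation, so just add them
el_additive = notional * pd1 + notional * pd2

print(el_states, el_additive)  # both are 5.5 ($m)
```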
Which of the following statements are true:
I. Credit VaR often assumes a one year time horizon, as opposed to a shorter time horizon for market risk, as credit activities generally span a longer time period.
II. Credit losses in the banking book should be assessed on the basis of mark-to-market mode as opposed to the default-only mode.
III. The confidence level used in the calculation of credit capital is high when the objective is to maintain a high credit rating for the institution.
IV. Credit capital calculations for securities with liquid markets and held for proprietary positions should be based on marking positions to market.
Statement I is correct as credit VaR calculations often use a one year time horizon. This is primarily because the cycles in respect of credit related activities, such as loan loss reviews and accounting cycles for borrowers, last a year.
Statement II is false. There are two ways in which loss assessments in respect of credit risk can be made: the default mode, where losses are considered only in respect of default, and no losses are recognized in respect of the deterioration of the creditworthiness of the borrower (which is often expressed through a credit rating transition matrix); and the mark-to-market mode, where losses due to both defaults and credit quality deterioration are considered. The default mode is used for the loan book, where the institution has lent moneys and generally intends to hold the loan on its books till maturity. The mark-to-market mode is used for traded securities which are not held to maturity, or are held only for trading.
Statement III is correct. The confidence level, or the quantile of losses, used for maintaining credit ratings tends to be very high as the possibility of the institution's default needs to be remote.
Statement IV is correct too, for the reasons explained earlier.