In the footnotes of JP Morgan’s third-quarter release1, the bank stated that it transferred the synthetic credit portfolio that caused the “London Whale” incident earlier this year from the office of the CIO (Chief Investment Office) to the investment bank. It then applied a new VaR model to the portfolio that lowered its VaR by $28 million to $122 million. The financial press duly questioned the relevance of reporting this change without additional information2 and the fact that this was the third model implemented for this position in less than a year3. Unfortunately, many treat VaR as just a number rather than what it really is: a statistic that carries estimation error and should be reported together with that error.

Estimation error is measured by the estimate’s standard error or its confidence interval. (Don’t confuse this with the confidence level of the VaR estimate: the confidence level is the quantile used to calculate VaR, while a 95% confidence interval is an interval that has a 95% probability of covering the true VaR.) This matters because it shows how accurate the VaR model’s estimate actually is.

Consider these two portfolios: the first has a VaR of $9 million with a confidence interval of $5 million to $13 million, while the second has a VaR of $11 million with a confidence interval of $9 million to $13 million. The second portfolio appears riskier, but the uncertainty around the first portfolio’s estimate is much higher, which leads me to question that risk model’s construction and process.

Of course, the need to calculate and report the standard error of VaR is not new. Jorion discusses two methods to calculate the standard error of VaR estimates4, but as we’ll see they are not appropriate for financial VaR risk estimation. I’ll use them to highlight a few features of estimation error, and then illustrate an easier and more accurate way to calculate estimation risk.

Jorion defines VaR as the product of initial wealth and the lowest possible simple return at a given confidence level c. He assumes simple returns to be normally distributed and discusses two methods to calculate VaR and the associated estimation error:

  • The first one is sigma-based VaR, where VaR = Initial wealth × ασ√∆t, and its standard error is driven by that of the standard deviation:

se(σ̂) ≈ σ / √(2T)

Where T is the sample size and σ is the standard deviation.
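As a minimal sketch of the sigma-based calculation, with simulated returns and a hypothetical $100 million portfolio standing in for real data, the standard error of the standard deviation, σ/√(2T), propagates linearly into the VaR estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sample of T daily simple returns (stand-in for real portfolio data).
T = 252
returns = rng.normal(loc=0.0005, scale=0.01, size=T)

wealth = 100_000_000          # initial portfolio value, $100M (assumed)
alpha = 2.326                 # one-sided z-score for a 99% confidence level

sigma = returns.std(ddof=1)   # sample standard deviation of returns

# Sigma-based VaR: wealth * alpha * sigma (daily horizon, so sqrt(dt) = 1).
var = wealth * alpha * sigma

# Asymptotic standard error of the sample std dev is sigma / sqrt(2T);
# it scales by the same wealth * alpha factor as the VaR itself.
se_sigma = sigma / np.sqrt(2 * T)
se_var = wealth * alpha * se_sigma

print(f"VaR:            ${var:,.0f}")
print(f"Standard error: ${se_var:,.0f}")
```

Note that the relative error, se_var / var, equals 1/√(2T) regardless of the data, which makes the dependence on sample size explicit.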

  • The second method is quantile-based VaR, which ought to work for any distribution (such a VaR model is usually described as a historical VaR). Jorion proposes the asymptotic standard error estimate for a quantile q to be:

se(q̂) ≈ (1 / f(q)) × √( c(1 − c) / T )

Where T is the sample size, c the confidence level, and f(q) the probability density function evaluated at the quantile q.
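A sketch of the quantile-based standard error, again on simulated stand-in data; the density f(q) is estimated here with a Gaussian kernel, which is only one of several possible choices and is itself a source of error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical sample of daily returns (stand-in for real data).
T = 252
returns = rng.normal(0.0005, 0.01, size=T)

c = 0.95                                  # VaR confidence level
q = np.quantile(returns, 1 - c)           # empirical 5th-percentile return

# f(q): density at the quantile, estimated with a Gaussian KDE. Getting this
# value right is the hard part of the asymptotic formula in practice.
f_q = stats.gaussian_kde(returns)(q)[0]

# Asymptotic standard error of a sample quantile:
#   se(q) = sqrt(c * (1 - c) / T) / f(q)
se_q = np.sqrt(c * (1 - c) / T) / f_q

print(f"quantile estimate: {q:.4%}, standard error: {se_q:.4%}")
```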

The problem with the first method is that simple returns are not normal. Although a normal approximation is suitable for continuously compounded returns, deriving a standard error for simple returns from the standard error for continuously compounded returns is very difficult. The issue with the second method is that it is asymptotic, so it works best with very large samples, and it requires the difficult step of estimating the correct value of the density f(q) at the quantile.

Both methods illustrate two facts about standard error: it gets smaller as the sample size grows, and it gets larger at higher VaR confidence levels, i.e., at more extreme quantiles.

A better approach is to use bootstrapping, or sampling with replacement, to simulate hundreds of scenarios drawn from the historical returns of a portfolio. To illustrate this approach, I used the past year’s returns for the S&P 500 index calculated by FactSet to generate 1,999 random samples. I used an odd number to make it easier to calculate quantile-based confidence intervals in case the distribution of the estimates wasn’t normal. Using this method I was able to generate not only confidence intervals but also full histograms of the estimates for 95% and 99% VaR, as well as the expected tail loss (ETL) at both levels. The exact number of simulations does not matter much, but it should be at least twice the number of observations in the sample.
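The bootstrap procedure can be sketched as follows, using simulated returns as a stand-in for the actual S&P 500 series (the resample count of 1,999 mirrors the odd count used above):

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for one year of daily index returns (fat-tailed to mimic equities).
T = 252
returns = rng.standard_t(df=5, size=T) * 0.01

n_boot = 1999                 # odd count -> clean percentile-based intervals

def var_etl(sample, c):
    """VaR and expected tail loss (ETL) at level c, as positive loss numbers."""
    q = np.quantile(sample, 1 - c)
    return -q, -sample[sample <= q].mean()

# Resample with replacement and re-estimate VaR/ETL on each pseudo-sample.
boot = np.array([var_etl(rng.choice(returns, size=T, replace=True), 0.95)
                 for _ in range(n_boot)])
var_boot, etl_boot = boot[:, 0], boot[:, 1]

# 95% percentile confidence intervals from the bootstrap distribution;
# a histogram of var_boot / etl_boot gives the full shape of the estimates.
var_ci = np.percentile(var_boot, [2.5, 97.5])
etl_ci = np.percentile(etl_boot, [2.5, 97.5])

print(f"95% VaR: {np.median(var_boot):.4f}, CI [{var_ci[0]:.4f}, {var_ci[1]:.4f}]")
print(f"95% ETL: {np.median(etl_boot):.4f}, CI [{etl_ci[0]:.4f}, {etl_ci[1]:.4f}]")
```

The same loop, re-run at the 99% level, produces the second pair of histograms; only the `c` argument changes.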


I used these histograms to quickly evaluate the 95% and 99% VaR and ETL estimates of the FactSet multi-asset class risk model for the S&P 500. The black dashed line is the model’s VaR estimate, while the red dotted lines are the bounds of the 95% confidence interval. It is interesting that the risk model’s estimates sit closer to the lower bound (smaller loss) of the distribution of estimates. It could also be instructive to track trends in the confidence intervals by running the bootstrapping process daily or monthly to get a time series of smoothed confidence intervals, which could then be used to better evaluate trends in the risk estimates.
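The rolling idea can be sketched like this, on hypothetical data; the window, step, and smoothing lengths are all arbitrary choices made for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
# Two hypothetical years of daily returns; roll a one-year window through them.
returns = rng.normal(0, 0.01, size=504)
window, n_boot, c = 252, 499, 0.95

lower, upper = [], []
for start in range(0, len(returns) - window + 1, 21):   # step ~monthly
    w = returns[start:start + window]
    # Bootstrap the 95% VaR within this window and keep the interval bounds.
    vars_ = [-np.quantile(rng.choice(w, size=window), 1 - c)
             for _ in range(n_boot)]
    lo, hi = np.percentile(vars_, [2.5, 97.5])
    lower.append(lo)
    upper.append(hi)

# 'lower'/'upper' now form a time series of interval bounds; a short moving
# average smooths them before plotting trends.
smooth = lambda x, k=3: np.convolve(x, np.ones(k) / k, mode="valid")
print(len(lower), "interval estimates;",
      len(smooth(np.array(lower))), "after smoothing")
```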

Bootstrapping is a quick and powerful way to evaluate any statistical estimate. It can be applied to historical results as I did in the example above or in Monte Carlo simulation to evaluate the ex-ante risk estimates. In addition to confidence intervals, it gives us a lot of information about the shape of the estimates’ distribution that can be used to improve the risk model.


2 “JPMorgan Changes VaR Model Again, But It’s Still a Black Box,” American Banker, October 16, 2012 (by Clifford Rossi).

3 “JPMorgan Deploys New Risk Model for Derivative Bet,” Bloomberg News, October 12, 2012.

4 Jorion, Philippe. 1996. “Risk²: Measuring the Risk in Value at Risk.” Financial Analysts Journal, November/December 1996: 47-56.



Product Manager, Fixed Income Product Development Product, Content and Technology
Mido left FactSet in 2013.

FactSet Insight
