Financial time series analysis often assumes that the statistical properties of equity returns do not vary over time. However, the statistical properties of actual returns data from financial markets do vary over time. In particular, empirical evidence suggests that volatility or risk, the square root of the variance of returns, changes with time. FIG. 1 shows a plot of both predicted risk 202 from a global factor risk model and one-month forward-looking realized risk 200 for a broad global benchmark portfolio. The one-month forward-looking realized risk 200 changes over time. Even within relatively short time intervals, such as a few weeks, the realized risk is not constant: it fluctuates noticeably and exhibits intermittent spikes of both modest and large magnitude. During periods of market turmoil, such as late 2008, volatility surges in a matter of one or two months from a low value, varying between 10% and 20% annual volatility, to over 70% annual volatility.
The challenge for commercial risk model vendors is to produce risk models that predict future volatility, or, in other words, that accurately predict the realized risk 200 shown in FIG. 1. The quality of risk model predictions can be measured with respect to at least three metrics:
1. Prediction Accuracy. The difference between the realized and predicted volatilities.
2. Stability. The risk model predictions should not exhibit the smaller, transient changes observed in realized risk. In other words, the risk predictions should be smoother than the realized risk. Such smoothness ensures that portfolio rebalancing and risk management decisions are not driven by market transients of shorter duration than the investment holding horizon.
3. Responsiveness. When the overall level of market volatility rises or falls, the predicted risk should respond similarly, with as little time lag in the response as possible.
Stability and responsiveness both bear on how changes in realized risk are tracked by risk model predictions. On the one hand, stability requires that smaller, temporary changes in realized volatility should not appear in the risk model predictions. On the other hand, responsiveness requires that larger, sustained changes in realized volatility should appear. Thus smaller changes are interpreted as noise that should not affect investment decisions while the larger changes are interpreted as meaningful changes that can and should affect investment decisions. The difference between smaller and larger changes or temporary and sustained changes depends, of course, on the manner in which the risk model is used. A portfolio manager who trades every day may consider a weeklong change in realized volatility a sustained change that should be captured by a high quality risk model, while a portfolio manager who invests over a time horizon of months may consider a weeklong change in realized volatility a temporary effect that should be filtered out of a high quality risk model. In both cases, the portfolio manager wants to react to meaningful changes in market volatility that cause material changes to his or her investment decisions while simultaneously avoiding any overreaction to temporary, noisy market conditions that may lead to unnecessary trading. Stability seeks to ensure that the risk model predictions are smooth over a sufficiently long period of time, while responsiveness seeks to ensure that the risk model predictions change and respond to market changes in volatility over a sufficiently short period of time.
In FIG. 1, risk model accuracy is measured by the difference between the realized risk 200 and the predicted risk 202. Stability is measured by the fact that the predicted risk 202 is smoother than the realized risk 200. Responsiveness is determined by how well the predicted risk 202 tracks the realized risk 200 when the overall level of volatility changes.
In FIG. 1, the predicted risk 202 is reasonably accurate during the early years of the decade and from 2006 to 2009. However, it is slower to reduce the predicted volatility from 2003 to 2006, when market volatility drops to a historic low and remains low for several years. In particular, the gap 201 between the predicted and realized risk in 2003, indicated by the arrows, is more than 5% throughout most of 2003, and the gap 203 between the predicted and realized risk in 2009, also indicated by arrows, is more than 10% throughout most of 2009. Approximately twenty-four months elapse from the beginning of 2003, when market volatility falls, before the predicted and realized volatilities reach the same level. Similarly, the predictions throughout 2009 are significantly higher than the realized volatility. In both cases, gaps 201 and 203 are larger than desirable.
There are several well known mathematical modeling techniques for estimating the risk of a portfolio of financial assets such as securities and for deciding how to strategically invest a fixed amount of wealth given a large number of financial assets in which to potentially invest.
For example, mutual funds often estimate the active risk associated with a managed portfolio of securities, where the active risk is the risk associated with portfolio allocations that differ from a benchmark portfolio. Often, a mutual fund manager is given a “risk budget”, which defines the maximum allowable active risk that he or she can accept when constructing a managed portfolio. Active risk is also sometimes called portfolio tracking error. Portfolio managers may also use numerical estimates of risk as a component of performance contribution, performance attribution, or return attribution, as well as other ex-ante and ex-post portfolio analyses. See, for example, R. Litterman, Modern Investment Management: An Equilibrium Approach, John Wiley and Sons, Inc., Hoboken, N.J., 2003 (Litterman), which gives detailed descriptions of how these analyses make use of numerical estimates of risk and which is incorporated by reference herein in its entirety.
Another use of numerically estimated risk is for optimal portfolio construction. One example of this is mean-variance portfolio optimization as described by H. Markowitz, “Portfolio Selection”, Journal of Finance 7(1), pp. 77-91, 1952 which is incorporated by reference herein in its entirety. In mean-variance optimization, a portfolio is constructed that minimizes the risk of the portfolio while achieving a minimum acceptable level of return. Alternatively, the level of return is maximized subject to a maximum allowable portfolio risk. The family of portfolio solutions solving these optimization problems for different values of either minimum acceptable return or maximum allowable risk is said to form an “efficient frontier”, which is often depicted graphically on a plot of risk versus return. There are numerous, well known, variations of mean-variance portfolio optimization that are used for portfolio construction. These variations include methods based on utility functions, Sharpe ratio, and value at risk.
Suppose that there are N assets in an investment portfolio and the weight or fraction of the available wealth invested in each asset is given by the N-dimensional column vector w. These weights may be the actual fraction of wealth invested or, in the case of active risk, they may represent the difference in weights between a managed portfolio and a benchmark portfolio as described by Litterman. The risk of this portfolio is calculated, using standard matrix notation, as

V = wᵀQw  (1)

where V is the portfolio variance, a scalar quantity, and Q is an N×N positive semi-definite matrix whose elements are the variances and covariances of the asset returns. Risk or volatility is given by the square root of V.
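As an illustration, equation (1) can be evaluated directly in a few lines of NumPy. The weights and covariance entries below are hypothetical values chosen only to show the computation, not output from any actual risk model:

```python
import numpy as np

# Hypothetical three-asset portfolio: weight vector w and covariance matrix Q.
w = np.array([0.5, 0.3, 0.2])          # fractions of wealth, summing to 1
Q = np.array([[0.040, 0.010, 0.006],   # N x N positive semi-definite matrix
              [0.010, 0.090, 0.012],   # of annualized return variances
              [0.006, 0.012, 0.025]])  # and covariances

V = w @ Q @ w          # portfolio variance, a scalar, per equation (1)
risk = np.sqrt(V)      # risk or volatility is the square root of V
print(round(risk, 4))  # → 0.1573, i.e. about 15.7% annualized volatility
```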
The individual elements of Q are the expected covariances of security returns and are difficult to estimate. For N assets, there are N(N+1)/2 separate variances and covariances to be estimated. The number of securities that may be part of a portfolio, N, is often over one thousand, which implies that over 500,000 values must be estimated. Risk models typically cover all the assets in the asset universe, not just the assets with holdings in the portfolio, so N can be considerably larger than the number of assets in a managed or benchmark portfolio.
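The quadratic growth of the estimation problem can be checked directly; the helper name below is illustrative, and the sketch simply evaluates N(N+1)/2 for a universe of one thousand assets:

```python
def covariance_parameter_count(n_assets):
    """Number of distinct variances and covariances in an
    N x N symmetric covariance matrix: N(N+1)/2."""
    return n_assets * (n_assets + 1) // 2

print(covariance_parameter_count(1000))  # → 500500, i.e. over 500,000 values
```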
To obtain reliable variance or covariance estimates based on historical return data, the number of historical time periods used for estimation should be of the same order of magnitude as the number of assets, N. Often, there may be insufficient historical time periods. For example, new companies and bankrupt companies have abbreviated historical price data and companies that undergo mergers or acquisitions have non-unique historical price data. As a result, the covariances estimated from historical data can lead to matrices that are numerically ill conditioned. Such covariance estimates are of limited value.
Factor risk models were developed, in part, to overcome these shortcomings. See, for example, R. C. Grinold and R. N. Kahn, Active Portfolio Management: A Quantitative Approach for Providing Superior Returns and Controlling Risk, Second Edition, McGraw-Hill, New York, 2000, which is incorporated by reference herein in its entirety, and Litterman.
Factor risk models represent the expected variances and covariances of security returns using a set of M factors, where M is much less than N, that are derived using statistical, fundamental, or macro-economic information or a combination of any of such types of information. Given exposures of the securities to the factors and the covariances of factor returns, the covariances of security returns can be expressed as a function of the factor exposures, the covariances of factor returns, and a remainder, called the specific risk of each security. Factor risk models typically have between 20 and 80 factors. Even with 80 factors and 1000 securities, the total number of values that must be estimated is just over 85,000, as opposed to over 500,000.
In a factor risk model, the covariance matrix Q is modelled as

Q = BΣBᵀ + Δ²  (2)

where B is an N×M matrix of factor exposures, Σ is an M×M matrix of factor-factor covariances, and Δ² is an N×N matrix of specific variances. Normally, Δ² is assumed to be diagonal.
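A minimal sketch of assembling equation (2), with B taken as the N×M exposure matrix so the product is written B Σ Bᵀ. All of the numbers here are randomly generated placeholders, not estimates from any actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 2                                   # N assets, M factors, M much less than N

B = rng.normal(size=(N, M))                   # factor exposures, N x M
A = rng.normal(size=(M, M))
Sigma = A @ A.T                               # factor-factor covariances, M x M, PSD
Delta2 = np.diag(rng.uniform(0.01, 0.05, N))  # diagonal specific variances, N x N

Q = B @ Sigma @ B.T + Delta2                  # modelled N x N covariance matrix

# Q is symmetric and positive definite by construction:
# B Sigma B^T is PSD and Delta2 has strictly positive diagonal entries.
assert np.allclose(Q, Q.T)
assert np.all(np.linalg.eigvalsh(Q) > 0)
```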
The factor-factor covariance matrix Σ is typically estimated from a time series of historical factor returns, ft, for each of the M factors, while the specific variances are estimated from a time series of historical specific returns.
Risk models used in quantitative portfolio management partly address the issues of stability and responsiveness when predicting time-varying volatility by relying on an exponentially weighted covariance estimator, since this estimator places greater emphasis on recent observations, implicitly assuming that the most recent subset of return values varies around a constant value. The returns can be asset returns, or they may be factor returns or specific returns used for estimating a risk model covariance. Given a time series of T returns {r_t, r_{t−1}, r_{t−2}, . . . , r_{t−T+1}}, we form the weighted returns series {r̃_t}

{r̃_t} = {(w_t, r_t), (w_{t−1}, r_{t−1}), . . . , (w_{t−T+1}, r_{t−T+1})}  (3)

w_{t−k} = 2^{−k/H}, k = 0, . . . , T−1  (4)

where H is the half-life parameter. The exponentially weighted covariance estimator gives

E[var(r_{t+1})|t] ≡ σ̂²_{t+1} = var(r̃_t)  (5)

This estimator is frequently expressed in the RiskMetrics™ specification, in which the half-life is reformulated as a decay factor λ. See, for example, J. Longerstaey and M. Spencer, RiskMetrics™—Technical Document, Morgan Guaranty Trust Company, New York, 4th ed., 1996, which is incorporated by reference herein in its entirety. Equation (5) can then be rewritten as

σ̂²_{t+1} = λσ²_t + (1−λ)r²_t  (6)

Ease and speed of computation, robustness, and parsimony have largely been responsible for the widespread adoption of exponentially weighted covariance estimates in commercial risk models. Exponential weighting generally improves the accuracy of the risk model.
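The weighting scheme of equations (3)-(6) can be sketched as follows. The function names are illustrative, and the recursive update assumes, as the RiskMetrics formulation does, a zero-mean return series:

```python
import numpy as np

def ewma_volatility(returns, half_life):
    """Exponentially weighted volatility per equations (3)-(5).
    `returns` is ordered oldest to newest; the newest return gets
    weight 1 and weights halve every `half_life` periods back."""
    r = np.asarray(returns, dtype=float)
    k = np.arange(len(r))[::-1]        # k = 0 for the most recent return
    w = 2.0 ** (-k / half_life)        # w_{t-k} = 2^{-k/H}, equation (4)
    w /= w.sum()                       # normalize so the weights sum to 1
    mean = w @ r
    return np.sqrt(w @ (r - mean) ** 2)

def ewma_update(prev_var, latest_return, decay):
    """Recursive form of equation (6) with decay factor lambda,
    assuming zero-mean returns."""
    return decay * prev_var + (1.0 - decay) * latest_return ** 2
```

For a half-life of H periods, consecutive weights in equation (4) differ by the factor 2^(−1/H), so the equivalent decay factor is λ = 2^(−1/H); a 125-day half-life, for instance, corresponds to λ ≈ 0.9945.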
However, when realized risk changes rapidly, the risk predictions of risk models using exponentially weighted covariance estimates often lag realized risk changes over considerable periods of time. In other words, exponential weighting does not always lead to the desired level of responsiveness in a risk model. This lag is shown in FIG. 1 during 2003 by gap 201 and during 2009 by gap 203, for example. The predicted risk 202 in FIG. 1 is computed from a risk model that uses exponential weighting with a 125-day half-life for volatility estimation and a 250-day half-life for correlations. A longer half-life is used for the correlation estimation in order to ensure a stable estimate.
One problem with exponentially weighted covariance estimates recognized and addressed by the present invention is that large returns have a disproportionate effect on the covariance estimate even with exponential weighting. These large returns can inflate risk estimates, and they impact risk estimates for very long times, resulting in lagged risk predictions, especially when volatility falls from a high level, such as shown by gap 201 in 2003 and gap 203 in 2009.
In order to produce stable risk predictions, risk models typically require a long history of data for the covariance estimate. The longer the data history, however, the more likely it is that the return history will span a time period over which the volatility of the older returns is at a substantially different level than the volatility of the recent returns. Although the exponentially weighted covariance estimate gives the older return data less weight than the most recent returns, the resulting volatility forecast may noticeably lag the realized volatility, in other words, not be as responsive as desired, if the volatility of the older return data differs substantially from the volatility of the more recent return data.
One approach to the problem of lagging risk model predictions is to use shorter data histories and/or aggressive decay factors in order to reduce the influence of the older data on the forecasts. However, if the data history is too short or the decay factors too aggressive, the stability of the risk model predictions may be jeopardized.
Other methods besides more aggressive half-lives have been proposed to address the issue of non-stationarity of asset returns, factor returns, and specific returns. For example, generalized autoregressive conditional heteroskedasticity (GARCH) models have been proposed. See, for example, Tim Bollerslev, “Generalized Autoregressive Conditional Heteroskedasticity”, Journal of Econometrics, 31:307-327, 1986, which is incorporated by reference herein in its entirety. However, GARCH models normally produce risk predictions that are too unstable for use in commercial risk models.
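For reference, a GARCH(1,1) model of the kind introduced by Bollerslev updates the conditional variance recursively. The sketch below runs the recursion for given parameters; in practice ω, α, and β would be fit by maximum likelihood, which is omitted here:

```python
import numpy as np

def garch11_variances(returns, omega, alpha, beta, init_var):
    """One-step-ahead conditional variances from the GARCH(1,1) recursion
        sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t]."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = init_var
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r * r + beta * sigma2[t]
    return sigma2
```

Because α multiplies only the single most recent squared return, each new observation moves the forecast immediately, which illustrates both the responsiveness of GARCH models and the instability noted above.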