Stock market crashes such as those of October 1987 and October 1997, the turbulent period around the Asian Crisis in 1998 through 1999, and the burst of the “dotcom bubble,” together with the extremely volatile period after Sep. 11, 2001, constantly remind financial engineers and risk managers how often extreme events actually happen in real-world financial markets. These observations have led to increased efforts to improve the flexibility and statistical reliability of existing models that capture the dynamics of economic variables.
The history of probabilistic modeling of economic variables and, especially, price processes, by means of stochastic processes goes back to Bachelier, who suggested Brownian Motion as a candidate to describe the evolution of stock markets. Some 70 years later, Black and Scholes, in “The Pricing of Options and Corporate Liabilities,” Jour. of Political Economy (1973), and Merton, in “The Theory of Rational Option Pricing,” Bell Jour. of Economics and Management Sciences (1973), the disclosures of which are incorporated by reference, used the Geometric Brownian Motion to describe stock price movements in their famous solution of the option pricing problem. Their Nobel prize winning work inspired the foundation of the arbitrage pricing theory based on the martingale approach, which was described in the famous paper by Harrison and Kreps, in “Martingales and Arbitrage in Multi-Period Securities Markets,” Jour. of Economic Theory (1979), and, subsequently, by Harrison and Pliska, in “Martingales and Stochastic Integrals in the Theory of Continuous Trading,” Stochastic Processes and Their Applications (1981), the disclosures of which are incorporated by reference.
The key observation that pricing of derivatives has to be effected under a so-called risk-neutral or equivalent martingale measure (usually denoted as Q), which differs from the data generating “market measure” (usually denoted as P), has led to a growing literature on what are called “implicit models.” Examples of implicit models include stochastic volatility models (see, e.g., Heston, “A Closed Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options,” Review of Financial Studies (1993), or Hull and White, “The Pricing of Options on Assets with Stochastic Volatilities,” Jour. of Finance (1987), the disclosures of which are incorporated by reference), local volatility models (see, e.g., Derman and Kani, “Riding on a Smile,” Risk 7 (1994), or Dupire, “Pricing with a Smile,” Risk 7 (1994), the disclosures of which are incorporated by reference), and martingale models for the short rate and implied volatility models. The common characteristic inherent to implicit models is that the model parameters are determined through calibration to market prices of derivatives directly under the martingale measure and not through estimation from observations under the market measure. As a direct consequence, and at the same time the main drawback of the calibration framework, market prices cannot be explained; they are merely fitted. The prices of liquid market instruments are used for the calibration procedure and, consequently, are reproduced more or less perfectly. For exotic derivatives, however, the prices derived from different implicit models can differ substantially, since there is no market data on which to calibrate directly. Moreover, from an objective viewpoint, there is no way to determine which pricing model is the most reliable, given that the statistical fit to historical realizations of the underlying data is not taken into account.
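For illustration, the calibration principle can be sketched in a minimal form: given a quoted market price of a European call, one backs out the single model parameter (here, the Black-Scholes volatility) so that the model reproduces the quote under the measure Q. The function names and parameter values below are hypothetical and serve only to illustrate the idea that calibration fits prices rather than explaining them.

```python
import math

def norm_cdf(x):
    # Standard Normal cumulative distribution function via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call under the risk-neutral measure Q.
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    # Calibration by bisection: the call price is increasing in sigma,
    # so we search for the sigma that reproduces the quoted price.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

By construction, a quote generated by the model itself is reproduced exactly; the procedure uses no historical data under the market measure P, which is precisely the drawback discussed above.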
An alternative approach to modeling price processes is pursued by econometricians. The goal of this approach is to provide the highest possible accuracy with respect to the empirical observations or, in other words, to model the statistical characteristics of financial data. Thus, the focus of this approach lies in the statistical properties of historical realizations and the quality of forecasts. However, in using these models, important aspects of derivative pricing are neglected.
Most econometric approaches neither specify risk-neutral price processes nor check the markets defined in these models for the absence of arbitrage. Further, the econometric model approach assumes implicit or explicit knowledge of the statistical characteristics of financial data. Starting with the seminal work of Fama, “The Behaviour of Stock Market Prices,” Jour. of Business (1965), and Mandelbrot, “New Methods in Statistical Economics,” Jour. of Political Economy (1963) and “The Variation of Certain Speculative Prices,” Jour. of Business (1963), the disclosures of which are incorporated by reference, and subsequently reported by various authors (see, e.g., Rachev and Mittnik, “Stable Paretian Models in Finance,” Wiley (2000), the disclosure of which is incorporated by reference, for an extensive overview), researchers widely accept that financial return distributions are left-skewed and leptokurtic. A probability distribution is considered leptokurtic if the distribution exhibits excess kurtosis, that is, if the mass of the distribution is greater in the tails and in the peak, and less in the shoulders, when compared to a Normal distribution. A number of quantitative measures of kurtosis have been developed, such as described in Scheffe, “The Analysis of Variance,” p. 332, Wiley & Sons, Inc., New York (1959), the disclosure of which is incorporated by reference.
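One common quantitative measure is the sample excess kurtosis, the fourth standardized moment minus 3, which is approximately 0 for Normal data. The sketch below is hypothetical and illustrates the measure by comparing Normal draws with draws from the Laplace (double-exponential) distribution, a classic leptokurtic distribution with theoretical excess kurtosis of 3.

```python
import random

def excess_kurtosis(xs):
    # Sample excess kurtosis: fourth standardized moment minus 3;
    # the Normal distribution has excess kurtosis 0.
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 ** 2) - 3.0

rng = random.Random(0)
normal_sample = [rng.gauss(0.0, 1.0) for _ in range(100000)]
# Laplace draws: an exponential magnitude with a random sign.
laplace_sample = [rng.expovariate(1.0) * rng.choice((-1.0, 1.0))
                  for _ in range(100000)]
```

For samples of this size, `excess_kurtosis(normal_sample)` is close to 0 while `excess_kurtosis(laplace_sample)` is close to 3, reflecting the heavier tails and sharper peak of the Laplace distribution.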
In addition, a probability distribution can be considered asymmetric if one side of the distribution is not a mirror image of the other when the distribution is divided at the mode (the point of maximum density) or the mean. Additionally, in time-series or longitudinal sections of return distributions, one observes volatility clustering; that is, calm periods are followed by highly volatile periods and vice versa. Engle, “Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation,” Econometrica (1982), and Bollerslev, “Generalized Autoregressive Conditional Heteroscedasticity,” Jour. of Econometrics (1986), the disclosures of which are incorporated by reference, introduced tractable time series models, denoted as ARCH and GARCH models, which explain the observed heteroscedasticity. In subsequent years, various generalizations and variants of the original models have been published. Duan, “Augmented GARCH(p,q) Processes and its Diffusion Limit,” Jour. of Econometrics (1997), the disclosure of which is incorporated by reference, provided a general treatment of the different variants and examined their diffusion limits.
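The volatility clustering that the GARCH(1,1) recursion produces can be sketched in a few lines: the conditional variance follows sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2, so a large return raises the variance of subsequent returns. The code below is a hypothetical illustration; the parameter values are chosen for demonstration only.

```python
import math
import random

def simulate_garch(n, omega=1e-6, alpha=0.1, beta=0.85, seed=42):
    # GARCH(1,1): sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
    # with r_t = sigma_t * z_t and z_t i.i.d. standard Normal innovations.
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)  # start from the unconditional variance
    returns = []
    for _ in range(n):
        r = math.sqrt(var) * rng.gauss(0.0, 1.0)
        returns.append(r)
        var = omega + alpha * r * r + beta * var
    return returns
```

In a simulated series, the squared returns exhibit positive lag-one autocorrelation, which is the statistical signature of volatility clustering; the returns themselves remain serially uncorrelated.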
Although members of the ARCH/GARCH class generate stochastic processes with heavy-tailed marginals, the results of applying such processes to the option pricing problem are disappointing in various respects. In many cases, the predictive value of the model and the quality of the statistical fit are poor, leading to an inability to explain prices of liquid derivatives. The main reason for the poor performance of the statistical model can be ascribed to the underlying probability distribution of the innovation process. The underlying white noise process, which can be seen as the driving risk factor for the stochastic process, is traditionally modeled as a sequence of independent standard Normal random variables. Various authors (see, e.g., Rachev and Mittnik, supra, for an overview) have suggested replacing the Normal probability distribution with the Stable probability distribution. The class of Stable distributions forms an ideal alternative to the Normal distribution, combining the stability or self-similarity property, the ability to model the leptokurtic and skewed behavior of financial returns, and the mathematical modeling flexibility of the Normal distribution. The main drawback of Stable non-Gaussian distributions is their infinite variance.
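A symmetric alpha-stable innovation can be sampled with the Chambers-Mallows-Stuck method; the sketch below is hypothetical and illustrates the heavy tails that arise for alpha < 2 (the boundary case alpha = 2 recovers a scaled Normal variate, and alpha = 1 a Cauchy variate).

```python
import math
import random

def symmetric_stable(alpha, rng):
    # Chambers-Mallows-Stuck sampler for a standard symmetric alpha-stable
    # variate (skewness beta = 0, 0 < alpha <= 2). U is uniform on
    # (-pi/2, pi/2) and W is standard exponential.
    u = rng.uniform(-math.pi / 2.0, math.pi / 2.0)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))
```

With alpha around 1.7, a moderately sized sample already contains observations many standard deviations beyond what Normal draws would produce, reflecting the power-law tails and the infinite variance of the non-Gaussian Stable laws.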