1. Field of the Invention
This invention relates generally to systems, and their methods of use, that measure the actual performance of a security analyst, and more particularly to systems, and their methods of use, that measure the performance of a security analyst's recommendations using a value add that is determined by subtracting the return of a benchmark portfolio from the return of a simulated portfolio and then multiplying the difference by an adjustment factor.
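The value-add calculation described above can be sketched as follows; the function name and the sample figures are illustrative only and are not part of the invention:

```python
def value_add(simulated_return, benchmark_return, adjustment_factor):
    # Value add = (simulated portfolio return - benchmark portfolio return),
    # multiplied by an adjustment factor.
    return (simulated_return - benchmark_return) * adjustment_factor

# Illustrative figures: a 12% simulated portfolio return against an 8%
# benchmark return, with an adjustment factor of 1.0, yields a value add
# of approximately 0.04 (4%).
result = value_add(0.12, 0.08, 1.0)
```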
2. Description of the Related Art
Many individuals and institutions analyze financial data, financial instruments, such as equity and fixed-income securities, and other things, at least in part to predict future economic events. Such individuals may include, for example, security analysts. The role of the security analyst is generally well-known and includes, among other things, issuing earnings estimates or recommendations on whether investors should buy, sell, or hold financial instruments, such as equity securities, and other predictions. Security analyst estimates may include, but are not limited to, quarterly, semi-annual, and annual earnings estimates for companies whether or not they are traded on a public securities exchange.
For each security an analyst covers, the analyst issues a “recommendation” or “rating” on the security. This recommendation or rating serves as a recommendation as to whether to own or weight, relative to a neutral baseline level, holdings of a particular security during a particular time period. Different entities use different language and sometimes different numbers of recommendation levels.
Usually more than one analyst follows a given security. Analysts often disagree on earnings estimates and recommendations and, as a result, analysts' earnings estimates and recommendations often vary.
A number of financial information services providers (“FISPs”) gather and report analysts' earnings estimates and recommendations. At least some FISPs report the high, low, and mean (or consensus) earnings estimates, as well as mean recommendations for equity securities (as translated to an FISP's particular scale, for example, one to five). In addition, FISPs may also provide information on what the earnings estimates and recommendations were at historical points in time including, but not limited to, seven and thirty days prior to the most current consensus, as well as the differences between the consensus (e.g., consensus growth or consensus P/E) for a single equity security and that of the relevant industry.
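The high, low, and mean (consensus) statistics mentioned above can be illustrated with a minimal sketch; the per-analyst estimate values are hypothetical:

```python
# Hypothetical per-analyst earnings estimates for one security and one period.
estimates = [1.10, 1.25, 1.05, 1.30]

high = max(estimates)                        # high estimate
low = min(estimates)                         # low estimate
consensus = sum(estimates) / len(estimates)  # mean ("consensus") estimate
```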
For some clients, FISPs provide earnings estimates and recommendations on an analyst-by-analyst basis. An advantage of the availability of analyst-level estimates and recommendations is that a client may view the components of the mean estimate or recommendation by analyst. FISPs also work with the employers of the analysts to standardize the firms' ratings to a single scale.
One method for determining estimates utilizes a software program that displays all current estimates. For a selected security and fiscal period, the software provides the ability to simply “include” or “exclude” each estimate or recommendation from the mean. This is problematic for several reasons. First, commercially available databases of estimates and recommendations contain “current” data on thousands of stocks. Each stock may have estimates from one to seventy or more analysts. In addition, each analyst may provide estimates for one or more periods. The data may be updated throughout the day. Manually dealing with this volume of information may be time-consuming and tedious.
The actual performance of a security analyst relative to his recommendations on whether investors should buy, sell, or hold financial instruments has been measured by including various degrees of positive recommendations in the construction of a simulated portfolio of securities with these ratings. However, this method fails to address analyst performance on ratings at other levels.
Other methods and systems have utilized a two-tier system using a simulated portfolio calculation that employs an “own securities rated positively, do not own securities not rated positively” scheme. Additional methods have overweighted securities rated strong positive relative to securities rated positive.
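The two-tier weighting scheme described above can be sketched as follows, assuming a hypothetical rating scale in which “strong positive” is overweighted relative to “positive” and securities with all other ratings are not owned; the tier weights and security symbols are illustrative only:

```python
# Hypothetical tier weights: "strong positive" overweighted over "positive";
# all other ratings (neutral, negative, etc.) are not owned.
TIER_WEIGHTS = {"strong positive": 2.0, "positive": 1.0}

def simulated_portfolio_weights(ratings):
    """Map {security: rating} to normalized simulated-portfolio weights."""
    raw = {sec: TIER_WEIGHTS.get(rating, 0.0) for sec, rating in ratings.items()}
    total = sum(raw.values())
    if total == 0:
        # Nothing rated positively: the simulated portfolio holds no securities.
        return {sec: 0.0 for sec in raw}
    return {sec: w / total for sec, w in raw.items()}

weights = simulated_portfolio_weights(
    {"AAA": "strong positive", "BBB": "positive", "CCC": "neutral"}
)
# "AAA" receives twice the weight of "BBB"; "CCC" is excluded.
```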
Methods and systems to date that measure the actual performance of a security analyst typically compare an analyst's security recommendation performance to a benchmark. The purpose of comparing to a benchmark is often to determine the extent to which the analyst's performance was due to his/her abilities versus the extent to which it was due to external factors. External factors can include the overall market's performance and the performance of the types of businesses covered by the analyst. One approach for selecting a benchmark is to choose an industry group from a published industry scheme, such as Dow Jones or Morgan Stanley Capital International, as a benchmark. In this case, a published industry group is chosen that corresponds to an industry covered by the analyst.
The published industry group consists of a set of securities in related fields of business. The return of this set of securities as a whole is used as a benchmark against which to compare the analyst's performance. A problem with this approach is that industries in published industry grouping schemes vary in their homogeneity with respect to the securities in each grouping. Some industry groupings contain very similar securities or companies in similar lines of business, while others contain companies in widely varying types of business. As a result, some securities covered by an analyst are not included in the analyst's main industry group, and stocks that fall outside of an industry category will not count toward the performance of that analyst in his or her main industry group. This approach also results in the inclusion in an analyst's benchmark of portions of an industry group not covered by the analyst.
Current approaches for measuring the performance of a security analyst fail to distinguish securities in a portfolio that are rated neutral from those rated negatively. Additionally, current methods and systems fail to incorporate an analyst's ratings at levels other than positive and strong positive into the analyst's overall performance calculation. Additionally, existing benchmarks can introduce factors unrelated to the analyst's coverage and do not always fully encompass that coverage.
There is a need for methods and systems for measuring performance of a security analyst that distinguish the treatment of securities rated neutral from those rated negatively. There is a further need for methods and systems for measuring performance of a security analyst that incorporate analyst ratings other than strong positive or positive into an overall analyst performance calculation. There is yet a further need for methods and systems for measuring performance of a security analyst that use benchmarks without introducing factors not directly related to the analyst's coverage.