For more than twenty years, the planning and buying of television advertising has been based on the concept of effective frequency. A compilation of research is provided in Effective Frequency: The Relationship Between Frequency and Advertising Effectiveness, by Mike Naples (1979), recently updated by Colin McDonald (1996). A key concept set forth in the book is that a single exposure is not enough to create a desired sales effect; most media planning models assume an effective frequency of three. In part based on this concept, a majority of television media plans are “flighted”; that is, weeks of dense exposure are followed by weeks off-air. Off-air weeks are necessitated by the cost of acquiring enough air time to provide an effective frequency of three at whatever level of reach is desired. The belief in effective frequency thus causes advertisers to plan to be off-air rather than expose their advertising at frequency levels below the targeted three.
Over the last several years a number of publications have changed the perception of effective frequency. The works of John Philip Jones, particularly When Ads Work (1995), are seminal to the changes taking place in the concept of effective frequency. Using single-source data and a share-based analytical scheme, Jones has examined purchases within one week of ad exposure, finding that a single exposure within that time period produces the majority of the positive share effect. While additional exposures beyond the first produce small gains, Jones concludes that effective frequency is in fact one, and that continuity of airing, rather than flighting, should be the advertiser's goal.
Expanding on the work of Jones, Ephron (1995) draws the media conclusions that (weekly) reach should be the planning and buying criterion, and that being off-air, as required by the flighting pattern, is equivalent to being out-of-stock at the point of sale. Ephron uses a concept of recency to explain the manner in which a single exposure of advertising works. He postulates that there is a pool of “this week's buyers” who may be affected by the advertising that airs this week, plus a pool of “next week's buyers” who may be affected by next week's airings but are unaffected by this week's advertising exposures, and so on forward in time. Thus continuity of exposure is rewarded, and off-air weeks (which result from flighting to gain a frequency of exposure greater than one while on the air) penalize a product.
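The recency mechanism described above can be sketched as a simple simulation. All figures below (pool sizes, conversion rate, schedule length) are illustrative assumptions, not values from the cited works: each week has its own pool of in-market buyers, only advertising aired that week can influence that pool, and extra exposures beyond the first add little (per Jones), so a continuous schedule reaches every weekly pool while a flighted schedule misses the off-air weeks entirely.

```python
# Hypothetical sketch of Ephron's recency notion: each week has its own
# pool of prospective buyers, and only ads aired that same week can
# influence that pool. All numbers are illustrative assumptions.

WEEKS = 8
BUYERS_PER_WEEK = 100          # assumed size of each weekly buyer pool
CONVERSION_IF_EXPOSED = 0.10   # assumed lift from a single exposure

def influenced_buyers(on_air_schedule):
    """Count buyers influenced across all weeks for a given schedule.

    on_air_schedule: list of booleans, True when ads air that week.
    Off-air weeks contribute nothing, because that week's buyer pool
    is unaffected by any other week's advertising.
    """
    total = 0.0
    for on_air in on_air_schedule:
        if on_air:
            total += BUYERS_PER_WEEK * CONVERSION_IF_EXPOSED
    return total

# Continuity: on air every week at a single exposure.
continuity = [True] * WEEKS
# Flighting: alternating on-air and off-air weeks; the extra frequency
# concentrated in on-air weeks adds little beyond the first exposure.
flighting = [True, False] * (WEEKS // 2)

print(influenced_buyers(continuity))  # 80.0
print(influenced_buyers(flighting))   # 40.0
```

Under these assumed numbers the continuous schedule influences twice as many buyers as the flighted one, which is the point of Ephron's out-of-stock analogy.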
These publications illustrate that the study of advertising effects on a product's sales performance is an important area of study and concern for product manufacturers. First-time buyers won through advertising are likely to become repeat buyers.
FIG. 4 provides an example of a study of advertising activity. Referring to FIG. 4, the chart shows an objective measure of the level of temporary price reduction (TPR) activity, measured in percent (%) of All Commodity Volume (ACV), a measure which weights large and small stores by the volume of all goods sold. Weeks designated by a bull's-eye were counted as promotion weeks (Prom. Period). Weeks designated by a bullet were counted as non-promotional weeks.
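The week classification used in FIG. 4 can be illustrated with a short sketch. The 20% ACV cutoff and the weekly values below are hypothetical assumptions, not values read from the figure: weeks whose TPR activity, in % of ACV, exceeds the chosen cutoff are counted as promotion weeks, and the rest as non-promotional weeks.

```python
# Hypothetical sketch of classifying weeks as promotional or not by
# TPR activity measured in % of All Commodity Volume (ACV).
# The threshold and the weekly values are illustrative assumptions.

PROMO_THRESHOLD = 20.0  # assumed % ACV cutoff for a promotion week

weekly_tpr_acv = [5.2, 31.0, 28.4, 6.1, 4.9, 35.7]  # illustrative data

def classify_weeks(tpr_values, threshold=PROMO_THRESHOLD):
    """Label each week 'Prom. Period' or 'Non-Promo' by its TPR % ACV."""
    return ["Prom. Period" if v > threshold else "Non-Promo"
            for v in tpr_values]

print(classify_weeks(weekly_tpr_acv))
```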
Unfortunately, a true evaluation of the effectiveness of any advertising campaign requires a known baseline representing what the expected sales for a particular product would have been absent the advertising promotion. To attempt to model this baseline, companies and manufacturers look to numerous consumer polling groups for information in order to approximate the expected value of these data points, e.g., to understand the effectiveness of an advertising campaign.
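The baseline idea above can be sketched in a minimal form. This is purely an illustrative assumption, not the method of any group named in this section: take the average of sales in non-promoted weeks as the expected (baseline) sales, and treat sales above that baseline during promoted weeks as the promotional lift.

```python
# Hypothetical baseline sketch: estimate expected (baseline) sales from
# non-promoted weeks, then measure promotional lift against it.
# All sales figures are illustrative assumptions.

def baseline_and_lift(sales, promo_flags):
    """Return (baseline, lift): baseline is the mean of sales in
    non-promoted weeks; lift is total sales above that baseline
    during promoted weeks."""
    non_promo = [s for s, p in zip(sales, promo_flags) if not p]
    baseline = sum(non_promo) / len(non_promo)
    lift = sum(s - baseline for s, p in zip(sales, promo_flags) if p)
    return baseline, lift

weekly_sales = [100, 150, 160, 98, 102, 170]           # illustrative units
promo_weeks = [False, True, True, False, False, True]  # illustrative flags

base, lift = baseline_and_lift(weekly_sales, promo_weeks)
print(base, lift)  # 100.0 180.0
```

Even this toy example shows why the baseline matters: without the 100.0-unit baseline from the non-promoted weeks, the 180.0 units of lift attributable to the promotion could not be separated from ordinary sales.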
For example, AC Nielsen, Inc., and Information Resources, Inc. (IRI) work in the area of modeling advertising effects. Media Marketing Assessment (MMA), Hudson River Group and Millward/Brown also work in this area. These entities plug aggregate data into extremely complex equations having forty to fifty parameters. Perhaps seventy to eighty estimates are made to aggregate the data back to a national estimate. From this sort of convoluted data manipulation, these groups offer their analyses.
For example, Bases, which is presently a division of AC Nielsen, Inc., provides forecasting, or market sales volume simulation, for new products. The Bases processes, however, are anchored in a 52-week market, and cannot provide information prior to or beyond the 52-week expected picture or prediction.
AC Nielsen and IRI have modeling groups and access to raw data, yet still do not make use of it. These and other modelers of consumer data instead use aggregate data with extremely complex equations having 40-50 parameters. They work with data at the level of a retail chain and require 70-80 estimates to aggregate back to a national estimate. The less that data is, or must be, manipulated, the more accurate that data remains. Therefore, less-manipulated data would be more useful and accurate in providing evaluations, forecasts or expected future performance of a product.
Prior art methods have about fifty parameters to estimate and require regression-based adjustment that may independently affect the value of each data point, thereby making the analysis of the data less reliable. These conventional modeling methods do not isolate week-by-week data within a given class of products and thus are not able to provide a true week-by-week analysis, but instead provide only a more generalized picture at 52 weeks.
For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, it is desirable to develop systems and methods which can afford greater flexibility in analyzing advertising effects, and more timely forecasting and analysis of advertising exposure and expected future performance for product sales, in a manner which minimizes the manipulation of data and provides greater accuracy.