1. Field of the Invention
This invention relates to the field of data mining. More specifically, the present invention relates to the detection of outliers within a large body of multi-dimensional data.
2. Description of the Related Art
Organizations collect huge volumes of data from their daily operations. This wealth of data is often under-utilized. Data mining is a known technology used to discover patterns and relationships in data. It involves the process of analyzing large amounts of data and applying advanced statistical analysis and modeling techniques to the data to find useful patterns and relationships. These patterns and relationships are used to discover key facts that can drive decision making. This helps companies reap rewards from their data warehouse investments, by transforming data into actionable knowledge and by revealing relationships, trends, and answers to specific questions that cannot be easily answered using traditional query and reporting tools.
Data mining, also known generically as “knowledge discovery,” is a relatively young, interdisciplinary field that cross-fertilizes ideas from several research areas, including machine learning, statistics, databases, and data visualization. With its origins in academia about ten years ago, the field has recently captured the imagination of the business world and is making important strides by creating knowledge discovery applications in many business areas, driven by the rapid growth of on-line data volumes.
Until recently, it was highly impracticable to build large, detailed databases that could chronicle thousands, and from a statistical view, preferably millions, of data points related to, for example, consumer transactions or insurance claims. Deriving useful information from these databases (i.e., mining the databases) was even more impractical. With the advent of modern technology, however, building and processing large databases of information has become possible. For example, most organizations now enter all of their important business information into a computer system, and information that is not already entered into the system can be scanned into the system quickly and easily. In consumer transactions, a bar code reader can almost instantaneously read so-called basket data, i.e., when a particular item from a particular lot was purchased by a consumer, the items the consumer purchased, and so on, for automatic electronic storage of the basket data. Further, when the purchase is made with, for example, a credit card, the identity of the purchaser can be almost instantaneously known, recorded, and stored along with the basket data.
Likewise, processing power is now available at relatively low cost, making the mining of databases for useful information feasible. Such database mining nevertheless becomes increasingly challenging as the size of databases expands into the gigabyte, and even the terabyte, range, and much work in the data mining field is now directed to the task of finding patterns of measurable levels of consistency or predictability in the accumulated data.
Fayyad et al. (“From Data Mining to Knowledge Discovery: An Overview,” in Chapter 1, Advances in Knowledge Discovery and Data Mining, American Association for Artificial Intelligence (1996)) presents a good, though somewhat dated, overview of the field. Bigus (Data Mining with Neural Networks, McGraw-Hill (1996)) and Berry and Linoff (Data Mining Techniques: For Marketing, Sales, and Customer Support, John Wiley & Sons (1997)), among others, have written introductory books on data mining that include good descriptions of several business applications.
Predictive modeling is a well-known technique used for mapping relationships among actual data elements and then, based on the model derived from the mapping process, predicting the likelihood of certain behavior with respect to hypothetical future actions or occurrences. There are numerous techniques and methods for conducting predictive modeling, and the specific details of these known methods and techniques are not discussed further herein.
One of the first steps in the process of building predictive models is to review the characteristics (attributes) of data that is to be used. Typically, measures such as the mean, range, and the distribution of the data are considered for each attribute. A judgment is often made about what data values are considered to be “outliers”. Outliers are data points that fall outside of the norm.
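The attribute-review step described above can be illustrated with a minimal sketch. The function name, the sample age data, and the choice of four bins are illustrative assumptions and not part of this disclosure; the sketch merely shows the kind of per-attribute summary (mean, range, coarse distribution) that an analyst would examine before judging outliers.

```python
# Illustrative sketch only: summarize one attribute before outlier review.
from collections import Counter

def summarize_attribute(values, bins=4):
    """Return the mean, range, and a coarse binned distribution of one attribute."""
    lo, hi = min(values), max(values)
    mean = sum(values) / len(values)
    width = (hi - lo) / bins or 1  # guard against zero-width bins for constant data
    # Map each value to a bin index; clamp the maximum value into the last bin.
    distribution = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    return {"mean": mean, "range": (lo, hi), "distribution": dict(distribution)}

ages = [34, 29, 41, 38, 180, 33, 36]  # hypothetical customer ages
summary = summarize_attribute(ages)
```

A wide range (here, 29 to 180) relative to where most of the distribution sits is the kind of signal that prompts the outlier judgment discussed next.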
Generally, there are two kinds of outliers. A value that is identifiably incorrect, e.g., the age of a person being 180 years, is an identifiably incorrect outlier. For modeling purposes, identifiably incorrect outliers are essentially removed from consideration altogether (e.g., they are considered as having a value of “null”), since they represent errant data.
A second type of outlier is a value that is considered extreme, but correct. This type of outlier represents correct data, but data that is so unusual that it inappropriately skews the analysis of the more normal data. For modeling purposes, it may be desired to represent this extreme, correct outlier by some value that is less extreme, to minimize the skewing impact of the outlier on the model. As an example, when analyzing annual incomes of Americans, Bill Gates' income, while accurate, would be considered an outlier relative to the majority of the population. For outliers of this type, a limit may be placed upon the data value (as opposed to removing it from the data set altogether) to “filter out” such correct but statistically misleading data points.
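The two conventional treatments described above can be sketched as follows. The specific thresholds (a maximum plausible age of 130, an income cap of 1,000,000) are illustrative assumptions chosen for the example, not values from this disclosure: identifiably incorrect values are nulled out, while correct but extreme values are capped rather than removed.

```python
# Illustrative thresholds only; real limits would be chosen per attribute.
MAX_PLAUSIBLE_AGE = 130   # above this, an age is identifiably incorrect
INCOME_CAP = 1_000_000    # assumed cap for extreme-but-correct incomes

def clean_age(age):
    """Null out identifiably incorrect ages; they represent errant data."""
    return None if age < 0 or age > MAX_PLAUSIBLE_AGE else age

def cap_income(income):
    """Limit an extreme-but-correct income so it does not skew the model."""
    return min(income, INCOME_CAP)
```

Under this sketch, the 180-year-old from the example above is treated as null, whereas an extreme income is retained but limited to the cap.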
Identifying these types of outliers is relatively simple when considering a single variable at a time (e.g., the age of a person or the annual income of a person), and standard box plots can be used to help identify these outliers. However, it may be desirable to identify and analyze combinations of variables that are outliers, e.g., the age of an insurance customer (a first variable) and the amount of insurance coverage that that person carries (a second variable). There are many reasons why it might be desirable to identify and analyze such multi-variable outliers; they may, for example, represent unusual occurrences such as the existence of fraud.
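The standard box-plot rule mentioned above can be sketched for the single-variable case. The 1.5 multiplier is the conventional whisker length used with box plots, an assumption of this example rather than a value from the disclosure, and the sample data is hypothetical.

```python
# Single-variable outlier detection via the conventional box-plot (IQR) rule.
import statistics

def box_plot_outliers(values, k=1.5):
    """Return values falling outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles of the data
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

ages = [34, 29, 41, 38, 180, 33, 36]  # hypothetical customer ages
outliers = box_plot_outliers(ages)    # flags the 180-year-old
```

As the text notes, this per-variable rule does not extend to combinations of variables, which motivates the multi-dimensional problem discussed next.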
The original analysis of one variable can be expanded to two variables by plotting their occurrence on a scatter plot. Likewise, the combination of three variables can be visualized in a three-dimensional plot. Beyond three variables, however, a problem exists because there is no dimension beyond the third dimension to use for plotting. In the complex environments that analysts currently strive to manage, there may be many hundreds of potential attributes to use in modeling and thus the “maximum of three variables” approach is inadequate.
Predictive models can be used to identify unusual combinations of more than three inputs. A weakness of the traditional modeling approach, however, is that it is not focused on the goal of the model. As an example, a company may routinely compile a broad range of data related to individuals, e.g., demographic data, interests, hobbies, profession, likes and dislikes, etc., but may wish to create a model focused on characteristics relevant to a narrow subject area, e.g., work-related characteristics. Traditional predictive modeling approaches will either use all the data, including data unrelated to the narrow subject area, or require someone to cull through the data and identify which characteristics to include. Thus, all characteristics for which data has been stored will be input to the model unless steps are taken to hand-select only the characteristics of interest. Time and effort spent considering attributes that will not be used in the model is wasted and thus leads to inefficiency.