1. Technical Field
The present invention relates generally to automated and/or computer-assisted database design. In particular, the present invention is directed to a method, computer program product, and data processing system for discovering a latent star schema structure in an existing relational database.
2. Description of the Related Art
One of the most important applications of computer technology is in organizing, storing, and retrieving vast quantities of information. To this end, the field of database management systems has evolved to a high state of maturity. The foundation of most modern database management systems is the relational database concept. Relational databases organize information in the form of tables, which may be thought of as two-dimensional grids, where each entry in the table (called a “tuple”) forms a row, and each tuple contains a plurality of fields or attributes (columns) representing different component pieces of information. FIG. 1 illustrates the organization of a single database table for recording product orders for a business, where each column of the table represents a different piece of information, such as the invoice number, item number for a particular item, quantity purchased of that item, customer information, etc.
From FIG. 1, it should be apparent that this sort of single-table database can be inefficient from a storage standpoint in that it may store a significant amount of duplicate information. For example, in the table depicted in FIG. 1, if a single customer purchases multiple items as part of the same order, a separate tuple in the table is needed for each item (i.e., each item requires a separate row in the table). However, each of these tuples would duplicate the customer's address information, the order date, etc., resulting in a significant amount of redundancy. This redundancy, in addition to increasing storage requirements, also makes performing alterations on the data more complicated, since any modification to one of these duplicated items of information must also be duplicated for each tuple in which the item to be altered appears.
This issue is typically dealt with within the relational database framework through what is known as “database normalization.” Through database normalization, a single database may be broken into multiple tables to avoid redundant storage while preserving the informational integrity of the database. According to relational database theory, there are a number of “normal forms” (1st Normal Form, 2nd Normal Form, 3rd Normal Form, 4th Normal Form, Boyce-Codd Normal Form, etc.) in which a database schema can be organized, each of which preserves certain functional dependencies between attributes. A functional dependency exists when the value of one or more attributes determines the value of another attribute. For example, the identity of a particular customer would functionally determine the customer's address. Likewise, an invoice number would functionally determine the identity of the customer being invoiced.
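A functional dependency of this kind can be tested mechanically against a table's contents: a set of attributes X functionally determines an attribute set Y if no two tuples agree on X while differing on Y. The following Python sketch illustrates the check; the sample order tuples and attribute names are purely hypothetical and serve only to mirror the customer-address example above:

```python
def holds_fd(rows, lhs, rhs):
    """Return True if the attributes in `lhs` functionally determine
    the attributes in `rhs` over `rows` (a list of dicts)."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False  # two tuples agree on lhs but differ on rhs
        seen[key] = val
    return True

# Illustrative order tuples: the customer ID determines the customer's
# address, but does not determine which item was ordered.
orders = [
    {"custid": 1, "address": "Austin, Tex.", "itemno": 40},
    {"custid": 1, "address": "Austin, Tex.", "itemno": 41},
    {"custid": 2, "address": "Dallas, Tex.", "itemno": 40},
]
print(holds_fd(orders, ["custid"], ["address"]))  # True
print(holds_fd(orders, ["custid"], ["itemno"]))   # False
```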
Relational databases rely heavily on the concept of primary keys and foreign keys to interrelate tables with one another. A primary key of a relational database table is an attribute or group of attributes of the table that uniquely identifies each entry in the table. An example of a primary key in a table of university students would be “Student ID No.,” since every student in a university is (or at least should be) uniquely identified by his/her student identification number.
Obviously, different tables will usually have different primary keys. A further key concept of the relational database model, however, is that of a “foreign key.” A foreign key is an attribute or group of attributes of one table that corresponds to the primary key of a second table, such that the foreign key may be used to reference entries of the second table.
For example, FIG. 2 shows a normalized relational database schema in the third normal form containing the same attributes as the single-table schema in FIG. 1. The single table of FIG. 1 is here replaced by four tables. Each of the four tables has a primary key (indicated in FIG. 2 by underlining those attributes). The attribute “Cust. ID” is a primary key of table 202 (the “customers” table). In table 200 (the “invoices” table, where each entry represents a particular invoice), there is also an attribute “Cust. ID,” which is a foreign key referencing the “Cust. ID” primary key of table 202, thus referring to the customer for a particular invoice.
Many relational databases are defined using “Structured Query Language” (SQL), a declarative language for defining, updating, and querying relational databases. The database depicted graphically in FIG. 2 may be specified in SQL as shown in FIG. 3 through a series of “CREATE TABLE” commands, each of which defines a particular table in the relational database schema.
As shown in FIG. 3, it is possible to identify constraints on particular attributes. For instance, at line 302 in FIG. 3, the constraint “NOT NULL” is specified for the “INVNO” attribute (representing an invoice number), meaning that the “INVNO” attribute in the “INVOICES” table is not allowed to contain null values (i.e., every invoice in the “INVOICES” table must contain an invoice number). Other constraints relate to whether certain attributes are primary keys or foreign keys, such as line 304, which states that “INVNO” is a primary key in the “INVOICES” table, and line 306, which states that the attribute “CUSTID” in the “INVOICES” table (representing a customer ID number) is a foreign key that references the “CUSTID” attribute of the “CUSTOMERS” table.
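Constraint declarations of the kind described for FIG. 3 can be sketched in executable form using the SQLite engine. The table and key names below follow those recited above (“INVOICES,” “CUSTOMERS,” “ITEMORDERS,” “INVNO,” “CUSTID”); the remaining column names, and the fourth (“ITEMS”) table, are hypothetical abbreviations of the figure:

```python
import sqlite3

# A sketch of a four-table normalized order schema with explicit
# NOT NULL, PRIMARY KEY, and FOREIGN KEY (REFERENCES) constraints.
# Column sets are illustrative, not a reproduction of FIG. 3.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CUSTOMERS (
    CUSTID   INTEGER PRIMARY KEY,
    NAME     TEXT NOT NULL,
    ADDRESS  TEXT
);
CREATE TABLE INVOICES (
    INVNO    INTEGER NOT NULL PRIMARY KEY,
    CUSTID   INTEGER REFERENCES CUSTOMERS(CUSTID),
    ORDDATE  TEXT
);
CREATE TABLE ITEMS (
    ITEMNO   INTEGER PRIMARY KEY,
    DESCR    TEXT
);
CREATE TABLE ITEMORDERS (
    INVNO    INTEGER REFERENCES INVOICES(INVNO),
    ITEMNO   INTEGER REFERENCES ITEMS(ITEMNO),
    QTY      INTEGER,
    PRICE    REAL,
    PRIMARY KEY (INVNO, ITEMNO)
);
""")

names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(names)  # ['CUSTOMERS', 'INVOICES', 'ITEMORDERS', 'ITEMS']
```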
The primary purpose of specifying these constraints is to enable the database management system to verify that the data inserted into the database tables meets those constraints. A side benefit of explicitly specifying the constraints is that they provide some level of self-documentation of the database's structure. However, when constraints are explicitly defined in the database, the computational overhead associated with verifying the database's consistency with respect to those constraints can be substantial. For that reason, many databases in practical use are specified without explicit constraint definitions, as in the example provided in FIG. 4, which is an SQL listing illustrating how the same database structure defined in FIG. 3 can be defined without explicit constraints.
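The verification performed by the database management system can be illustrated with a minimal two-table sketch. Note that SQLite enforces declared foreign keys only when the corresponding pragma is enabled; the row contents are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE CUSTOMERS (CUSTID INTEGER PRIMARY KEY,"
             " NAME TEXT NOT NULL)")
conn.execute("CREATE TABLE INVOICES (INVNO INTEGER NOT NULL PRIMARY KEY,"
             " CUSTID INTEGER REFERENCES CUSTOMERS(CUSTID))")

conn.execute("INSERT INTO CUSTOMERS VALUES (1, 'A. Smith')")
conn.execute("INSERT INTO INVOICES VALUES (100, 1)")  # satisfies all constraints

try:
    # Violates the foreign-key constraint: customer 99 does not exist.
    conn.execute("INSERT INTO INVOICES VALUES (101, 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

try:
    # Violates the NOT NULL constraint on the customer's name.
    conn.execute("INSERT INTO CUSTOMERS VALUES (2, NULL)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Each such check is overhead the engine pays on every insert or update, which is precisely why, as noted above, many production schemas omit the explicit declarations.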
While traditional normalized relational databases are generally well adapted to database update operations (e.g., addition, deletion, and modification of data in the database), the advantages of the traditional normalized relational database (e.g., decreased redundancy, more efficient updates, etc.) often come at the expense of query efficiency and complexity. This occurs largely because query processing often requires the evaluation of “join operations,” where attributes in one table are matched to their counterparts in another table in order to reconstruct a single de-normalized table from the normalized set of tables. In a well-normalized relational database, it is often necessary to construct complex multi-join queries to obtain even simple information. This presents a potentially high burden to those parties who need to extract data from a database for management decisions, as well as a high computational burden for processing such queries, since join operations are notoriously slow in most relational database systems.
For example, FIG. 5 shows SQL code defining a query intended for use with the database defined in FIGS. 3 and 4 to determine the total revenue associated with sales in Austin, Tex. This query contains two join conditions, “INVOICES.INVNO=ITEMORDERS.INVNO” (line 502) and “INVOICES.CUSTID=CUSTOMERS.CUSTID” (line 504), which match invoice numbers in the table of invoices to the items ordered on each invoice and match customer ID numbers in the table of invoices to the customers' information in the table of customers, respectively. This query as a whole means, essentially, “Select those customers in the customers table with addresses in Austin, Tex. Match those customers' customer ID numbers to the customer ID numbers of their invoices in the invoices table, so as to pick out the invoice numbers associated with those customers. Next, for each of those invoice numbers, pick out the individual items ordered on each of those identified invoices (from the “ITEMORDERS” table). For each of those item orders, multiply the quantity ordered of that item by the price. Finally, add up the multiplication results for all of the items to get the total revenue.” What makes this query complicated is the fact that two join operations must be performed, since at each join operation, each tuple in one table must have one or more of its attributes matched to a tuple in another table. Depending on how the tables are organized, each join operation can take as many as m×n compare operations (where m is the number of tuples in one table and n is the number of tuples in the other). Obviously, the computational complexity of a given query can become quite large where there are multiple join operations to be performed, particularly when one considers that, in many cases, the multiple join operations may be performed in an arbitrary order with no loss of correctness (e.g., in the example in FIG. 5, it would be no less correct to first match all invoices with their respective ordered items, then match the matched invoices and items to their respective customers, then filter out all but the “Austin, Tex.” tuples, although one would expect processing the query in this order to be much less efficient).
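The two-join query described above can be exercised against a small in-memory instance of the schema. All row contents below are invented solely to make the joins concrete, and the placement of the price attribute in the item-orders table is an assumption consistent with the two-join form of the FIG. 5 query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CUSTOMERS  (CUSTID INTEGER PRIMARY KEY, NAME TEXT, CITY TEXT);
CREATE TABLE INVOICES   (INVNO INTEGER PRIMARY KEY, CUSTID INTEGER);
CREATE TABLE ITEMORDERS (INVNO INTEGER, ITEMNO INTEGER, QTY INTEGER, PRICE REAL);

INSERT INTO CUSTOMERS  VALUES (1, 'A. Smith', 'Austin, Tex.');
INSERT INTO CUSTOMERS  VALUES (2, 'B. Jones', 'Dallas, Tex.');
INSERT INTO INVOICES   VALUES (100, 1);
INSERT INTO INVOICES   VALUES (101, 2);
INSERT INTO ITEMORDERS VALUES (100, 40, 2, 9.50);
INSERT INTO ITEMORDERS VALUES (100, 41, 1, 20.00);
INSERT INTO ITEMORDERS VALUES (101, 40, 5, 9.50);
""")

# Both join conditions of FIG. 5: invoices are matched to their item
# orders and to their customers; only Austin customers pass the filter.
(total,) = conn.execute("""
    SELECT SUM(ITEMORDERS.QTY * ITEMORDERS.PRICE)
    FROM INVOICES, ITEMORDERS, CUSTOMERS
    WHERE INVOICES.INVNO = ITEMORDERS.INVNO
      AND INVOICES.CUSTID = CUSTOMERS.CUSTID
      AND CUSTOMERS.CITY = 'Austin, Tex.'
""").fetchone()
print(total)  # 2*9.50 + 1*20.00 = 39.0
```

The query optimizer is free to evaluate the two joins in either order; the result is the same, but, as the text notes, the cost can differ greatly.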
To address the increased complexity of query processing vis-à-vis database updates, the concept of a “data warehouse” (as opposed to a “database”) was introduced. The fundamental difference between a database and a data warehouse is that a database is designed for supporting data updates (transactions), whereas a data warehouse is specially tailored to performing queries on existing data. The basic idea behind the “data warehouse” concept is that once a collection of data has been accumulated over a given time period, there comes a point where that data will no longer change. For example, in a “product orders” database such as is described in FIGS. 2-4, once the orders for a given time period have been processed and recorded, at some point the information about those orders will not change—they merely become historical data. At this point, those pieces of information that are not subject to future change can be placed in a “data warehouse,” where they are organized for optimal data retrieval and analysis, as opposed to efficient transaction processing. Two terms that are frequently used to describe this distinction are “On-Line Analytical Processing” (OLAP) and “On-Line Transaction Processing” (OLTP). In general, data warehouses are designed for OLAP, whereas databases are designed for OLTP.
One particularly useful concept in data warehousing is the “multidimensional” storage model, in which data are conceptualized as existing in a multidimensional space (such as a mathematical vector space). Such a model is particularly useful for correlating data to particular time periods and locations. In the previous example of product orders, for instance, sales revenue data could be organized in a multidimensional model where one dimension represents “time,” another dimension represents “location,” and yet another dimension represents the particular product in question. This multidimensional approach is particularly useful where it is desirable to group items of data according to particular subdivisions of a dimension (e.g., grouping sales revenue by week, month, quarter, or year).
In practice, multidimensional modeling is often performed in the context of a relational database management system through the use of “fact tables” and “dimension tables.” A dimension table consists of tuples of attributes of a particular dimension. For example, a dimension table for a “quarter” (unit of time) dimension may include such attributes as “quarter number” and “year.” A fact table consists of measurement fields (such as “gross revenue”) and pointers to tuples in the dimension tables associated with the fact table (e.g., a pointer to a tuple in the “quarter” dimension table to denote the quarter in which the gross revenue amount in a fact table tuple occurred, a pointer to a tuple in the “location” dimension table to denote where the gross revenue was earned, etc.). A fact table, together with a set of dimension tables the fact table references, is generally known as a “star schema.” An example of such a star schema is provided in FIG. 6, where fact table 602 contains tuples that consist of a single measurement field (revenue) and pointers to tuples in various dimension tables 604, 606, and 608.
One of the advantages of using a star schema to implement a data warehouse is that the schema may be implemented by defining the fact and dimension tables in a relational database management system (using SQL, for instance). FIG. 7 is a diagram of a star schema, based on FIG. 6, as implemented using tables in a relational database management system. In the example provided in FIG. 7, fact table 702 references dimension tables 704, 706, and 708 through the use of foreign keys (namely, “Item No.,” “Loc ID,” and “Qtr ID”).
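A star schema of the kind described for FIGS. 6 and 7 can likewise be sketched as relational tables: the fact table carries the single “revenue” measurement plus foreign keys into each dimension table, so that grouping the measurement by a subdivision of a dimension (e.g., by quarter) requires only one join per dimension consulted. All table names and row contents beyond those recited above are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ITEMS     (ITEMNO INTEGER PRIMARY KEY, DESCR TEXT);
CREATE TABLE LOCATIONS (LOCID  INTEGER PRIMARY KEY, CITY  TEXT);
CREATE TABLE QUARTERS  (QTRID  INTEGER PRIMARY KEY, QTR INTEGER, YEAR INTEGER);

CREATE TABLE REVENUE_FACTS (
    REVENUE REAL,
    ITEMNO  INTEGER REFERENCES ITEMS(ITEMNO),
    LOCID   INTEGER REFERENCES LOCATIONS(LOCID),
    QTRID   INTEGER REFERENCES QUARTERS(QTRID)
);

INSERT INTO ITEMS     VALUES (40, 'Widget');
INSERT INTO LOCATIONS VALUES (1, 'Austin, Tex.');
INSERT INTO QUARTERS  VALUES (1, 1, 2004), (2, 2, 2004);
INSERT INTO REVENUE_FACTS VALUES (39.0, 40, 1, 1),
                                 (55.0, 40, 1, 2),
                                 (11.0, 40, 1, 2);
""")

# Grouping revenue by quarter touches only the fact table and the one
# dimension table being consulted -- a single join.
for qtr, year, total in conn.execute("""
    SELECT Q.QTR, Q.YEAR, SUM(F.REVENUE)
    FROM REVENUE_FACTS F JOIN QUARTERS Q ON F.QTRID = Q.QTRID
    GROUP BY Q.QTRID ORDER BY Q.QTRID
"""):
    print(qtr, year, total)  # one line per quarter
```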
One of the challenges in making practical use of data warehousing is in reorganizing the data collected in a traditional relational database into a multidimensional structure, such as a star schema. This task is usually performed manually (by a database designer or programmer, for example). In a commercial setting, where the source database may be very large and complex, the task of defining a star schema to warehouse data from a given database may be very difficult, particularly if the original database schema is not well documented.
What is needed, therefore, is a tool for assisting a database designer with developing a star schema from a given relational database schema. The present invention provides a solution to this and other problems, and offers other advantages over previous solutions.