1. Field of the Invention
Embodiments of the present invention relate generally to business process management and, more particularly, to methods and systems for computer-based business process management.
2. General Background and Description of Related Art
The automated processing of information has been an enormous benefit to businesses because it has greatly reduced the cost of certain tasks. Every enterprise, whether a government, commercial business, or not-for-profit organization, has the operational necessity to manage information.
In the case of a commercial business, for example, this information is used to acquire customers, input orders, ship product, bill customers, collect invoices, pay employees and vendors, order product, audit inventory, and maintain records of transactions among employees, customers, and suppliers.
In the normal course of events, information is acquired, processed and consolidated utilizing software, computer hardware and digital networks in accordance with each organization's internal operational model.
Unfortunately, the automated processing of information has also created several problems for businesses, especially where the information in the company's data store is incorrect. Automated processing of incorrect information carries a high cost for businesses in and of itself. In addition, the time, effort, and expense required to correct the undesired results can significantly impact an organization's resources.
Typical examples of the impact of errors on an organization include:
(1) Recipients receive multiple copies of the same offers in the mail, which may result in: (a) the sender wasting postage and printing; and (b) the recipient being negatively affected by the waste and, as a result, not ordering products.
(2) Postal systems and other message and package shippers are unable to deliver a significant percentage of their material to the intended recipients, which may have the result that: (a) the product is not delivered on time and is returned to the shipper due to an inaccurate address; (b) costly efforts are made to determine the correct address and to repack and reship the product; (c) invoices are returned and not paid on time, or at all; (d) costly efforts are made to determine the correct address and resend the invoice; (e) clients become annoyed by poor service and, as a result, switch to another vendor, if possible; and (f) customer service, billing, collections, and shipping all require additional resources to perform their functions.
(3) Individual operational units contain inaccurate information on clients as well, which may have the result that: (a) enterprise efforts at consolidating information are incomplete, costly, and prolonged; and (b) errors in individual operational units, when consolidated, compound the overall error rates and impair the ability to perform meaningful analysis.
(4) Incomplete and inaccurate information is consolidated in data warehouses, data marts, operational data stores, customer information files, and centralized data stores for CRM, ERP, SCM, and other centralized processes, which may result in: (a) marketing not being able to accurately forecast the value and potential of individual clients and client segments, and thereby losing valuable market opportunities; (b) customer service not being able to provide the proper service and, as a result, losing clients due to dissatisfaction with service; and (c) fraud not being detected in a timely fashion, resulting in the enterprise being defrauded of large amounts of money.
(5) Operational units are unable to determine the correct tax jurisdiction and tax assignment, which may result in: (a) the enterprise not charging the right tax to the client and not paying the right amount to the right authority; (b) taxing authorities not collecting all the taxes properly due to them; (c) consumers paying more taxes than they should; and (d) corporations incurring liabilities with tax jurisdictions and clients.
(6) Customers are unhappy and move to a competitive service.
The impact of erroneous information on a company's revenue is easily explained using a typical mass mailing as an example. Naturally, this is but one example, as the list presented above identifies a number of other potential impacts on a company's revenue.
Companies generate customer lists in a variety of ways. Information collected by one part of an enterprise is often used by other parts of an organization to perform their functions. If the company has a retail component and a catalog component, customer information may be entered into the company's data store at the retail location, at the catalog location, or even through the Internet. Each of these three entry positions presents a location where the information may be erroneous or duplicative of pre-existing information. Any error in the accuracy of information as it is collected, processed and consolidated can impact the effectiveness of multiple functions within an enterprise.
It is possible that a customer may have customer information entered correctly at the retail level. Subsequently, the customer may make a purchase through the catalog division. At that time, the data entry specialist may, for example, enter the customer's name into the data store incorrectly (e.g., by misspelling the person's last name). On still another occasion, the same customer may make a purchase through the Internet. At that time, the customer will be required to supply his or her information for a third time. In this situation, assume that the customer typed his or her last name incorrectly. Therefore, in this scenario, the customer information has been entered three times, two of which were incorrect.
Relying on its stored data, the company then prints and sends an updated copy of its catalog to its customers. Since the customer described has three separate entries in the business's data store, the customer receives three copies of the same catalog. As is easily understood, the cost to the company of printing and mailing the catalog has been tripled simply because of errors in the company's data store.
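The scenario above can be illustrated with a short sketch. Assuming a simple store of customer records in which one person has been entered three times, twice with a misspelled last name, an exact-match lookup treats each variant as a distinct customer, so three catalogs are mailed to one person. A crude fuzzy grouping (one of many possible data quality approaches, not the specific method of any particular vendor) collapses the near-identical names into one. The names, records, and similarity threshold here are hypothetical, for illustration only.

```python
import difflib

# Hypothetical customer records: one person entered three times,
# twice with a misspelled last name (illustrative data only).
records = [
    {"first": "John", "last": "Smith", "source": "retail"},    # correct entry
    {"first": "John", "last": "Smiht", "source": "catalog"},   # data-entry error
    {"first": "John", "last": "Smth",  "source": "internet"},  # customer's typo
]

# Exact matching sees three distinct customers: three catalogs mailed.
exact_keys = {(r["first"].lower(), r["last"].lower()) for r in records}

# A crude fuzzy grouping: a record joins a cluster if its name is
# at least 80% similar to the cluster's first record.
clusters: list[list[dict]] = []
for rec in records:
    name = (rec["first"] + " " + rec["last"]).lower()
    for cluster in clusters:
        ref = (cluster[0]["first"] + " " + cluster[0]["last"]).lower()
        if difflib.SequenceMatcher(None, name, ref).ratio() > 0.8:
            cluster.append(rec)
            break
    else:
        clusters.append([rec])

print("exact-match customers:", len(exact_keys))   # 3 catalogs mailed
print("fuzzy-match customers:", len(clusters))     # 1 catalog mailed
```

With exact matching the company pays for three catalogs to reach one household; the fuzzy grouping reduces that to one, which is precisely the kind of savings a data quality function is intended to provide.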
The problem of data quality exists with many businesses. In the prior art, there have been several approaches offered to minimize the data quality problem.
Previous approaches to data quality include providing a variety of vendor-specific or application-specific functions to the business. These data quality functions are designed to automatically review a company's data, identify errors, and, in some cases, correct those errors. To make this flexible across different businesses, the data quality functions typically provide several settings that may be adjusted by each business to meet its individual needs.
Typically, a business selects the standardized set of settings for a particular data quality function that it finds most beneficial. The settings may, for example, remove errors from a data store, "cleaning" it to the point where the data is 95% accurate (which is considered to be a very good result).
The problem of data quality is compounded when a company includes multiple units, each of which has a separate data store containing overlapping information, and the company tries to create a consolidated data store.
In the example presented above, the company had three business components, a retail component, a catalog component, and an Internet component, each of which maintained a separate customer information data store. Typically, if each component wished to "clean up" its data, each component would purchase a data quality solution and apply that solution internally. If each component were to produce a data store that was 95% accurate, the result would be considered quite good.
If the company then tries to create a consolidated data store, a problem arises in that the errors in each data store compound one another. In the case presented, each data store is 95% accurate. The probability that a given record is accurate in all three stores is 0.95×0.95×0.95=0.857375, so the combined data store is only about 85.7% accurate. This means that the resulting data store has an error rate of over 14%, nearly three times the 5% error rate of each of the data stores that were combined. As may be appreciated, this problem is even more pronounced when four or more data stores are combined.
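The compounding effect described above can be verified with a few lines of arithmetic. Each store is 95% accurate, so the probability that a consolidated record is clean in every store is 0.95 raised to the number of stores:

```python
per_store_accuracy = 0.95  # each component's data store is 95% accurate

# Probability that a record is clean in all three stores at once.
combined_accuracy = per_store_accuracy ** 3     # 0.857375
combined_error = 1 - combined_accuracy          # ~0.1426, nearly 3x the 5% per-store error
print(f"combined accuracy:  {combined_accuracy:.6f}")
print(f"combined error rate: {combined_error:.1%}")

# With four stores the degradation is worse still (~18.5% error).
print(f"four-store error rate: {1 - per_store_accuracy ** 4:.1%}")
```

The same calculation generalizes to any number of stores: with n stores of accuracy a, the consolidated accuracy falls to a^n, so error compounds multiplicatively rather than adding.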
Furthermore, because the vendor-specific and application-specific data quality functions have unique strengths and weaknesses in detecting and/or correcting errors, the errors may be inadvertently propagated throughout the enterprise as applications pass data back and forth. This may have the undesirable result of causing a multiplicative increase in the overall error rate for the enterprise as a whole.
While data quality solutions are beneficial to the operation of businesses, they are limited in their ability, especially in cases where companies try to create centralized data stores for multiple business entities in an enterprise or group of entities.
Moreover, when a business desires to develop a program to “clean up” its centralized data store, current doctrine dictates that the business retain a firm to develop the appropriate software. The development of software follows what is commonly referred to as the Software Development Life Cycle (SDLC).
The known SDLC adhered to by developers of business processes, such as, for example, data quality processes, results in a lock-step, sequential-phase approach that is costly in both time and resources. For example, a known SDLC includes the following phases, which are performed sequentially: requirements definition, general design, detailed design, development, testing, quality assurance checking, trial, implementation, and maintenance/modification. In addition, different personnel or service providers with different skill sets may be required as the project moves from phase to phase. For example, a systems engineer or business analyst may be required during the requirements analysis/definition and test phases, but a software engineer may be required for the design phases. Each project handoff between phases introduces errors into the final product and adds cost due to increased project time, increased overhead, and higher project headcount.
Still further, because the requirements definition phase may be arbitrarily halted (i.e., requirements “frozen”) in order to permit design activities to begin, the known SDLC is inflexible and unaccommodating to the evolving needs of customers for software products.
The failings identified above with respect to the prior art cry out for a solution.