1. Field of Art
The present invention relates generally to the field of quality and performance assurance, and more particularly to the subject of measuring, managing and improving the quality and performance of products and services supplied by suppliers both inside and outside of an organization or enterprise. Still more particularly, the present invention relates to automated, computer network-based systems and methods for measuring, managing and improving the quality and performance of supplied products and services across large and potentially very diverse organizations and enterprises.
2. Related Art
Measuring, managing and improving the quality and performance of supplied products and services usually presents one of the most significant problems large and diverse organizations, such as national and international manufacturing firms, national defense contractors, telecommunications providers, and state university systems, must face and solve in order to achieve their business, growth, profit, savings and/or budget objectives. In some contexts, the organization's ability to measure, control and improve the quality and performance of supplied products and services across the entire organization may be a crucial factor in the organization's success or failure in a competitive or economically-distressed industry. Consequently, companies, businesses and organizations are constantly searching for ways to measure, manage and improve the quality and performance of products and services supplied by outside vendors.
Various conventional quality assurance and quality management systems and procedures have been introduced in an attempt to address and reduce quality and performance problems. Due to a variety of shortcomings, however, conventional quality assurance and quality management systems and procedures have met with little success.
To begin with, conventional supplier performance and quality management systems typically rely primarily—if not entirely—on the results of “satisfaction surveys” conducted by the vendor or the organization after the product or service which is the subject of the survey has been deployed. In other words, the satisfaction surveys are typically conducted only during the “post-deployment” stage of the life of the product or service, when all of the engineering, testing, installation, integration and debugging problems have already been solved. Moreover, such surveys usually address only a particular product line of a supplier, rather than multiple or all product lines for that supplier.
The value of almost any satisfaction survey depends substantially on when the survey is conducted. It has been found, however, that satisfaction survey results obtained only after the deployment of the product or service usually do not provide an accurate view of supplier quality. By the time a post-deployment satisfaction survey is conducted and the results become widely available, memories have faded and details are forgotten, and the letters, documents and e-mails disclosing and discussing the quality and performance issues, failures and successes that arose during the “pre-deployment” stage (i.e., before the product or service was finally ready for deployment) are often lost and/or forgotten. A division-level manager now basking in the glow (and obtaining the rewards) of being able to produce twice as many widgets, for instance, often forgets or underrates the impact of the numerous engineering, testing, delivery and installation problems he had to endure before the product or service in question was finally put into production.
Not surprisingly, then, snapshot survey results obtained only during post-deployment typically fail to tell the complete story of a supplier transaction and typically fail to shed light on many quality-related issues that could benefit from analysis and improvement. Moreover, because they represent only a snapshot measurement of quality during the post-deployment stage, conventional quality systems usually focus only on relatively obvious, short-term supplier performance issues, such as missed product or service delivery dates.
Another problem with customer satisfaction surveys is that they only ask the survey participants to grade, rank or assign a “score” to a quality or performance metric. As such, these surveys provide only quantitative results (e.g., discrete performance or quality “grades” or “scores”) from the survey participants. But different individuals usually have different—sometimes substantially different—views and opinions when it comes to assigning a quantitative quality or performance grade to a particular product or service. Conduct or performance that one person considers “excellent” may be considered by another person—even another person who works very closely with the first person—as merely “acceptable.” Thus, asking these two people to grade or rank the quality of a product or service on a scale ranging from “poor,” “acceptable,” “good,” “very good” to “excellent,” for example, sometimes leads to inadequate, confusing or misleading results. In this respect, even quantitative grades and scores can be very subjective.
Conventional supplier quality management systems and processes suffer from another problem in that they typically cannot be applied to the whole of a large organization or enterprise having a multiplicity of diverse sub-organizations, departments or divisions which obtain products and services from that supplier. Rather, they can only be applied at the level of the sub-organization, department or division that provides the data. Such sub-organizations, departments and divisions may (and usually do) have substantially different quality requirements, grading systems and problem-reporting procedures. Consequently, the results of satisfaction surveys and/or quality reports produced by individual and diverse departments or divisions are of limited use for measuring, managing and improving supplier quality across an entire organization, which makes it extremely difficult to generate concrete cross-organization level corrective action plans to improve supplier quality.
For all of these reasons, conventional supplier performance and quality management systems lack the continuity, consistency and objectivity required by most organizations to achieve significant, long-term supplier quality and performance improvements across the entire organization. Accordingly, there is a need for systems and methods for measuring and managing supplier quality and performance issues on a level appropriate for an entire organization or enterprise, and for the entire life of the product or service, including both the pre- and post-deployment stages. There is a further need for such systems and methods to incorporate both quantitative and qualitative data concerning quality and performance, as well as supplemental supplier performance indicators, such as the suppliers' compliance or noncompliance with critical contract provisions and diversity objectives associated with delivering the product or service. Further still, these methods and systems need to include components and processes for creating, tracking and resolving quality and performance corrective action plans on an on-going basis.