This invention relates to a system and method for reporting supplier on-time performance and a system and method for reporting supplier reject performance.
Supplier on-time performance is typically reported as the percentage of orders delivered within a specified period of time relative to a standardized start point and end point. To measure the amount of time that a supplier takes to deliver product in response to an order, a “start point,” i.e., an event triggering the start of the time period used to measure delivery time, must be identified. An “end point,” i.e., an event triggering the end of that time period, must also be identified. Typical start points used for measuring delivery time include: the time at which the buyer placed the order (“order sent” or “OS”), the time at which the supplier received the order (“order received” or “OR”), and the time at which the supplier confirmed the order with the buyer (“order confirmed” or “OC”).

Possible end points that might be used for measuring delivery time include the arrival time of the supplier shipment at one of the following: the customer's receiving dock (“CRD”) (note, the terms “customer” and “buyer” are used interchangeably herein); the customer's final destination (“CFD,” e.g., customer storeroom, customer assembly line, customer mail stop); origin transport on board (“OTO,” i.e., loaded onto the shipping vehicle at the origin); destination transport on board (“DTO,” i.e., when the shipping vehicle arrives at its destination country); destination customs inbound (“DCI,” i.e., arrival at customs in the destination country prior to customs processing); destination customs outbound (“DCO,” i.e., the point at which customs processing in the destination country is completed); or the supplier shipping dock (“SSD”).
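The on-time metric described above can be sketched as follows. This is a minimal illustration, not an implementation from the disclosure: the order records, event codes, function name, and ten-day window are all hypothetical, assuming only that each order carries a timestamp for each relevant event and that on-time performance is the percentage of orders whose start-to-end interval falls within an agreed window.

```python
from datetime import datetime, timedelta

# Hypothetical order records: each maps event codes (e.g. OS, OR, CRD)
# to the timestamp at which that event occurred.
orders = [
    {"OS": datetime(2023, 1, 1), "OR": datetime(2023, 1, 2),
     "CRD": datetime(2023, 1, 8)},
    {"OS": datetime(2023, 1, 5), "OR": datetime(2023, 1, 5),
     "CRD": datetime(2023, 1, 20)},
]

def on_time_percentage(orders, start="OS", end="CRD",
                       window=timedelta(days=10)):
    """Percentage of orders whose delivery time, measured from the
    chosen start event to the chosen end event, is within `window`."""
    on_time = sum(1 for o in orders if o[end] - o[start] <= window)
    return 100.0 * on_time / len(orders)

# First order: 7 days OS->CRD (on time); second: 15 days (late).
print(on_time_percentage(orders))  # → 50.0
```

Note that the same records support any start/end pairing, e.g. `on_time_percentage(orders, start="OR")`, which is the flexibility the standardized-point systems described below lack.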
Existing supplier performance reporting systems generally select a single start point and a single end point to use as the standardized points for determining the time period against which to measure whether a delivery is “on time.” Such systems require users to report on-time performance using that standardized start point and end point. This approach has at least two disadvantages. First, the customers providing on-time delivery data for use by such systems may not ordinarily track delivery times using the standardized start point and end point required by the system. These customers may therefore have to adjust existing internal procedures in order to report delivery time data that is useable by such systems. Second, customers whose businesses place importance on delivery times measured between start and end points different from those used by the reporting system will be unable to gauge the performance of suppliers in a manner consistent with their particular business needs.
Existing supplier performance reporting systems also typically report the percentage of orders that are returned. However, existing systems simply report “reject performance” as a percentage of total orders that are returned, without distinguishing between returns that were supplier caused and returns that were customer caused. In some contexts, it may be useful for users of a supplier performance reporting system to know how many returns were caused by the supplier and how many were caused by the customer.
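The distinction drawn above can be sketched as follows. Again this is an illustrative sketch, not the disclosed system: the return records, cause labels, and function name are hypothetical, assuming only that each return is tagged with whether it was supplier caused or customer caused, so the reject rate can be reported both overall and broken out by cause.

```python
# Hypothetical return records, each tagged with the party that caused
# the return; total_orders is the number of orders in the period.
returns = [
    {"order_id": 1, "cause": "supplier"},
    {"order_id": 7, "cause": "customer"},
    {"order_id": 9, "cause": "supplier"},
]
total_orders = 100

def reject_rates(returns, total_orders):
    """Reject performance as percentages of total orders: the single
    overall figure existing systems report, plus the supplier-caused
    and customer-caused breakdown described above."""
    supplier = sum(1 for r in returns if r["cause"] == "supplier")
    customer = sum(1 for r in returns if r["cause"] == "customer")
    return {
        "overall_pct": 100.0 * len(returns) / total_orders,
        "supplier_pct": 100.0 * supplier / total_orders,
        "customer_pct": 100.0 * customer / total_orders,
    }

print(reject_rates(returns, total_orders))
# → {'overall_pct': 3.0, 'supplier_pct': 2.0, 'customer_pct': 1.0}
```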