Process control systems, like those used in chemical, petroleum or other processes, typically include one or more process controllers and input/output (I/O) devices communicatively coupled to at least one host or operator workstation and to one or more field devices via analog, digital or combined analog/digital buses. The field devices, which may be, for example, valves, valve positioners, switches and transmitters (e.g., temperature, pressure and flow rate sensors), perform process control functions within the process such as opening or closing valves and measuring process control parameters. The process controllers receive signals indicative of process measurements made by the field devices, process this information to implement a control routine, and generate control signals that are sent over the buses or other communication lines to the field devices to control the operation of the process. In this manner, the process controllers may execute and coordinate control strategies using the field devices via the buses and/or other communication links.
Process information from the field devices and the controllers may be made available to one or more applications (i.e., software routines, programs, etc.) executed by the operator workstation (e.g., a processor-based system) to enable an operator to perform desired functions with respect to the process, such as viewing the current state of the process (e.g., via a graphical user interface), evaluating the process, modifying the operation of the process (e.g., via a visual object diagram), etc. Many process control systems also include one or more application stations (e.g., workstations) which are typically implemented using a personal computer, laptop, or the like and which are communicatively coupled to the controllers, operator workstations, and other systems within the process control system via a local area network (LAN). Each application station may include a graphical user interface that displays the process control information including values of process variables, values of quality parameters associated with the process, process fault detection information, and/or process status information.
Typically, displaying process information in the graphical user interface is limited to the display of a value of each process variable associated with the process. Additionally, some process control systems may characterize simple relationships between some process variables to determine quality metrics associated with the process. However, in cases where a resultant product of the process does not conform to predefined quality control metrics, the process and/or other process variables can only be analyzed after the completion of a batch, a process, and/or an assembly of the resulting product. While viewing the process and/or quality variables upon the completion of the process enables improvements to be implemented to the manufacturing or the processing of subsequent products, these improvements cannot remediate the current completed products, which are out-of-spec.
This problem is particularly acute in batch processes, that is, in batch process control systems that implement batch processes. As is known, batch processes typically operate to process a common set of raw materials together as a “batch” through various numbers of stages or steps, to produce a product. Multiple stages or steps of a batch process may be performed in the same equipment, such as in a tank, while others of the stages or steps may be performed in other equipment. Because the same raw materials are being processed differently over time in the different stages or steps of the batch process, in many cases within a common piece of equipment, it is difficult to accurately determine, during any stage or step of the batch process, whether the material within the batch is being processed in a manner that will likely result in the production of the end product that has desired or sufficient quality metrics. That is, because the temperature, pressure, consistency, pH, or other parameters of the materials being processed change over time during the operation of the batch, many times while the material remains in the same location, it is difficult to determine whether the batch process is operating at any particular time during the batch run in a manner that is likely to produce an end product with the desired quality metrics.
One known method of determining whether a currently operating batch is progressing normally or within desired specifications (and is thus likely to result in a final product having desired quality metrics) compares various process variable measurements made during the operation of the on-going batch with similar measurements taken during the operation of a “golden batch.” In this case, a golden batch is a predetermined, previously run batch selected as a batch run that represents the normal or expected operation of the batch and that results in an end product with desired quality metrics. However, batch runs of a process typically vary in temporal length, i.e., vary in the time that it takes to complete the batch, making it difficult to know which time, within the golden batch, is most applicable to the currently measured parameters of the on-going batch. Moreover, in many cases, batch process variables can vary widely during the batch operation, as compared to those of a selected golden batch, without a significant degradation in quality of the final product. As a result, it is often difficult, if not practically impossible, to identify a particular batch run that is capable of being used in all cases as the golden batch to which all other batch runs should be compared.
A method of analyzing the results of on-going batch processes that overcomes one of the problems of using a golden batch involves creating a statistical model for the batch. This technique involves collecting data for each of a set of process variables (batch parameters) from a number of different batch runs of a batch process and identifying or measuring quality metrics for each of those batch runs. Thereafter, the collected batch parameters and quality data are used to create a statistical model of the batch, with the statistical model representing the “normal” operation of the batch that results in desired quality metrics. This statistical model of the batch can then be used to analyze how different process variable measurements made during a particular batch run statistically relate to the same measurements within the batch runs used to develop the model. For example, this statistical model may be used to provide an average or a median value of each measured process variable, and a standard deviation associated with each measured process variable at any particular time during the batch run to which the currently measured process variables can be compared. Moreover, this statistical model may be used to predict how the current state of the batch will affect or relate to the ultimate quality of the batch product produced at the end of the batch.
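The per-time-point statistics described above can be sketched as follows. This is a minimal illustration, not any particular product's implementation; the data, variable names, and the simple mean/standard-deviation scoring are assumptions chosen for clarity.

```python
import numpy as np

# Rows = time-aligned historical batch runs, columns = samples of one
# process variable (e.g. temperature) over one stage of the batch.
# Values are illustrative only.
aligned_runs = np.array([
    [20.0, 35.0, 50.0, 65.0],
    [21.0, 36.0, 49.0, 64.0],
    [19.0, 34.0, 51.0, 66.0],
])

# The "model" for this variable is the per-time-point mean and standard
# deviation across the historical runs.
mean_traj = aligned_runs.mean(axis=0)
std_traj = aligned_runs.std(axis=0, ddof=1)

def deviation_score(value, t):
    """How many modeled standard deviations a new measurement at time
    index t lies from the modeled mean trajectory."""
    return (value - mean_traj[t]) / std_traj[t]

print(mean_traj[1])              # modeled mean at the second time point
print(deviation_score(38.0, 1))  # a new measurement scored against the model
```

A currently measured value can thus be flagged when its score exceeds some threshold (e.g. three standard deviations), which is the kind of comparison the model enables at any particular time during a run.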
Generally speaking, this type of batch modeling requires huge amounts of data to be collected from various sources such as transmitters, control loops, analyzers, virtual sensors, calculation blocks and manual entries. Most of the data is stored in continuous data historians. However, significant amounts of data and, in particular, manual entries, are usually associated with process management systems. Data extracted from both of these types of systems must be merged to satisfy model building requirements. Moreover, as noted above, a batch process normally undergoes several significantly different stages, steps or phases, from a technology and modeling standpoint. Therefore, a batch process is typically sub-divided with respect to the phases, and a model may be constructed for each phase. In this case, data for the same phase or stage, from many batch runs, is grouped to develop the statistical model for that phase or stage. The purpose of such a data arrangement is to remove or alleviate process non-linearities. Another reason to develop separate batch models on a stage, phase or other basis is that, at various different stages of a batch, different process parameters are active and are used for modeling. As a result, a stage model can be constructed with a specific set of parameters relevant for each particular stage to accommodate or take into account only the process parameters relevant at each batch stage. For example, at a certain stage, additives may be added to the main batch load, and process parameters pertaining to those additives do not need to be considered in any preceding batch stage, but are relevant to the batch stage at which the additives are added.
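The per-stage grouping described above can be sketched as follows. The records, stage names, and parameter lists here are hypothetical; the point is only that each stage's model sees data from every historical run, restricted to the parameters active in that stage.

```python
from collections import defaultdict

# Hypothetical merged records from historians and process management
# systems: one record per (batch run, stage).
records = [
    {"batch": "B1", "stage": "charge",   "temp": 20.0, "pressure": 1.0},
    {"batch": "B1", "stage": "additive", "temp": 55.0, "pressure": 1.2, "additive_flow": 0.4},
    {"batch": "B2", "stage": "charge",   "temp": 21.0, "pressure": 1.1},
    {"batch": "B2", "stage": "additive", "temp": 54.0, "pressure": 1.3, "additive_flow": 0.5},
]

# Only the parameters relevant to each stage enter that stage's model;
# e.g. additive flow is irrelevant before the additive stage.
stage_params = {
    "charge":   ["temp", "pressure"],
    "additive": ["temp", "pressure", "additive_flow"],
}

by_stage = defaultdict(list)
for rec in records:
    row = [rec[p] for p in stage_params[rec["stage"]]]
    by_stage[rec["stage"]].append(row)

print(len(by_stage["charge"]))   # number of runs feeding the charge-stage model
print(by_stage["additive"][0])   # parameter vector for B1's additive stage
```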
However, in creating this statistical batch model, it is still necessary to deal with the fact that different batch runs typically span different lengths of time. This phenomenon arises from a number of factors such as, for example, different wait times associated with operators taking manual actions within the batch runs, different ambient conditions that require longer or shorter heating or other processing times, variations in raw material compositions that lead to longer or shorter processing times during a batch run, etc. In fact, it is normal that the data trend for a particular process variable spans a different length of time in different batch runs, and therefore that common batch landmarks in the different batch process runs have time shifted locations with respect to one another. To create a valid statistical model, however, the data for each stage, operation, or phase of a batch must be aligned with comparable data from the same stage, operation or phase of the other batches used to create the model. Thus, prior to creating a statistical model for a batch process based on measured batch runs, it is necessary to align the batch data from the different batch runs to a common time frame.
A traditional technique used for aligning batch data from multiple different batch runs of a batch process uses an indicator variable to represent the progress of a particular batch run. The best indicator variable is typically smooth, continuous, and monotonic, and spans the range of all of the other process variables within the batch data set. To create the time aligned batch runs, batch data is then collected for all of the process variables and is adjusted in time with respect to the indicator variable. In this technique, a measure of the maturity or percent of completion of any batch run at any particular time is determined by the ratio of the current value of the indicator variable to its final value, expressed as a percent.
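The indicator-variable technique can be sketched as follows. The data are invented for illustration, the indicator is assumed to be monotonic, and simple linear interpolation stands in for whatever re-sampling a real system would use.

```python
import numpy as np

# One batch run: a smooth, monotonic indicator variable (e.g. total feed
# charged) and another process variable sampled at the same instants.
indicator = np.array([0.0, 10.0, 30.0, 60.0, 100.0])
temperature = np.array([20.0, 25.0, 40.0, 70.0, 90.0])

# Batch maturity: the current indicator value as a percent of its final
# value.
maturity = 100.0 * indicator / indicator[-1]

# Re-sample the other process variable onto a common 0..100% maturity
# grid, so that runs of different durations become directly comparable.
common_grid = np.linspace(0.0, 100.0, 11)   # 0%, 10%, ..., 100%
temp_aligned = np.interp(common_grid, maturity, temperature)

print(maturity[2])       # percent complete at the third sample
print(temp_aligned[5])   # temperature interpolated at 50% maturity
```

Aligning every historical run onto the same maturity grid in this way is what allows the per-maturity-point statistics of the model to be computed across runs.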
Another known method of aligning batch data from various different batch runs uses a dynamic time warping (DTW) technique, which is a technology borrowed from speech recognition. DTW minimizes the distance between respective process variable trajectories for different variables of the batch runs. In doing so, DTW takes into account all of the process variables in the time warping analysis, and has been determined to be an effective approach in aligning batch data. If desired, DTW can use an indicator variable as described above, or can use an additional variable created and defined as a fraction of the batch completion time. This indicator variable is added to the original process variable set to improve the robustness of the DTW calculation, and to prevent convergence to local minima over an excessive period of time. In any event, the DTW technique, when applied to batch data, generally skews the time scale of the data within a particular batch run based on the total time of the batch run, so as to compress or expand the time scale of the batch data to match a predetermined or a “normalized” time for the batch run. All of the batch runs of a data set are skewed to the normalized time, so as to align the data in each batch run with the data from the other batch runs to a common time scale. The batch model is then created from the batch data scaled to this common or normalized time scale.
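The core of DTW is a dynamic program that finds the monotonic point-to-point alignment between two trajectories minimizing total distance, which is how runs of different lengths can be warped onto a common time scale. The following is a minimal single-variable sketch with invented trajectories; production implementations add multivariate distances, step-pattern constraints, and path recovery.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW dynamic program: cost[i, j] is the minimum cumulative
    distance aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Allowed steps: diagonal match, or stretch either trajectory.
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return cost[n, m]

# Two runs of the same batch profile, one stretched in time: DTW absorbs
# the stretch, so their warped distance is zero even though a sample-by-
# sample comparison would not be.
slow_run = np.array([1.0, 1.0, 2.0, 3.0, 3.0])
fast_run = np.array([1.0, 2.0, 3.0])
print(dtw_distance(slow_run, fast_run))
```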
Once the statistical model is created, subsequent batch runs can be compared to the model by collecting data for the batch and comparing that measured or collected data to the model data. However, to properly compare the data from each new batch run to the batch model, the new batch data must also be scaled (i.e., compressed or expanded) in time to match the normalized time used by the batch model. It is difficult to time scale batch data received from an on-going or on-line batch until that batch run is complete, as the run time of the on-going batch is unknown until the batch completes execution. Thus, batch data for new batch runs can be compared or analyzed with respect to the created batch model only after the batch run has completed execution. It is more useful, however, if the data collected from a batch run can be compared to or analyzed using the batch model while the batch run is still operating, as it is only while the batch run is still operating that changes in control parameters used to perform batch execution can be made to compensate for faults or other quality degradations within the batch. Moreover, it is helpful to be able to know, before the completion of a batch run, if that batch run is likely to result in an end product with unacceptable quality metrics. In particular, if it is known early on in the processing of a batch run that the batch run is unlikely to produce an end product with the desired quality metrics, the batch run can be halted or stopped, and the incomplete batch can be thrown away, to save processing time and energy, and to enable the equipment to be used to process other batches that will result in desired outputs.
Thus, a substantial obstacle in implementing an industrial on-line system for analyzing runs of a batch process arises because of the use of a normalized batch run time within the batch model (to compensate for the different time durations of the batch runs used to create the batch model) without the ability to know how to normalize the batch data collected from the on-line batch process. In an attempt to solve this problem, one DTW on-line implementation predicts the process variable trajectories at every scan of the batch run up to the batch stage end point. However, the prediction of these trajectories normally does not match with future batch runs. Also importantly, this on-line DTW procedure executes at every new scan, accounting for the complete trajectory of the variable being analyzed, which makes this technique bulky, expensive in terms of processor usage, and too complex for on-line implementation in a process control system. Thus, the most common approaches being implemented for an on-line batch analysis application either assume that the on-line batch being analyzed and the aligned batches used in the development of the batch model have equal time durations, or use a set of heuristic rules to align batch data during operation of the batch. However, the assumption that the current batch will be the same length in time as the normalized time of the aligned batches used to create the batch model is usually incorrect, and thus leads to poor analysis results. Moreover, the simplified heuristic rules are typically not satisfied for most applications. As a result, these techniques deliver misleading results.