A variety of industrial as well as non-industrial applications use fuel burning boilers, which typically operate to convert chemical energy into thermal energy by burning one of various types of fuels, such as coal, gas, oil, waste material, etc. An exemplary use of fuel burning boilers is in thermal power generators, wherein fuel burning boilers generate steam from water traveling through a number of pipes and tubes within the boiler, and the generated steam is then used to operate one or more steam turbines to generate electricity. The output of a thermal power generator is a function of the amount of heat generated in a boiler, wherein the amount of heat is directly determined by the amount of fuel consumed (e.g., burned) per hour.
In many cases, power generating systems include a boiler which has a furnace that burns or otherwise uses fuel to generate heat which, in turn, is transferred to water flowing through pipes or tubes within various sections of the boiler. A typical steam generating system includes a boiler having a superheater section (having one or more sub-sections) in which steam is produced and is then provided to and used within a first, typically high pressure, steam turbine. To increase the efficiency of the system, the steam exiting this first steam turbine may then be reheated in a reheater section of the boiler, which may include one or more sub-sections, and the reheated steam is then provided to a second, typically lower pressure, steam turbine. While the efficiency of a thermal-based power generator is heavily dependent upon the heat transfer efficiency of the particular furnace/boiler combination used to burn the fuel and transfer the heat to the water flowing within the various sections of the boiler, this efficiency is also dependent on the control technique used to control the temperature of the steam in the various sections of the boiler, such as in the superheater section and in the reheater section.
The steam turbines of a power plant are typically run at different operating levels at different times to produce different amounts of electricity based on energy or load demands. For most power plants using steam boilers, the desired steam temperature setpoints at the final superheater and reheater outlets of the boilers, as well as other settings within the system, are kept constant, and it is necessary to maintain the steam temperatures, as well as other operating parameters, close to their preestablished setpoints (e.g., within a narrow range) at all load levels. These setpoints are, in many cases, set using manufacturer-supplied reference values and correction curves.
As is known, the efficiency of the operation of power plants, including steam generating or turbine power plants, is based on a number of factors within the plant, including not only the operating state of the equipment, but also the type of control being applied at any particular time. In past decades, power plants, and especially power plants coupled to and providing power on the public power grid, were generally run at fairly constant outputs, and thus could be optimized over time using various techniques developed by the ASME. At the present time, however, the power (electricity) market is moving to a deregulated market, which allows for, and in fact encourages, constantly changing the amount of power being placed on the power grid by any particular utility or power plant based on market factors. This change in the marketplace leads to a situation in which the power being generated by a particular plant is typically in flux or changing. This factor, in conjunction with the fact that the market is moving to ISO types of structures, has led to the increased role of computer control and diagnostic systems, which is rendering previous performance methodologies obsolete. In particular, several key aspects of these previous performance methodologies, including the use of manufacturer-based reference values and correction curves, may lead to highly imprecise and inadequate evaluation of plant performance, especially considering operational behavior in a dynamic electricity market, in which it is very important to be able to quantify plant performance quickly and accurately in order to profitably supply power in changing market conditions.
The plant performance methodology that is currently used to implement performance monitoring in power plants was developed more than 20 or 30 years ago for power units operating under the conditions then expected in the power industry. That methodology was developed based on, and corresponded to, the American and Western European standards of the 1960s and 70s, which put a premium on reliability (and not necessarily on efficiency). While this methodology, at the time, brought many significant advantages in the form of an improved quality of performance monitoring, it has been rendered outdated by the current dynamic, deregulated character of the power generation industry. This obsolescence is due to two factors: (1) the advancement of computer technology, which allows for the common use of digital automatic control systems, and (2) systemic changes in the power energy market. As a result, this older performance analysis approach has become less viable as a true index of plant capability.
Generally speaking, the plant performance monitoring methodology that is currently being implemented to measure plant performance is based on calculating the unit chemical energy usage rate (using ASME power test codes) and then assigning measured loss deviations of the unit chemical energy usage rate from the expected value (i.e., a nominal value resulting from the last design or warranty measurements) as a result of operating the unit at parameters other than the nominal parameters. The basic parameters whose influence over the unit heat rate is usually taken into consideration include main steam pressure, main steam temperatures, pressure decrease in the superheater (SH), reheat steam temperature (RH), pressure in the condenser, temperature of feedwater, oxygen content in flue gas, and flue gas temperature. While the number of these parameters has been extended many times, the theoretical basis of this method has stayed the same, in which the deviation in unit heat rate [kJ/kWh] ([BTU/kWh]) is usually converted to dollars per hour ($/h) for a more intuitive presentation of the data. Systems such as this, which are based on ASME, TKE or similar methodologies, have been introduced in practically all power plants. With the modernization of automatic control systems, these methods have developed into on-line systems which perform all of the performance monitoring calculations, e.g., every several minutes, and present the results on operator display screens at the distributed control system or at auxiliary computer displays, to enable the operators to see the loss in efficiency of the plant and the cost due to current operating conditions.
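The loss-deviation bookkeeping described above can be sketched as follows. All reference values, correction factors, fuel cost, and load figures in this sketch are invented for illustration only; they are not taken from any OEM or test code.

```python
# Hypothetical sketch of the classical loss-deviation calculation: each
# monitored parameter's deviation from its reference value is mapped through
# a per-parameter correction factor to a heat-rate deviation [kJ/kWh], and
# the total is then converted to a cost rate [$/h].
# All names and numeric values below are illustrative assumptions.

FUEL_COST = 3.0e-6          # $/kJ of chemical energy (illustrative)
LOAD_KW = 500_000           # current unit load in kW (illustrative)

# parameter -> (reference value, correction factor in kJ/kWh per unit deviation)
REFERENCE = {
    "main_steam_temp_C":    (540.0, 1.2),    # per degC below reference
    "main_steam_press_MPa": (16.7, 25.0),    # per MPa below reference
    "condenser_press_kPa":  (5.0, -40.0),    # per kPa above reference
}

def heat_rate_deviation(measured: dict) -> float:
    """Sum per-parameter heat-rate deviations [kJ/kWh], treating each
    parameter as independent (the very assumption criticized in the text)."""
    total = 0.0
    for name, value in measured.items():
        ref, factor = REFERENCE[name]
        total += (ref - value) * factor
    return total

def cost_rate(measured: dict) -> float:
    """Convert the total heat-rate deviation [kJ/kWh] to a cost rate [$/h]."""
    return heat_rate_deviation(measured) * LOAD_KW * FUEL_COST

current = {"main_steam_temp_C": 535.0,
           "main_steam_press_MPa": 16.5,
           "condenser_press_kPa": 5.5}
print(round(heat_rate_deviation(current), 2), "kJ/kWh")  # prints 31.0 kJ/kWh
print(round(cost_rate(current), 2), "$/h")               # prints 46.5 $/h
```

This is the calculation an on-line monitoring system would repeat every few minutes before displaying the resulting $/h figure to the operators.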
While the ASME performance monitoring methodology is effective when properly implemented, it has drawbacks. In particular, it is apparent, after so many years (and after many platform revisions), that there are basic problems with applying the current performance monitoring applications, due in large part to the use of original equipment manufacturer (OEM) provided “reference values” and “correction curves” that define the controlled (i.e., measured) losses from a particular operating point within the power plant. More particularly, in the current performance measuring system, most performance deviations (losses) are calculated (or monitored) based on deviations from a set of so-called “reference values,” which are usually the nominal values given by the OEM. However, for devices that often have a 10-20 year life cycle, and that may have been modernized numerous times during their life, the OEM supplied reference values do not constitute a real reflection of the actual, as-found parameters within a particular power plant. Additionally, the present ASME methodology assigns the influence of operational parameter deviation (deviations in temperature, pressure, etc. during plant operation) from the assumed nominal values (i.e., the assumed achievable, design, or theoretical values) using the manufacturer's so-called “correction curves.” Leaving aside the accuracy of these correction curves in the first place (as there are common problems with obtaining these correction curves), the basis of this theory relies on defining the influence of deviations in the current operating parameters from the nominal or reference value on the unit heat rate (efficiency).
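The way a single OEM correction curve is applied can be sketched as a tabulated function evaluated by linear interpolation: the curve maps a parameter's deviation from its reference value to a heat-rate correction. The curve points below are hypothetical, not actual manufacturer data.

```python
# Sketch of applying one OEM-style correction curve by linear interpolation.
# The curve points are invented for illustration; a real curve would come
# from manufacturer documentation (and, as the text notes, may not reflect
# the aged, as-found unit at all).

from bisect import bisect_left

# (deviation of main steam temperature from reference in degC,
#  resulting heat-rate correction in kJ/kWh) -- hypothetical values
MAIN_STEAM_TEMP_CURVE = [(-20.0, 26.0), (-10.0, 12.5), (0.0, 0.0),
                         (10.0, -11.5), (20.0, -22.0)]

def correction(curve, deviation):
    """Linearly interpolate the heat-rate correction for a given deviation,
    clamping to the curve's endpoints outside its tabulated range."""
    xs = [x for x, _ in curve]
    ys = [y for _, y in curve]
    if deviation <= xs[0]:
        return ys[0]
    if deviation >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, deviation)
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (deviation - x0) / (x1 - x0)

# A unit running 5 degC below the reference temperature:
print(correction(MAIN_STEAM_TEMP_CURVE, -5.0))  # prints 6.25
```

Note that the curve is a function of one parameter only; this is precisely the single-variable treatment whose validity is questioned in the following paragraphs.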
Unfortunately, the manufacturer's data, in the form of both the reference values and the correction curves, does not necessarily correspond to the real, dynamic operation of a particular maintained unit. Instead, this data is, at best, indicative of the average or assumed steady-state performance of a new unit. There is thus a serious theoretical problem with assigning a deviation for a given control value in a particular plant, which may not operate the same as the new unit for numerous reasons, based on these reference values and curves in the first place. Moreover, when building a correction curve, the manufacturer assumes that it is possible to make a clear assignment of the influence of a given operating parameter value on the unit heat rate without considering any other operating parameter. In other words, it is assumed that operating variables such as pressure, temperature, etc., can be treated as independent variables, which allows the method to apply balance calculations using the correction curves to calculate the effect of a change in an individual parameter on the plant efficiency (unit heat rate). In actual practice, however, a strong inter-relationship or interdependence exists between the various plant operating parameters. For example, various operating parameters are known to be highly interrelated in the form of the turbine equation. As a result, while the current performance methodology assumes that it is possible to modify one parameter without changing other parameters, during normal operation of the plant it is not possible to change one parameter without changing a few others. Additionally, the relationships between these parameters are not only dependent on the thermodynamic dependencies (balance), but are also influenced by the operation of the automatic control system that is actually controlling the unit. These relationships are simply ignored in the current methodology.
In practice, therefore, when changing one of the main unit operational parameters, the automatic control system shifts the unit into a different operating point by also modifying the other parameters.
Because of these factors, deviations assigned using OEM correction curves cease to have any practical significance. For example, suppose that, at a given moment, deviations of the unit heat rate are assigned for a series of main parameters, and a negative deviation is obtained for one of the parameters (resulting from the difference between its current value and its nominal or reference value). If this difference is then cancelled (i.e., the parameter is brought to the nominal or reference value to reduce the deviation), the other parameters will not remain unchanged, even though the performance methodology assumes that they will. This real-life operation results in an entirely different set of parameter values, which will have their own differences from the corresponding reference values, resulting in a completely different set of deviations to be corrected.
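The interdependence problem described above can be illustrated with a toy model. The coupling factor and the `control_response` function below are entirely invented for illustration; they stand in for the real (and far more complex) combined thermodynamic and control-system behavior of a unit.

```python
# Toy illustration (not a real plant model) of the interdependence problem:
# "correcting" one parameter back to its reference value causes the control
# system to move another parameter, so the recomputed deviations differ from
# what the independent-parameter methodology predicts.
# The 8.0 degC-per-MPa coupling factor below is a made-up assumption.

REF = {"pressure_MPa": 16.7, "temp_C": 540.0}

def control_response(state, param, target):
    """Hypothetical coupled response: restoring main steam pressure to its
    reference also drops the main steam temperature (invented coupling)."""
    new = dict(state)
    delta = target - new[param]
    new[param] = target
    if param == "pressure_MPa":
        new["temp_C"] -= 8.0 * delta   # assumed coupling: 8 degC per MPa
    return new

def deviations(state):
    """Deviation of each parameter from its reference value."""
    return {k: REF[k] - v for k, v in state.items()}

state = {"pressure_MPa": 16.2, "temp_C": 540.0}
print(deviations(state))   # pressure deviates, temperature does not
state = control_response(state, "pressure_MPa", REF["pressure_MPa"])
print(deviations(state))   # pressure "fixed", but temperature now deviates
```

Cancelling the pressure deviation has produced a new temperature deviation, so the operator is presented with a different loss picture rather than a reduced one, which is the practical failure mode the text describes.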
Still further, there is a problem with applying statistical balance models to assign losses during load following (i.e., dynamic) unit operation using the current ASME performance measurement methodology. In particular, the models used in current performance monitoring methodologies are based on a strictly static approach, i.e., on the static operation of the plant. As a result, a good thermal (or quasi-static) isolation of the unit operation is needed to obtain relevant performance monitoring results using these models. In the simplest approach, this static isolation requires a momentary stabilization of unit power and its principal parameters. However, in the power generation conditions associated with the present (ISO or deregulated) market, using a strictly static approach is simply impossible. In fact, the entire theory behind operating a unit that actively participates in the power market assumes operation during dynamic (ramping or transitional) states.
Still further, the current approach for obtaining good global performance results is to perform diverse processing of static performance data, which averages the results from various sites (considering the normal distribution of calculation errors and the influence of dynamic states), thereby canceling momentary errors. However, using this methodology for temporary (dynamic) performance monitoring is questionable at best.