The performance of utilities grids (their reliability, safety, and efficiency) can be substantially improved by sensing key grid parameters and using those results to direct the operations and maintenance of the grid: identifying faults, directing appropriate responses, and enabling active management such as incorporating renewable sources into electrical grids while maintaining power quality.
Sensor networks are often used to monitor utilities grids. These networks may include smart meters at the edges of the grid, sensors at grid nodes, and sensors on or around the utilities lines, which measure grid parameters such as flow rates in water grids, power quality in electrical grids, or line pressures. These sensors are transducers, usually outputting analog signals representative of the measured properties. Those outputs must be characterized to map them to specific values of the measured properties, and/or classified so that they may represent particular states of the world, such as a potential leak that requires investigation or an increase in reactive power when a renewable resource is incorporated into an electrical grid. Characterization of sensors is usually done through bench testing, even though deployed sensors may be subject to various interferences in their surrounding environment; in-situ characterization of sensors on a utility grid monitoring network would be preferable, but it is difficult given the large number of sensors used to monitor a utilities grid and the inaccessibility of many of those sensors.
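As an illustration of the characterization step described above, the sketch below fits a linear calibration curve mapping a transducer's raw analog readings to physical values, using reference values observed under known conditions. This is a minimal sketch assuming a linear sensor response; the function names and the example data are hypothetical, not drawn from any particular deployment.

```python
# Hypothetical in-situ characterization sketch: least-squares fit of a
# linear calibration curve, reference = gain * raw + offset, mapping a
# sensor's raw analog output (e.g. volts) to the measured property
# (e.g. line pressure). Assumes a linear transducer response.

def fit_linear_calibration(raw, reference):
    """Fit gain and offset so that reference ~= gain * raw + offset."""
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(reference) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, reference))
    var = sum((x - mean_x) ** 2 for x in raw)
    gain = cov / var
    offset = mean_y - gain * mean_x
    return gain, offset

def calibrate(raw_reading, gain, offset):
    """Map a raw analog reading to a physical value."""
    return gain * raw_reading + offset

# Hypothetical example: raw voltages paired with reference pressures.
raw = [0.50, 1.00, 1.50, 2.00]
ref = [10.0, 20.0, 30.0, 40.0]
gain, offset = fit_linear_calibration(raw, ref)
print(calibrate(1.25, gain, offset))  # prints 25.0
```

In practice the reference values would come from bench testing or, preferably, from known injected test conditions observed in situ; nonlinear transducers would need a higher-order fit, but the mapping step is the same.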
The trend in analyzing grid sensor data and directing responses is “big data,” which uses large amounts of historical grid data to build models for classification and direction of responses. Because they mine historical data, however, these big data models are limited to correlations, which limits their effectiveness for actively directing treatments or making fine adjustments. Further, such models typically require data volumes that either prevent highly granular understanding of grid conditions at particular grid nodes or locations, or achieve such granularity only after long periods of operation. Some have applied machine learning techniques and improved models to increase speed and granularity, but even these approaches continue to rely on correlations drawn from passively collected historical data.
Signal injections have been used to highlight grid faults, such as discovering nodes where power is being illegally drawn from an AC power grid. These techniques rely on already-characterized, high-quality sensors such as smart meters, and the injections are occasional, grid-wide, individual actions, not coordinated to run concurrently or sequentially, and thus unsuitable for in-situ calibration of a large number of diverse sensors. Signal injections have also been used to test grid-wide response to large changes at high levels of the grid, such as at the HVDC distribution level. Those injections have been large, individual, and human-mediated, not amenable to automation, to smaller-scale local testing, or to concurrent or sequential implementation, and are again inappropriate for calibrating and characterizing the responses of individual local sensors in situ. To adopt signal injection for regular in-situ characterization of sensors on a highly sensorized grid, signals must be injectable concurrently and sequentially, to increase sample sizes and enable automation, without confounding any sensor's response with the effects of other signal injections.
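One way the coordination requirement above could be met, sketched under the assumption that each injection's footprint (the set of sensors its effects reach) is known in advance, is to serialize only conflicting injections and run footprint-disjoint ones concurrently. The greedy slot-assignment below is a hypothetical illustration; the injection and sensor identifiers are invented for the example.

```python
# Hypothetical scheduling sketch: assign signal injections to time slots so
# that no two injections whose effects reach overlapping sets of sensors run
# concurrently, avoiding confounded sensor responses. Injections with
# disjoint footprints share a slot (concurrent); conflicting injections get
# distinct slots (sequential). Greedy coloring of the conflict graph.

def schedule_injections(footprints):
    """footprints: dict injection_id -> set of affected sensor ids.
    Returns dict injection_id -> time-slot index."""
    slots = {}
    for inj, sensors in footprints.items():
        # Slots already taken by injections sharing a sensor with inj.
        taken = {slots[other] for other, s in footprints.items()
                 if other in slots and sensors & s}
        slot = 0
        while slot in taken:
            slot += 1
        slots[inj] = slot
    return slots

# Hypothetical footprints for three injections on a sensorized grid.
footprints = {
    "inj_A": {"s1", "s2"},
    "inj_B": {"s3"},        # disjoint from inj_A: may run concurrently
    "inj_C": {"s2", "s3"},  # conflicts with both: serialized to its own slot
}
print(schedule_injections(footprints))
# prints {'inj_A': 0, 'inj_B': 0, 'inj_C': 1}
```

Under this assumption, sample sizes grow because non-conflicting injections proceed in parallel, while each sensor still sees at most one injection's effect per slot.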
Utilities grid management would benefit greatly from real-time cause-and-effect understanding of sensor responses. Such understanding would overcome the issues with big data smart grid approaches and allow real-time, granular, fine-tuned grid monitoring and management, more fully capitalizing on the potential of the smart grid to optimize grid parameters and respond to potential grid pathologies by enabling such optimization at more local levels across these highly variant systems.