When troubleshooting a system, it may be desirable to identify a failure event, that is, a breakdown in system performance. To localize and identify the cause of such an event, it may be desirable to reconstruct the performance of the system on a small time scale at times before, during, and/or after the failure event. The farther in time the reconstruction occurs from the failure event, the more difficult the reconstruction may be. A large store of fine-grained samples of system characteristic data is therefore valuable in the event of a failure. On the other hand, due to storage considerations, it is often impractical or impossible to store all detailed system characteristic data indefinitely.
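One way to bound storage while still keeping fine-grained data for the most recent window of time is a fixed-capacity ring buffer whose contents are frozen when a failure event is detected. The sketch below is an illustrative assumption for purposes of discussion, not a mechanism described above; the class name and capacity are hypothetical.

```python
from collections import deque


class HighResBuffer:
    """Hypothetical ring buffer retaining only the most recent
    fine-grained samples, so that detailed system characteristic
    data around a recent failure event can be reconstructed
    without unbounded storage."""

    def __init__(self, capacity):
        # deque with maxlen silently discards the oldest samples
        # once the capacity is reached.
        self._samples = deque(maxlen=capacity)

    def record(self, timestamp, value):
        self._samples.append((timestamp, value))

    def snapshot(self):
        # On a failure event, freeze the recent window for analysis.
        return list(self._samples)


buf = HighResBuffer(capacity=5)
for t in range(10):
    buf.record(t, t * 0.1)
window = buf.snapshot()
# Only the five most recent samples (timestamps 5..9) survive.
```

Data older than the buffer window is unavailable at full resolution, which is why reconstruction becomes harder the farther the event lies in the past.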
With limits on how much data may be stored for a given time period, various statistical operations may be performed on the data in order to compress the information. This compressed information may then be used to give some indication of the system's performance over time. However, these operations cannot accurately capture the behavior over time of certain system characteristics, such as the polarization mode dispersion of an optical communication system, that change rapidly and unpredictably. The more information that is lost through compression of the stored data, the more difficult it may become to troubleshoot the system or to predict an imminent failure event.
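The statistical compression described above can be sketched as follows: fine-grained samples are grouped into fixed-size buckets, and only summary statistics are retained per bucket. The bucket size and the particular statistics (min, mean, max) are illustrative assumptions, not taken from the text; the sketch also shows how a rapid transient loses its timing inside a bucket.

```python
def summarize(samples, bucket):
    """Group fine-grained samples into fixed-size buckets and keep
    only summary statistics per bucket (a hypothetical form of the
    statistical compression discussed above)."""
    out = []
    for i in range(0, len(samples), bucket):
        chunk = samples[i:i + bucket]
        out.append({
            "min": min(chunk),
            "mean": sum(chunk) / len(chunk),
            "max": max(chunk),
        })
    return out


# A brief spike (5.0) hides inside the second bucket: its exact
# timing within the bucket cannot be recovered from the summary.
raw = [1.0, 1.1, 0.9, 5.0, 1.0, 1.2]
compressed = summarize(raw, bucket=3)
```

A rapidly varying characteristic can swing many times within a single bucket, so the summary may suggest a stable value (the mean) while the extremes reveal only that something happened, not when.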
As more and more systems operate at higher data rates and reliance on these systems becomes more widespread, it will become increasingly important to be able to accurately reconstruct the state of a system at a certain recent point in time in order to troubleshoot a past failure event, or to accurately predict a future failure event based on information collected over extended periods of time.