Networks used for transferring data often make use of payload compression to reduce the bandwidth requirements of the network. One example of data compression is de-duplication, which shortens repeated data patterns in a data stream. An example method of de-duplication is as follows:

1. The compressor identifies a repeated data pattern (denoted Gi) in the data stream.
2. The compressor replaces each occurrence of Gi with a pointer Ni. The pointer is a reference to the pattern Gi in a compression table (also referred to as a "code book" or state table).
3. The compressor repeats steps 1 and 2 for each repeated data pattern.
4. The compressor sends the data stream to the decompressor with each Gi replaced by the corresponding Ni. The compressor also sends the compression table (or any changes to the compression table) to the decompressor.
5. The decompressor receives the data stream and replaces each occurrence of a pointer Ni with the corresponding Gi.
6. The data stream is then read by the receiver.
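The steps above can be sketched in Python. This is a minimal illustration only: the names (`compress`, `decompress`, `PATTERN_LEN`) and the simplification to fixed-length, aligned patterns are assumptions made for the sketch, not part of the method as described.

```python
PATTERN_LEN = 4  # fixed-length patterns, an illustrative simplification


def compress(data: bytes):
    """Replace repeated patterns Gi with pointers Ni (steps 1-4)."""
    table = {}    # pointer Ni -> pattern Gi (the "code book")
    reverse = {}  # pattern Gi -> pointer Ni
    out = []
    i = 0
    while i < len(data):
        chunk = data[i:i + PATTERN_LEN]
        if len(chunk) == PATTERN_LEN and data.count(chunk) > 1:
            # Repeated pattern found: assign a pointer if it is new,
            # then emit the pointer in place of the pattern.
            if chunk not in reverse:
                n = len(table)
                table[n] = chunk
                reverse[chunk] = n
            out.append(("ptr", reverse[chunk]))
            i += PATTERN_LEN
        else:
            out.append(("lit", chunk))  # non-repeated bytes pass through
            i += len(chunk)
    return out, table  # the stream plus the compression table to send


def decompress(stream, table) -> bytes:
    """Replace each pointer Ni with the corresponding Gi (step 5)."""
    return b"".join(table[v] if kind == "ptr" else v for kind, v in stream)
```

A round trip with `decompress(*compress(data))` recovers the original data, since every pointer in the stream resolves through the transmitted table.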
More advanced compression technologies may include transformations that increase the probability that a duplicate can be found for a given sequence. Advanced compression technologies include header compression, e.g. Robust Header Compression (RoHC).
In practical applications, the compressor will not act on the entire media stream at once. Instead, the compressor will continually compress the stream as it is transmitted, identifying and compressing repeated data patterns as it goes along. This allows the stream to be transmitted in real time. However, when the compressor is first initialised, it must "learn" the repeated patterns in the data that flows through it, and so the compression algorithm takes significant time to converge (i.e. approach maximum efficiency). This is shown in FIG. 1.
If the compression software is updated, the existing compression table may not be usable. In this case, the updated software will be required to generate a new compression table. While the updated software may run more efficiently once converged, the flow of data over the network will be much less efficient until convergence is achieved, as shown in FIG. 2. This is clearly a problem for network operators, who must then balance long-term efficiency against short-term disruption to the network.
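The convergence behaviour above, and the loss of efficiency when an upgrade discards the table, can be illustrated with a small sketch. The function name, the 1-byte pointer cost, and the fixed chunk size are all assumptions made for illustration.

```python
def transmitted_size(block: bytes, table: dict, pattern_len: int = 4) -> int:
    """Bytes sent for one block; a pattern already in the table
    costs a 1-byte pointer, a new pattern is sent in full and learned."""
    sent = 0
    for i in range(0, len(block), pattern_len):
        chunk = block[i:i + pattern_len]
        if chunk in table:
            sent += 1                  # known pattern -> short pointer
        else:
            table[chunk] = len(table)  # learn the pattern as data flows
            sent += len(chunk)         # first occurrence sent uncompressed
    return sent


table = {}
block = b"abcdefghijkl"
before = [transmitted_size(block, table) for _ in range(3)]
table = {}  # software upgrade: old table unusable, learning restarts
after = [transmitted_size(block, table) for _ in range(3)]
```

The first pass of each run transmits the block in full; subsequent passes shrink sharply once the table is populated. Resetting the table reproduces the initial inefficiency, which is the reconvergence cost of FIG. 2.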
Therefore, there is a need to ensure that software upgrades can be performed seamlessly in a system such as that shown in FIG. 3. The compression software includes both a compression function and a decompression function. In the arrangement shown in FIG. 3, only the compressor is used in the left-hand node, and only the decompressor in the right-hand node. However, such a system may support bi-directional flows of data.
There are no known solutions for seamless upgrades to compression software. Existing architectures will exhibit unpredictable system performance during an upgrade such as that described above.