FIG. 1 depicts a conventional architecture for exchanging communications over a distributed processing network. An application in a client 100 generates data elements and forwards them to a queue 104. The data elements remain in the queue 104 until a messaging application, such as a Java Message Service (JMS) application, packetizes the data elements and transmits the packets over the wide area network (WAN) 108 to an application in the server 112. The occupancy of, and residence time in, the queue 104 depend on a variety of factors, including the available bandwidth and resources of the WAN 108.
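The client-to-queue-to-server path of FIG. 1 can be sketched as follows. This is a minimal illustration only: an in-memory BlockingQueue stands in for the queue 104, a list stands in for the far-end application in the server 112, and the class and method names (QueuePath, produce, transmitAll) are hypothetical, not part of the described system or the JMS API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Illustrative stand-in for the client/queue/transport path of FIG. 1.
 *  The real system uses a messaging service such as JMS over a WAN; here
 *  a BlockingQueue models the queue 104 and a list models the server side. */
class QueuePath {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(64); // queue 104
    private final List<String> delivered = new ArrayList<>();                 // server 112 side

    /** The client application forwards a data element to the queue. */
    void produce(String element) throws InterruptedException {
        queue.put(element); // element resides in the queue until transmitted
    }

    /** The messaging application drains the queue, element by element. */
    void transmitAll() {
        String e;
        while ((e = queue.poll()) != null) {
            delivered.add(e); // models packetizing and sending over the WAN 108
        }
    }

    List<String> delivered() { return delivered; }
}
```

In this per-element form every data element crosses the modeled transport individually, which is the overhead that motivates the bulking described next.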
Because of packet acknowledgement delays, buffering capacity limitations, and the substantial processing and resource overhead of sending single data elements packet-by-packet over the WAN 108, the messaging application groups, or bulks, data elements into multi-member sets, or bulks, and compresses each set using a suitable compression algorithm. The compressed sets are then placed in the queue 104, and each set is transmitted, in a single packet, to the far-end application. In one configuration, a fixed number of data elements, or bulk size, is used to initiate delivery. In another configuration, the bulk size varies in response to a timeout interval; the timeout interval determines the number of data elements in the bulk delivery and depends on predetermined characteristics. Bulking and compressing messages can conserve the memory space allocated to the queue 104 and reduce consumption of network resources.
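The bulking-and-compression step described above can be sketched as follows, assuming a size trigger and a timeout trigger as the two configurations mention. The class and method names (Bulker, add, flush), the length-prefixed concatenation, and the choice of GZIP as the compression algorithm are illustrative assumptions, not details given in the description.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPOutputStream;

/** Illustrative bulker: accumulates data elements and emits one compressed
 *  bulk when either a fixed bulk size is reached or a timeout expires. */
class Bulker {
    private final int bulkSize;        // fixed membership-size trigger
    private final long timeoutMillis;  // timeout-interval trigger
    private final List<byte[]> pending = new ArrayList<>();
    private long firstElementTime = -1;

    Bulker(int bulkSize, long timeoutMillis) {
        this.bulkSize = bulkSize;
        this.timeoutMillis = timeoutMillis;
    }

    /** Adds one data element; returns a compressed bulk when a trigger fires, else null. */
    byte[] add(byte[] element, long nowMillis) throws IOException {
        if (pending.isEmpty()) firstElementTime = nowMillis;
        pending.add(element);
        boolean sizeTrigger = pending.size() >= bulkSize;
        boolean timeTrigger = nowMillis - firstElementTime >= timeoutMillis;
        return (sizeTrigger || timeTrigger) ? flush() : null;
    }

    /** Concatenates pending elements (length-prefixed) and compresses the set. */
    byte[] flush() throws IOException {
        ByteArrayOutputStream rawBytes = new ByteArrayOutputStream();
        DataOutputStream raw = new DataOutputStream(rawBytes);
        for (byte[] e : pending) {
            raw.writeInt(e.length); // length prefix so the far end can split the bulk
            raw.write(e);
        }
        pending.clear();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(rawBytes.toByteArray()); // GZIP chosen here as the "suitable" algorithm
        }
        return out.toByteArray(); // one compressed set, sent as a single packet
    }
}
```

In this sketch both triggers are fixed at construction time, which mirrors the static behavior of the conventional applications; the limitation discussed next is precisely that neither the bulk size nor the compression choice adapts at run time.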
Problems arise when connectivity between the client 100 and the server 112 is disrupted or degraded by intermediate node outages or traffic congestion. Current messaging applications use a single, time-invariant compression algorithm and either a constant bulk size or a bulk membership having predetermined, unchangeable characteristics. They are therefore unable to adjust either the bulk size or the degree of compression dynamically in response to data rates and outage duration.