1. Field of the Invention
The present invention relates to a communication device and a communication control method using a communication protocol with data loss compensation functions provided at both upper and lower layers.
2. Description of the Related Art
In the case of transmitting data through an unstable channel such as a radio transmission path, in which errors occur with high probability, it is customary to provide functions for data re-transmission, error correction, etc., in the link layer protocol for the purpose of compensating for the data loss due to channel errors.
At a link layer adopting such a data loss compensation function, when data are re-transmitted to compensate for the data loss, the data transmission delay increases considerably during that time. The data transmission delay is also affected by changes in the redundancy of the error correction used at the link layer. For these reasons, when TCP/IP (Transmission Control Protocol/Internet Protocol), which has its own data loss compensation function, is used as the upper layer protocol, for example, it is known that a time-out re-transmission by the TCP can occur due to the variation of the transmission delay caused at the link layer. On the other hand, since most of the data loss can be compensated at the link layer, the re-transmission by the TCP is actually unnecessary in many cases, and such re-transmission wastefully consumes channel capacity.
The above noted phenomenon becomes prominent when a plurality of TCP connections having relatively close RTT (Round Trip Time) values share the same channel having the data loss compensation function. This is the case, for example, when a WWW (World Wide Web) browser such as NETSCAPE NAVIGATOR (registered trademark) or INTERNET EXPLORER (registered trademark) is utilized, since such a browser in general sets up a plurality of TCP connections simultaneously. The TCP carries out flow control by changing a window size, and when a TCP transmission terminal receives an ACK (Acknowledgement), the terminal immediately transmits the packets that have become transmittable upon receiving this ACK. As a result, there is little possibility for the transmission of other packets to interrupt the continuous transmission of packets of a specific TCP connection. Consequently, packets of the same TCP connection tend to be transmitted in lumps.
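The lumped transmission described above can be illustrated by a toy model of ACK-clocked sending; the connection names and window sizes here are illustrative assumptions only, not values taken from the description:

```python
from collections import deque

def transmit_order(connections):
    """Toy model: each time a connection receives an ACK, it immediately
    sends its whole window of newly transmittable packets before any other
    connection gets a turn, so packets of the same connection appear on
    the channel in lumps rather than interleaved."""
    channel = []
    acks = deque(connections)            # order in which ACKs arrive
    while acks:
        conn, window = acks.popleft()
        channel.extend([conn] * window)  # burst of `window` packets
    return channel

order = transmit_order([("A", 4), ("B", 4)])
# → ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']: lumps, not interleaving
```

In this model no packet of connection B can appear between two packets of connection A, which is the tendency noted above.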
For example, when a frame of the link layer containing packet data of a given TCP connection (which will be referred to as TCP connection A) is lost due to channel errors, the link layer protocol re-transmits the data while raising the error correction redundancy, using a scheme such as FEC (Forward Error Correction). For this reason, the transmission delay is increased considerably during that time.
In addition, when subsequent frames are also transmitted with the same raised error correction redundancy until the channel state recovers, the effective bandwidth of the channel is reduced, so that the transmission delay increases further. The TCP transmission terminal can receive the ACK of the TCP connection A relatively quickly, so it is highly likely that this terminal can deal with this situation without the time-out occurring, by gradually increasing the time-out value of the TCP connection A.
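The reduction of the effective bandwidth mentioned above follows directly from the code rate of a block FEC code: only k of every n transmitted symbols carry data. The raw channel rate and the (n, k) parameters below are illustrative assumptions, not values from the description:

```python
def effective_bandwidth(raw_bps: float, k: int, n: int) -> float:
    """Effective (data-carrying) bandwidth of a channel protected by an
    (n, k) block FEC code: k data symbols per n transmitted symbols."""
    return raw_bps * k / n

# Light redundancy under good channel conditions (assumed rate 9/10):
normal = effective_bandwidth(1_000_000, k=9, n=10)    # 900,000 bps
# Redundancy raised after errors occur (assumed rate 1/2):
degraded = effective_bandwidth(1_000_000, k=1, n=2)   # 500,000 bps
assert degraded < normal
```

With the same offered traffic, the lower effective bandwidth means frames queue for longer at the link layer, which is the further increase of the transmission delay noted above.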
However, as the channel is occupied for a relatively long period of time by the TCP connection A, the TCP transmission terminal cannot receive ACKs for any TCP connections other than the TCP connection A during this period of time. For this reason, the time-out re-transmission becomes likely to occur for these other TCP connections.
This situation becomes even more prominent when the following burst error condition is also satisfied. Note that the above described link layer protocol, which increases the error correction redundancy at a time of the occurrence of an error, is designed to deal with this burst error condition.
In a channel such as a radio channel in which errors occur in bursts, the wasteful time-out re-transmission by the TCP can occur even more easily. Namely, in the case of bursty errors, the transmission delay is small during the relatively long periods in which no error occurs, so that the time-out value of the TCP, which is adaptively set according to the RTT observed by the TCP transmission terminal, remains small, and a state is gradually reached in which the time-out can occur more easily. Then, when the bursty errors occur, the transmission delay increases abruptly, the adaptive control of the time-out value of the TCP cannot keep up with the change, and the possibility of the time-out re-transmission becomes high.
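The adaptive time-out behavior described above can be sketched with the standard TCP retransmission time-out estimator (RFC 6298), in which the time-out tracks a smoothed RTT plus a multiple of the RTT variance. The constants ALPHA, BETA, and K are the RFC's recommended values; the sample delays are illustrative assumptions, and the 1-second minimum RTO recommended by the RFC is omitted for clarity:

```python
ALPHA, BETA, K = 1 / 8, 1 / 4, 4     # RFC 6298 recommended constants

def update_rto(srtt, rttvar, sample):
    """Update smoothed RTT, RTT variance, and the retransmission
    time-out (RTO) from one new RTT measurement."""
    if srtt is None:                  # first measurement
        srtt, rttvar = sample, sample / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + K * rttvar
    return srtt, rttvar, rto

# A long error-free period with small, steady delays drives the RTO
# down toward the small observed RTT...
srtt = rttvar = rto = None
for sample in [0.10] * 50:            # 100 ms RTT samples, assumed
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)

# ...so an abrupt delay spike (burst errors plus link-layer
# re-transmission, assumed 1.5 s) far exceeds the RTO, and a spurious
# TCP time-out re-transmission results.
burst_delay = 1.5
assert burst_delay > rto
```

Because the variance term decays toward zero during the error-free period, the estimator has no headroom left when the burst arrives, which is precisely why the adaptive control cannot keep up with the abrupt change.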