Frames of data transmitted over telecommunications lines are generally protected with a CRC code. As shown in FIG. 1, frame or message 110 is encoded by transmitting side 100, which adds redundancy in the form of Frame Check Sequence (FCS) 105 at the end of frame or message 110, to be transmitted through network 120. Receiving side 130 must then check the frame and accept it as valid before any further processing. Protocol frames are generally of variable length. This means that there is a field 140 somewhere in protocol header 160 that must be interpreted by the receiver so that the length of current frame 150 is known and the end of the frame, and thus the FCS field, can be unambiguously located in order to perform the check properly.
CRC checking has long been done ‘1-bit at a time’ with the help of a device referred to as a Linear Feedback Shift Register (LFSR), an example of which is shown in FIG. 2. The LFSR is also the convenient means by which protocols specify how frame encoding and checking must be carried out, so that a data frame encoded by the transmitter is recognized as valid by the receiver according to the protocol specifications. FIG. 2 illustrates an LFSR corresponding to the CRC-32 used by many protocols, such as the ATM Adaptation Layer 5 (AAL5) of Asynchronous Transfer Mode (ATM) or Ethernet. This particular LFSR allows the encoding and checking of frames ‘1-bit at a time’ per the following generator polynomial:

G(X) = X^32 + X^26 + X^23 + X^22 + X^16 + X^12 + X^11 + X^10 + X^8 + X^7 + X^5 + X^4 + X^2 + X + 1,

which is the degree-32 generator used by the above protocols for protecting their data transmissions.
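The bit-serial division performed by such an LFSR can be sketched in software. The following Python fragment is an illustrative sketch, not the hardware device of FIG. 2: it shifts the message through a 32-bit register one bit at a time using the generator polynomial above. It models the raw polynomial division only; protocols such as Ethernet additionally specify bit reflection and initial/final register inversion, which are omitted here for clarity.

```python
# Bit-at-a-time CRC computation with the CRC-32 generator polynomial,
# modeled as a software LFSR (raw polynomial division only; no bit
# reflection or register inversion).
POLY = 0x04C11DB7  # G(X) with the implicit X^32 term dropped

def crc32_bitwise(data: bytes, crc: int = 0) -> int:
    """Shift the message through a 32-bit LFSR one bit at a time."""
    for byte in data:
        for bit in range(7, -1, -1):               # MSB first, as transmitted
            feedback = ((crc >> 31) & 1) ^ ((byte >> bit) & 1)
            crc = (crc << 1) & 0xFFFFFFFF          # shift the register left
            if feedback:
                crc ^= POLY                        # apply the feedback taps
    return crc
```

This form of the LFSR has the usual self-checking property: if the 4-byte remainder is appended to the message, as the FCS is in FIG. 1, running the division over the whole augmented frame yields zero, which is how the receiving side can validate a frame.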
When the performance of the telecommunications lines used to access a network, such as network 120 shown in FIG. 1, dramatically improves, the simple device of FIG. 2 is no longer capable of handling the increased transmission line speeds. Numerous methods and devices have thus been proposed, and used, to process more bits together. In particular, methods to compute the CRC ‘8 bits (i.e., a byte) at a time’ have been devised in order to speed up computation accordingly. One of the earliest references on a method for computing 8 bits at a time is a paper by A. Perez, “Byte-wise CRC calculations”, published in IEEE Micro, June 1983. Because frame lengths are often, if not always, an integer number of bytes, this did not create any difficulty. Either a logic device is fast enough that it is possible to wait until the length of the current frame is extracted before actually starting the computation, or, since it is possible to assume that frame lengths are an integer number of bytes, computation can start as soon as the first Most Significant Byte (MSB) is received. Then, when the length is later extracted from the frame header, computation can be stopped on time to match the actual frame length, on a byte or 8-bit boundary.
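A byte-wise method in the spirit of the one referenced above can be sketched by precomputing the LFSR's response to each of the 256 possible input bytes, then consuming the frame one byte per step. The function names and the raw (non-reflected, non-inverted) convention below are illustrative assumptions, not details taken from the cited paper.

```python
# Table-driven, byte-at-a-time CRC-32: precompute the effect of clocking
# each possible byte through the LFSR, then process one byte per step.
POLY = 0x04C11DB7

def make_table() -> list:
    """Build the 256-entry table: entry i is the LFSR state after
    clocking byte i through an initially-zero register."""
    table = []
    for byte in range(256):
        crc = byte << 24                  # place the byte at the register top
        for _ in range(8):                # clock the LFSR 8 times
            if crc & 0x80000000:
                crc = ((crc << 1) & 0xFFFFFFFF) ^ POLY
            else:
                crc = (crc << 1) & 0xFFFFFFFF
        table.append(crc)
    return table

TABLE = make_table()

def crc32_bytewise(data: bytes, crc: int = 0) -> int:
    """Advance the CRC register by a whole byte per iteration."""
    for byte in data:
        crc = ((crc << 8) & 0xFFFFFFFF) ^ TABLE[((crc >> 24) ^ byte) & 0xFF]
    return crc
```

Each table lookup replaces eight shift-and-feedback steps of the bit-serial LFSR, which is what allows the computation to keep pace with a byte-wide data path.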
However, because telecommunication line performance has continued its dramatic growth, logic designers now have to deal with line speeds in the 10–40 Gbps (gigabits per second) range. The lower value corresponds, e.g., to an OC-192 optical carrier of the Synchronous Optical NETwork (SONET) hierarchy, a North American standard, or to the 10 Gbps version of the Ethernet standard. Because the Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) commonly used to implement the corresponding logic have not, by far, seen their performance increase accordingly, designers have to consider logic devices in which even more bits or bytes are processed together at each device cycle, in an attempt to reach the necessary level of performance. However, while protocol data frames are indeed an integer number of bytes, they are not necessarily a multiple of 4 or 8 bytes, which is now the typical number of bytes that must be processed together to match such line speeds with a cost-performance technology like Complementary Metal Oxide Semiconductor (CMOS), thus avoiding the use of expensive higher-performance technologies, such as those based on Gallium Arsenide (GaAs), that would otherwise be required. An example of the problem encountered is shown in FIG. 3, where a 19-byte frame 300, protected e.g. with the 4-byte CRC-32 discussed in FIG. 2, must be handled, for performance reasons, by a device 310 capable of computing 8 bytes at a time and storing its intermediate results in a 4-byte FCS register 330. Computation cannot start until the actual frame length is known, because frames are always transmitted MSB first. Hence, if the computation starts right away, as shown in FIG. 3, the result is wrong, because the computing device must be aligned on the last received bit or byte so that FCS 320 is indeed the true least significant portion of the frame, which is clearly not the case here.
In practice, this requires that an alignment (i.e., a left justification) be done prior to the beginning of the computation by padding enough null bytes 400 in front of the received frame, as shown in FIG. 4, so that the last cycle of the computation matches exactly with FCS field 410. In other words, the received frames must be padded with enough front null bytes so that they become a multiple of the number of bits or bytes, i.e., 8 bytes in this example, handled at each cycle of the CRC computing device. As with ordinary numbers, padding 0's in front does not affect the end result of a division, which is basically what the CRC performs on the binary string of bits forming the data frame.
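The zero-padding property can be checked with a short sketch. The Python below is illustrative only and assumes a zero-initialized register performing the raw polynomial division; with a non-zero register preset, as Ethernet specifies, leading zeros would no longer be transparent and a correction term would be needed.

```python
# Demonstrate that left-padding a frame with null bytes, as in FIG. 4,
# does not change the CRC remainder, so a frame can be front-padded to a
# multiple of the per-cycle width (8 bytes in the example) before the
# computation starts.
POLY = 0x04C11DB7

def crc32(data: bytes, crc: int = 0) -> int:
    """Raw bit-at-a-time CRC-32 division, zero-initialized register."""
    for byte in data:
        for bit in range(7, -1, -1):
            feedback = ((crc >> 31) & 1) ^ ((byte >> bit) & 1)
            crc = (crc << 1) & 0xFFFFFFFF
            if feedback:
                crc ^= POLY
    return crc

frame = bytes(range(1, 20))        # a 19-byte frame, as in FIG. 3
pad = (-len(frame)) % 8            # null bytes needed to reach a multiple of 8
padded = b"\x00" * pad + frame     # left justification: pad in front

# Feeding zero bits into a zero register leaves it at zero, so the
# remainder over the padded frame equals the remainder over the frame.
assert crc32(padded) == crc32(frame)
```

This is the division-by-zero-prefix argument of the paragraph above made concrete: a zero register stays at zero while null bytes are clocked in, so the eventual remainder is unaffected.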
Again, this implies that the length of a frame be known when starting. This conflicts with the need to begin the computation as quickly as possible in order to cope with line speed increases, while the ASIC and FPGA technologies in use do not enjoy an equivalent improvement in performance.