In serial-based communications systems, data is transmitted serially between network nodes in a bit stream, which comprises a sequence of discrete bits evenly spaced by a bit period. The speed at which a node can process a bit stream in order to examine the information contained therein is determined by, among other factors, the data rate of the bit stream and the bit stream processing latency. The data rate of a bit stream is the inverse of the bit period, and the bit stream processing latency is the delay from the time the bit stream arrives at the node to the time the individual bits are available for inspection.
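The relationship between bit period and data rate can be sketched with a short, illustrative calculation; the numeric values below are hypothetical and not taken from the source.

```python
# Hypothetical example: the data rate of a bit stream is the inverse
# of the bit period (the spacing between successive bits).
bit_period_s = 1e-9                 # assumed bit period of 1 nanosecond
data_rate_bps = 1 / bit_period_s    # roughly 1e9 bits per second (~1 Gb/s)
print(data_rate_bps)
```

Halving the bit period doubles the data rate, which is why faster links place correspondingly tighter demands on the node's processing path.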
In conventional systems, the data rate is limited for a variety of reasons. For example, an incoming bit stream is usually stored for processing in a random access memory (RAM) buffer, such as a dynamic RAM (DRAM) buffer or a static RAM (SRAM) buffer. Thus, the data rate of a bit stream can be limited by the speed of these memories.
Furthermore, the entire bit stream is typically stored in the RAM buffer before bits of the bit stream may be inspected. Accordingly, the processing latency depends on the size of the incoming bit stream and the rate at which the bit stream can be handled.
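The latency penalty of store-before-inspect buffering can be modeled with a simple, assumed relationship: if the whole stream must land in the buffer first, the delay before any bit is visible grows with stream size and shrinks with handling rate. The function name and figures below are illustrative, not from the source.

```python
# Hypothetical model of store-then-inspect latency: the entire stream
# must be buffered before any individual bit can be examined.
def buffering_latency_s(stream_bits: int, handling_rate_bps: float) -> float:
    """Seconds until the buffered stream is fully stored and inspectable."""
    return stream_bits / handling_rate_bps

# e.g. a 1 Mb stream handled at 100 Mb/s is unavailable for ~10 ms
print(buffering_latency_s(1_000_000, 100e6))
```

This is why, as noted above, latency scales with both the incoming stream's size and the rate at which it can be handled: a larger capture or a slower memory path delays inspection proportionally.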
As another example, processor intervention may be required to handle the incoming bit stream. The speed of the processor, therefore, can further limit the data rate of the bit stream. Processors can also be relatively costly and require substantial additional hardware and software resources for functions unnecessary for high-speed bit stream capture. If, however, a processor already dedicated to another task is shared, then processing the bit stream reduces the performance of the shared processor.