1. Field
The present disclosure relates generally to communications and, more particularly, to communications that use convolutional coding.
2. Background
Convolutional encoders encode a packet (or other unit) of information bits serially by running a finite state machine (FSM) with information bits as inputs and coded bits as outputs. An example of a conventional convolutional encoder is shown at 11 in FIG. 1, having an input that receives the input information bits serially, three storage cells D that sequentially store each input bit for use in encode logic operations, and three outputs, coded bits C0, C1 and C2. The encoding complexity is linear in the packet length L. This complexity becomes a performance bottleneck for high-speed applications that require high encoding throughput for long packets.
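The serial encoder of FIG. 1 can be sketched in software as a 3-cell shift register feeding XOR (parity) networks, one per coded output. The generator tap masks `G` below are illustrative assumptions for this sketch; the actual encode logic wired to the storage cells D in FIG. 1 is not specified here.

```python
# Software sketch of a serial convolutional encoder in the style of FIG. 1:
# three storage cells D (a shift register) and three coded outputs C0-C2
# per input bit. The tap masks G are hypothetical, chosen only to make the
# sketch concrete.

G = (0b1011, 0b1101, 0b1111)  # hypothetical taps over (input, D2, D1, D0)

def encode_serial(bits, state=0):
    """Encode one input bit per 'clock cycle'.

    `state` holds the three storage cells D as an integer; each output
    triple (C0, C1, C2) is a parity (XOR) over the tapped positions.
    """
    out = []
    for b in bits:
        reg = (b << 3) | state            # input bit alongside the cell contents
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                  # shift: input enters, oldest bit drops
    return out, state

# Encoding L input bits takes L iterations: complexity linear in packet length.
```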
Existing solutions apply look-ahead techniques that unroll the state machine n steps in time by providing logic that, for every input bit, produces the corresponding coded bits C0-C2 as a function of only the input bits and the initial encode state (i.e., the values initially stored in the storage cells D of FIG. 1). These look-ahead techniques can speed up performance by a factor of n, as indicated by comparing FIGS. 1 and 2. In the encoder of FIG. 1, n clock cycles are required to encode a sequence of n input bits. In contrast, in the look-ahead encoder 21 of FIG. 2, all n input bits are received and encoded in parallel, so all 3n coded bits associated with the n input bits are produced in a single clock cycle. However, for large n, the logic complexity and critical path of a look-ahead encoder increase significantly, so the look-ahead technique becomes impractical at some point as n (the degree of unrolling) increases.
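The n-step look-ahead idea can be illustrated as follows: because a convolutional encoder is linear over GF(2), each of the 3n coded bits is a fixed XOR of the n input bits and the initial state bits, so all 3n outputs can be formed combinationally in one cycle once those XOR masks are derived. The sketch below derives the masks by symbolic unrolling; the tap masks `G` and all function names are assumptions for illustration, not taken from the figures.

```python
# Sketch of n-step look-ahead unrolling as in FIG. 2: precompute, for each
# of the 3n coded bits, its XOR mask over the n input bits and the 3
# initial state bits, then evaluate all masks in parallel. The tap masks G
# are hypothetical.

G = (0b1011, 0b1101, 0b1111)  # hypothetical taps over (input, D2, D1, D0)

def encode_serial(bits, state=0):
    """Reference bit-at-a-time encoder (one input bit per clock cycle)."""
    out = []
    for b in bits:
        reg = (b << 3) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def lookahead_masks(n):
    """Unroll n steps symbolically: symbol k (k < n) is input bit k,
    symbol n+i is initial state bit i; each wire is a bitmask of symbols."""
    cells = [1 << (n + 2), 1 << (n + 1), 1 << n]    # cells [D2, D1, D0]
    masks = []
    for t in range(n):
        reg = [1 << t] + cells                      # [input, D2, D1, D0]
        for g in G:
            m = 0
            for k in range(4):
                if (g >> (3 - k)) & 1:              # XOR in each tapped wire
                    m ^= reg[k]
            masks.append(m)
        cells = reg[:3]                             # shift register advances
    return masks

def encode_parallel(bits, state, masks):
    """One parity computation per coded bit, no sequential recursion:
    all 3n outputs could be produced in a single clock cycle in hardware."""
    word = sum(b << t for t, b in enumerate(bits)) | (state << len(bits))
    return [bin(word & m).count("1") & 1 for m in masks]
```

The mask table grows as n increases (each mask spans up to n + 3 inputs), which mirrors the logic-complexity and critical-path growth that makes large degrees of unrolling impractical.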
It is desirable in view of the foregoing to provide an alternative approach for increasing encoding throughput in convolutional encoders.