Programmable logic devices (PLDs) are a well-known type of integrated circuit (IC) that may be programmed by a user to perform specified logic functions. There are different types of programmable logic devices, such as programmable logic arrays (PLAs) and complex programmable logic devices (CPLDs). One type of programmable logic device, called a field programmable gate array (FPGA), is very popular because of a superior combination of capacity, flexibility, and cost. An FPGA typically includes an array of configurable logic blocks (CLBs) surrounded by a ring of programmable input/output blocks (IOBs). The CLBs and IOBs are interconnected by a programmable interconnect structure. The CLBs, IOBs, and interconnect structure are typically programmed by loading a stream of configuration data (a bitstream) into internal configuration memory cells that define how the CLBs, IOBs, and interconnect structure are configured. The configuration bitstream may be read from an external memory, conventionally an external integrated circuit memory such as an EEPROM, EPROM, or PROM, though other types of memory may be used. The collective states of the individual memory cells then determine the function of the FPGA.
FPGAs may be configured to provide a Viterbi decoder or to be integrated with a Viterbi decoder. Conventionally, a Viterbi decoder is used for decoding convolutionally encoded data, though a Viterbi decoder may be used with other trellis-structured transmission schemes. Use of convolutionally encoded data and Viterbi decoding is a known form of Forward Error Correction (FEC). FEC techniques are used in communications systems to improve channel capacity. Viterbi decoding is presently a requirement for third generation (3G) wireless network base stations. However, Viterbi decoding may be used in any of a variety of other known high data rate applications, including but not limited to high-definition television. For example, in a high data rate application, data bits may overlap, causing intersymbol interference (ISI). Data bits may overlap owing to data rate as well as bandwidth constraints. Moreover, other noise, such as additive white Gaussian noise (AWGN), may interfere with a transmitted data signal.
In order to accurately resolve a received transmission in the presence of noise, FEC is used. Use of FEC facilitates transmission of data in a manner that allows for a lower signal-to-noise ratio (SNR), namely, increased channel capacity, while obtaining an acceptable Bit Error Rate (BER). Thus, though SNR may be relatively small, valid data may still be decoded. Notably, in digital communication systems, SNR is conventionally expressed as Eb/No, namely, energy per bit divided by one-sided noise power spectral density.
Convolutional codes may be expressed in terms of code rate and constraint length. Code rate, k/n, is the number of bits, k, into a convolutional encoder divided by the number of channel symbols, n, output by the convolutional encoder in an encoder cycle. Two well-known output symbols for Viterbi decoding are I and Q, representing I and Q channels of a signal. These two channels are distinct, as they are modulated on respective carrier signals of the same frequency that are orthogonal to one another. Constraint length, M, is convolutional encoder length, namely, the number of k-bit stages available as input to the combinatorial logic that produces output symbols. Decisions as to k-bits received may be made with unquantized estimates of received bits (soft decision decoding) or with quantized estimates of received bits (hard decision decoding). In the latter case, each received bit is quantized to a single analog-to-digital level, forming a hard decision on the waveform.
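For illustration, the relationship between input bits, channel symbols, and code rate described above can be sketched with a simple rate-1/2 convolutional encoder. The constraint length M = 3 and the generator polynomials 7 and 5 (octal) are illustrative assumptions, not taken from the text:

```python
# Minimal sketch of a convolutional encoder with code rate k/n = 1/2 and
# constraint length M = 3. The generator polynomials (7 and 5 octal) are
# illustrative assumptions, not taken from the text above.

def conv_encode(bits, generators=(0b111, 0b101), constraint_length=3):
    """For each input bit, emit one channel symbol per generator polynomial,
    so the code rate is 1/len(generators)."""
    mask = (1 << constraint_length) - 1
    state = 0  # shift register holding the current and previous input bits
    symbols = []
    for b in bits:
        state = ((state << 1) | b) & mask
        for g in generators:
            # Each output symbol is the parity (XOR) of the tapped register bits.
            symbols.append(bin(state & g).count("1") % 2)
    return symbols

# 4 input bits (k = 4) produce 8 channel symbols (n = 8): code rate 1/2.
symbols = conv_encode([1, 0, 1, 1])
```

With two generator polynomials, each encoder cycle consumes one input bit and emits two channel symbols, matching the k/n definition above.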
An approach to higher code rates is punctured codes. A punctured code is created by generating output symbols from a convolutional encoder and then deleting one or more selected output channel symbols. Accordingly, the number of output channel symbols, n, is reduced while the number of bits, k, into the convolutional encoder is not reduced. Hence, a higher code rate may be provided with punctured codes.
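The deletion of selected channel symbols can be sketched as follows. The particular puncture pattern is an illustrative assumption, chosen so that a rate-1/2 encoder output is punctured to rate 3/4:

```python
# Minimal sketch of puncturing: delete selected channel symbols from a
# rate-1/2 encoder's output. The puncture pattern below is an illustrative
# assumption chosen to yield code rate 3/4.

def puncture(symbols, pattern):
    """Keep symbols where the (cyclically repeated) pattern holds a 1."""
    return [s for i, s in enumerate(symbols) if pattern[i % len(pattern)]]

# Per 3 input bits, a rate-1/2 encoder emits 6 symbols; this pattern keeps
# 4 of them, so the punctured code rate is k/n = 3/4.
PATTERN_3_4 = [1, 1, 1, 0, 0, 1]
```

Since k is unchanged while n shrinks from 6 to 4 per pattern period, the code rate rises from 1/2 to 3/4, as described above.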
A problem with Viterbi decoding occurs when a Viterbi decoder is not synchronized with received convolutionally encoded data. For example, phase ambiguities may exist in I and Q symbols when a phase-locked loop (PLL) locks on a wrong phase, and thus a Viterbi decoder dependent upon such a PLL locking to the correct phase will be out of synchronization. It is important to resolve such phase ambiguities, as an unsynchronized Viterbi decoder will not produce valid data. While a 180-degree phase ambiguity or out-of-phase condition may be resolved with differential encoding and decoding, other phase ambiguities need a synchronization algorithm.
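The resolution of a 180-degree ambiguity by differential encoding and decoding can be sketched as follows; the function names are illustrative. Because information is carried in bit transitions rather than bit values, inverting every received bit affects at most the first decoded bit:

```python
# Minimal sketch of differential encoding/decoding, which resolves a
# 180-degree phase ambiguity: information is carried in bit transitions,
# so inverting every received bit affects at most the first decoded bit.
# Function names are illustrative.

def diff_encode(bits):
    """d[i] = b[i] XOR d[i-1], with d[-1] taken as 0."""
    encoded, prev = [], 0
    for b in bits:
        prev ^= b
        encoded.append(prev)
    return encoded

def diff_decode(bits):
    """b[i] = d[i] XOR d[i-1]; the output (after the first bit) is the
    same even if the whole stream is inverted by a 180-degree phase error."""
    decoded, prev = [], 0
    for d in bits:
        decoded.append(d ^ prev)
        prev = d
    return decoded
```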
Normalization rate may be used to detect Viterbi decoder synchronization status. A high normalization rate, namely, one exceeding a determined threshold, indicates a loss of synchronization. Though normalization rate works for all values of Eb/No, the normalization rate threshold depends on code rate. As an example, the Qualcomm Q1900 Viterbi Decoder at a code rate of ½ has a normalization rate threshold of 10.2 percent. If a punctured code is used with a code rate of ¾, the normalization rate threshold is 1.7 percent. A code rate of ⅞ has a normalization rate threshold of 0.8 percent. Thus, as code rate increases, the margin of error decreases, making detection of loss of synchronization by normalization rate more problematic.
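The threshold comparison described above can be sketched as follows. The function and table names are illustrative assumptions; the threshold values are the code-rate-dependent figures quoted above for the Qualcomm Q1900:

```python
# Hedged sketch of normalization-rate synchronization detection. Names are
# illustrative; the threshold values are the code-rate-dependent figures
# quoted in the text for the Qualcomm Q1900 Viterbi Decoder.

# Normalization rate threshold by code rate (from the text above).
Q1900_THRESHOLDS = {"1/2": 0.102, "3/4": 0.017, "7/8": 0.008}

def sync_lost(norm_events, symbols_observed, threshold):
    """True if the observed normalization rate exceeds the threshold,
    indicating the Viterbi decoder has lost synchronization."""
    return norm_events / symbols_observed > threshold
```

Note how the shrinking thresholds for the higher punctured code rates leave less margin between a synchronized and an unsynchronized normalization rate.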
BER may be used to detect Viterbi decoder synchronization status. BER is estimated by comparing Viterbi decoder decisions to channel hard decisions. A BER exceeding a predetermined threshold is used to indicate loss of synchronization of a Viterbi decoder. Though BER works for both punctured and non-punctured codes, the BER threshold is dependent on Eb/No. In other words, such a BER threshold does not work for all values of Eb/No: for a given predetermined threshold, there exist values of Eb/No at which both synchronized and unsynchronized decoders fall on the same side of the threshold.
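The comparison of decoder decisions against channel hard decisions can be sketched as follows; the names are illustrative assumptions, and, as noted above, the threshold itself depends on Eb/No:

```python
# Hedged sketch of BER-based synchronization detection: the decoder's
# decisions (re-encoded to channel symbols) are compared against the
# channel hard decisions; a mismatch rate above a predetermined threshold
# indicates loss of synchronization. Names are illustrative; the
# threshold depends on Eb/No, as discussed in the text.

def estimate_ber(reencoded_decisions, channel_hard_decisions):
    """Fraction of symbol positions where the two streams disagree."""
    errors = sum(a != b for a, b in zip(reencoded_decisions,
                                        channel_hard_decisions))
    return errors / len(channel_hard_decisions)

def ber_sync_lost(reencoded_decisions, channel_hard_decisions, threshold):
    return estimate_ber(reencoded_decisions, channel_hard_decisions) > threshold
```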
Accordingly, it would be desirable and useful to provide a method and apparatus to detect when a Viterbi decoder is in an unsynchronized state. More particularly, it would be desirable and useful to provide a method and apparatus that reduces the probability of both synchronized and unsynchronized Eb/No values existing for a threshold and does not reduce the margin of error as much for punctured codes.