Many digital transmission and recording systems require the use of codes, often called constrained codes, that impose restrictions on the channel's input data sequences. These codes are primarily designed to improve timing and gain control in a receiving unit and to shape the spectrum of the transmitted sequences so that it matches the frequency characteristics of the channel. In addition, these codes reduce intersymbol interference.
In digital transmission and storage it is often required that the channel stream have low power near zero frequency; in other words, the code must reject low-frequency components. If the code has no spectral energy at direct current (DC), it is said to be DC-free or DC-balanced.
In the early sixties, DC-free codes were used to suppress the spectral components of encoded sequences near zero frequency and were instrumental in reducing the effects of Direct Current wander (or DC wander) during data transmission through metallic components of both the transmitting and receiving systems.
In the power spectrum of a DC-balanced code, the frequency region with suppressed components is often characterized by a quantity referred to as the cutoff frequency, which is proportional to the redundancy of the coded sequence. Moreover, the power spectral density function of these codes typically has a parabolic shape in the low-frequency range from DC to the cutoff frequency. In addition to their spectral shaping properties, codes with a spectral null at DC (DC-balanced codes) possess distance properties: they increase the Euclidean distance at the output of partial response channels, which are often encountered in digital transmission over wires as well as in digital magnetic or optical recording, and can thus be used to improve the overall reliability of data transmission over relatively noisy channels.
Many other DC-free channel codes have been designed and implemented in a broad range of products with the aim of suppressing the power of the encoded data stream at low frequencies. For example, magnetic recording channels have a spectral null at DC and thus substantial low-frequency attenuation characteristics. The design of most of these codes is based upon a running digital sum constraint, wherein the running digital sum of a truncated binary sequence is defined as the accumulated sum of positive and negative 1's from the beginning of the transmitted code sequence up to the point of truncation. However, in some applications, such as magneto-optic recording or PRML magnetic recording, it is desirable to achieve a higher rejection of the low-frequency components than is possible with standard DC-free codes. To achieve such rejection, a second-order DC-free code may be used, that is, a code whose spectrum has a vanishing second derivative at zero frequency, which results in a substantial decrease of the power at low frequencies for a fixed code redundancy as compared with the more conventional first-order DC-free coding schemes.
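The running digital sum constraint can be illustrated with a short sketch (a minimal illustration, assuming the usual bipolar mapping of bits {0, 1} to symbols {-1, +1}; the function names are ours):

```python
from itertools import accumulate

def running_digital_sum(bits):
    """Map bits {0, 1} to bipolar symbols {-1, +1} and return the
    running digital sum after each transmitted symbol."""
    symbols = [2 * b - 1 for b in bits]
    return list(accumulate(symbols))

def is_dc_free(bits):
    """A codeword is DC-balanced when its final running digital sum
    is zero, i.e. it carries equal numbers of +1 and -1 pulses."""
    return running_digital_sum(bits)[-1] == 0

# Example: bits 1,0,0,1 map to +1,-1,-1,+1, whose RDS returns to zero.
print(running_digital_sum([1, 0, 0, 1]))  # [1, 0, -1, 0]
print(is_dc_free([1, 0, 0, 1]))           # True
```

Keeping this sum bounded over the whole transmitted stream is what suppresses the spectral content near DC.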
In the paper by K. A. Schouhamer Immink and G. F. M. Beenker, "Binary Transmission Codes with Higher Order Spectral Zeros at Zero Frequency", IEEE Trans. Inform. Theory, Vol. IT-33, pp. 452-454, (May 1987), a technique was presented for designing binary channel codes in such a way that both the power spectral density function and its low-order derivatives vanish at zero frequency. The performance of the new codes was compared with that of channel codes designed with a constraint on the unbalance of the number of transmitted positive and negative pulses. The power spectral density function of these codes, besides having zero power at zero frequency, has all low-order derivatives up to order 2K+1 equal to zero at zero frequency. The minimum distance of such K-th order zero disparity codes is shown to be at least 2(K+1). Furthermore, the added constraints result in a higher rejection of the components in the low-frequency range than is generally possible with first-order DC-free codes. The results indicate that the capacity of the second-order DC-free channel approaches its limiting value of 1 slowly, as shown in the following table, wherein the number M of DC²-constrained codewords and the rate R of the corresponding code are listed against the codeword length n.
     n            M        R
     4            2        0.250
     8            8        0.375
    12           58        0.488
    16          526        0.565
    20         5448        0.621
    24        61108        0.662
    28       723354        0.695
    32      8908546        0.722
    36    113093022        0.743
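The codeword counts M in the table can be reproduced for small n by exhaustive enumeration (a brute-force sketch, assuming the standard characterization of a DC²-constrained word: both the sum of the bipolar symbols and their first moment are zero; the function name is ours):

```python
from itertools import combinations

def count_dc2_codewords(n):
    """Count bipolar words of length n whose zeroth and first moments
    both vanish, i.e. sum(x_i) == 0 and sum(i * x_i) == 0."""
    if n % 2:
        return 0                 # zero disparity requires an even length
    total = n * (n + 1) // 2     # sum of the positions 1..n
    if total % 2:
        return 0                 # an odd moment target cannot be split evenly
    count = 0
    # Choose the positions that carry +1; the remaining positions carry -1.
    for plus in combinations(range(1, n + 1), n // 2):
        # sum(i * x_i) = 2 * sum(plus positions) - total
        if 2 * sum(plus) == total:
            count += 1
    return count

for n in (4, 8, 12):
    print(n, count_dc2_codewords(n))  # 2, 8, 58, matching the table
```

The enumeration is exponential in n, which is exactly why such counts are tabulated rather than computed on the fly for the longer codewords of practical interest.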
As a result, second order DC-free codes with rates of more practical interest are generally only possible if the codeword length is relatively large.
In another paper by E. Eleftheriou and R. D. Cideciyan, "On Codes Satisfying Mth-Order Running Digital Sum Constraints", IEEE Trans. Inform. Theory, Vol. 37, No. 5, pp. 1294-1313, (September 1991), multi-level sequences with a spectral null of order L at frequency f, where the power spectral density and its first 2L+1 derivatives vanish at frequency f, were characterized by finite state transition diagrams whose edge labels satisfy bounds on the variation of the L-th order running digital sum. Distance properties of this class of codes on partial response channels with a spectral null of order P were examined, and a lower bound on the minimum Euclidean distance at the output of the partial response channels was obtained. However, this state-diagram-based approach to code construction may lead to a relatively low rate in those cases where it is desired to keep the complexity low; e.g., a rate ≥ 0.80 requires more than 140 states in the state diagram.
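The L-th order running digital sum referred to above can be sketched as L nested accumulations of the bipolar sequence (a minimal illustration of the quantity itself; the paper's state-diagram construction and its bounds are not reproduced here):

```python
from itertools import accumulate

def rds_order(symbols, order):
    """Return the running digital sum of the given order: order 1 is
    the ordinary RDS, order 2 accumulates the order-1 sums, and so on."""
    seq = list(symbols)
    for _ in range(order):
        seq = list(accumulate(seq))
    return seq

# A word whose first- and second-order running sums both return to zero,
# i.e. a word with a second-order spectral null at DC.
word = [1, -1, -1, 1]
print(rds_order(word, 1))  # [1, 0, -1, 0]
print(rds_order(word, 2))  # [1, 1, 0, 0]
```

Bounding the variation of `rds_order(..., L)` along the stream is what the edge labels of the finite state transition diagrams enforce.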
Thus, existing second-order DC-free coding schemes often require relatively sophisticated and complex hardware and/or software for encoding and decoding, and this complexity becomes prohibitive as the length of the codeword increases. For instance, in a direct encoding scheme that employs one or more relatively large lookup tables for storing the codewords, such as that disclosed by Immink, "Coding Techniques for Digital Recorders", New York: Prentice Hall, (1991), the size of the required lookup table(s) grows exponentially with the codeword length. For a codeword length of about 28 or more, this direct approach becomes impractical since the hardware/software required to implement it becomes prohibitively complex.
An alternative enumerative coding scheme, also based on lookup table techniques, is disclosed in K. A. Schouhamer Immink, "Spectrum Shaping with Binary DC²-Constrained Channel Codes", Doctoral Thesis, pp. 49-62, reprinted in Philips Journal of Research, Vol. 40, pp. 40-53, (1985), wherein a class of DC-free codes is taught having the second derivative of the code spectrum vanishing at zero frequency, i.e., a subclass of second-order DC-free codes. It is shown therein that, for fixed redundancy, the cutoff frequency of DC²-balanced codes is approximately a factor of 2.5 smaller than that of classical DC-balanced codes. Nevertheless, a substantial decrease of the power at low frequencies for a fixed code redundancy is observed. Although the paper presents an enumeration technique for finding the number of codewords to be used in a DC²-balanced code, relatively long codewords have to be used in order for the new codes to have any generally practical rate, thereby also making this technique of encoding and decoding prohibitively complex. For a codeword of length n, this scheme requires that one pre-compute and store in a table the number of vectors of length t with first-order disparity d_x and second-order disparity d_y for all t = 1, 2, . . . , n and for all possible values of d_x and d_y. Although this technique is better than direct encoding, codeword lengths of, for instance, n > 100, and thus high information rates, remain unattainable because of the complexity.
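The pre-computed table described above can be built with a simple dynamic program (a sketch under our own conventions: we take d_x as the sum of the bipolar symbols and d_y as the sum of the running sums, i.e. the final second-order running digital sum; the thesis may use an equivalent but differently normalized pair):

```python
from collections import defaultdict

def disparity_table(n):
    """counts[t][(dx, dy)] is the number of bipolar words of length t
    with first-order disparity dx (sum of symbols) and second-order
    disparity dy (sum of the running sums)."""
    counts = [defaultdict(int) for _ in range(n + 1)]
    counts[0][(0, 0)] = 1
    for t in range(n):
        for (dx, dy), c in counts[t].items():
            for s in (+1, -1):
                # Appending symbol s raises the first-order sum to dx + s,
                # and the second-order sum grows by that new value.
                counts[t + 1][(dx + s, dy + dx + s)] += c
    return counts

# DC²-balanced words of length m are exactly the (0, 0) entries.
table = disparity_table(12)
print([table[m][(0, 0)] for m in (4, 8, 12)])  # [2, 8, 58]
```

Unlike the direct lookup of all codewords, the table here grows only polynomially with n, which is the appeal of enumerative encoding; the point made above is that even this cost becomes burdensome once n exceeds roughly 100.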