The present invention relates to a coding method and recording medium, a decoding method, a demodulation method, an error-correcting method, and a recording-medium reproducing apparatus, for subjecting information data to an error-correcting coding and a modulation based on symbol correspondence rules to create channel data, recording the channel data onto a recording medium, and subjecting channel data reproduced from the recording medium to a demodulation based on symbol correspondence rules and an error-correcting decoding to reconstruct the information data.
Among error-correcting methods, the turbo code method has been attracting attention, mainly in the communication field, by virtue of its performance approaching the theoretical limit of the rate at which error-free transmission can be achieved (namely, the Shannon limit).
Further, studies applying the turbo code method not only to the above-noted communication field but also to the recording medium field have been actively published.
A recording and reproducing apparatus using this turbo code is explained briefly. FIG. 27 is a schematic diagram of a recording and reproducing apparatus which performs coding and decoding processes of turbo codes. A convolutional coder 1 performs convolutional coding on inputted information data ui to output code data ci. An interleaver 2 performs a pseudo-random substitution on the inputted code data ci to output channel data ai. The channel data ai outputted in this way is transmitted to a partial response (hereinafter, abbreviated as PR) channel 3. This PR channel 3 has a property that adjacent channel data ai interfere with each other. As a result, intersymbol interference occurs in the reproduced signal y′i reproduced from the PR channel 3. Also, the channel data ai, when passing through the PR channel 3, undergoes deformation such as noise addition, band limiting or crosstalk. Therefore, the reproduced signal y′i reproduced from the PR channel 3 has errors added thereto.
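The recording side of FIG. 27 may be sketched, for illustration, as follows. A simple nonrecursive rate-1/2 convolutional code with generators (7, 5) in octal and a fixed seeded permutation are assumed here; FIG. 27 does not specify a particular code or interleaver.

```python
import random

def convolutional_encode(u):
    # Rate-1/2 feedforward convolutional code, generators (7, 5) octal.
    # Illustrative choice only; the coder 1 of FIG. 27 is not specified.
    state = [0, 0]
    c = []
    for bit in u:
        c.append(bit ^ state[0] ^ state[1])   # g0 = 1 + D + D^2
        c.append(bit ^ state[1])              # g1 = 1 + D^2
        state = [bit, state[0]]
    return c

def make_interleaver(length, seed=0):
    # Pseudo-random substitution pi: code position i -> channel position pi[i].
    perm = list(range(length))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(c, perm):
    a = [0] * len(perm)
    for i, p in enumerate(perm):
        a[p] = c[i]
    return a

u = [1, 0, 1, 1, 0]
c = convolutional_encode(u)          # code data c_i
perm = make_interleaver(len(c))
a = interleave(c, perm)              # channel data a_i sent to the PR channel 3
```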
A logarithmic-likelihood computing unit 4, to which the reproduced signal y′i is inputted, computes logarithmic likelihoods L(y′i|yi), outputting the logarithmic likelihoods L(y′i|yi) of the reproduced signal y′i. It is noted here that yi is a reproduced signal resulting when the PR channel 3 is ideal. The term ‘ideal’ refers to a case where there occurs no deformation due to noise or the like so that transfer characteristics of the PR channel 3 are equal to PR transfer characteristics. The logarithmic likelihoods L(y′i|yi) are inputted to a code input terminal c;I of an a posteriori probability (hereinafter, abbreviated as APP) decoder 5 for the PR channel. It is noted here that a symbol with a prime (′) indicates that the symbol is data reconstructed after reproduction, and a symbol without a prime (′) indicates that the symbol is data before recording.
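The computation of the logarithmic-likelihood computing unit 4 may be sketched as follows, under the common assumption (not stated in the text) that the deformation of the PR channel 3 is additive white Gaussian noise, so that log p(y′i|yi) is, up to a constant, a negative scaled squared distance.

```python
import math

def log_likelihood(y_obs, y_ideal, sigma):
    # Log-likelihood L(y'_i | y_i) of observing y'_i given the ideal PR
    # output y_i, assuming additive white Gaussian noise of standard
    # deviation sigma (an assumption; the constant term is dropped).
    return -((y_obs - y_ideal) ** 2) / (2.0 * sigma ** 2)

# Example: a PR channel whose ideal output levels are 0, 1 and 2.
y_obs = 1.2
lls = {y: log_likelihood(y_obs, y, sigma=0.5) for y in (0, 1, 2)}
# The ideal level closest to y_obs has the largest (least negative) value.
```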
Generally, an APP decoder has 2-input and 2-output terminals, i.e., an information input terminal u;I into which the likelihood of information data is inputted, a code input terminal c;I into which the likelihood of code data is inputted, an information output terminal u;O from which the likelihood of information data is outputted, and a code output terminal c;O from which the likelihood of code data is outputted. The APP decoder, receiving inputs of an information-data likelihood and a code-data likelihood, updates those likelihoods in compliance with constraints concerning codes. It is noted that a likelihood inputted to the information input terminal u;I is called a priori information. From the information output terminal u;O, a likelihood of updated information data is outputted. From the code output terminal c;O, a likelihood of updated code data is outputted. It is noted that the term “information data” refers to data inputted to a coder corresponding to an APP decoder, and the term “code data” refers to data outputted from the coder.
Therefore, in the PR-channel APP decoder 5, logarithmic likelihoods L(y′i|yi) of the reproduced signal y′i are inputted to the code input terminal c;I, and an output L(a′i;I) of an interleaver 11 is inputted to the information input terminal u;I. Further, from the information output terminal u;O, logarithmic-likelihood ratios L(a′i;O) of channel data a′i are outputted. It is noted that the code output terminal c;O, from which the logarithmic likelihoods L(y′i;O) of the PR channel data y′i are outputted, is left unconnected.
A subtracter 6 subtracts the output L(a′i;I) of the interleaver 11 from the logarithmic-likelihood ratios L(a′i;O) of the channel data a′i derived from the PR-channel APP decoder 5, outputting a subtraction result thereof as an Lext(a′i). That is, the subtracter 6 calculates a logarithmic-likelihood ratio difference with respect to the channel data a′i.
A deinterleaver 7 performs an inverse substitution of the aforementioned pseudo-random substitution on the Lext(a′i) inputted from the subtracter 6, outputting logarithmic-likelihood ratios L(c′i;I) of the code data c′i. In an APP decoder 8 for convolutional codes, the logarithmic-likelihood ratio L(c′i;I) derived from the deinterleaver 7 is inputted to the code input terminal c;I, while a zero is inputted to the information input terminal u;I. Then, a logarithmic-likelihood ratio L(u′i;O) of the information data u′i is outputted from the information output terminal u;O, while a logarithmic-likelihood ratio L(c′i;O) of the code data c′i is outputted from the code output terminal c;O. Thus, the logarithmic-likelihood ratio L(u′i;O) of the information data u′i outputted from the information output terminal u;O of the convolutional-code APP decoder 8 is binarized by a comparator 9 and outputted as reconstructed information data u′i.
A subtracter 10 receives an input of the logarithmic-likelihood ratio L(c′i;O) of the code data c′i outputted from the code output terminal c;O of the APP decoder 8 for convolutional codes as well as an input of the logarithmic-likelihood ratio L(c′i;I) of the code data c′i derived from the deinterleaver 7. Then, the logarithmic-likelihood ratio L(c′i;I) is subtracted from the logarithmic-likelihood ratio L(c′i;O), and a subtraction result thereof is outputted as an Lext(c′i). That is, the subtracter 10 calculates a logarithmic-likelihood ratio difference with respect to the code data c′i.
The interleaver 11 performs the aforementioned pseudo-random substitution on the Lext(c′i) inputted from the subtracter 10, outputting logarithmic-likelihood ratios L(a′i;I) of the channel data a′i. The logarithmic-likelihood ratio L(a′i;I) of the channel data a′i outputted from the interleaver 11 in this way is inputted to the information input terminal u;I of the PR-channel APP decoder 5 as described above.
The operation of performing iterative decoding by repeatedly delivering logarithmic-likelihood ratios between the two APP decoders of the PR-channel APP decoder 5 and the convolutional-code APP decoder 8 as described above is referred to as turbo decoding. With the use of this turbo decoding, errors of the reconstructed information data u′i can be decreased. In this case, at a first-time decoding operation, the L(a′i;I) to be inputted to the information input terminal u;I of the PR-channel APP decoder 5 is assumed to be zero.
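The iterative exchange described above may be sketched as the following loop skeleton. The two APP decoders are passed in as functions and are not implemented here (in practice each would run the BCJR algorithm); the positive-LLR-means-1 sign convention of the comparator is likewise an assumption of this sketch.

```python
def turbo_decode(ch_ll, app_pr, app_cc, perm, iterations=4):
    # Skeleton of the turbo-decoding loop of FIG. 27: the two APP decoders
    # exchange only extrinsic information (output LLR minus input LLR).
    # app_pr(ch_ll, prior) and app_cc(code_ll) are stand-ins for the
    # PR-channel APP decoder 5 and the convolutional-code APP decoder 8.
    n = len(ch_ll)
    L_a_in = [0.0] * n                 # a priori input; zero on the first pass
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i                     # deinterleaver = inverse permutation
    for _ in range(iterations):
        L_a_out = app_pr(ch_ll, L_a_in)
        ext_a = [o - i for o, i in zip(L_a_out, L_a_in)]   # subtracter 6
        L_c_in = [ext_a[perm[i]] for i in range(n)]        # deinterleaver 7
        L_u_out, L_c_out = app_cc(L_c_in)
        ext_c = [o - i for o, i in zip(L_c_out, L_c_in)]   # subtracter 10
        L_a_in = [ext_c[inv[i]] for i in range(n)]         # interleaver 11
    # Comparator 9: binarize (assuming positive LLR means bit 1).
    return [1 if l > 0 else 0 for l in L_u_out]
```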
It is noted that the principle of operation of the turbo decoding is described in detail, for example, in Reference 1 “Iterative Correction of Intersymbol Interference: Turbo-Equalization,” European Transactions on Telecommunications, Vol. 6, No. 5, pp. 507–511, September–October 1995,” and Reference 2 “Concatenated Codes and Iterative (Turbo) Decoding for PRML Optical Recording Channels,” Joint International Symposium on Optical Memory and Optical Data Storage 1999, SPIE Vol. 3864, pp. 342–344, July 1999.”
In that case, as described above, information to be inputted to the code input terminal c;I of the PR-channel APP decoder 5 needs to be soft information like the logarithmic likelihoods L(y′i|yi). Each piece of information to be delivered between the two APP decoders 5, 8 also needs to be soft information like L(a′i;O), Lext(c′i), L(a′i;I), L(c′i;O), Lext(a′i) and L(c′i;I).
In the case where the PR channel 3 is a recording medium, i.e., in the case of a system which performs recording and reproduction on media such as magnetic recording, magneto-optical recording and optical recording, there exist constraints such as band limiting of the PR channel 3, intersymbol interference, clock synchronization and the like. Therefore, the Run Length Limited (hereinafter, referred to as RLL) method is usually used for the modulation method. Generally, RLL data is expressed as RLL(d, k). It is noted here that “d” and “k” represent the minimum and maximum run lengths of 0's in a channel data train according to the NRZI (non-return-to-zero-inverted) rules.
Referring to the RLL in more detail, polarity inversion intervals of recording waveform trains are limited to a minimum polarity-inversion interval Tmin and a maximum polarity-inversion interval Tmax. That is, inversion intervals T of recording waveform trains are within the limits of Tmin≦T≦Tmax. Generally, the minimum polarity-inversion interval Tmin is expressed as (d+1)×Tw. The maximum polarity-inversion interval Tmax is expressed as (k+1)×Tw. It is noted here that “Tw,” which denotes the width of a detection window for reproduced signals, is Tw=η×Tb and is equal to the greatest common measure of polarity-inversion intervals. It is noted that “Tb” denotes a data interval before modulation. The symbol “η,” called the code rate, is equal to m/n. That is, pre-modulation m bits are transformed into post-modulation n bits.
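The relations above, and the RLL condition itself, may be sketched as follows. The RLL(1, 7) parameters with m=2, n=3 used in the example are one common choice; any other (d, k, m, n) combination substitutes directly.

```python
def rll_timing(d, k, m, n, Tb):
    # Timing parameters of an RLL(d, k) code:
    #   code rate eta = m/n, detection window Tw = eta * Tb,
    #   Tmin = (d+1) * Tw, Tmax = (k+1) * Tw.
    eta = m / n
    Tw = eta * Tb
    return (d + 1) * Tw, (k + 1) * Tw, Tw

def satisfies_rll(nrzi_bits, d, k):
    # Check that every run of 0's between consecutive 1's in an NRZI
    # channel data train is at least d and at most k.
    run = None
    for b in nrzi_bits:
        if b == 1:
            if run is not None and not (d <= run <= k):
                return False
            run = 0
        elif run is not None:
            run += 1
    return True

# Example: RLL(1, 7) with m=2, n=3, so eta = 2/3 and Tw = (2/3) * Tb.
Tmin, Tmax, Tw = rll_timing(1, 7, 2, 3, Tb=1.0)
```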
RLL modulation and RLL demodulation are generally performed by logical operation circuits. Otherwise, the modulation and demodulation are achieved by preparatorily storing the results of the logical operations in a ROM (Read-Only Memory) and referring to this ROM as a table. Therefore, both the input data and the output data of the RLL demodulation are hard information.
FIG. 28 is a schematic diagram of a prior-art recording and reproducing apparatus in which the RLL modulation method is applied to transmission and reproduction of information to the PR channel. An error-correcting coder 15 performs error-correcting coding on inputted information data ui, and outputs code data ci. An RLL modulator 16 performs RLL modulation on inputted code data ci, and outputs channel data ai. The channel data ai outputted in this way is transmitted to a PR channel 17.
As described above, a reproduced signal y′i reproduced from the PR channel 17 has errors added thereto. The maximum likelihood (hereinafter, abbreviated as ML) decoder 18 estimates channel data a′i from the inputted reproduced signal y′i, based on the intersymbol interference due to the characteristics of the PR channel 17 and on the RLL condition, and outputs the estimated data. It is noted here that the term “RLL condition” means that inversion intervals T of recording waveform trains are within the limits of Tmin≦T≦Tmax. The ML decoding, which is generally calculated according to the Viterbi algorithm, is often called Viterbi decoding. It is noted here that the ML decoder 18 in principle outputs estimation results as hard information. That is, the estimated channel data a′i is binary data.
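A hard-output Viterbi detector of this kind may be sketched for a minimal PR(1,1) channel, whose ideal output is yi = ai + ai−1. This is an illustrative channel model only, and the sketch enforces only the intersymbol-interference trellis, not the RLL condition.

```python
def viterbi_pr2(y, big=1e18):
    # Hard-output Viterbi (ML) detection for a PR(1,1) channel:
    # ideal sample y_i = a_i + a_{i-1}, with channel bits a_i in {0, 1}.
    # State = previous channel bit; branch metric = squared Euclidean
    # distance between the observed and the ideal sample.
    metrics = {0: 0.0, 1: big}        # assume the channel starts in state 0
    paths = {0: [], 1: []}
    for sample in y:
        new_m, new_p = {}, {}
        for s in (0, 1):              # next state = current channel bit
            best = None
            for prev in (0, 1):
                m = metrics[prev] + (sample - (s + prev)) ** 2
                if best is None or m < best:
                    best, arg = m, prev
            new_m[s] = best
            new_p[s] = paths[arg] + [s]
        metrics, paths = new_m, new_p
    # Survivor of the state with the smallest final metric: binary a'_i.
    return paths[min(metrics, key=metrics.get)]

# Noisy PR(1,1) observations of the channel data a = [1, 1, 0, 1]:
a_hat = viterbi_pr2([0.9, 2.1, 1.2, 0.8])
```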
An RLL demodulator 19 performs a demodulation on the reconstructed channel data a′i, outputting reconstructed code data c′i. An error-correcting decoder 20 performs on the inputted code data c′i correction of the errors added in the PR channel 17, outputting reconstructed information data u′i.
The method in which data reproduced from the PR channel 17 is processed for ML decoding in the manner as described above is called PRML (Partial Response Maximum Likelihood) method, and is widely used for recording-medium reproducing apparatus.
FIG. 29 is a block diagram showing the construction of a prior-art RLL demodulator 19. Reconstructed channel data a′i outputted from the ML decoder 18 shown in FIG. 28 is inputted to a p-stage shift register 21. Generally, the number p of stages of this shift register 21 is not less than n. The p-stage shift register 21 shifts data in steps of the interval Tw, outputting parallel data (a′1, a′2, . . . , a′k, . . . , a′p). A logical operation circuit 22, receiving inputs of the parallel data (a′1, a′2, . . . , a′k, . . . , a′p), performs the above-described logical operations, outputting post-demodulation parallel data (c′1, c′2, . . . , c′j, . . . , c′m). An m-stage shift register 23 with a parallel load function performs parallel loading of the post-demodulation parallel data (c′1, c′2, . . . , c′j, . . . , c′m), and shifts the data in steps of the interval Tb, outputting post-demodulation serial data c′i. The above logical operation and parallel loading are performed synchronously every interval (m×Tb).
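The combination of shift register and logical operation circuit (or ROM table) described above amounts to mapping each n-bit channel word back to an m-bit data word, which may be sketched as follows. The 2-bit-to-1-bit table below is a toy mapping invented for illustration, not a standardized RLL code.

```python
def rll_demodulate(channel_bits, table, n, m):
    # Table-lookup RLL demodulation, mirroring the structure of FIG. 29:
    # the shift register presents one n-bit channel word per interval
    # (m x Tb), and the logical operation circuit (or ROM) maps it to
    # an m-bit data word.  Hard bits in, hard bits out.
    assert all(len(v) == m for v in table.values())
    out = []
    for i in range(0, len(channel_bits), n):
        word = tuple(channel_bits[i:i + n])
        out.extend(table[word])
    return out

# Toy rate-1/2 demodulation table (m=1, n=2); purely illustrative.
DEMOD_TABLE = {(0, 1): [1], (0, 0): [0]}
c_rec = rll_demodulate([0, 1, 0, 0, 0, 1], DEMOD_TABLE, n=2, m=1)
```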
As described above, the RLL demodulator 19, receiving an input of hard-information (i.e., binarized) channel data a′i, performs demodulation on the data, outputting hard-information (i.e., binarized) post-demodulation code data c′i.
However, the above-described prior-art recording and reproducing apparatus has the following problems.
As described above, in the case where the RLL modulation method is used, the RLL demodulator 19 outputs the code data c′i as hard information, and the channel data a′i inputted to the RLL demodulator 19 also needs to be hard information. On the other hand, the turbo decoding method, which uses the two APP decoders, namely the PR-channel APP decoder 5 and the convolutional-code APP decoder 8 as shown in FIG. 27, requires the reconstructed code data c′i to be inputted as soft information.
Therefore, in making up a recording and reproducing apparatus which adopts the above PR channel 17 as a recording medium, the use of the RLL demodulator 19 makes the turbo decoding method unusable, so that the PRML method has to be used instead. As a result, there is a problem that the recording density achievable on the recording medium is lower than in the case where a turbo decoder is used.