Field of the Invention
This invention relates to a video signal decoding apparatus, and more particularly is suitably applied to a recording and reproducing apparatus for recording and reproducing a moving picture signal on a recording medium such as a magneto-optical disc or a magnetic tape, and to a receiving apparatus of a television conference system for sending and receiving a moving picture signal via a transmission line.
Conventionally, in a system for transmitting a moving picture signal to a remote place, such as a television conference system, a video telephone system, a broadcasting system or the like, a method is adopted in which the video signal is compressed and coded by using the line correlation and the correlation between frames, so as to use the transmission line efficiently. For example, when the line correlation is used, the video signal can be compressed by orthogonal transform coding (for example, the discrete cosine transform (DCT)). Furthermore, when the correlation between frames is used, the video signal can be compressed still further.
Normally, frame images adjacent to each other in time do not change greatly. Therefore, when the difference between the images is calculated, the difference signal has a small value, and coding this difference signal reduces the code amount. However, the original image cannot be restored by sending the difference signal alone. Consequently, a method is adopted in which each frame image is converted into one of three kinds of picture, an I picture, a P picture or a B picture, before the video signal is compressed and coded.
The coding method is shown in FIGS. 1A and 1B. In this compressing and coding method, a series of frames is processed in units of seventeen frames (frames F1 through F17). This unit of processing is referred to as a group of pictures. Starting from the frame F1, the first three frames of the group of pictures are coded into an I picture, a B picture and a P picture, respectively. Thereafter, the frames from the fourth frame F4 onward are alternately coded into B pictures and P pictures.
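The assignment of picture types within one group of pictures can be sketched as follows. This is an illustrative sketch only, not the patent's apparatus; the strict B/P alternation applied from F4 onward is an assumption drawn from the description above.

```python
def picture_types(gop_size=17):
    """Return the picture type (I, B or P) assigned to frames F1..Fgop_size."""
    types = []
    for n in range(1, gop_size + 1):
        if n == 1:
            types.append("I")   # F1: coded as it is (intra)
        elif n % 2 == 0:
            types.append("B")   # F2, F4, ...: bidirectionally predicted
        else:
            types.append("P")   # F3, F5, ...: forward predicted
    return types

print(picture_types())
```

For the seventeen-frame group described above, this yields I, B, P for F1 through F3 and then B and P alternately up to F17.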
Here, the I picture is a picture obtained by coding one frame portion of the video signal as it is. Further, as shown in FIG. 1A, the P picture is basically a picture obtained by coding the difference in the video signal with respect to the I picture or P picture located ahead of it in time. In addition, as shown in FIG. 1B, the B picture is basically a picture obtained by coding the difference in the video signal with respect to the average value of the frame located ahead of it in time and the frame located behind it in time. This coding method is referred to as bidirectional prediction coding.
For reference, three kinds of coding method are actually used for the B picture in addition to the bidirectional prediction coding method. In the first processing method, the original frame F2 is transmitted as it is as the transmission data. This method is referred to as intra coding, and is the same method as that used for the I picture. In the second processing method, the difference from the frame F3 located behind in time is calculated and transmitted. This is referred to as backward prediction coding.
Furthermore, in the third processing method, the difference from the frame F1 located ahead in time is transmitted. This is referred to as forward prediction coding.
Then, at coding time, the data coded by whichever of the aforementioned four coding methods reduces the transmission data to the least amount is adopted as the B picture.
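The mode decision described above can be sketched as follows. This is a hedged illustration: the "cost" used here is simply the sum of absolute residuals standing in for the real coded data amount, and the frames are one-dimensional pixel lists rather than actual frame images.

```python
def select_b_mode(prev, cur, nxt):
    """Pick, for frame `cur` (F2), whichever of the four B-picture coding
    methods yields the least data; `prev` is F1 and `nxt` is F3."""
    candidates = {
        "intra":    list(cur),                                  # send as it is
        "forward":  [c - p for c, p in zip(cur, prev)],         # diff vs F1
        "backward": [c - n for c, n in zip(cur, nxt)],          # diff vs F3
        "bidirectional": [c - (p + n) / 2                       # diff vs average
                          for c, p, n in zip(cur, prev, nxt)],
    }
    # Adopt the method whose residual (proxy for transmission data) is smallest.
    return min(candidates, key=lambda m: sum(abs(v) for v in candidates[m]))
```

For example, a frame lying exactly between its neighbours selects the bidirectional mode, since the difference from the average of the two frames is zero.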
In an actual coding apparatus, the video signal in these frame formats (I picture, P picture or B picture) is further converted into a block format signal and transmitted as a bit stream.
This block format is shown in FIGS. 2A to 2C. As shown in FIGS. 2A to 2C, the frame format video signal comprises V lines, each line comprising H dots.
One frame of the video signal is segmented into N slices, taking sixteen lines as one unit, and each slice comprises M macro blocks. Each macro block comprises a luminance signal corresponding to 16×16 pixels (dots), and this luminance signal is segmented into blocks Y[1] through Y[4] of 8×8 dots each. Color difference signals Cb and Cr of 8×8 dots each correspond to the luminance signal of 16×16 dots.
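The block-format arithmetic above can be sketched as follows. The frame size used in the example (720×480) is an assumption for illustration, and the 4:2:0 subsampling implied by one Cb and one Cr block per macro block follows the description above.

```python
def frame_layout(h_dots, v_lines):
    """Partition a frame of v_lines lines by h_dots dots into slices,
    macro blocks, and 8x8 blocks, per the block format described above."""
    slices = v_lines // 16          # N slices of sixteen lines each
    mb_per_slice = h_dots // 16     # M macro blocks per slice
    blocks_per_mb = 4 + 1 + 1       # Y[1]..Y[4], Cb, Cr
    return slices, mb_per_slice, slices * mb_per_slice * blocks_per_mb

print(frame_layout(720, 480))
```

For a 720×480 frame this gives 30 slices of 45 macro blocks, each macro block carrying six 8×8 blocks.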
The decoding apparatus is constituted so as to obtain the video signal by receiving, via the recording medium or the transmission line, the bit stream converted into the block format, and decoding it.
By the way, in the case where some error is present in the received bit stream, the decoding apparatus inserts an error start code D_ES into the bit stream and provides this data to the variable length decoding circuit in the rear stage.
The variable length decoding circuit normally analyzes and decodes the subsequently input bit stream, provides the decoded bit stream to the circuit in the rear stage, and provides each kind of control parameter to each part in the rear stage. The variable length decoding circuit is constituted so that the decoding operation is suspended in the case where such an error start code D_ES is detected.
At the same time as this suspension, the variable length decoding circuit proceeds to an operation of retrieving the next synchronous code from the bit stream, and skips the decoding of the subsequently input bit stream up to the position where the synchronous code is detected. Then, the variable length decoding circuit is constituted so as to resume the decoding operation when the synchronous code is detected.
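The recovery behaviour described above can be sketched as follows. The byte patterns used for the error start code and the synchronous code here are assumptions for illustration, not the actual code values of the apparatus.

```python
ERROR_START = b"\x00\x00\x01\xb4"   # assumed error start code D_ES
SYNC_CODE   = b"\x00\x00\x01\x01"   # assumed slice-header synchronous code

def resume_position(stream, error_pos):
    """Return the offset at which decoding resumes: the next synchronous
    code at or after error_pos, skipping everything in between."""
    nxt = stream.find(SYNC_CODE, error_pos)
    return nxt if nxt >= 0 else len(stream)   # no sync code found: skip to end
```

When the error start code is detected, decoding is suspended and resumed from the offset this function returns; everything between the error position and the synchronous code is discarded.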
This state will be explained by using FIG. 3. As shown in FIG. 3, the bit stream of the image data comprises header parts and data parts. Here, the frame header denotes the header of a frame, while the slice header denotes the header of each slice which constitutes the frame. Furthermore, a macroblock (MB) header denotes the header of each macro block which constitutes each of these slices, and the following block data denotes the actual data in each block.
A synchronous code is inserted into the slice header and the frame header, but not into the macroblock header. Consequently, the unit in which the variable length decoding circuit resumes the decoding operation is normally a slice or a frame.
At this time, the address in the vertical direction of the screen is written in the slice header, and the address in the horizontal direction of the screen is written in the macroblock header at the head of the slice. Consequently, when the decoding operation is resumed, the position on the screen of the reproduced image can be judged correctly.
However, in the case where an error is detected during the decoding operation, it is difficult to judge from what position of the bit stream the error has been present, and it is therefore impossible to judge whether the decoding performed up to that time is correct or not. Since the bit stream is variable-length coded data, an error cannot always be detected immediately even when an error is generated in a bit, because the erroneous pattern may still fit a valid variable length code.
For example, consider a bit stream "00111011101." In variable length decoding processing, this bit stream is originally to be decoded as "001," "110" and "11101." However, when an error in the bit stream turns it into "0111011101," the bit stream is variable-length decoded as "01," "11," "01," "11" . . . .
However, even when the bit stream is variable-length decoded by mistake in this way, it is impossible to judge from which position the variable length decoding processing went wrong. In other words, even when an error is generated, the error cannot always be detected immediately.
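The desynchronisation described above can be demonstrated with a toy prefix code. The code table below is an assumption for illustration, not the real coding table: with it, a stream that has lost one bit still decodes into valid symbols, so no error is detected at all even though the output is wrong.

```python
CODE = {"0": "A", "10": "B", "110": "C"}    # assumed variable-length code table

def vlc_decode(bits):
    """Greedily match codewords; a non-empty leftover buffer means the
    stream ended inside a codeword (the only detectable error here)."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in CODE:          # valid for a prefix code: no codeword
            out.append(CODE[buf])  # is the prefix of another
            buf = ""
    return out, buf

good = "10" + "110" + "0"   # encodes B, C, A
bad = good[1:]              # one bit lost in transmission
print(vlc_decode(good))
print(vlc_decode(bad))
```

Here the corrupted stream "01100" decodes cleanly as A, C, A: every bit pattern fits a valid codeword, so the decoder has no way to notice that the symbols are wrong.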
Consequently, whether or not the decoding operation is completed within the time allocated to the data processing of one frame depends on the error position and the detection position. This will be explained by using FIGS. 4A to 4D.
First, in the case where the error is detected at the same time as it is generated, as shown in FIG. 4A, and in the case where the error is detected soon after it is generated, as shown in FIG. 4B, the image can be reproduced without exceeding the processing time when the decoding operation is resumed from the following synchronous code. For reference, in the example shown in FIG. 4B, the case in which "the error is detected soon after the error is generated" means that the image data obtained by the variable length decoding up to the error detection belongs to the inside of the slice where the error is present. Consequently, the time required for decoding up to this point is within the time allotted for the decoding of the slice.
However, as shown in FIGS. 4C and 4D, there are cases where the number of pixels of the image data decoded between the generation of the error and its detection exceeds the number of pixels belonging to the original slice, namely cases where the process has progressed past the position delimited by the subsequent synchronous code. In such cases, when the decoding operation is resumed from the position of the synchronous code detected after the error detection, the image data located at the same position is decoded twice, so that there is a problem in that the time available for the decoding operation becomes insufficient.
One example of this is shown in FIGS. 5A and 5B. Because of an error in the variable length decoding operation, the slice constituted of the 40th to 50th macroblocks is decoded as if macroblocks after the 50th were present. In this example, the variable length decoding circuit detects the error at the time when the 53rd macroblock is obtained. Then, the synchronous code provided in the subsequent slice header is detected, and the decoding operation is resumed. At this time, the macroblock address recorded in that slice header indicates the macroblocks from the 51st onward, so that when the decoding operation is resumed, the 51st macroblock, the 52nd macroblock and so on are decoded once again. In other words, the time required for the processing of the 51st, 52nd and 53rd macroblocks is doubled.
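The overlap in FIGS. 5A and 5B can be expressed with simple arithmetic. The macroblock numbers follow the example above; the function itself is an illustration, not the patent's circuit.

```python
def doubly_decoded(slice_last_mb, mb_at_error, resume_mb):
    """Return the macroblocks decoded twice when an erroneous decode
    overruns the end of its slice before the error is detected."""
    if mb_at_error <= slice_last_mb:
        return []                         # error caught inside the slice
    return list(range(resume_mb, mb_at_error + 1))

# Slice ends at the 50th macroblock, error detected at the 53rd,
# decoding resumes from the 51st: the 51st..53rd are processed twice.
print(doubly_decoded(50, 53, 51))
```

The decoding time spent on these overlapping macroblocks is the excess that can push one-frame processing past its deadline.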
However, the time available for decoding one frame image is determined in advance. When images in the same part of the screen overlap in this manner, it sometimes happens that the decoding operation is not completed within the time for decoding one frame.