1. Field of the Invention
The present invention relates to a video and audio reproducing apparatus and a video and audio reproducing method, and in particular relates to a video and audio reproducing apparatus and a video and audio reproducing method for reproducing video images and sound (audio) based on video and audio streams such as MPEG (Moving Picture Experts Group) streams.
2. Description of the Related Art
Conventionally, there are known video and audio reproducing apparatuses for reproducing video images and sound based on video and audio streams. For example, there are known video and audio reproducing apparatuses for reproducing video images and sound based on MPEG streams.
Transmission and reproduction of an MPEG stream are performed as follows.
To begin with, a transmitter for transmitting an MPEG stream will be described.
The transmitter encodes a video signal to generate an MPEG video stream that is formed of multiple video frames. The transmitter also encodes an audio signal, which is synchronized with the video signal, to generate an MPEG audio stream that is formed of multiple audio frames.
Subsequently, the transmitter divides the MPEG video stream to generate video data and assembles the video data into PES (packetized elementary stream) packets. The transmitter likewise divides the MPEG audio stream to generate audio data and assembles the audio data into separate PES packets that contain no video data. Each PES packet contains time information that is used for synchronization between video images and sound (audio).
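The packetization step above can be sketched as follows. This is a simplified model, not the actual PES syntax: the payload size, the dictionary representation, and the idea of stamping every packet of a frame with that frame's time stamps are illustrative assumptions.

```python
def packetize(es_data, payload_size, pts, dts):
    """Split one frame's elementary-stream bytes into PES-like packets,
    each carrying the frame's time information (simplified sketch)."""
    return [
        {"pts": pts, "dts": dts, "payload": es_data[i:i + payload_size]}
        for i in range(0, len(es_data), payload_size)
    ]

# 10 bytes of (dummy) video elementary-stream data split into 4-byte payloads
video_pkts = packetize(b"\x00" * 10, payload_size=4, pts=3000, dts=1500)
```

Every packet produced from the same frame carries the frame's PTS and DTS, which is what lets the receiver recover decode and output timing after the stream has been divided.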
FIG. 1 is an illustrative view showing a PES packet.
As shown in FIG. 1, PES packet 100 includes PTS (Presentation Time Stamp) 100a and DTS (Decoding Time Stamp) 100b. 
DTS 100b indicates the time (decode timing) at which the data in PES packet 100 should be decoded. PTS 100a represents the time (output timing) at which the data in PES packet 100 should be reproduced.
The transmitter assigns the same value to the PTS of the PES packet that contains the data of a certain video frame and to the PTS of the PES packet that contains the data of the audio frame synchronized with that video frame. Thus, video and audio can be synchronized.
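The PTS assignment rule can be sketched as follows. MPEG system time stamps use a 90 kHz time base; the 30 fps frame rate is an illustrative assumption.

```python
CLOCK_HZ = 90_000   # MPEG system time base for PTS/DTS (90 kHz)
VIDEO_FPS = 30      # assumed video frame rate, for illustration

def video_pts(frame_index, start_pts=0):
    """PTS of the n-th video frame, in 90 kHz ticks."""
    return start_pts + frame_index * CLOCK_HZ // VIDEO_FPS

def audio_pts_for(frame_index, start_pts=0):
    """The audio frame meant to play with video frame n is stamped
    with the same PTS value as that video frame."""
    return video_pts(frame_index, start_pts)
```

Because a video frame and its synchronized audio frame carry identical PTS values, the receiver can align their output timing by comparing time stamps alone.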
The transmitter thereafter transmits the MPEG stream (video and audio stream) including the PES packets that contain the video data and the PES packets that contain the audio data. Accordingly, the MPEG stream contains multiple pairs of video data and audio data having PTSs that represent the same output timing.
Next, a reproducing unit for reproducing video images and sound (audio) based on the MPEG stream will be described.
FIG. 2 is a block diagram showing a reproducing unit.
In FIG. 2, the reproducing unit includes MPEG decoder 1, display unit 2 and audio output unit 3.
MPEG decoder 1 decodes the MPEG stream (video and audio stream) that has been transmitted from the transmitter to generate video and audio signals. MPEG decoder 1 supplies the video signal to display unit 2. MPEG decoder 1 supplies the audio signal to audio output unit 3.
Display unit 2, upon receipt of the video signal from MPEG decoder 1, displays video images in accordance with the video signal.
Audio output unit 3, upon receipt of the audio signal from MPEG decoder 1, provides sound in accordance with the audio signal.
MPEG decoder 1 includes separating circuit 1a, buffers 1b and 1c, system decoder 1d, video decoder 1e and audio decoder 1f. 
Separating circuit 1a, upon receipt of the MPEG stream from the transmitter, separates video data, audio data, PTSs and DTSs from the MPEG stream. Separating circuit 1a stores the video data into buffer 1b. Separating circuit 1a stores the audio data into buffer 1c. Separating circuit 1a supplies the PTSs and DTSs to system decoder 1d.
Buffer 1b stores the video data supplied from separating circuit 1a. 
Buffer 1c stores the audio data supplied from separating circuit 1a. 
System decoder 1d provides a decode command to video decoder 1e at the time that is indicated by the DTS relative to the video data in buffer 1b. System decoder 1d also provides an output command to video decoder 1e at the time that is indicated by the PTS relative to the video data.
Further, system decoder 1d provides a decode command to audio decoder 1f at the time that is indicated by the DTS relative to the audio data in buffer 1c. System decoder 1d also provides an output command to audio decoder 1f at the time that is indicated by the PTS relative to the audio data.
Upon receipt of the decode command, video decoder 1e reads the video data from buffer 1b and then decodes the video data to generate a video signal. Then, upon receipt of the output command, video decoder 1e supplies the video signal to display unit 2.
Upon receipt of the decode command, audio decoder 1f reads the audio data from buffer 1c and then decodes the audio data to generate an audio signal. Then, upon receipt of the output command, audio decoder 1f provides the audio signal to audio output unit 3.
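The role of system decoder 1d described above can be sketched as an event scheduler: each packet yields a decode command at its DTS and an output command at its PTS, and commands are issued in time order. The packet tuples and tick values here are illustrative assumptions.

```python
import heapq

def schedule_commands(packets):
    """packets: list of (stream, dts, pts) tuples, time stamps in clock
    ticks. Returns (time, command, stream) events in issue order."""
    events = []
    for stream, dts, pts in packets:
        heapq.heappush(events, (dts, "decode", stream))   # decode at DTS
        heapq.heappush(events, (pts, "output", stream))   # output at PTS
    return [heapq.heappop(events) for _ in range(len(events))]

# A video packet and its synchronized audio packet share PTS 400,
# so their output commands are issued at the same time.
cmds = schedule_commands([("video", 100, 400), ("audio", 200, 400)])
```

Issuing both output commands at the shared PTS is what synchronizes the two decoders, under the assumption that each decoder emits its signal immediately upon the command.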
Japanese Patent Application Laid-open 2002-354419 discloses a recording and reproducing apparatus for reproducing video images and sound based on MPEG streams.
This recording and reproducing apparatus is configured to rewrite the DTS and PTS values in accordance with the playback speed so that, when a special playback mode such as fast forward is selected, it can provide smooth video pictures with the sound synchronized with the pictures. This recording and reproducing apparatus rewrites the PTS value of a certain video frame and the PTS value of the audio frame that is synchronized with the video frame with the same value.
On the other hand, display devices such as liquid crystal displays and plasma displays have a fixed number of pixels. A display device having a fixed number of pixels performs a video signal process called resolution conversion so as to display an image in accordance with an input video signal even if the resolution of the input signal differs from its own.
FIG. 3 is a block diagram showing a reproducing apparatus including display unit 2 that performs resolution conversion. In FIG. 3, the same components shown in FIG. 2 are allotted the same reference numerals.
In FIG. 3, display unit 2 includes video signal processor 2a, drive circuit 2b and display device 2c having a fixed number of pixels. Video signal processor 2a includes memory 2a1 and resolution converting circuit 2a2. Audio output unit 3 includes audio output circuit 3a and speaker 3b. 
Video signal processor 2a, upon receipt of the video signal from MPEG decoder 1, converts the resolution of the video signal based on the resolution of display device 2c. 
Specifically, resolution converting circuit 2a2, upon receipt of the video signal from MPEG decoder 1, stores the video signal into memory 2a1 for a certain period, then reads the video signal from memory 2a1, and then converts the resolution of the video signal.
Drive circuit 2b drives display device 2c in accordance with the video signal whose resolution has been converted by resolution converting circuit 2a2, and thereby displays video images on display device 2c.
Audio output circuit 3a drives speaker 3b in accordance with the audio signal received from MPEG decoder 1 and provides sound from speaker 3b in accordance with the audio signal.
Video signal processor 2a stores the video signal into memory 2a1 for a certain period, and then performs resolution conversion. Accordingly, the display timing of display device 2c is delayed by the amount of time that is required for resolution conversion.
Video signal processor 2a also performs video signal processes other than resolution conversion. For this reason, in actual practice, the display timing of display device 2c is delayed by the amount of time that is required for those video signal processes.
In contrast, audio output unit 3 incurs no such signal-processing delay.
Accordingly, even if MPEG decoder 1 synchronizes the video images and sound based on the DTS and PTS values, the video images appear after the sound is generated because of the video signal processing performed by display unit 2.
FIG. 4 is a timing chart for illustrating the delay of the video. FIG. 4(a) shows the timing at which the video signal is written into memory 2a1. FIG. 4(b) shows the timing at which the video signal is read out from memory 2a1. FIG. 4(c) shows the timing at which the audio signal is provided.
As shown in FIG. 4(a) and FIG. 4(c), prior to the execution of video signal processing, the video frame (video signal) and audio frame (audio signal) having the same PTS value are synchronized with each other.
However, as shown in FIG. 4(b) and FIG. 4(c), after execution of video signal processing, for the video frame and audio frame that have the same PTS value, the video frame is provided after the audio frame (see delay A in FIG. 4(c)).
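The lip-sync error (delay A in FIG. 4) can be expressed numerically: the video path adds the processing time spent in memory 2a1 and the resolution converting circuit, while the audio path adds none. The delay value in ticks is an illustrative assumption.

```python
VIDEO_PROC_DELAY = 3000   # ticks spent on video signal processing (assumed)

def actual_output_times(pts):
    """For a video frame and audio frame sharing one PTS, return the
    times at which each actually reaches the viewer/listener."""
    video_out = pts + VIDEO_PROC_DELAY   # delayed by display-side processing
    audio_out = pts                      # no corresponding audio delay
    return video_out, audio_out

video_t, audio_t = actual_output_times(90_000)
delay_a = video_t - audio_t   # the lag that PTS matching alone cannot remove
```

Because the delay arises downstream of MPEG decoder 1, matching PTS values inside the decoder, as the related-art apparatus does, cannot eliminate it.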
The recording and reproducing apparatus described in Japanese Patent Application Laid-open 2002-354419 rewrites the PTS value of a certain video frame and the PTS value of the audio frame, which is synchronized with the video frame, with the same value. Accordingly, if this recording and reproducing apparatus is used as MPEG decoder 1, video image delay due to video signal processing will occur.
For this reason, video images and sound are output with a timing lag between them; alternatively, if the video images and sound must be synchronized, extra hardware such as a delay circuit for the audio signal must be provided.