1. Field of the Invention
The invention relates in general to three-dimensional (3D) display, and more particularly to a synchronization method and associated apparatus for left-eye and right-eye images of a 3D display apparatus.
2. Description of the Related Art
As display techniques have continuously progressed, displays have evolved from the earlier black-and-white and color televisions to the current mainstream high-definition televisions. A common goal of displays in every era is to present more realistic and natural images to satisfy viewer demands. In recent years, the field of display techniques has advanced from 2D images to 3D images, which provide viewers not only with planar images and colors but also with stereoscopic visual experiences.
The basic principle by which a 3D display technique generates a stereoscopic sensation is to exploit the parallax of the human eyes by providing two different images to the left and right eyes. Current techniques for generating a 3D effect include polarized glasses, red-blue (or red-green) anaglyph glasses, shutter glasses, head-mounted displays and parallax barriers, all of which require synchronization between left-eye and right-eye images. Therefore, in the event of poor synchronization accuracy, not only might the brain fail to form depth perception from the images perceived by the two eyes, i.e., a stereoscopic sensation is not properly produced, but user discomfort or dizziness may even result.
FIG. 1 shows a flowchart of synchronizing a left-eye image and a right-eye image of a 3D image by a conventional solution. When playing a 3D image, stream data to be played is accessed in Step S110, and left-eye data, right-eye data and audio data are then demultiplexed from the stream data by a stream demultiplexer in Step S120. The left-eye data is decoded to obtain a left-eye image frame in Step S132, and a left-eye image frame is selected from a plurality of left-eye image frames according to a system time in Step S142. The right-eye data is decoded to obtain a right-eye image frame in Step S134, and a right-eye image frame is selected from a plurality of right-eye image frames according to the system time in Step S144. In Step S150, the frames selected in Step S142 and Step S144 are outputted.
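The conventional flow of Steps S110 to S150 can be sketched as follows. This is a minimal illustration only: the dict-based stream representation, the demultiplexer and the decoder are hypothetical stand-ins (a real decoder operates on compressed elementary streams), and frame selection is reduced here to a bare closest-PTS search.

```python
def demultiplex(stream):
    # Step S120: split the stream into left-eye, right-eye and audio data.
    # The stream is modeled (hypothetically) as a dict of elementary streams.
    return stream["left"], stream["right"], stream["audio"]

def decode(data):
    # Steps S132/S134: stand-in decoder; each packet is assumed to carry
    # its presentation time stamp (PTS) and a frame payload.
    return [{"pts": pts, "frame": payload} for pts, payload in data]

def select_frame(frames, system_time):
    # Steps S142/S144: select the frame whose PTS is closest to the system time.
    return min(frames, key=lambda f: abs(f["pts"] - system_time))

def play_3d(stream, system_time):
    left_data, right_data, _audio = demultiplex(stream)   # S120
    left_frames = decode(left_data)                       # S132
    right_frames = decode(right_data)                     # S134
    # Step S150: output the selected left-eye and right-eye frames.
    return (select_frame(left_frames, system_time),
            select_frame(right_frames, system_time))
```

Note that the left-eye and right-eye selections are performed independently, each against the same system time; this independence is the source of the mismatch discussed below.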
The system time corresponds to the playback time of the images. Each left-eye image frame and each right-eye image frame is marked with a presentation time stamp (PTS), which indicates the time point at which that image frame is to be played. The PTS of a left-eye image frame is referred to as a left-eye presentation time stamp (LPTS), and the PTS of a right-eye image frame as a right-eye presentation time stamp (RPTS). Each LPTS has a closest corresponding RPTS, and vice versa, and the two corresponding frames carrying such an LPTS and RPTS are referred to as the left-eye and right-eye image frames of the same set. To synchronize the left-eye and right-eye image frames, the left-eye and right-eye image frames of the same set are played sequentially or simultaneously at a particular time point. In practice, since a deviation often exists between the LPTS and the RPTS of the left-eye and right-eye image frames of the same set, synchronization of the left-eye and right-eye image frames must first identify the left-eye and right-eye image frames of the same set. In a conventional solution, when selecting a left-eye image frame in Step S142, a search is performed, within a search interval extending forwards and backwards from a system time T, for the left-eye image frame whose LPTS is closest to the system time T. Similarly, when selecting a right-eye image frame in Step S144, a search is performed, within a search interval extending forwards and backwards from the system time T, for the right-eye image frame whose RPTS is closest to the system time T.
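The closest-PTS search of Steps S142 and S144 can be illustrated as follows, assuming (hypothetically) that each decoded frame is a record carrying a numeric PTS and that the search interval is a symmetric window around the system time:

```python
def select_closest(frames, system_time, half_window):
    """Return the frame whose PTS is closest to system_time, considering
    only frames inside the search interval
    [system_time - half_window, system_time + half_window]."""
    candidates = [f for f in frames
                  if abs(f["pts"] - system_time) <= half_window]
    if not candidates:
        return None  # no frame falls within the search interval
    return min(candidates, key=lambda f: abs(f["pts"] - system_time))
```

The same routine serves both eyes: applied to the left-eye frames it compares LPTS values, and applied to the right-eye frames it compares RPTS values, each independently against the same system time T.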
It is known from the above flowchart of image synchronization and the associated descriptions that the conventional solution respectively selects the LPTS and the RPTS closest to a system time, determines the left-eye image frame and the right-eye image frame corresponding to the selected LPTS and RPTS, and assumes that the left-eye image frame having the selected LPTS and the right-eye image frame having the selected RPTS are image frames of the same set. FIG. 2 shows a schematic diagram of synchronizing a left-eye image and a right-eye image of a 3D image by the conventional solution. On a time axis t, according to the marked PTS, the (N−1)th to (N+2)th left-eye image frames and the (N−1)th to (N+2)th right-eye image frames are depicted, with the (N)th left-eye image frame and the (N)th right-eye image frame being frames of the same set, and so forth. It is observed from FIG. 2 that, on the time axis t, the left-eye and right-eye image frames of the same set are unaligned. This misalignment results from a slight time difference between the left-eye and right-eye image frames introduced when the 3D image was recorded. According to the conventional image synchronization process, for a system time T, the (N)th left-eye image frame closest to the system time T is selected from the left-eye image frames, and the (N+1)th right-eye image frame closest to the system time T is selected from the right-eye image frames. Therefore, through the conventional synchronization solution, the (N)th left-eye image frame and the (N+1)th right-eye image frame are regarded as the same set and played synchronously. However, the (N)th right-eye image frame should in fact be synchronized with the (N)th left-eye image frame, and the (N+1)th right-eye image frame should in fact be synchronized with the (N+1)th left-eye image frame. That is to say, in certain situations, the prior art is incapable of correctly synchronizing the left-eye and right-eye image frames.
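The failure mode of FIG. 2 can be reproduced numerically. The frame period and recording offset below are hypothetical figures chosen only to make the mismatch visible: because the right-eye frames lead the left-eye frames by a fraction of a frame period, independently selecting the closest LPTS and the closest RPTS can pair frames from different sets.

```python
PERIOD = 33  # hypothetical frame period (ms)
OFFSET = 10  # hypothetical recording offset: right-eye frames lead by 10 ms

# LPTS/RPTS of four consecutive sets; set n pairs left_pts[n] with right_pts[n]
left_pts = [n * PERIOD for n in range(4)]            # [0, 33, 66, 99]
right_pts = [n * PERIOD - OFFSET for n in range(4)]  # [-10, 23, 56, 89]

def closest(pts_list, t):
    # index of the frame whose PTS is closest to the system time t
    return min(range(len(pts_list)), key=lambda n: abs(pts_list[n] - t))

T = 80
n_left = closest(left_pts, T)    # selects set 2 (LPTS 66)
n_right = closest(right_pts, T)  # selects set 3 (RPTS 89): a mismatched pair
```

Here the left-eye frame of set 2 and the right-eye frame of set 3 would be regarded as the same set and played together, although the right-eye frame of set 2 (RPTS 56) is the true counterpart, mirroring the (N)th/(N+1)th mismatch described above.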