1. Field of the Invention
The present invention relates to a disk reproducing apparatus having the capability of reproducing data from a storage medium such as an optical disk, and more particularly to a caching process during a data reproducing operation.
2. Description of the Related Art
Optical disks, such as the CD (compact disk) and the DVD (digital versatile disk/digital video disk), serving as storage media suitable for use in multimedia applications are known in the art.
In apparatus designed to reproduce data from these types of optical disks, a track on a disk is illuminated with a laser beam and data is read by detecting light reflected from the disk.
In a reproducing apparatus for reproducing data from a disk in response to a read command (data transfer request) issued by a host computer or the like and transferring the reproduced data to the host computer, the reproducing apparatus is required to transfer the data rapidly in response to the read command. In general, a read command includes, in addition to the command itself, data representing the start position of the requested data and the data length measured from that start position. That is, the host computer specifies the particular sectors to be reproduced.
The operations performed by the reproducing apparatus in response to a read command basically include an operation of seeking an optical head to a particular point on a disk (an accessing operation) and an operation of reading specified data. The obtained data is transferred to the host computer. The reproducing apparatus includes a cache memory and the data is output via the cache memory.
After reading data from data sectors specified by the host computer, the reproducing apparatus first stores (buffers) the data into the cache memory and then transfers it from the cache memory to the host computer. In the above operation, the reproducing apparatus also reads data from the sector following the specified data sector and stores it into the cache memory. This operation is called a look-ahead reading operation.
When the reproducing apparatus receives another data request from the host computer after that, if the requested data is stored in the cache memory, the reproducing apparatus transfers the data from the cache memory to the host computer without having to obtain access to the disk. This allows a reduction in the effective access time.
In the case where data requests are successively issued by the host computer for successive sectors (such a type of data request is referred to as a sequential data request and the operation of reading such data is referred to as a sequential reading operation), the look-ahead reading into the cache memory is a very effective method to achieve rapid data transferring.
Buffering data into the cache memory is generally performed using the cache memory as a ring memory. That is, data with successive LBAs (logical block addresses, addresses of data sectors on a disk) is stored in the cache memory so as to reduce the access time during the sequential reading operation performed in response to a sequential data request.
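For illustration only, the ring-memory addressing described above can be sketched as follows; the 8-block cache size and the function name are assumptions made for this example, not part of any actual apparatus:

```python
CACHE_BLOCKS = 8  # assumed cache size of 8 one-sector blocks

def area_for_lba(lba, start_lba, start_area):
    """Map a logical block address (LBA) to a cache area, wrapping
    (folding back) at the end of the cache as in a ring memory."""
    offset = lba - start_lba
    if not (0 <= offset < CACHE_BLOCKS):
        raise ValueError("LBA outside the cached window")
    return (start_area + offset) % CACHE_BLOCKS

# sectors N..N+7 buffered starting at area #0 occupy areas #0..#7
assert area_for_lba(6, start_lba=0, start_area=0) == 6
# a sector three past a window starting at area #6 folds back to area #1
assert area_for_lba(9, start_lba=6, start_area=6) == 1
```

The modulo operation is what makes the cache behave as a ring: buffering past the last area continues at area #0.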
The operation of buffering data into the cache memory is described in further detail below with reference to FIGS. 22-25.
For simplicity, the cache memory is assumed to have a size of 8 blocks (or 8 sectors each of which is assigned a particular LBA). The operation is described below by way of an example for a particular case in which the host computer issues a first data transfer request for 3 blocks (3 sectors) having LBAs of "N" to "N+2" and subsequently issues a second data transfer request for 3 blocks (3 sectors) having LBAs of "N+6" to "N+8".
Furthermore, it is assumed herein that before the first data transfer request was issued, data sectors having LBAs "M" to "M+4" were stored in an 8-block cache memory with areas #0-#7 as shown in FIG. 22. In the reproducing apparatus, the cache memory is controlled, as shown in FIG. 22, using internal parameters representing a start address LBAm, a pointer PLBAm, the number of valid sectors VSN, and the number of transferred sectors TSN.
The start address LBAm refers to the LBA value of the start sector designated in the previous data request issued by the host computer. The start address LBAm has the minimum LBA value of the valid sectors stored in the cache memory.
The pointer PLBAm is a pointer pointing to the area of the cache memory in which the start address LBAm is stored.
The number of valid sectors VSN indicates how many successive data sectors starting at the start address LBAm are held in the cache memory.
The number of transferred sectors TSN indicates the number of data sectors which have already been transferred from the cache memory to the host computer.
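The four internal parameters can be modeled, purely for illustration, as a small state object; the field names mirror the text, and the cache-hit test follows directly from the definition of the valid window:

```python
from dataclasses import dataclass

@dataclass
class CacheState:
    lba_m: int   # start address LBAm: minimum LBA of the valid sectors
    plba_m: int  # pointer PLBAm: cache area in which LBAm is stored
    vsn: int     # number of valid sectors VSN held in the cache
    tsn: int     # number of transferred sectors TSN

    def is_hit(self, lba: int) -> bool:
        """A sector is cache-hit when it lies within the window of
        valid sectors, LBAm .. LBAm + VSN - 1."""
        return self.lba_m <= lba < self.lba_m + self.vsn

# state of FIG. 22: sectors M..M+4 in areas #0..#4 (M taken as 100 here)
state = CacheState(lba_m=100, plba_m=0, vsn=5, tsn=0)
assert state.is_hit(104) and not state.is_hit(105)
```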
In the specific example shown in FIG. 22, the start address LBAm=M, the pointer PLBAm=0, and the number of valid sectors VSN=5; these parameters indicate that successive sectors having LBAs from "M" to "M+4" are stored in successive areas starting from area #0 as shown in the figure.
Herein, if the first data transfer request for 3 sectors having LBAs from "N" to "N+2" is issued, the following operation is performed as described below.
The data transfer request issued by the host computer includes a requested start address reLBA indicating the first address of the requested data sectors and also includes a requested data length rqLG indicating the number of sectors counted from the start address.
For example, a data transfer request for 3 sectors having LBAs from "N" to "N+2" includes a requested start address reLBA=N and a requested data length rqLG=3.
When the above-described data transfer request is issued, data having LBAs "N" to "N+2" are not found (are not cache-hit) in the cache memory in the state shown in FIG. 22, and thus the cache memory is purged. That is, the entire data stored in the cache memory is made invalid and data having successive LBAs starting with "N" are buffered in the successive areas of the cache memory starting from the first area #0. The buffering operation is performed by accessing the disk to read data having successive LBAs starting from "N" and then storing the obtained data into the cache memory after performing the required decoding.
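The purge-and-rebuffer behavior on a cache miss can be sketched as follows; the simulation is illustrative only, with the decoded sector data stood in for by the LBA values themselves:

```python
CACHE_BLOCKS = 8  # assumed 8-block cache

def purge_and_buffer(req_lba, req_len):
    """Simulate a cache miss: purge (invalidate) the whole cache, then
    buffer sectors with successive LBAs starting at req_lba until the
    cache is full. Buffering continues past the req_len sectors
    actually requested; this is the look-ahead reading operation."""
    assert req_len <= CACHE_BLOCKS
    cache = [None] * CACHE_BLOCKS          # purge: all areas invalid
    for offset in range(CACHE_BLOCKS):     # read ahead until full
        cache[offset] = req_lba + offset   # stand-in for decoded data
    return cache

# first request of the example: reLBA = N, rqLG = 3 (N taken as 0)
assert purge_and_buffer(0, 3) == [0, 1, 2, 3, 4, 5, 6, 7]
```

Only the first three sectors are transferred to the host; the remaining five are held in anticipation of a sequential follow-up request.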
Although the data requested herein to be transferred are those having LBAs "N" to "N+2", a sector having an LBA "N+3" and the following sectors are also buffered in preparation for future data transfer requests for these sectors.
FIG. 23 illustrates the process of buffering data having LBA "N", and data with the following LBAs, into the purged cache memory. In the particular state shown in FIG. 23, data having LBAs "N" to "N+4" are kept in the cache memory. The state of the cache memory is controlled by the parameters including the start address LBAm=N, the pointer PLBAm=0, and the number of valid sectors VSN=5.
After being buffered, the data having LBAs "N" to "N+2" are transferred to the host computer.
The operation of buffering the data into the cache memory is continued until a next data transfer request is issued or until the cache memory becomes full.
If the buffering is performed until the cache memory becomes full, the cache memory includes data having LBAs "N" to "N+7" as shown in FIG. 24.
After that, if a second data transfer request for 3 sectors having LBAs "N+6" to "N+8" is issued, that is, if a command including a requested start address reLBA=N+6 and a requested data length rqLG=3 is issued, then an operation is performed as described below.
Of the three sectors requested to be transferred, two sectors having LBAs "N+6" and "N+7" are kept (cache-hit) in the cache memory as shown in FIG. 24, and thus these two sectors can be directly transferred from the cache memory to the host computer.
Thus, as shown in FIG. 25, the reproducing apparatus transfers the cache-hit data and reads the data having an LBA "N+8", which is not held in the cache memory, by accessing the disk.
In the above process, upon receipt of the second data transfer request for sectors having LBAs from "N+6" to "N+8", the parameters are updated such that the start address LBAm=N+6, the pointer PLBAm=6, and the number of valid sectors VSN=2, and the stored data of LBAs "N" to "N+4" are made invalid. The data of the immediately previous sector, having an LBA "N+5", is still held. This holding of the immediately previous sector is achieved by setting the number of transferred sectors TSN to 1.
Thus, the one block immediately before area #6 and the two blocks starting at area #6, in which the data of LBA "N+6" is stored, are made valid.
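Using the example values (with N taken as 0 for concreteness), the parameter update on the second request can be traced as follows; the arithmetic is an assumed reading of the parameter semantics described above, not an excerpt from any actual firmware:

```python
# state of FIG. 24: sectors N..N+7 valid starting at area #0 (N = 0)
lba_m, plba_m, vsn, tsn = 0, 0, 8, 0

# second request: reLBA = N+6, rqLG = 3; sectors N+6 and N+7 are hit
re_lba, rq_lg = 6, 3
plba_m = (plba_m + (re_lba - lba_m)) % 8  # pointer moves to area #6
vsn = lba_m + vsn - re_lba                # 2 valid sectors remain (N+6, N+7)
lba_m = re_lba                            # new start address is N+6
tsn = 1                                   # keep the immediately previous
                                          # sector N+5 (area #5) held

assert (lba_m, plba_m, vsn, tsn) == (6, 6, 2, 1)
```

Note that vsn must be updated before lba_m is overwritten, since the count of surviving sectors is measured from the old start address.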
The data of LBA "N+8", which is also requested by the host computer to be transferred but which is not stored in the cache memory, is transferred after being buffered in the folded-back area (that is, area #0).
Data with an LBA "N+8" and data with LBAs following that are also buffered into the cache memory until the cache memory becomes full in preparation for future data transfer requests.
The number of valid sectors VSN is counted up when error correction is completed after reading and buffering data (that is, when certain sector data becomes valid after being buffered).
At the time when data having LBAs up to "N+10" have been buffered as shown in FIG. 25, the number of valid sectors VSN becomes 5.
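The count-up of VSN on completed error correction can be sketched with a hypothetical helper; the function name and structure are assumptions made for illustration:

```python
def on_sector_buffered(vsn, error_correction_ok):
    """VSN counts up only once a buffered sector has passed error
    correction, i.e. once that sector's data has become valid."""
    return vsn + 1 if error_correction_ok else vsn

vsn = 2                      # state just after the second request
for _ in range(3):           # sectors N+8, N+9, N+10 buffered and corrected
    vsn = on_sector_buffered(vsn, True)
assert vsn == 5              # matches the FIG. 25 state in the text
```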
By buffering data in the cache memory in the above-described manner, it is possible to increase the probability that data can be transferred in response to a sequential data transfer request without having to get access to the disk thereby reducing the average access time.
In practice, in addition to sequential data requests, other types of data transfer requests are also generated depending on the application executed on the host computer. The types of data transfer requests include a two-point reading request, in which data sectors at two distant points on a disk are alternately requested, and a random reading request, in which data sectors at random locations on the disk are successively requested. There is a tendency for a reduction in the access time to be required for these data requests as well.
The problem with the above-described buffering technique is that the look-ahead reading into the cache memory does nothing to reduce the access time for two-point reading requests or random reading requests, because no useful data is buffered in the cache memory in the two-point reading or random reading mode.