1. Field of the Invention
The present invention relates to a method of defect management for an optical disk while an optical disk drive writes data onto or reads data from the optical disk. More specifically, a method of reducing track-seeking time during reading or writing by pre-reading a plurality of spare packets into a cache is disclosed.
2. Description of the Prior Art
Optical disks, because of their low price, small volume, light weight, and large storage capacity, have become one of the most common data storage media in modern society. In particular, the development of rewritable optical disks has made the optical disk the most important portable personal storage medium, able to store data according to the user's demands. Accessing disk data with higher efficiency and reliability is therefore a central concern of modern information technology.
A drive accesses data on a corresponding optical disk. Please refer to FIG. 1, which illustrates a block diagram of a conventional drive 10 accessing an optical disk 22. The drive 10 comprises a housing 14, a motor 12 for rotating the housing 14, a pickup head 16 for accessing the data on the optical disk, a controller 18 for controlling the operation of the drive 10, and a memory 20 (such as volatile random access memory) for storing data during the operation of the controller 18. The optical disk 22 has a track 24 for recording data.
The controller 18, in response to a command from the host 26, accesses the data on the track 24 by having the optical pickup head 16 scan the track 24 of the optical disk 22, which is set on the housing 14 and rotated by the motor 12. The host 26 can be a computer system such as a PC (Personal Computer).
In order to record data on the optical disk in a more reliable and durable manner, several newer optical disk specifications define certain defect management mechanisms.
One of the most common strategies is to allocate spare record areas for recording data whose originally allocated record areas have become unusable due to damage to the optical disk. Please refer to FIG. 2A and FIG. 2B, which illustrate the allocation of the spare record areas and general record areas in two different optical disk specifications: FIG. 2A is for CD-MRW (Compact Disk—Mount Rainier rewritable) and FIG. 2B is for DVD+MRW (Digital Versatile Disk—Mount Rainier rewritable).
Referring to FIG. 2A, the track 24 on the optical disk 22 is divided into an LI (Lead-In Area) for marking the beginning of the track 24, a PA (Program Area) for recording data, and an LO (Lead-Out Area) for marking the end of the track 24. The LI contains an MTA (Main Table Area) used to store a DT (Defect Table). The PA is further divided into a P0 (pre-gap), a GAA (General Application Area), an STA (Secondary Table Area) for storing a copy of the DT, a plurality of DAs (Data Areas), and a plurality of SAs (Spare Areas). The DAs are marked DA(1), DA(2) . . . DA(N), and the SAs, corresponding to the DAs respectively, are marked SA(1), SA(2) . . . SA(N). Each DA is further divided into a predefined number of Pd (data packets), each having a plurality of Bd (user data blocks) used to record data. Similarly, each SA is further divided into a plurality of Ps (spare packets), each having a plurality of Bs (spare data blocks). Bd and Bs have identical data capacity. In CD-MRW, for example, each DA has 136 Pd, each Pd has 32 Bd, each SA has 8 Ps, each Ps has 32 Bs, and each Bd and Bs records 2 k (kilo) bytes. As FIG. 2A shows, the optical pickup head 16 passes through all the blocks (in both DA and SA) in turn as it scans along the track 24. For example, while the optical pickup head 16 scans the track 24 in the direction A1 in FIG. 2A, it passes first through each spare block of an SA, then each data block of a DA, followed by another SA.
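The CD-MRW figures quoted above imply the per-area capacities computed in the following sketch. The function names and the zero-based block indexing are illustrative assumptions, not part of the specification:

```python
# Illustrative arithmetic for the CD-MRW layout figures quoted above.
BLOCK_BYTES = 2 * 1024        # each Bd/Bs records 2 k bytes
PACKETS_PER_DA = 136          # data packets Pd per Data Area
PACKETS_PER_SA = 8            # spare packets Ps per Spare Area
BLOCKS_PER_PACKET = 32        # Bd per Pd, and likewise Bs per Ps

def da_capacity_bytes():
    """User-data capacity of one DA."""
    return PACKETS_PER_DA * BLOCKS_PER_PACKET * BLOCK_BYTES

def sa_capacity_bytes():
    """Spare capacity of one SA."""
    return PACKETS_PER_SA * BLOCKS_PER_PACKET * BLOCK_BYTES

def packet_of_block(block_index):
    """Which packet (zero-based) within an area a block index falls in."""
    return block_index // BLOCKS_PER_PACKET
```

Under these figures, one DA holds 136 × 32 × 2048 = 8,912,896 bytes and one SA holds 8 × 32 × 2048 = 524,288 bytes of spare capacity.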
Please refer to FIG. 2B. Under a similar allocation principle, in DVD+MRW the track 24 has an LI2 for marking the beginning of the track 24, a DZ (Data Zone) for recording data, and an LO2 for marking the end of the track 24. The LI2 contains an MTA2 for storing a defect table. The DZ is divided into a GAA2 (General Application Area), an STA2 (Secondary Table Area) for storing a copy of the defect table, a UDA (User Data Area), and two spare areas SA1 and SA2. Similarly, the UDA has a plurality of Bd0 (e.g., 139218 ECC blocks), and SA1 and SA2 have a plurality of Bs0 (e.g., 256 and 3840 ECC blocks respectively).
Regardless of which form the optical disk 22 takes in FIG. 2A and FIG. 2B, the basic principle of the defect management is identical. Data from the host 26 is first written to the DA; if the write fails due to damage to the optical disk 22, the drive seeks a replacement spare block on the track 24 for writing the data. All spare blocks and data blocks have their own addresses (e.g., PBNs, Physical Block Numbers). The addresses of each defective data area and of the corresponding spare block that replaces it, together with their correspondence, are recorded in the DT on the optical disk 22. When reading a defective data area, the drive 10 relies on the DT to find the replacement spare block and reads the data stored there. As stated above, through the allocation and use of spare blocks the optical disk 22 can still record data and carry out defect management even when the disk is partially damaged (perhaps by a scrape or slight dust).
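The defect-table lookup described above can be sketched as a simple address mapping. The class name, table layout, and the example PBN values are hypothetical, chosen only to illustrate the remapping; the real DT formats are defined by the CD-MRW and DVD+MRW specifications:

```python
# Hypothetical sketch of the defect-table remapping described above.
class DefectTable:
    def __init__(self):
        self._map = {}                # defective PBN -> replacement spare PBN

    def register(self, defect_pbn, spare_pbn):
        """Record that spare_pbn replaces defect_pbn (stored in MTA, copied to STA)."""
        self._map[defect_pbn] = spare_pbn

    def resolve(self, pbn):
        """Return the PBN the drive should actually access for a requested PBN."""
        return self._map.get(pbn, pbn)

dt = DefectTable()
dt.register(0x2C3, 0x051)             # block 0x2C3 is defective; spare 0x051 replaces it
```

A read of PBN 0x2C3 is thus redirected to 0x051, while non-defective blocks resolve to themselves.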
The following takes CD-MRW as an example to introduce the implementation of the prior art. Please refer to FIG. 3 (also FIG. 1). FIG. 3 is a flowchart illustrating the conventional optical disk write flow 100. To implement the defect management mechanism, the conventional write flow proceeds as follows:
Step 102:Start;
Step 104:
The drive 10 receives a write command from the host 26 and prepares to write the data from the host 26 onto the optical disk 22, executing the command to write the data assigned by the user. Data from the host 26 is initially held in the memory 20;
Step 106:
Determine whether a defective data block is encountered during the data write process; if yes, go to step 108; if not, go to step 112. In this flow, data is written one packet at a time rather than one data block at a time. The host 26 can direct the drive 10 to write data to an assigned packet; the drive 10 can also determine which packet is in use during the write process via the pickup head 16 and, based on the defect table, recognize whether a defective data block is encountered in that packet;
Step 108:
In the prior art, upon encountering a defective data block, the write process is halted and the data originally destined for the defective data block is written to a replacement spare block. Based on the defect table, the drive 10 finds the address of the spare block corresponding to the defective data block and makes the optical pickup head 16 seek to the location of that spare block so the data can be written. Because data is written to the optical disk one packet at a time, the drive 10 must first read the data within the other spare blocks belonging to the packet containing the appointed spare block into the memory 20;
Step 110:
The data of the spare blocks belonging to that packet is written back to the optical disk 22 from the memory 20, so that the data originally destined for the defective data block is redirected to the spare block, maintaining the optical disk 22's ability to record the data;
Step 112:
Continue the general write process; in other words, write data into the data blocks that the host 26 assigns. Step 112 following step 110 means that the pickup head 16 is moved back to the corresponding data block to continue the general write process after it has written the data into the spare block;
Step 114:
Determine whether any new write request is received; if yes, go back to step 104 to perform the next data write process; if not, go to step 116;
Step 116:End;
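The steps above can be sketched in code. This is a minimal, hypothetical model of flow 100, not the specification's implementation: the disk and spare areas are modeled as dictionaries keyed by (packet, offset) pairs, and the returned seek count captures the two long seeks (out to the spare packet and back) that each defective block costs in steps 108–110:

```python
# Minimal sketch of the conventional write flow 100 (steps 104-116).
def write_packet(disk, spares, defect_table, packet_id, blocks):
    """Write one packet of blocks, redirecting defective blocks to spares."""
    seeks = 0
    for offset, data in enumerate(blocks):
        pbn = (packet_id, offset)
        if pbn in defect_table:                       # step 106: defect found
            spare_pbn = defect_table[pbn]             # step 108: look up spare
            spare_packet = spare_pbn[0]
            seeks += 1                                # seek out to the spare packet
            cached = dict(spares.get(spare_packet, {}))  # read the whole Ps to memory
            cached[spare_pbn] = data                  # merge the redirected block
            spares[spare_packet] = cached             # step 110: write the Ps back whole
            seeks += 1                                # seek back to the data packet
        else:
            disk[pbn] = data                          # step 112: normal write
    return seeks
```

For example, writing a three-block packet whose third block is defective performs two normal writes, one spare-packet rewrite, and two extra seeks.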
The following further illustrates the above write process 100. Please refer to FIG. 4A, FIG. 4B and FIG. 4C (and also FIG. 1 and FIG. 2A), which are allocation charts of the data stored on the track 24 and in the memory 20 during the conventional write process 100. As FIG. 4A illustrates, the controller 18 holds data from the host 26 in the memory 20, and the data is then written from the memory 20 onto the track 24 by the pickup head 16. Assuming the host 26 transfers the data for the packets Pd1, Pd2 and Pd3 to the drive 10 in succession, the pickup head 16 of the drive 10 moves to the corresponding locations of Pd1, Pd2 and Pd3 to write the data into them in turn. Assuming all the data blocks of Pd1 are non-defective, the drive 10 writes data into Pd1 successfully. Assuming the data blocks of Pd2, such as Bd2a and Bd2b, are non-defective except Bd2c, the flow goes to step 108 to redirect the data originally destined for Bd2c to a spare block. As FIG. 4B illustrates, assuming the spare block Bs1c of the spare packet Ps1 is used to replace the defective data block Bd2c in step 108, the pickup head 16 seeks to the location of Ps1, reads all the data within the spare blocks of Ps1 into the memory 20 (marked as step 108(1) in FIG. 4B), and adds the data originally destined for Bd2c to the copy of Ps1 that has been read into the memory 20 (marked as step 108(2) in FIG. 4B). As FIG. 4C illustrates, the pickup head 16 then writes the Ps1 data stored in the memory 20 back to Ps1 on the track 24. In step 112, the pickup head 16 seeks back to the corresponding location of Pd2 and continues writing data into the non-defective data blocks (such as Bd2a and Bd2b) of Pd2 and the following normal packets (such as Pd3).
Some spare blocks of Ps1 have already replaced other defective data blocks (such as Bs1a and Bs1b in FIG. 4B and FIG. 4C), and the drive 10 must write data into Ps1 one packet at a time rather than one block at a time. For this reason, the original data within Ps1 (such as the data within Bs1a and Bs1b) has first been read into the memory 20.
Accordingly, referring to step 110 shown in FIG. 4C, when all the data is written back to Ps1, the original data within Ps1 and the new data originally destined for Bd2c are written together, the new data going into Bs1c.
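The read-merge-write cycle of FIG. 4B and FIG. 4C can be sketched as follows. The labels Bs1a, Bs1b and Bs1c are the figure's own, but the dictionary model and placeholder values are illustrative assumptions; the point is that the whole spare packet is read, merged in memory, and written back in one pass:

```python
# Sketch of the per-packet spare rewrite in FIG. 4B / FIG. 4C.
ps1_on_disk = {"Bs1a": "old-a", "Bs1b": "old-b", "Bs1c": None}  # Ps1 before step 108

cached = dict(ps1_on_disk)            # step 108(1): read all of Ps1 into memory 20
cached["Bs1c"] = "data-for-Bd2c"      # step 108(2): add the block redirected from Bd2c
ps1_on_disk = cached                  # step 110: write the whole packet back to Ps1
```

The earlier replacements in Bs1a and Bs1b survive the rewrite because the merge happens in memory before the single packet-wide write.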
As the conventional write process flow 100 shows, upon encountering a defective data block the general data-writing is suspended and the pickup head 16 seeks to the spare block that replaces the defective data block. Then the pickup head 16 moves back to the interrupted place to continue the general data-writing. If other defective data blocks are encountered during the write process, the pickup head 16 has to repeat the above defect management flow again. Needless to say, the more defective data blocks are encountered, the more frequently the pickup head 16 seeks to compensate for them. More seeking across a plurality of packets means more time waiting for the pickup head 16 to settle. The frequent seeking therefore lowers the efficiency of the write flow 100 and burdens the mechanism of the drive 10.
As the counterpart of the write flow 100, please refer to FIG. 5 (also FIG. 1). FIG. 5 illustrates the conventional flow 200 for optical disk data-reading. The flow 200 is as follows:
Step 202:Start;
Step 204:
Check whether the read command from the host 26 is received; if yes, go to step 206; if not, go to step 214;
Step 206:
The pickup head 16 of the drive 10, according to the read command, moves to the corresponding data blocks and reads the data within them to obtain the data that the host 26 requests. The data read by the pickup head 16 is held in the memory 20;
Step 208:
According to the DT (defect table) on the optical disk 22, the drive 10 checks whether a defective data block was encountered in step 206. If yes, go to step 210; if not, go to step 212;
Step 210:
The drive 10 finds the address of the replacement spare block based on the DT during the read process. The pickup head 16 then moves to the corresponding place based on that address to read the data within the replacement spare block, and the data is held in the memory 20;
Step 212:
Transfer the data held in the memory 20 to the host 26. Step 212 following step 210 means that the drive 10 has read all the data the host 26 requests, including the data within the spare blocks replacing the defective data blocks, and transfers it to the host 26;
Step 214:
Determine whether all the data that the host 26 requests has been transferred. If yes, go back to step 204; if not, go to step 206;
Step 216:End;
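The read flow above can be sketched analogously to the write flow. This is a hypothetical model, not the specification's implementation: the defect table redirects each defective block to its spare, and the seek count again records the round trip to the spare block in step 210:

```python
# Minimal sketch of the conventional read flow 200 (steps 204-216).
def read_packet(disk, spares, defect_table, packet_id, n_blocks):
    """Read one packet, redirecting defective blocks via the defect table."""
    out, seeks = [], 0
    for offset in range(n_blocks):
        pbn = (packet_id, offset)
        if pbn in defect_table:            # step 208: defect detected via the DT
            spare_pbn = defect_table[pbn]  # step 210: seek to the spare block
            out.append(spares[spare_pbn])  # read the replacement data to memory
            seeks += 2                     # out to the spare block and back
        else:
            out.append(disk[pbn])          # step 206: normal read
    return out, seeks                      # step 212: data ready for the host
```

Reading a three-block packet whose third block is defective thus returns the complete data but costs two extra seeks, matching the behavior illustrated in FIG. 6B.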
The following further illustrates the above read process 200. Please refer to FIG. 6A, FIG. 6B and FIG. 6C (and also FIG. 1 and FIG. 2A), which are allocation charts of the data stored on the track 24 and in the memory 20 during the conventional read process 200. As FIG. 6A illustrates, assume the host 26 requests the drive 10 to read the data within Pd1, Pd2 and Pd3 in succession; the pickup head 16 therefore moves to the corresponding place of Pd1 to read the data within Pd1 into the memory 20. Assuming there is no defective data block in Pd1, the data read from Pd1 is transferred from the memory 20 to the host 26 successfully in step 212, following steps 206 and 208. Assuming the data blocks of Pd2, such as Bd2a and Bd2b, are non-defective except Bd2c, the conventional flow 200 suspends the data-reading and goes to step 210 when the drive 10 begins to read the data within Pd2. As FIG. 6B illustrates, since the defect table indicates that the spare block for Bd2c is Bs1c, the drive 10 controls the pickup head 16 to seek to the corresponding place of Bs1c (marked as step 210(1)), read the data within Bs1c into the memory 20 (marked as step 210(2)), and add that data to the copy of Pd2 stored in the memory 20 (marked as step 210(3)). After obtaining the whole of the data within Pd2, the drive 10 transfers it to the host 26. Because the drive 10 then continues reading the data within Pd3, as FIG. 6C illustrates, the pickup head 16 has to seek across multiple packets to read each data block of Pd3.
As the conventional read process flow 200 shows, upon encountering a defective data block the general data-reading is suspended and the pickup head 16 seeks to the spare block replacing the defective data block to read the data within it. Then the pickup head 16 moves back to the interrupted place to continue the general data-reading. If other defective data blocks are encountered during the read process, the pickup head 16 has to repeat the above defect management flow again. Needless to say, the more defective data blocks are encountered, the more frequently the pickup head 16 seeks between spare blocks and data blocks. The frequent seeking therefore lowers the efficiency of both the write flow 100 and the read flow 200 and burdens the mechanism of the drive 10, shortening the drive's life.