1. Field of the Invention
The present invention relates to a method and apparatus for reading and writing from/to an information recording medium with a defect management function. The present invention can be used particularly effectively in an optical disc drive for reading and writing from/to a rewritable optical disc on which information can be rewritten a number of times.
2. Description of the Related Art
Recently, various removable information recording media with huge storage capacities and disc drives for handling such media have become immensely popular.
Examples of known removable information recording media with huge storage capacities include optical discs such as DVDs and Blu-ray Discs (which will also be referred to herein as “BDs”). An optical disc drive performs read/write operations by forming tiny recording marks on a given optical disc with a laser beam, and therefore, can be used effectively to handle such removable information recording media with huge storage capacities. Specifically, a red laser beam is used for DVDs, while a blue laser beam, having a shorter wavelength than the red laser beam, is used for BDs, thereby giving BDs a higher storage density and a greater storage capacity than DVDs.
However, since an optical disc is a removable information recording medium, defects may arise on its recording layer due to dust or scratches. That is why it has become increasingly common for an optical disc drive that reads and writes from/to an optical disc to carry out defect management in order to ensure the reliability of the data read or written (see Patent Document No. 1 (Japanese Patent Application Laid-Open Publication No. 2003-346429), for example).
FIG. 1 illustrates a normal layout of various areas on an optical disc. The disklike optical disc 1 has a huge number of spiral tracks 2, along which a great many subdivided blocks 3 are arranged.
In this case, those tracks 2 may have a width (i.e., a track pitch) of 0.32 μm in a BD, for example. Blocks 3 are not only units of error correction but also the smallest units of read/write operations. As for a DVD, one block is called an “ECC block” with a size of 32 kilobytes. As for BDs, on the other hand, one block is called a “cluster” with a size of 64 kilobytes. Converting them into sectors, which are the smallest data units for an optical disc, one ECC block is equal to 16 sectors and one cluster is equal to 32 sectors. It should be noted that when a “cluster” is mentioned in the rest of the description, it will always be synonymous with the block 3.
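The unit conversions above can be checked with a short sketch (a 2-kilobyte sector size is assumed here, which is the standard sector size on both DVD and BD):

```python
# Unit sizes described above (sketch; a 2 KB sector is assumed,
# the standard sector size for both DVD and BD).
SECTOR_SIZE = 2 * 1024          # bytes per sector

ECC_BLOCK_SECTORS = 16          # one DVD ECC block = 16 sectors
CLUSTER_SECTORS = 32            # one BD cluster = 32 sectors

ecc_block_size = ECC_BLOCK_SECTORS * SECTOR_SIZE   # 32 kilobytes
cluster_size = CLUSTER_SECTORS * SECTOR_SIZE       # 64 kilobytes

print(ecc_block_size // 1024, cluster_size // 1024)  # prints: 32 64
```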
Also, the recording area on the optical disc 1 is roughly classified into a lead-in area 4, a data area 5 and a lead-out area 6. User data is supposed to be read from, and written on, the data area 5. The lead-in area 4 and the lead-out area 6 function as margins that allow the optical head (not shown) to get back on tracks even if the optical head has overrun while accessing an end portion of the data area 5. That is to say, these areas 4 and 6 function as “rims” so to speak.
FIG. 2 shows the arrangement of respective areas on a rewritable optical disc with only one recording layer.
The data area 5 is made up of a user data area 14 from/on which user data is read or written and a spare area 15, which is provided in advance as clusters that will replace clusters with defective sectors in the user data area 14. Those replacing clusters will be referred to herein as “replacement clusters”. This spare area 15 is provided for the only recording layer (L0 layer) of the disc and is located closer to the inner edge of the disc, and is therefore called the “Inner Spare Area Layer 0” (which will be referred to herein as “ISA0”).
As areas to store defect management information for defective blocks on the optical disc 1, the lead-in area 4 has first and second defect management information areas 10 and 11 (which will be referred to herein as “DMA #1” and “DMA #2”, respectively) and the lead-out area 6 has third and fourth defect management information areas 12 and 13 (which will be referred to herein as “DMA #3” and “DMA #4”, respectively). DMA #1 through #4 are arranged in their own areas and redundantly store exactly the same pieces of information, which is done to prepare for a situation where any of the DMA #1 through #4 has gone defective itself. That is to say, even if information can no longer be retrieved from one of these four DMAs properly, the defect management information can still be acquired as long as there is at least one DMA from which information can be retrieved properly.
Each of these DMAs #1 through #4 includes a disc definition structure 20 (which will be abbreviated herein as “DDS”) and a defect list 21 (which will be abbreviated herein as “DFL”).
The DDS 20 contains various kinds of information including location information (such as information about the location of the DFL 21) and information about the spare area 15 (such as information about its size).
FIG. 16 illustrates the data structure of the DFL 21 of a rewritable optical disc with only one recording layer.
The DFL 21 consists of a DFL header 30, zero or more defect entries 31 (a situation where there are (n+1) defect entries 31 (where n is an integer that is equal to or greater than zero) is shown in FIG. 16), and a DFL terminator 32. That is to say, if no defective clusters have been detected, the DFL 21 consists of only the DFL header 30 and the DFL terminator 32.
The DFL header 30 contains a DFL identifier 40, which is identification information indicating that this piece of information is DFL, a first piece of number of times of update information 41 indicating how many times this DFL 21 has been updated so far, and number of defective entries information 42 indicating how many defective entries 31 there are in this DFL 21.
The DFL terminator 32 contains a DFL terminator identifier 50 indicating that this is a piece of information about the terminal location of the DFL, and a second piece of number of times of update information 51 indicating how many times this DFL 21 has been updated so far. The first and second pieces of number of times of update information 41 and 51 actually have the same value. The same piece of information is stored in this manner at the head and tail of the DFL 21 in order to keep the DFL 21 retrievable safely even if the DFL 21 could not be updated properly due to instantaneous disconnection of power or any other unexpected event that could happen while the DFL 21 is being updated.
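The dual update counters described above make it possible to detect an interrupted update when the DFL is read back. The following is a minimal sketch of that consistency check; the field names and types are illustrative and are not taken from the actual on-disc format:

```python
from dataclasses import dataclass

@dataclass
class DFLHeader:
    identifier: str    # DFL identifier (40): marks this data as a DFL
    update_count: int  # first number-of-times-of-update field (41)
    entry_count: int   # number-of-defective-entries field (42)

@dataclass
class DFLTerminator:
    identifier: str    # DFL terminator identifier (50)
    update_count: int  # second number-of-times-of-update field (51)

def dfl_is_consistent(header: DFLHeader, terminator: DFLTerminator) -> bool:
    """A DFL read from the disc is trusted only if the update count at its
    head matches the one at its tail; a mismatch suggests the update was
    interrupted (e.g., by a power loss) partway through."""
    return header.update_count == terminator.update_count
```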
The defect entries 31 provide information about the defective clusters that have been detected in the data area 5. Each of these defect entries manages defective clusters according to multiple types (or attributes)(see Patent Document No. 2 (Japanese Patent No. 3858050), for example).
FIGS. 17(A) through 17(C) show an exemplary makeup for the defect entry 31 and also show the attributes of defects to be managed by the defect entry 31. As shown in FIG. 17(A), the defect entry 31 is made up of a first status field 31a, a first address field 31b, a second status field 31c, and a second address field 31d. It should be noted that this is just an exemplary makeup for the defect entry 31 and any other arbitrary field could be included in the entry 31 as well.
As will be described later, the first and second status fields 31a and 31c indicate the attribute (or the type) of their defect entry 31. In the first and second address fields 31b and 31d, stored are information about the locations of the defective clusters or replacement clusters and other pieces of information according to the attributes of the first and second status fields 31a and 31c. For example, the first address field 31b may store the physical address of the top sector of a defective cluster and the second address field 31d may store the physical address of the top sector of a replacement cluster.
The first status field 31a may be flag information of four bits, for example. FIG. 17(B) shows what the first status field 31a may define in some instances.
Specifically, if the first status field 31a has a value “0000”, it means that a replacement cluster has been allocated to a defective cluster and that the user data that should have been written on the defective cluster has already been written on the replacement cluster instead (such an attribute will be referred to herein as “RAD0”).
On the other hand, if the first status field 31a has a value “1000”, it means that a replacement cluster has been allocated to a defective cluster and that the user data that should have been written on the defective cluster has not been written on the replacement cluster yet (such an attribute will be referred to herein as “RAD1”).
Furthermore, if the first status field 31a has a value “0001”, it means that a replacement cluster has not been allocated to a defective cluster yet (such an attribute will be referred to herein as “NRD”).
Furthermore, if the first status field 31a has a value “0010”, it means that this defect entry 31 provides no significant information about the location of a defective cluster (such an attribute will be referred to herein as “SPR”). Nevertheless, the sector address specified by the second address field 31d of this defect entry 31 means that a cluster headed by that sector (i.e., a cluster in the spare area 15) is usable as a replacement in the future.
Furthermore, if the first status field 31a has a value “0100”, it means that the clusters in this area could be defective (such an attribute will be referred to herein as “PBA”). In other words, such an area has not yet been recognized to be, but could be, defective clusters. And this is an attribute to be generated mainly by physical reformatting as will be described later. In that case, the first address field 31b of the defect entry 31 indicates the physical address of the top sector of the first one of the potential defective clusters in that area and the second address field 31d indicates the size (e.g., the number of clusters) of those potential defective clusters.
Furthermore, if the first status field 31a has a value “0111”, it means that this is a defective cluster in the spare area 15 (such an attribute will be referred to herein as “UNUSE”).
In this case, unless the attribute is the SPR or UNUSE attribute, the information about the location of the defective cluster is usually specified by the first address field 31b of the defect entry 31. On the other hand, if the attribute is the SPR or UNUSE attribute, the information about the location of the defective cluster is usually specified by the second address field 31d thereof.
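The attribute values described above, and the rule just stated about which address field carries the location of the managed cluster, can be summarized in a small lookup. This is only a sketch for illustration; the attribute names and the four-bit values follow the text above:

```python
# Four-bit first-status values and their attribute names, as listed above.
ATTRIBUTES = {
    0b0000: "RAD0",   # replacement allocated, user data already rewritten
    0b1000: "RAD1",   # replacement allocated, user data not rewritten yet
    0b0001: "NRD",    # no replacement cluster allocated yet
    0b0010: "SPR",    # spare-area cluster usable as a future replacement
    0b0100: "PBA",    # possibly-defective area (from physical reformatting)
    0b0111: "UNUSE",  # defective cluster inside the spare area
}

def location_field(first_status: int) -> str:
    """Return which address field specifies the managed cluster's location:
    SPR and UNUSE entries use the second address field (31d), while the
    other attributes use the first address field (31b)."""
    attr = ATTRIBUTES[first_status]
    return "second" if attr in ("SPR", "UNUSE") else "first"
```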
In the foregoing description, the defect entry 31 is supposed to contain location information about defective clusters. However, the clusters indicated by the defect entry 31 do not have to be defective ones. More specifically, the RAD0 attribute, for example, indicates that a replacement cluster has been allocated to a certain cluster and a replacement write operation has been performed on that replacement cluster. Thus, even if non-defective, a certain cluster may be intentionally replaced with a replacement cluster for some reason. Meanwhile, the NRD attribute indicates that no replacement cluster has been allocated to the defective cluster. However, this is an attribute indicating that no valid data has been written on (or can be retrieved from) the cluster with the NRD attribute. That is why a cluster on which no valid data has been written for some reason may be managed as having the NRD attribute.
The second status field 31c may provide flag information of four bits, for example. As shown in FIG. 17(C), if the second status field 31c is “0000”, it means that that field is not used. However, if the second status field 31c is “0100”, it means that the cluster specified by the first or second address field 31b or 31d has been subjected to physical reformatting as will be described later. This means that the defect that should be present in the cluster according to the first or second address field 31b or 31d may have already been repaired by cleaning during that physical reformatting and also means that there is no significant user data in either the defective cluster or the replacement cluster.
FIG. 18 shows some typical combinations of the first and second status fields 31a and 31c in the defect entry 31.
As for the PBA attribute, which is used to manage a defective cluster in the user data area 14, and the SPR attribute, which is used to manage a cluster in the spare area 15, a defect entry 31 whose second status field 31c is “0100”, indicating that the defect may have been repaired by physical reformatting, may be generated (such a status will be referred to herein as the “RDE status”).
Every attribute of the defect entry 31 but the PBA attribute is managed on a cluster (or block) basis. On the other hand, the PBA attribute can be used to manage an area that covers more than one cluster (or block), i.e., multiple clusters (or blocks).
The defect entries 31 included in the DFL 21 are managed while being sorted. More specifically, the defect entries 31 may be sorted and managed in ascending order, ignoring the most significant bit of their first status field 31a, for example. That is to say, management is performed collectively on a defect-attribute basis (with RAD0 and RAD1 regarded as having the same attribute), and then each group of defect entries 31 with the same defect attribute is sorted in ascending order according to the physical addresses of their clusters to be managed (i.e., the clusters indicated by the first and second address fields 31b and 31d).
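The sort order just described can be sketched as a sort key: the most significant bit of the first status field is masked off (so that RAD0 = “0000” and RAD1 = “1000” fall into the same group), and ties are broken by the physical address of the managed cluster. The tuple layout of an entry here is illustrative, not the on-disc format:

```python
def sort_key(entry):
    """Sort key for defect entries: group by the first status field with its
    most significant bit masked off (so RAD0 = 0b0000 and RAD1 = 0b1000
    compare equal), then by the physical address of the managed cluster.
    An entry is a (first_status, first_address, second_address) tuple here;
    SPR (0b0010) and UNUSE (0b0111) entries are located by their second
    address field, the other attributes by their first."""
    status, addr1, addr2 = entry
    managed_addr = addr2 if status in (0b0010, 0b0111) else addr1
    return (status & 0b0111, managed_addr)

# RAD1 at address 500, RAD0 at address 100, SPR at address 9500:
entries = [(0b1000, 500, 9000), (0b0000, 100, 8000), (0b0010, 0, 9500)]
entries.sort(key=sort_key)
# RAD0/RAD1 now form one group sorted by address, followed by the SPR entry.
```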
Hereinafter, the physical reformatting will be described.
To perform a write operation on the optical disc 1 for the first time, initialization formatting is carried out in order to determine the arrangement of the user data area 14 and the spare area 15 in the data area 5. However, an optical disc 1 on which some data has already been written may also be formatted separately. Such formatting is called “physical reformatting”.
The greater the number of defective clusters, the more frequently replacement clusters need to be accessed. As a result, the read/write rate (or performance) could drop so steeply that some inconveniences could occur, particularly when a moving picture is recorded or played back. Also, the spare area 15 including replacement clusters needs to be secured in the data area 5. That is why if too large a spare area were provided to prepare for a situation where replacements need to be made frequently, then the amount of user data that could be stored (i.e., the space left in the user data area 14) would decrease significantly. In that case, the physical reformatting (or re-initialization) would be carried out after the dirt on the surface of the disc has been cleaned up. This is because defects produced subsequently on the disc (which will be referred to herein as “subsequent defects”) are often caused by fingerprints, dust or any other dirt that has been deposited on the disc surface. That is why, by cleaning such dirt up, most of those subsequent defects could possibly disappear. The physical reformatting could be done by determining whether each defective cluster registered with the DFL 21 actually has a defect or not, by performing a test write operation called a “certify operation” on the entire surface of the disc. The physical reformatting could also be done by changing the defect attributes of the defect entries 31 on the DFL 21 into an attribute indicating that the defect may have been repaired (e.g., changing the second status field 31c of each defect entry 31 into “0100”). Or the physical reformatting could even be done by initializing the DFL 21 (i.e., returning it to a state in which no defective clusters have been registered yet), as is done when the disc 1 is subjected to initialization formatting before a write operation is performed thereon for the first time.
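Of the physical-reformatting strategies just listed, the one that rewrites the DFL without certifying the whole disc can be sketched as follows. The entry layout is an illustrative tuple, not the on-disc format:

```python
RDE = 0b0100  # second-status value meaning "defect may have been repaired"

def mark_repaired(dfl_entries):
    """Sketch of one physical-reformatting strategy described above:
    instead of certifying the entire disc surface, every registered
    entry's second status field is set to 0100 (the RDE status),
    recording that its defect may have been repaired by cleaning.
    Entries are (first_status, second_status, addr1, addr2) tuples here."""
    return [(s1, RDE, a1, a2) for (s1, s2, a1, a2) in dfl_entries]
```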
It should be noted that once the physical reformatting has been carried out, all of the user data stored in the data area 5 will become invalid data except in some special situations. Such a “special situation” could occur if a physical reformatting function that changes only the size of the spare area 15 were provided, with the user data stored in the data area 5 kept valid.
Hereinafter, it will be described how to manage those clusters that are usable as replacement clusters from the spare area 15. Such clusters that can be allocated as replacement clusters may be managed in the following manner, for example.
First of all, in a write-once information recording medium, those clusters may be managed using pointer information that indicates the location (i.e., physical address) of the next available cluster in the spare area 15 (see Patent Documents No. 3 (U.S. Pat. No. 5,715,221) and No. 4 (Japanese Patent Application Laid-Open Publication No. 2006-344375), for example). Also, to get such management done, those clusters in the spare area 15 need to be used in some restricted order. Such a restriction may be laid down so that the clusters in each spare area 15 should be used in the direction in which the track path is scanned (i.e., the clusters should be used in ascending order according to their physical addresses) or that a number of spare areas 15 should also be used in ascending order according to their physical addresses.
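Under the ordering restriction just described, a single pointer to the next available cluster suffices to manage the whole spare area. The following is a minimal sketch of such pointer-based management; the addresses and sizes are illustrative:

```python
class SpareAreaPointer:
    """Sketch of pointer-based spare-cluster management for a write-once
    medium: because spare clusters must be consumed in ascending
    physical-address order, a single pointer to the next available
    cluster is all the state that needs to be recorded."""

    def __init__(self, first_cluster: int, cluster_count: int):
        self.next_available = first_cluster
        self.end = first_cluster + cluster_count  # one past the last cluster

    def allocate(self) -> int:
        """Return the next replacement cluster and advance the pointer."""
        if self.next_available >= self.end:
            raise RuntimeError("spare area exhausted")
        cluster = self.next_available
        self.next_available += 1
        return cluster
```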
Alternatively, as already described with reference to FIG. 17, available clusters and unavailable clusters in the spare area 15 could be managed as defect entries 31 on a cluster-by-cluster basis (see Patent Document No. 2 (Japanese Patent No. 3858050), for example). In that case, all clusters included in the spare area 15 are managed on the DFL 21 as belonging to either a defect entry 31, of which the first status field 31a is 0010 (SPR), or a defect entry 31, of which the first status field 31a is 0111 (UNUSE). According to such a method, it can be seen at once that in a location indicated by a defect entry 31 with the SPR attribute, which is used to manage the locations of clusters that are usable as replacement clusters, there is a cluster usable as a replacement cluster. Also, as for the spare area 15, the replacement cluster may be selected anywhere as long as the cluster is managed using a defect entry 31 with the SPR attribute.
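This entry-based management can be sketched as follows: allocating a replacement cluster means picking any SPR entry, and unlike the pointer scheme above, no ordering restriction applies. The (first_status, address) pair layout here is an illustrative simplification:

```python
def allocate_replacement(dfl_entries):
    """Sketch of the entry-based spare-cluster management just described:
    every cluster in the spare area appears in the DFL as either SPR
    (0b0010, usable as a replacement) or UNUSE (0b0111, defective).
    Any SPR entry may be chosen; the chosen entry is removed so that the
    cluster is no longer offered as available."""
    for i, (status, addr) in enumerate(dfl_entries):
        if status == 0b0010:       # SPR: usable as a replacement cluster
            del dfl_entries[i]
            return addr
    raise RuntimeError("no usable spare cluster registered")
```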