1. Field of the Invention
The present invention relates to a cache apparatus and its control method, and more particularly to a cache apparatus including a cache lock device and its control method.
2. Description of the Related Art
In recent years, for the purpose of improving the utilization efficiency, especially the hit ratio, of a cache, a cache lock device, which inhibits the rewriting of data in a way holding specific data, has been drawing attention.
Before proceeding further, a brief description of a cache will be presented. As shown in FIG. 17, a small-capacity, high-speed memory 3 interposed between an external memory 1, which has a large capacity but a low data transfer rate, and a CPU 2, the arithmetic unit that processes the data, is generally referred to as a cache.
The large-capacity main memory 1 is generally slow at the so-called access, that is, the process of sending requested data to the CPU 2 after the CPU outputs an address. The object of providing the cache 3 is to copy frequently accessed data from the memory 1 into the cache 3, which has a high access speed, at the time of the first access, in order to shorten the response time of subsequent accesses.
In the following, referring to FIG. 5(A), the configuration of a conventional cache 3 will be described first.
The cache 3 is composed of a selector, which selects an entry based on the index part of the address of the input data, and a memory array formed of a plurality of entries. The memory array consists of a tag memory array TagA, a data memory array DatA, a valid bit array VBA, an LRU bit array LRUBA, and a lock bit array LBA.
In such a cache 3, the combined arrays constitute a unit called a way (the constitution shown by the range surrounded by the broken line in FIG. 5(A)). In this example, however, only ways W0 and W1 have the lock bit array LBA. The cache 3 shown consists of four ways, namely, way 0 (W0) to way 3 (W3), and is generally referred to as a 4-way cache.
An example of the constitution of data 4 used in such a cache apparatus 3 is shown in FIG. 5(B). The data 4 is divided into, for example, an address part 5 and a data part 6, and the address part 5 is further subdivided into a tag part 7, an index part 8, and an offset part 9.
The data sizes of the various parts are, for example, 20 bits for the tag part 7, 6 bits for the index part 8, and 6 bits for the offset part 9 (20 + 6 + 6 = 32 bits in all), and the data part 6 consists of, for example, 256 bits (64 bytes).
A row having the same index number among various arrays is called collectively an entry, and one entry 10 comprises a tag part Tag, a data part Dat, a valid bit VB, an LRU bit LRUB, and a lock bit LB (a lock bit LB is given only to the entries within the ways W0 and W1).
As will be described in detail later, the tag part Tag stores tag data TAG which are high order bits of an address stored in the tag part 7 of data to be stored, and the data part Dat stores data DA0 stored in the data part 6 of the data to be stored.
The valid bit VB represents the validity/invalidity of the data being stored, and the LRU bit LRUB indicates the entry 10 whose data may be overwritten by data read from the memory 1.
The lock bit LB is used for designating the entry 10 which is desired not to be overwritten.
For an arbitrary memory address, the entry that will store the data at that address is selected as the entry 10 whose row number matches the index value stored in the index part 8 of the data to be stored in the cache apparatus 3.
In other words, since in the cache apparatus 3 there exists one entry 10 having the same index value in each of the ways W0 to W3, in the case of the 4-way cache there exist 4 such entries, namely, 10-0, 10-1, 10-2, and 10-3.
As a result, in storing data of an arbitrary address in the cache, or retrieving data from the cache, it is only necessary to access the 4 entries 10-0, 10-1, 10-2, and 10-3.
In making access to data of a certain memory address, first, the 4 entries 10-0, 10-1, 10-2, and 10-3 are retrieved in order to check whether the data of the address exist in the cache 3.
When the data exist, the entry 10-n storing the data is accessed, whereas when the data do not exist, they are read from the memory 1 and stored in an appropriate entry among the 4 entries 10-0, 10-1, 10-2, and 10-3; access is then made to that entry 10-n.
In storing data in a prescribed entry 10-n, the index value in the index part 8 of the data to be stored is extracted, and the entry 10-n whose row number is identical to the index value is selected. Then, the tag data TAG of the tag part 7, which are the high order bits of the address part 5 of the data to be stored, are stored in the tag part Tag of the entry 10-n, the data DA in the data part of the data to be stored are stored in the data part Dat of the entry 10-n, and the valid bit VB is assigned the value 1.
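The store operation described above can be sketched in Python. This is only an illustration, not the patent's circuit: the Entry class, its field names, and the store_line function are hypothetical, assuming the 20-bit tag / 6-bit index / 6-bit offset layout of FIG. 5(B).

```python
class Entry:
    """One cache entry: tag part, 64-byte data part, valid bit."""
    def __init__(self):
        self.tag = 0
        self.data = bytearray(64)
        self.valid = 0

def store_line(way, addr, line):
    """Select the entry by the index bits of addr, write the tag
    (high order 20 bits) and the 64-byte line, and set valid to 1."""
    index = (addr >> 6) & 0x3F          # bits 11..6 select the entry
    entry = way[index]
    entry.tag = (addr >> 12) & 0xFFFFF  # bits 31..12 are the tag data TAG
    entry.data[:] = line                # the data DA of the data part
    entry.valid = 1
    return index

# A way is simply a list of 64 entries (one per index value).
way = [Entry() for _ in range(64)]
store_line(way, 0xdffcaabb, bytes(64))
```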
Here, the valid bit VB is a bit showing validity/invalidity of the data DA of the data part Dat in the entry 10-n.
In retrieving data of a designated memory address from within the cache, first, as shown in FIG. 5(A), each of data of the tag part (Tag0 to Tag3), data part (Dat0 to Dat3), valid bit VB (VB0 to VB3), LRU bit LRUB (LRUB0 to LRUB3), and lock bit (LB0 to LB3) of the 4 entries, 10-0, 10-1, 10-2, and 10-3 are read from each of the ways W0 to W3.
Next, as shown in FIG. 6, these values are compared with the tag data TAG of the designated memory address in the data to be stored, and cache hit signals W0hit to W3hit and cache miss signals W0miss to W3miss are generated.
The hit/miss decision circuit shown in FIG. 6 comprises four comparator circuits 601 to 604, each comparing the 20-bit TAG with one of the 20-bit values Tagn (n=0 to 3); four AND gates 605 to 608, each receiving the output of one comparator circuit at one input and the corresponding valid bit VBn (n=0 to 3) at the other; and four inverters 609 to 612, each receiving the output of one AND gate.
The cache hit signals Wnhit (n=0 to 3) are signals which take the value 1 when TAG=Tagn and VBn=1; a value of 1 means that the entry of that way is storing the data of the designated memory address.
The cache miss signals Wnmiss (n=0 to 3) are generated by the inverters as the inverses of the hit signals; they take the value 1 when TAG ≠ Tagn or when VBn = 0.
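The hit/miss logic of FIG. 6 can be sketched as follows. This is a hedged Python illustration of the described comparator/AND/inverter structure; the function name and argument order are hypothetical.

```python
def hit_miss_signals(TAG, tags, valid):
    """Per-way signals mirroring FIG. 6: Wnhit = (TAG == Tagn) AND VBn,
    and Wnmiss is the inverse of Wnhit (the inverter outputs)."""
    hits = [int(TAG == t and v == 1) for t, v in zip(tags, valid)]
    misses = [1 - h for h in hits]
    return hits, misses
```

On a hit, exactly one Wnhit is 1; on a miss, all Wnmiss signals are 1.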
From what is described above, exactly one of W0hit to W3hit takes the value 1 on a cache hit, and these signals can be used to control the connection between the data buses on the CPU side and the data buses (data0 to data3) of the ways, as shown in FIG. 7(A).
Further, on a cache miss all of W0miss to W3miss take the value 1, and in combination with the select signals W0sel to W3sel, which will be described later, they can be used to control the connection between the data buses on the memory side and data0 to data3, as shown in FIG. 7(B).
The bus selection circuit shown in FIG. 7(B) comprises an AND gate 701, which receives the cache miss signals Wnmiss (n=0 to 3), and four AND gates 702 to 705, each with one input connected in common to the output of gate 701 and the other input connected to one of the way selection signals Wnsel (n=0 to 3).
When the data of the designated memory address do not exist in the four retrieved entries 10-0, 10-1, 10-2, and 10-3, one entry is selected from among the four, the data are read from the memory and stored in the selected entry; that is, the data are written over the currently stored data.
For the selection of the entry, use is made of an LRU. LRU stands for "least recently used", and it is represented by an LRU bit which holds the order of access among the entries having the same row number in the respective ways, so that the least recently accessed entry becomes the object of rewriting.
The smallest value of the LRU bit LRUB is 0 which means that it is the entry which is most recently accessed among these entries.
The largest value of the LRU bit LRUB is 3 which shows that it is the least recently accessed entry among these entries.
In other words, the LRU bits LRUBs of the various entries are updated as needed whenever an entry is accessed, so that each shows one of the values 0 to 3 reflecting the recency of its access.
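The LRU bookkeeping can be sketched as follows. The patent does not specify the update algorithm, so this is an assumed re-ranking scheme that merely preserves the property stated above: the four values remain a permutation of 0 to 3, with 0 for the most recently accessed entry.

```python
def update_lru(lrub, accessed_way):
    """Re-rank the four LRU values 0..3 after an access: the accessed
    entry becomes 0 (most recent); entries that were more recent than
    it are aged by one; older entries keep their values."""
    old = lrub[accessed_way]
    return [0 if i == accessed_way else (v + 1 if v < old else v)
            for i, v in enumerate(lrub)]
```

For example, accessing way 2 when the ranks are [0, 1, 2, 3] yields [1, 2, 0, 3]; the result is always again a permutation of 0 to 3.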
Accordingly, when data of a certain memory address are accessed and the data do not exist in the cache, the data are read from the memory and written over the entry with the oldest access time, namely, the entry whose LRU bit LRUB has the maximum value (3 in this example).
In order to determine the way having the entry to be rewritten, the way selection circuit 801 shown in FIG. 8 receives LRUBn (n=0 to 3) as inputs, and the way selection signal Wnsel (n=0 to 3) corresponding to the maximum value among them is given the value 1.
The reason for using the LRU bit array for the selection of an entry for overwriting of data is that it is known empirically that data which has once started to be accessed less frequently has a smaller possibility of being accessed again later.
What has been described in the above is the operation of a general cache using LRU bit array.
Note, however, that the generation logic of W0sel to W3sel is somewhat different for a cache configuration having a lock bit array. Namely, as shown in FIG. 12, since a lock bit LB value of 1 indicates that overwriting of the data in the entry 10 is inhibited, Wnsel (n=0, 1) must not be 1 when LBn (n=0, 1) equals 1.
Because of this, the value of the LRU bit is masked with the inverted logic of the lock bit, and W0sel to W3sel are generated using the largest masked value. As a result, when LB0 is 1, the masked LRUB0 is 0 and cannot be the largest value, so W0sel cannot take the value 1 and way 0 will not be selected. Similarly, when LB1 is 1, W1sel cannot be 1 and way 1 will not be selected.
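The masking described above can be sketched in Python as follows (a hedged illustration of the FIG. 8 / FIG. 12 logic; the function name is hypothetical, and ways without a lock bit array are simply treated as having a lock bit of 0):

```python
def select_way(lrub, lock):
    """Mask each LRU value with the inverted lock bit, then pick the
    way with the largest masked value as the rewrite target."""
    masked = [0 if lb else v for v, lb in zip(lrub, lock)]
    return max(range(len(masked)), key=lambda i: masked[i])
```

With no locks, the way whose LRUB is 3 is chosen; if that way is locked, its masked value is 0 and the way with the next largest LRUB wins.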
Moreover, as shown in FIG. 7(B), control signals for the connection between the data buses of the memory 1 and the data buses data0 to data3 of the respective ways are generated from W0miss to W3miss and W0sel to W3sel.
Based on these control signals, the data buses of the ways designated by W0sel to W3sel and the data buses on the memory side are connected, and the data read from the memory are written to the selected entry of the designated way.
Here, referring to a simple specific example, the case of executing cache lock using the conventional cache apparatus 3 will be described.
First, referring to FIG. 9, it will be described where each piece of data in the memory 1 is stored in the cache 3.
As an example, assume that data stored in address 0xdffcaabb in the memory 1 are 0x88. Hereafter, in order to avoid confusion, values of the hexadecimal system will be preceded by the symbol 0x.
In the above, the address is given in the hexadecimal system which will be shown in binary numbers as given below.
1101 1111 1111 1100 1010 1010 1011 1011
(d) (f) (f) (c) (a) (a) (b) (b)
Of such an address, the 20 high order bits (for example, from the 31st bit to the 12th bit) are the tag data TAG, the 6 intermediate order bits (for example, from the 11th bit to the 6th bit) are the index (Index), and the remaining 6 low order bits (the 5th bit to the 0th bit) are the offset.
Accordingly, in the above address, the tag data, index, and offset are given respectively by:
tag data: 1101 1111 1111 1100 1010 (=0xdffca)
index: 1010 10 (=10 1010 = 0x2a)
offset: 11 1011 (=0x3b)
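The decomposition worked through above can be expressed as simple bit extraction. The following Python sketch assumes the 20/6/6 split described earlier; the function name is hypothetical.

```python
def split_address(addr):
    """Split a 32-bit address into tag (bits 31..12), index (bits 11..6),
    and offset (bits 5..0), per the layout of FIG. 5(B)."""
    tag = (addr >> 12) & 0xFFFFF   # 20-bit tag data TAG
    index = (addr >> 6) & 0x3F     # 6-bit index, selects the entry row
    offset = addr & 0x3F           # 6-bit offset into the 64-byte line
    return tag, index, offset

# For the example address 0xdffcaabb:
tag, index, offset = split_address(0xdffcaabb)
# tag = 0xdffca, index = 0x2a, offset = 0x3b
```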
The storage location of each data are determined by the value of LRU bit (LRUB), and the values of the index and the offset of the address in the cache 3.
In the above example, since the index is 0x2a (42 in the decimal system), the data are stored at address 42 of the ways W0 to W3 of the cache 3 as shown in FIG. 9.
As shown in FIG. 9, the entry of each way W0 to W3 consists of a tag part Tag of 20 bits, a data part Dat of 256 bits (=64 bytes), 1 valid bit VB, and 1 modify bit Mo. The value of the offset determines where in the 64-byte data part (Dat) the data are stored.
Since in this example, the offset is 0x3b (59 in the decimal system) as shown in FIG. 9, the data (0x88) of 8 bits at the address 0xdffcaabb are stored at the 59th byte.
In this case, when a specific address is accessed, it is known empirically that adjacent addresses are likely to be accessed soon afterwards. Therefore, in storing data from the memory 1 into the cache 3, data are stored in units of the size of the data part (Dat) of one entry 10 of the cache 3 (64 bytes in FIG. 9).
Accordingly, in the above example, in storing the data of 0xdffcaabb in the cache, the data of all addresses that have the same tag and the same index as 0xdffcaabb, namely addresses 0xdffcaa80 (with the offset 0x00) to 0xdffcaabf (with the offset 0x3f), are collectively stored at address 42 of the cache 3. The tag data TAG of the address (0xdffca in the above example) are written to the tag part (Tag) of the entry storing the data as shown in FIG. 9, and 1 is written to the valid bit VB of the entry 10 storing the data. This shows that valid data are stored in the entry 10 of the cache 3.
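The 64-byte line that is transferred as a unit can be computed by clearing the offset bits. A minimal sketch, assuming the 64-byte line size above; LINE_SIZE and line_range are illustrative names.

```python
LINE_SIZE = 64  # bytes in one entry's data part (Dat)

def line_range(addr):
    """Return the first and last address sharing the same tag and
    index as addr, i.e. the 64-byte line fetched as a unit."""
    base = addr & ~(LINE_SIZE - 1)  # clear the 6 offset bits
    return base, base + LINE_SIZE - 1
```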
Moreover, in order to decide whether data of address which becomes the object of a load/store instruction are stored in the cache, the index and the tag data will be used.
For example, in order to decide whether data of address 0xdffcaabb are stored in the cache, the data, if present, should be stored in the entry at address 42 of the cache 3, since the index of 0xdffcaabb is 0x2a (42 in the decimal system). Accordingly, as shown in FIG. 10, the value of the tag part (Tag) of the entry 10 at address 42 of the cache 3 is read, and a 20-bit comparator 1101 confirms whether the value is equal to the tag value 0xdffca. Moreover, the output of the comparator 1101 needs to be masked with the valid bit using an AND gate 1102 in order to confirm whether the data are valid.
The valid bit prevents a misjudgment that data exist in the cache when, for example after a reset, the tag field of address 42 of the cache happens to hold 0xdffca by chance; it also serves the storage check at the first load/store instruction to addresses 0xdffcaa80 to 0xdffcaabf.
This is because no initialization is executed, since the circuit for initializing all the internal bits of the cache would become large, and even if they were initialized, a misjudgment would occur when the same address as that of the initialization is input. The role of the valid bit is to avoid such a misjudgment. When the power supply is turned on, all the valid bits are cleared to 0, and the valid bit of an entry whose data part receives data transferred from the memory is set to 1.
By so doing, it is possible to guarantee that the tag part and the data of the entry with the valid bit 1 have effective values (different from those at the initial state).
In a cache having the configuration described above, the replacement of entry data based on the LRU bit is a method for retaining frequently accessed data within the cache as much as possible. However, there are cases, such as data used by the instructions of an operating system (OS), in which the data are not accessed frequently but, once accessed, are desired to be transferred as fast as possible.
In order to handle such a case, a mechanism is needed by which entries storing data of specified addresses designated by the user are excluded from the objects of data replacement. This is the cache lock mechanism. FIG. 11 and FIG. 12 show the configuration of a cache provided with the cache lock mechanism.
As in the above, a lock bit LB is provided in each entry of way W0 and way W1 in the cache apparatus. This lock bit LB can be written by giving the way and the cache address in a lock instruction. Accordingly, if it is desired to make data resident in the cache, all that needs to be done is to write 1 to the lock bit of the entry storing the data, using the lock instruction.
An entry with lock bit 1 is excluded from the overwrite object of new data. FIG. 11 illustrates the case in which data with address 0xdffcaa80 to address 0xdffcaabf are stored in the cache.
Since the index is 0x2a (42 in the decimal system), the data will be stored at address 42 of one of the ways W0 to W3. In this example, since the LRU bit value at address 42 of way W0 is the largest, the data from the memory would be stored at address 42 of way W0 if the cache lock mechanism did not exist. Since, however, the lock bit at address 42 of way W0 is 1, that entry is excluded from the object of replacement by new data, and address 42 of way W3, which has the next largest LRU bit value LRUB, is selected as the storage destination. This is realized by selecting the maximum value obtained by masking the LRU output values LRUBs from the various ways with the inverses of the output values of the lock bits LBs, as shown in FIG. 12.
However, according to the conventional cache lock mechanism, data in the entry with written lock bit 1 are excluded from the object of data replacement regardless of the value of the valid bit.
In other words, when the lock bit of an entry with the valid bit 0 (storing invalid data) is given the value of 1, the entry cannot store new data sent from the memory, and continues to store the invalid data, which reduces the effective capacity of the cache.
In order to avoid this situation, it is necessary to transfer data desired to be made resident from the memory and to store them in the entry to be locked at the timing of setting the lock bit to 1 as shown in FIG. 13.
That is, after issuing an instruction that accesses the data desired to be locked in the cache, it is necessary to issue the lock instruction while the data still remain within the cache. Accordingly, determining the timing for issuing the lock instruction is a delicate problem.
For this reason, in locking data, it is necessary to transfer data desired to be locked to the cache from the memory irrespective of whether the data will actually be accessed during the program. This results in a drawback that a transfer which could be unnecessary to begin with might have to be executed.
Moreover, it is difficult to specify in advance which data will actually be accessed during the program. Since such a situation arises frequently, and the transfer of data from the memory to the cache is slow, the deterioration in performance caused by the need to execute transfers that are theoretically unnecessary to begin with is substantial.
In order to resolve such problems, a method of enhancing the cache hit ratio by updating invalid data by making effective use of idle cycles has been disclosed in, for example, Japanese Unexamined Patent Application Laid-Open No. Hei 4-324547. However, neither disclosure nor suggestion is given in that publication as to a cache lock device constructed in such a way that data desired to be made resident in the cache are protected against easy overwriting in the cache apparatus. Moreover, a technique for obtaining a cache memory having a high hit ratio in the initial state by reducing the capacity and area of the tag memory is disclosed in Japanese Unexamined Patent Application Laid-Open No. Hei 6-243045, but again neither disclosure nor suggestion is provided as to such a cache lock device.
Furthermore, a memory device provided with an address comparison means which decides whether the contents of a tag register matches the block number of an access address is disclosed in Japanese Unexamined Patent Applications Laid Open No. Hei 8-339331, but neither disclosure nor suggestion is provided as to a cache lock device constructed in such a way that data desired to be made resident in the cache is protected against easy overwriting on the cache apparatus.
It is therefore an object of the present invention to provide a cache apparatus including a cache lock device, and its control method, by which the access speed to data desired by the user is enhanced by preventing invalid data from being locked and retained in the cache, that is, by enhancing the cache efficiency.
In order to achieve the above object, this invention adopts the following basic technical constitution. Namely, a first mode according to this invention is a cache lock device including a main memory, a cache memory, a CPU which outputs a first address of corresponding data on the main memory to a bus in response to a cache lock instruction, and outputs a corresponding second address on the main memory to the bus in response to an access instruction, a first entry selection circuit which selects a first entry on the cache memory upon receipt of the first address, a write circuit which writes the tag data of the first address in a tag part of the first entry, a lock bit change circuit which sets active a lock bit of the first entry, a second entry selection circuit which selects a corresponding second entry on the cache memory in response to the second address, a tag comparator circuit which compares the value TAG of a tag part of the second address with the value Tag of the tag part of the second entry, a lock bit detection circuit which detects the lock bit state of the second entry, an LRU output conversion circuit which changes the value of an LRU output so as to make the second entry the object of rewriting when the lock bit of the second entry is active, the comparison result of the tag values is in agreement, and a valid bit is inactive, and an LRU control circuit which executes writing to the rewrite-object entry determined by the LRU output conversion circuit.
By adopting such a constitution, invalid data can be prevented from being locked and retained within the cache, enhancing the efficiency of the cache; hence it is possible to obtain a cache apparatus including a cache lock device which enhances the access speed to data desired by the user.
Moreover, a second mode according to this invention is, in a cache lock device including a main memory, a cache memory which is formed by arranging a plurality of ways composed of a collection of a plurality of entries consisting of at least a tag memory array, a data memory array, a valid bit array and an LRU bit array, where a lock bit array is provided in each entry constituting at least a part of the ways, and a CPU which outputs address of data to be transferred to a corresponding cache memory on the main memory in response to a cache lock instruction, a load/store instruction or the like, the cache lock device which includes a way selection circuit which selects a way that has an entry on the cache memory having an entry number the same as the index value included in an index part of the data and also has a lock bit, upon receipt of the address in response to the cache lock instruction, an entry selection circuit which selects an entry on the cache memory having the entry number identical to the index number included in the index part of the data in the selected way upon receipt of the address, a write circuit which stores the tag data of the tag part included in the address of the data to be made resident in the selected entry, and a lock bit change circuit which sets the lock bit value to a second value in response to the storage of the tag data.
In a cache lock device with such a constitution, it is possible to determine easily and quickly the place of residence of the data desired to be made resident in the cache memory.
Furthermore, a third mode according to this invention is, in a cache lock device including a main memory, a cache memory formed by arranging a plurality of ways composed of a collection of a plurality of entries consisting of at least a tag memory array, a data memory array, a valid bit array, and an LRU bit array, where a lock bit array is provided in each entry constituting at least a part of the ways, and a CPU which outputs the address of data to be transferred to a corresponding cache memory on the main memory in response to a cache lock instruction, a load/store instruction or the like, the cache lock device which is composed of an entry selection circuit in which, in transferring a desired data from the memory means to the cache memory, the cache lock device selects from each of the plurality of ways an entry having the same entry number as an index value of an index part constituting the data desired to be transferred which exists within the memory means, in response to a load instruction or a storage instruction, a lock bit detection circuit which detects whether the lock bit array is provided for each of the plurality of selected entries, and outputs a discrimination signal, a valid bit decision circuit which decides when the discrimination signal from the lock bit detection circuit in a specified entry indicates the presence of a lock bit and its value is a second value, whether the valid bit in the entry is the second value, a data overwrite disabling circuit which inhibits the overwrite of data in a data part constituting the data desired to be transferred to the data part of the entry when the valid bit in the entry is the second value, a valid bit rewrite circuit which enables the overwrite of data of the data part constituting the data desired to be transferred to the data part of the entry when the value of the lock bit in the specified entry is the second value, the value of a tag part of the selected entry matches the tag constituting the data to 
be transferred, and the value in the valid bit array is the first value, and rewrite the bit value of the valid bit in the valid bit array of the entry to the second value, and an LRU output conversion circuit which changes the value to the first value.
By adopting such a constitution, when new data desired to be made resident in the cache memory are generated while a plurality of data are already stored in the cache memory, it is possible to judge exactly, easily, and quickly which of the plurality of data stored in the cache memory needs to be replaced by the data newly desired to be made resident.
A fourth mode according to this invention is, in a cache lock device including a main memory, a cache memory formed by arranging a plurality of ways composed of a collection of a plurality of entries consisting of at least a tag memory array, a data memory array, a valid bit array, and an LRU bit array, where a lock bit array is provided to each entry constituting at least a part of the ways, and a CPU which outputs to a bus the address of data to be transferred to a corresponding cache memory on the main memory in response to a cache lock instruction, a load/store instruction or the like, the cache lock device includes a control consisting of a means for extracting the data in a tag part and the index in an index part of the address of data desired to be made resident in the cache memory in response to a cache lock instruction, a means for selecting a way having the lock bit array from among the plurality of ways, a means for selecting an entry having the same entry number as the index value of the index part in the address part of the data desired to be made resident in the cache apparatus, a means for storing the data of the tag part of the data desired to be made resident in the tag part of the selected entry, and a means for setting, to a second value, the value of the lock bit in the entry where the tag part data of the data desired to be made resident are stored.
By adopting such a constitution, it is possible to execute the locking operation of data easily and quickly when data which need to be made resident in the cache memory are generated in the early stage after a reset.
A fifth mode according to this invention is, in a cache lock device including a main memory, a cache memory formed by arranging a plurality of ways composed of a collection of a plurality of entries consisting of at least a tag memory array, a data memory array, a valid bit array, and an LRU bit array, and a CPU which outputs to a bus the address of a corresponding data on the main memory in response to a cache lock instruction, a load instruction, a store instruction or the like, a cache lock method which consists of a first step of selecting data to be transferred to the cache memory and extracting an index value of designated address when transferring the data to be made resident in the cache memory to the cache memory, a second step of selecting from cache way an entry having the entry number that is the same as the index value of the data to be transferred to the cache memory, a third step of discriminating for each of the selected entries whether it possesses a lock bit LB, a fourth step of judging whether the lock bit LB is a second value or a first value when there exists an entry having the lock bit LB in the selected entries, a fifth step of comparing the value of a tag part of the entry with the tag value of the designated address when it is found in the fourth step that the lock bit LB of the selected entry is the second value, a sixth step of judging whether the value of a valid bit of the entry is the second value or the first value when it is found in the fifth step that the lock bit LB of the entry is the second value and the result of comparison of the tag values is in agreement, a seventh step of inhibiting the overwrite to the entry by masking the LRU output of the entry to the first value when it is found in the sixth step either that the comparison result of the tags is in disagreement though the value of the lock bit is the second value or that the value of the valid bit is the second value, an eighth step of making the entry to be an object 
of overwrite by setting the LRU output of the entry to the second value and masking the LRU outputs of the other selected entries to the first value when it is found in the sixth step that the value of the lock bit is the second value, the comparison result of the tag values is in agreement, and the value of the valid bit is the first value, a ninth step of selecting the entry with the largest LRU output from among the selected entries, and a 10th step of overwriting the data and writing the second value to the valid bit to the entry selected in the ninth step.
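The steps above, in particular the LRU output conversion of the seventh and eighth steps, can be sketched in Python. This is a hedged illustration under the convention that the first value is 0 and the second value is 1 (for the LRU output, "second value" is read as the maximum rank); the function name and argument order are hypothetical, and ways without a lock bit are modeled with a lock value of 0.

```python
def convert_lru_outputs(lrub, lock, valid, tags, TAG):
    """Steps 4-8: a locked entry whose tag matches the designated
    address but whose valid bit is still 0 is forced to be the rewrite
    target; any other locked entry is masked out of replacement."""
    out = list(lrub)
    forced = None
    for i in range(len(lrub)):
        if lock[i] == 1:
            if tags[i] == TAG and valid[i] == 0:
                forced = i   # step 8: first fill of a locked entry
            else:
                out[i] = 0   # step 7: locked entry, keep it resident
    if forced is not None:
        out = [0] * len(out)       # mask the other entries (step 8)
        out[forced] = max(lrub)    # give the target the largest rank
    return out                     # step 9 then picks the argmax
```

In the first case the locked-but-invalid entry wins the step-9 comparison and receives the data; in the second, the locked entry holding valid data is protected as in the conventional mechanism.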
By adopting such a method, when new data desired to be made resident in the cache memory are generated, it is possible to judge exactly, easily, and quickly which data out of the plurality of data already stored in the cache memory need to be replaced by the new data.
Finally, a sixth mode according to this invention is, in a cache lock device including a main memory, a cache memory formed by arranging a plurality of ways composed of a collection of a plurality of entries consisting of at least a tag memory array, a data memory array, and an LRU bit array, and a CPU which outputs to a bus the address of a corresponding data on the main memory in response to a cache lock instruction, a load instruction, a store instruction or the like, the cache lock method consisting of a first step of executing the cache lock instruction, selecting data to be transferred to the cache memory, and extracting an index value of a designated address when the data to be made resident in the cache memory is transferred first to the cache memory following initialization of the cache lock device, a second step of selecting a way having a lock bit LB from among the ways, a third step of selecting from each way an entry having the entry number the same as an index value of the data to be transferred to the cache memory, a fourth step of selecting either one of the entries when it is found in the second step that there exist entries having the lock bit LB, and a fifth step of storing tag data of the data to be made resident in the cache memory in a tag part of the selected entry, and setting the value of the lock bit LB to the second value.
By adopting such a method, it is possible to execute the locking operation of the data easily and quickly when data that need to be made resident in the cache memory are generated in the initial stage following a reset.