A cache memory is provided between the central processing unit and the main memory of a computer system in order to speed up operation. The cache memory functions as a high speed buffer memory and stores a portion of the programs and data held in the main memory.
Virtual addresses are used with recent central processing units, so that address translation is required between virtual addresses and the real addresses of the cache memory and main memory. Since the size of a translation table becomes large as the address space becomes large, the table is generally structured hierarchically. It takes some time to traverse the hierarchical table and find a real address. In order to obtain a real address at high speed, a table having an associative function, called a Translation Lookaside Buffer (TLB), is provided in parallel with the hierarchical tables.
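The hierarchical table reference described above may be sketched as the following software model. All parameters here (two levels, 8-bit indices per level, a 12-bit page offset) are illustrative assumptions, not values taken from this document; the point is that each level requires a separate memory reference, which is the latency a TLB avoids.

```python
# Sketch of a two-level hierarchical translation-table walk.
# PAGE_OFFSET_BITS and LEVEL_BITS are assumed values for illustration only.
PAGE_OFFSET_BITS = 12
LEVEL_BITS = 8

def walk_two_level(root_table, virtual_address):
    """Resolve a virtual address through a two-level table (or None on fault)."""
    offset = virtual_address & ((1 << PAGE_OFFSET_BITS) - 1)
    vpn = virtual_address >> PAGE_OFFSET_BITS
    i1 = (vpn >> LEVEL_BITS) & ((1 << LEVEL_BITS) - 1)  # first-level index
    i2 = vpn & ((1 << LEVEL_BITS) - 1)                  # second-level index
    second_level = root_table.get(i1)   # first memory reference
    if second_level is None:
        return None                     # no translation present
    frame = second_level.get(i2)        # second memory reference
    if frame is None:
        return None
    return (frame << PAGE_OFFSET_BITS) | offset

# Usage: map the virtual page with indices (i1=1, i2=2) to real frame 0x5.
root = {1: {2: 0x5}}
va = (1 << (PAGE_OFFSET_BITS + LEVEL_BITS)) | (2 << PAGE_OFFSET_BITS) | 0x34
print(hex(walk_two_level(root, va)))  # -> 0x5034
```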
The TLB is therefore required to execute address translation at high speed and with a high hit probability while using a small circuit scale.
Two types of associative schemes, fully associative and set associative, are used for TLBs. With the former scheme, an input address is compared with all the data stored in the TLB to check for coincidence. If coincident data is stored in the TLB, a signal indicating its presence and the stored data are output.
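The fully associative lookup can be modeled in software as follows. In hardware every entry has its own comparator operating in parallel; the scan over entries below is only a sequential stand-in for that parallel comparison, and the entry values are hypothetical.

```python
# Illustrative model of a fully associative TLB lookup. Each entry pairs a
# virtual page number (tag) with a real page number. In hardware, one
# comparator per entry checks all tags in parallel; here we scan them.
def fully_associative_lookup(tlb, virtual_page):
    """Return (hit, real_page): hit indicates coincident data was found."""
    for tag, real_page in tlb:
        if tag == virtual_page:  # one comparator per entry in hardware
            return True, real_page
    return False, None

# Usage: a small TLB with three stored translations (hypothetical values).
tlb = [(0x12, 0x8A), (0x34, 0x9B), (0x56, 0xAC)]
print(fully_associative_lookup(tlb, 0x34))  # hit  -> (True, 0x9B)
print(fully_associative_lookup(tlb, 0x99))  # miss -> (False, None)
```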
With the latter, set associative, scheme, candidates for coincident data are first selected, and the input address is compared with only these candidate addresses to check for coincidence. If there is coincident data, a signal indicating its presence and the real address corresponding to the coincident data are output.
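The candidate selection step can likewise be sketched in software. The set count and way count below are assumed values for illustration: low-order bits of the virtual page number select one set, and only the entries of that set are compared, so only a few comparators are needed.

```python
# Illustrative model of a 2-way set associative TLB lookup.
# NUM_SETS and NUM_WAYS are hypothetical parameters, not from the document.
NUM_SETS = 4
NUM_WAYS = 2

def set_associative_lookup(tlb_sets, virtual_page):
    """Select candidate entries by set index, then compare only their tags."""
    index = virtual_page % NUM_SETS  # candidate selection by low-order bits
    tag = virtual_page // NUM_SETS
    for stored_tag, real_page in tlb_sets[index]:
        if stored_tag == tag:        # at most NUM_WAYS comparisons per lookup
            return True, real_page
    return False, None

# Usage: 4 sets x 2 ways; virtual page 0x13 maps to set 3 with tag 0x4.
tlb_sets = [[] for _ in range(NUM_SETS)]
tlb_sets[0x13 % NUM_SETS].append((0x13 // NUM_SETS, 0xC7))
print(set_associative_lookup(tlb_sets, 0x13))  # -> (True, 0xC7)
```

Because only one set is examined per lookup, two pages whose low-order bits coincide compete for the same few ways, which is the source of the lower hit probability discussed next.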
As above, since the fully associative scheme compares all data, the number of comparators increases and the circuit area becomes large. In order to suppress the increase in circuit area, a simple circuit having a small area is used as the comparator. Consequently, the time required for data comparison becomes long, and because of the large number of comparators, power consumption becomes great. Despite these disadvantages, the data coincidence probability becomes high because the comparison is executed over all stored data.
In the case of a TLB of the set associative scheme, the number of comparators is as small as two to four, because candidates for comparison are selected first and the coincidence check is performed only on these candidates. Accordingly, a high speed comparator circuit can be used and coincidence detection can be performed at high speed, although the comparator circuit becomes complicated. However, the restriction imposed by candidate selection lowers the data coincidence probability. Therefore, a coincidence probability equal to that of a fully associative TLB cannot generally be obtained unless the scale of the TLB storage circuit is increased about fourfold. This expansion of the circuit scale increases the number of operating circuits, leaving the problems of increased power consumption and increased circuit area.
An example of a coincidence-detecting circuit for the fully associative scheme is described in JP-A-59-231789, in which a coincidence-detecting circuit is provided independently for each memory cell for the comparison between search data and stored data. An example of a coincidence-detecting circuit of this type for higher speed operation is described in IEEE Journal of Solid State Circuits, Vol. 28, No. 11, pp. 1078-1083. According to this report, a reference signal line is provided in parallel with a coincidence-detecting signal line and a current supply line, and a differential type NOR gate is formed by coincidence-detecting MOSFETs for the purpose of high speed detection. Although high speed operation is realized, this approach restricts the circuit area because of the need for three wiring lines.
An example of a TLB of the set associative scheme is described in JP-A-60-117495, in which the circuit for comparison with search data utilizes the sense amplifier used for reading memory cell data.