1. Field of the Invention
The present invention relates to a data processing system having virtual memory addressing, and more particularly to such a system in which a central processor contains an address register whose low-order portion accepts a real class address and whose higher-order portion accepts a virtual page address, in which an address translation memory translates virtual page addresses into real page addresses, and in which a buffer memory is arranged between the central processor and a working memory, which contains a first address register for the real page address and a second address register for the real class address and which further comprises a data buffer subdivided into a plurality of banks of identical size and a plurality of tag/flag memories respectively assigned to the banks for storing the page address, a respective comparison circuit being provided at the outputs of the tag/flag memories for comparing the translated page address to the page address existing under certain conditions in the tag/flag memory, and in which, given equality of the two addresses, a control signal is emitted for the appertaining data buffer.
2. Description of the Prior Art
Larger data processing systems often work with virtual memory addressing. As a result, every virtual address must first be translated into a real, physical address before the working memory can be accessed. In order to keep the expense as low as possible, this translation occurs in a known manner in that the virtual memory and the physical memory are subdivided into pages of, for example, 2 k byte size, and a virtual page address is assigned to each physical page address by way of translation tables which, for example, can be stored in the working memory. In order to keep the number of read accesses to the address translation tables as low as possible, as is known from British Pat. No. 1,153,048 and German published application No. 26 05 617, a fast, partially associative address translation memory is provided in the central unit, in which a portion of the translation tables is temporarily duplicated. Because of its small size, such an address translation memory is preferably constructed and organized in the same manner as the buffer memory, or cache, often provided in larger systems between the central processor and the working memory, so that memory accesses in such systems can be executed in a particularly time-saving manner.
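The page-based translation described above can be illustrated by the following minimal sketch, assuming the 2 k byte page size mentioned in the text; the table contents and all names are hypothetical and merely stand in for the partially associative address translation memory:

```python
PAGE_SIZE = 2048      # 2 k byte pages, hence an 11-bit byte offset
OFFSET_BITS = 11

# Illustrative stand-in for the address translation memory: a small table
# mapping virtual page addresses to real page addresses.
translation_memory = {0x3A: 0x07, 0x3B: 0x12}

def translate(virtual_address: int) -> int:
    """Translate the page portion of the address; the offset is unchanged."""
    vpage = virtual_address >> OFFSET_BITS        # virtual page address
    offset = virtual_address & (PAGE_SIZE - 1)    # byte offset within the page
    rpage = translation_memory[vpage]             # miss handling omitted here
    return (rpage << OFFSET_BITS) | offset
```

Note that the low-order offset bits, which contain the class address, pass through the translation unchanged; this property is what the simultaneous buffer access described further below relies on.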
Buffer memories and working memories are generally organized according to the congruence class principle, i.e. they are subdivided into pages, and a further distinction according to classes is made within each page. For the operation of a buffer memory subdivided into a plurality of banks of page size, the following rule is important: although data words may be entered from the working memory into any bank of the buffer memory, they may only be entered, within each bank, into the class from which the data word was taken in the working memory. In the search for a specific entry in the buffer memory, this offers the advantage that both the banks of the data buffer and the tag/flag memories assigned to the banks, which contain the page addresses of the individual entries, can be directly and immediately selected with the class address, because the class address portion of a user address remains unchanged in the address translation. If, as already mentioned, the address translation memory is constructed and organized analogously to the buffer memory, then it can be driven with the virtual page address at the same time as the tag/flag memories and the data buffer banks are driven with the real class address, so that all three units simultaneously offer the selected contents at their outputs after expiration of the access time.
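The congruence-class placement rule can be sketched as follows; the bank and class counts and all identifiers are illustrative assumptions, not values from the text. A word fetched from class `class_addr` of a working-memory page may be placed in any bank, but only at that same class index within the chosen bank:

```python
NUM_BANKS = 4       # hypothetical number of data buffer banks
NUM_CLASSES = 256   # hypothetical number of classes per bank

# One tag entry (stored real page address) and one data slot per class,
# per bank, modeling the tag/flag memories and the data buffer banks.
tags = [[None] * NUM_CLASSES for _ in range(NUM_BANKS)]
data = [[None] * NUM_CLASSES for _ in range(NUM_BANKS)]

def fill(bank: int, real_page: int, class_addr: int, word) -> None:
    """Enter a word taken from (real_page, class_addr) of the working memory.

    The bank may be chosen freely; the class index within the bank may not.
    """
    tags[bank][class_addr] = real_page
    data[bank][class_addr] = word
```

Because the class index is fixed by the rule, a later search only needs to compare the stored page addresses of the banks at that one class index.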
Finally, the translated, real page address is compared in a comparator circuit to the content of the tag/flag memory and, given address equality, the bank selection multiplexer connected downstream of the data buffer banks is set accordingly. The read access is thus terminated, so that a new read access can be initiated immediately.
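The complete read access described above can be sketched as follows, again with purely illustrative sizes and names: the real class address directly indexes every tag/flag memory and every data buffer bank, a per-bank comparison against the translated real page address then plays the role of the comparator circuits, and the selection of the matching bank's word stands in for the bank selection multiplexer:

```python
NUM_BANKS = 4       # hypothetical number of data buffer banks
NUM_CLASSES = 256   # hypothetical number of classes per bank

tags = [[None] * NUM_CLASSES for _ in range(NUM_BANKS)]
data = [[None] * NUM_CLASSES for _ in range(NUM_BANKS)]

def read(real_page: int, class_addr: int):
    """Look up a word given the translated real page and class addresses."""
    # All tag/flag memories and banks are selected with the same class
    # address; each stored page address is compared to the translated one.
    for bank in range(NUM_BANKS):
        if tags[bank][class_addr] == real_page:   # comparator per bank
            return data[bank][class_addr]         # bank selection multiplexer
    return None   # miss: the word must be fetched from the working memory

# Example entry, placed by an earlier fill of bank 2, class 0x15:
tags[2][0x15] = 0x07
data[2][0x15] = "word"
```

In hardware the per-bank comparisons run in parallel rather than in a loop; the sketch only reproduces the selection logic.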
This simultaneous access to the tag/flag memories and to the data buffer banks is always possible when the capacity of the individual data buffer banks coincides with the page size provided for the address translation. The continuing development of memory technology, however, means that the capacity of the memory modules from which the data buffer banks are constructed is constantly increasing. Among other things, this also makes a more compact memory format possible, which in turn benefits the access time. As is known, the capacity of a data buffer bank is given by the product of the access width of the central processor during a read and the bit capacity of the selected memory modules, so that, given an access width of 8 bytes and a module capacity of 1024 bits, for example, a capacity of 8 k byte results. In contrast, the page size fixed in present operating systems amounts to only 2 k byte and will remain fixed at this value for the foreseeable future.
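The capacity relation stated above can be written out as a short worked calculation; the function name is illustrative. With one 1-bit-wide module per bit of the access path, a bank of such modules stores as many full-width words as each module has bits, so the bank capacity in bytes is the access width in bytes times the module capacity in bits:

```python
def bank_capacity_bytes(access_width_bytes: int, module_capacity_bits: int) -> int:
    """Bank capacity = processor read access width x module bit capacity."""
    return access_width_bytes * module_capacity_bits

# The example from the text: 8-byte access width, 1024-bit modules.
capacity = bank_capacity_bytes(8, 1024)   # 8 k byte, vs. a 2 k byte page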
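```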
The difficulty therefore exists that the advantageous direct addressing with the respectively appertaining class address can only be retained if the page size were adapted to the new technical conditions, i.e. likewise increased to 8 k byte in accordance with the selected example. The change to the operating systems required for this, however, cannot be expected in the immediate future.