The ability to store large amounts of data (or records) that can be efficiently accessed, modified, and restored is generally a necessary function for large, intermediate, and, in some instances, even small computing systems. Data storage is typically separated into several different levels, or arranged hierarchically, in order to provide efficient and cost-effective data storage. A first, or highest, level of data storage involves electronic memory, usually dynamic or static random access memory (DRAM or SRAM). Electronic memories take the form of semiconductor integrated circuits wherein millions of bytes of data can be stored on each circuit, with access to such bytes of data measured in nanoseconds. Electronic memory provides the fastest access to data since access is entirely electronic. The amount of electronic memory that can realistically be provided in a data processing system is limited, however, by the relatively high cost of such memory.
A second level of data storage usually involves direct access storage devices (DASD). DASD storage, for example, can comprise magnetic and/or optical disks, which store bits of data as micrometer-sized magnetically or optically altered spots on a disk surface representing the "ones" and "zeros" that make up those bits of data. Magnetic DASD includes one or more disks coated with remanent magnetic material. The disks are rotatably mounted within a protected environment. Each disk is divided into many concentric tracks, or closely spaced circles. The data is stored serially, bit by bit, along each track. An access mechanism, known as a head disk assembly (HDA), typically includes one or more read/write heads and is provided in each DASD for moving across the tracks to transfer data to and from the surface of the disks as the disks are rotated past the read/write heads. DASDs can store gigabytes of data, with access to such data typically measured in milliseconds (orders of magnitude slower than electronic memory). Access to data stored on DASD is slower because the disk and HDA must be physically positioned to the desired data storage location.
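The track-based layout described above can be illustrated with a small sketch. The geometry constants below are hypothetical and not taken from any particular DASD product; they only show how serial storage along concentric tracks maps a logical block to a physical location.

```python
# Sketch of the concentric-track layout described above, using a
# hypothetical disk geometry (real DASD geometries differ).

CYLINDERS = 885         # concentric track positions (hypothetical)
HEADS = 15              # one read/write head per disk surface (hypothetical)
SECTORS_PER_TRACK = 56  # fixed-size records per track (hypothetical)
BYTES_PER_SECTOR = 512

def capacity_bytes():
    """Total capacity implied by the geometry."""
    return CYLINDERS * HEADS * SECTORS_PER_TRACK * BYTES_PER_SECTOR

def block_to_chs(block):
    """Map a logical block number to (cylinder, head, sector).

    Data is stored serially along each track, so consecutive blocks
    fill one track before the HDA switches to the next surface and,
    eventually, seeks to the next cylinder.
    """
    sector = block % SECTORS_PER_TRACK
    track = block // SECTORS_PER_TRACK
    head = track % HEADS
    cylinder = track // HEADS
    return cylinder, head, sector
```

The cylinder component of the result is what makes DASD access slow: changing it requires physically repositioning the HDA, which accounts for the millisecond-scale access times noted above.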
A third, or lower, level of data storage includes tape and/or tape and DASD libraries. At this storage level, access to data is much slower since a robot is needed to select and load the required data storage medium. The advantage is reduced cost for very large data storage capacities, for example, terabytes of data storage. Access to data stored on tape and/or in a library is presently on the order of seconds.
Generally, faster access to data is desirable, and hence different solutions have been presented to improve access to data amongst the different hierarchical levels of data storage. One widely used method to improve access to data uses a specialized electronic memory, known as cache, to temporarily hold data and/or instructions being transferred from DASD to a host processor, for example. Increasing the size of the cache memory improves data access rates to data stored in lower levels of the hierarchy. Cache sizes, once relatively small and measured in kilobytes, are now much larger, measured in megabytes or even gigabytes of data storage. Such large caches heighten the need for an intermediary, for example, a memory controller, to interface the cache memory to the host processor. In such a system, a host processor may simply send a request to the memory controller, for example, an address and an amount of data, and the memory controller translates the address, retrieves the data, and performs all other memory maintenance functions, including refresh. The host processor is thus free to perform other processing tasks.
An example of a data processing system may take the form of a host processor, such as an IBM System/360 or IBM System/370 processor for computing and manipulating data, attached to an IBM 3990 storage controller having a memory controller and one or more cache memory types incorporated therein. The storage controller is further connected to a group of direct access storage devices (DASDs) such as IBM 3380 or 3390 DASDs. While the host processor provides substantial computing power, the storage controller provides the functions necessary to efficiently transfer, stage/destage, convert, and generally access large databases.
Referring to FIG. 1, a generalized block diagram of a memory controller is shown. The memory controller, as part of a storage controller, receives commands from a host processor to store or fetch data from cache memory. The object of interposing the memory controller between the cache memory and the host processor is to improve the bandwidth of data transfer therebetween, that is, to move more data in the same time span. To implement the increased data bandwidth, the memory controller uses an arbitration or program manager to try to predict what steps will follow the request currently being executed. This is accomplished by looking ahead to the next operation from the host processor. Another common function of the memory controller is to perform data error checking and correction for ensuring data integrity for data being stored in and retrieved from DASD. Still further, the memory controller is needed to synchronize communication between the host processor and the cache memory since electronic memory typically operates at a slower speed (for example, 25 MHz) than the host processor (for example, 50 MHz).
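The look-ahead behavior of the program manager can be sketched as follows. This is a simplified model under assumed names (`ProgramManager`, `prepared` are not from the original text): while one host operation executes, the manager peeks at the next queued operation so the memory can be prepared for it in advance.

```python
# Sketch of the look-ahead described above: the program manager peeks
# at the next queued host operation while the current one executes, so
# preparatory work for it can overlap the current access. Illustrative only.

from collections import deque

class ProgramManager:
    def __init__(self):
        self._queue = deque()
        self.prepared = None  # address readied by look-ahead, if any

    def submit(self, op):
        """Queue a host operation: a ("read" | "write", address) pair."""
        self._queue.append(op)

    def execute_next(self):
        """Dequeue and return the current operation, looking ahead as we go."""
        if not self._queue:
            return None
        current = self._queue.popleft()
        # Look ahead: note the address of the *next* operation so the
        # memory can be prepared for it while `current` is serviced.
        self.prepared = self._queue[0][1] if self._queue else None
        return current
```

In a real controller the "preparation" would be a memory-specific action (such as opening a row), which is precisely why the controller design is tied to the memory type, as discussed below.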
Cache memory may take several forms, including, for example, both volatile and non-volatile semiconductor memory. Volatile memory is memory that loses its contents after power is removed, for example, DRAM or SRAM. Non-volatile memory is memory that retains its contents even after power is removed, for example, read only memory (ROM), electrically alterable random access memory (EARAM), or battery backed-up DRAM. DRAM is typically the least expensive of the electronic memories but suffers from the disadvantage of increased functional operating requirements. DRAM must be refreshed, that is, its memory contents must be rewritten on a regular basis in order to ensure that the data is maintained. Such refresh interferes with regular data access and requires proper timing. SRAM is faster and less complex than DRAM but costs more because fewer bits of data can be stored in a given area. Non-volatile memory may have lower or higher costs than volatile memory depending upon the density of bits and the number of processing steps required for manufacture.
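The refresh requirement can be made concrete with a small sketch. The interval and row count below are hypothetical placeholders, not real DRAM timing parameters: every row must be rewritten within a fixed interval, and each refresh cycle is a memory access that competes with host accesses.

```python
# Sketch of the DRAM refresh requirement described above: every row must
# be refreshed within a fixed interval, and those refresh cycles occupy
# the memory, interfering with regular data access. Hypothetical values.

REFRESH_INTERVAL = 64  # ticks within which each row must be refreshed (hypothetical)
ROWS = 8               # number of DRAM rows (hypothetical)

class DramRefresher:
    def __init__(self):
        self.last_refreshed = [0] * ROWS
        self.refreshes = 0  # count of refresh cycles issued

    def tick(self, now):
        """Issue a refresh cycle for any row whose deadline has arrived.

        A real memory controller must interleave these cycles with host
        accesses and time them properly, which is why refresh interferes
        with regular data access.
        """
        for row in range(ROWS):
            if now - self.last_refreshed[row] >= REFRESH_INTERVAL:
                self.last_refreshed[row] = now
                self.refreshes += 1
```

SRAM needs none of this bookkeeping, which is the "less complex" trade-off noted above: the cost of DRAM's density advantage is paid in controller logic.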
Computer systems continue to increase in complexity as higher performance characteristics and greater storage capacities are pursued. At the same time, product lifetimes are decreasing. Hence, product development cycle times must decrease accordingly, even though more sophisticated and complex systems are being designed. The design of the memory controller is complex and time consuming, and since the memory addressing and control commands are integral to the memory controller, the type of memory used in the cache must be specified at the beginning of the design cycle.
Several drawbacks result from having to determine the memory type at the beginning of the design cycle. First, new memory types may become available after the design cycle begins. For example, synchronous dynamic random access memory (SDRAM) is a newly available memory type. Second, memory having similar storage capacities but differing addressing schemes may be desired after product announcement. Third, customers' needs may change after using the data processing system; for example, it may be desired to replace DRAM with SRAM or SDRAM as funds become available.
One attempt to alleviate the rigidity of being locked into a single memory type has been to provide an address generator capable of addressing different types of memory. The address generator, given a memory select signal, could address either DRAM or SRAM. Such a solution, however, still requires that the memory controller be able to provide memory-specific signals such as refresh, instruction decoding, etc. Hence, accessing a new memory type such as SDRAM requires an entirely new memory controller.
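The prior-art address generator can be sketched as a dispatch on the memory select signal. The field widths and return formats below are assumptions for illustration: DRAM addresses are split into multiplexed row and column halves, while SRAM addresses are presented directly. Note that even with this generator, the rest of the controller must still supply DRAM-only signals such as refresh, which is the limitation identified above.

```python
# Sketch of the prior-art address generator described above: a memory
# select signal chooses between DRAM-style (multiplexed row/column) and
# SRAM-style (flat) addressing. Field widths are hypothetical.

DRAM, SRAM = 0, 1
ROW_BITS, COL_BITS = 10, 10  # hypothetical DRAM address field widths

def generate_address(memory_select, address):
    if memory_select == DRAM:
        # DRAM: split the address into row and column halves, to be
        # strobed onto shared pins with RAS then CAS.
        row = (address >> COL_BITS) & ((1 << ROW_BITS) - 1)
        col = address & ((1 << COL_BITS) - 1)
        return ("ras/cas", row, col)
    else:
        # SRAM: the full address is presented directly.
        return ("direct", address)
```

Adding SDRAM would require more than a third branch here: SDRAM's command encoding, burst behavior, and clocked interface live elsewhere in the controller, so the generator alone cannot provide the flexibility sought.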
Accordingly, it is desired to provide a method and apparatus giving memory controller designs the flexibility to use multiple memory types in cache memory, to allow the choice of cache memory type to be deferred until late in the design cycle, or to allow that choice to be changed after product development, all without requiring a new memory controller design.