This invention relates generally to data storage systems, and more particularly to data storage systems adapted to interface with a host computer system.
As is known in the art, large mainframe or open system (i.e., host) computer systems require large capacity data storage systems. These host computer systems generally include data processors which perform many operations (i.e., functions) on data introduced to the computer system through peripherals, including the data storage system. The results of these operations are output to peripherals, including the storage system.
One type of data storage system is a magnetic disk storage system. Here a bank of disk drives and the host computer system are coupled together through an interface. The interface includes CPU, or "front end", controllers and "back end" disk controllers. The interface operates the controllers in such a way that they are transparent to the computer. That is, data is stored in, and retrieved from, the bank of disk drives in such a way that the host computer system merely thinks it is operating with one host computer memory. One such system is described in U.S. Pat. No. 5,206,939, entitled "System and Method for Disk Mapping and Data Retrieval", inventors Moshe Yanai, Natan Vishlitzky, Bruno Alterescu and Daniel Castel, issued Apr. 27, 1993, and assigned to the same assignee as the present invention.
As described in such U.S. Patent, the interface may also include, in addition to the CPU controllers and disk controllers, addressable cache memories. The cache memory is a semiconductor memory and is provided to rapidly store data from the host computer system before storage in the disk drives, and, on the other hand, store data from the disk drives prior to being sent to the host computer. The cache memory being a semiconductor memory, as distinguished from a magnetic memory as in the case of the disk drives, is much faster than the disk drives in reading and writing data.
The CPU controllers, disk controllers and cache memory are interconnected through a backplane printed circuit board. More particularly, disk controllers are mounted on disk controller printed circuit boards. CPU controllers are mounted on CPU controller printed circuit boards. And, cache memories are mounted on cache memory printed circuit boards. The disk controller, CPU controller and cache memory printed circuit boards plug into the backplane printed circuit board. In order to provide data integrity in case of a failure in a controller, the backplane printed circuit board has a pair of busses. One set of the disk controllers is connected to one bus and another set of the disk controllers is connected to the other bus. Likewise, one set of the CPU controllers is connected to one bus and another set of the CPU controllers is connected to the other bus. The cache memories are connected to both busses. Thus, the use of two busses provides a degree of redundancy to protect against a total system failure in the event that the controllers, or disk drives, connected to one bus fail.
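The dual-bus topology described above can be sketched as a simple data structure. This is an illustrative model only; the names (`buses`, `disk_ctrl_0`, `cache_0`, and the even/odd split) are hypothetical and are not drawn from the patent.

```python
# Hypothetical sketch of the dual-bus backplane topology: each controller
# attaches to exactly one bus, while each cache memory attaches to both.
buses = {"A": [], "B": []}

disk_controllers = [f"disk_ctrl_{i}" for i in range(4)]
cpu_controllers = [f"cpu_ctrl_{i}" for i in range(4)]

# One set of each controller type is connected to one bus, another set
# to the other bus (an even/odd split is assumed here for illustration).
for i, ctrl in enumerate(disk_controllers + cpu_controllers):
    buses["A" if i % 2 == 0 else "B"].append(ctrl)

# Cache memories are connected to BOTH busses, so either bus can still
# reach the cache if the controllers on the other bus fail.
cache_memories = ["cache_0", "cache_1"]
for bus in buses.values():
    bus.extend(cache_memories)
```

Because every cache appears on both busses, a failure confined to one bus leaves the other bus with a complete path from its controllers to the caches, which is the redundancy the paragraph describes.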
In one system, the communication to the controllers and the cache memories is through a pair of bi-directional lines. Typically one bi-directional line is for data and the other bi-directional line is for control signals. As noted above, each of the controllers is connected to only one of the busses and, therefore, only one pair of bi-directional lines is electrically connected to each controller; however, because each one of the cache memories is connected to both busses, each cache memory has two pairs of bi-directional lines.
Thus, in transferring data between the host computer and the bank of disk drives the controllers operate to pass such data through the cache memory. The data transfer is in accordance with a timing protocol. One such protocol is described in co-pending patent application Ser. No. 08/701,868 filed Aug. 23, 1996, entitled "Data Storage System Having Row/Column Address Parity Checking", inventors Eli Leshem and John K. Walton, assigned to the same assignee as the present invention, the entire subject matter thereof being incorporated herein by reference. As described in such co-pending patent application, the basic process used to effect a data transfer includes the following four sequential steps: (1) a controller selects one of the addressable memories with an address and provides a command, such as a read command or a write command, and if a write command, the data to be written into the addressed memory is placed on the bus; (2) the data is transferred between the bus and the memory; (3) an error status signal (i.e., whether there were any errors, such as a parity check detecting errors in the data transfer, a correctable error, or an un-correctable error, for example) is presented by the addressable memory to the bus; and (4) the controller determines from the error status signal whether the data transfer was performed correctly. If the data transfer was not performed correctly, the controller goes through the cycle again. As described in the co-pending patent application, there is a delay in transferring data between the bus and the random access memory (RAM) included in the addressable memory. Thus, there is a delay between the time the data on the bus is transferred to the addressable memory and the time the data is stored into the RAM of such addressable memory.
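The four sequential steps above can be sketched in code. This is a minimal illustrative model, not the patented implementation; the class and function names (`AddressableMemory`, `controller_cycle`) and the retry limit are assumptions made for the example.

```python
# Toy model of the four-step transfer cycle between a controller and
# an addressable cache memory on the bus.

class AddressableMemory:
    """One addressable memory: a RAM array behind a bus interface."""
    def __init__(self, size):
        self.ram = [0] * size

    def transfer(self, address, command, data=None):
        # Steps (1)-(2): the memory is selected by address, the command
        # is provided, and the data moves between the bus and the memory.
        if command == "write":
            self.ram[address] = data
            result = None
        else:  # "read"
            result = self.ram[address]
        # Step (3): the memory presents an error status signal to the bus
        # (here always "ok"; a real memory could report a correctable or
        # un-correctable error detected by, e.g., a parity check).
        status = "ok"
        return result, status

def controller_cycle(memory, address, command, data=None, max_retries=3):
    # Step (4): the controller examines the error status; on an error
    # it goes through the entire cycle again.
    for _ in range(max_retries):
        result, status = memory.transfer(address, command, data)
        if status == "ok":
            return result
    raise IOError("transfer failed after retries")
```

The retry loop in `controller_cycle` corresponds to the statement that, if the transfer was not performed correctly, the controller goes through the cycle again.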
As is also known in the art, the data to be transferred may be, for example, 4.096 Kbytes. However, the 4.096 Kbytes of data is transferred in bursts (or batches) having a fixed number of bytes, for example bursts having 256 bytes. Thus, to complete the 4.096 Kbyte data transfer, a series, or sequence, of 16 bursts is required. It is also known in the art that different ones of the controllers may be transferring data between the host computer and the bank of disk drives through the memory. Further, the bus through which the data is transferred only allows one controller to have access to the bus at any one time. Thus, any one of the addressable cache memories may be processing requests from one controller while, interleaved with such requests, the same memory processes requests from other ones of the controllers. Thus, the bursts processed at the request of one controller are interleaved with bursts processed at the request of other ones of the controllers.
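The burst arithmetic above follows directly from the figures given (4.096 Kbytes, i.e., 4096 bytes, divided into fixed 256-byte bursts):

```python
# Burst count for the example transfer described in the text.
TRANSFER_BYTES = 4096   # 4.096 Kbytes
BURST_BYTES = 256       # fixed burst size

bursts_needed = TRANSFER_BYTES // BURST_BYTES  # 16 bursts
```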
In transferring the data in each burst between the bus and the addressed addressable memory, the four step protocol summarized above is used. As noted, there is a delay in transferring data between the bus and the random access memory (RAM) included in the addressable memory. Thus, there is a delay, .DELTA., between the time the data on the bus is transferred to the addressable memory and the time the data is stored into the RAM of such addressable memory. It follows then, that if there are sixteen bursts for the data transfer, there is a 16.DELTA. delay included in the data transfer process. It is also noted that if there is an error in the data transfer of any one of the bursts in the data transfer, the controller must process the data again (i.e., repeat the processing of up to 16 bursts).
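The delay accounting above can be made concrete with a short calculation. The per-burst delay value used here is purely hypothetical (the patent assigns no numeric value to .DELTA.); the sketch only shows how the delays accumulate per burst and double in the worst retry case:

```python
# Illustrative accumulation of the per-burst delay, DELTA.
DELTA = 1.0   # hypothetical per-burst bus-to-RAM delay (arbitrary units)
BURSTS = 16   # bursts in the example 4.096 Kbyte transfer

# Each burst incurs one DELTA, so an error-free transfer carries 16 * DELTA.
total_delay = BURSTS * DELTA

# An error in any single burst forces the controller to repeat the
# processing of up to all 16 bursts, doubling the delay in the worst case.
worst_case_retry_delay = 2 * BURSTS * DELTA
```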