This invention relates generally to data storage systems, and more particularly to data storage systems having redundancy arrangements to protect against total system failure in the event of a failure in a component or subassembly of the storage system.
As is known in the art, large host computers and servers (collectively referred to herein as “host computer/servers”) require large capacity data storage systems. These large computer/servers generally include data processors, which perform many operations on data introduced to the host computer/server through peripherals including the data storage system. The results of these operations are output to peripherals, including the storage system.
One type of data storage system is a magnetic disk storage system. Here a bank of disk drives and the host computer/server are coupled together through an interface. The interface includes “front end” or host computer/server controllers (or directors) and “back-end” or disk controllers (or directors). The interface operates the controllers (or directors) in such a way that they are transparent to the host computer/server. That is, data is stored in, and retrieved from, the bank of disk drives in such a way that the host computer/server merely thinks it is operating with its own local disk drive. One such system is described in U.S. Pat. No. 5,206,939, entitled “System and Method for Disk Mapping and Data Retrieval”, inventors Moshe Yanai, Natan Vishlitzky, Bruno Alterescu and Daniel Castel, issued Apr. 27, 1993, and assigned to the same assignee as the present invention.
As described in such U.S. Patent, the interface may also include, in addition to the host computer/server controllers (or directors) and disk controllers (or directors), addressable cache memories. The cache memory is a semiconductor memory and is provided to rapidly store data from the host computer/server before storage in the disk drives, and, on the other hand, to store data from the disk drives prior to such data being sent to the host computer/server. The cache memory, being a semiconductor memory as distinguished from a magnetic memory as in the case of the disk drives, is much faster than the disk drives in reading and writing data.
The host computer/server controllers, disk controllers and cache memory are interconnected through a backplane printed circuit board. More particularly, disk controllers are mounted on disk controller printed circuit boards. The host computer/server controllers are mounted on host computer/server controller printed circuit boards. And, cache memories are mounted on cache memory printed circuit boards. The disk director, host computer/server director, and cache memory printed circuit boards plug into the backplane printed circuit board. In order to provide data integrity in case of a failure in a director, the backplane printed circuit board has a pair of buses. One set of the disk directors is connected to one bus and another set of the disk directors is connected to the other bus. Likewise, one set of the host computer/server directors is connected to one bus and another set of the host computer/server directors is connected to the other bus. The cache memories are connected to both buses. Each one of the buses provides data, address and control information.
The arrangement is shown schematically in FIG. 1. Thus, the use of two buses B1, B2 provides a degree of redundancy to protect against a total system failure in the event that the controllers or disk drives connected to one bus fail. Further, the use of two buses increases the data transfer bandwidth of the system compared to a system having a single bus. Thus, in operation, when the host computer/server 12 wishes to store data, the host computer 12 issues a write request to one of the front-end directors 14 (i.e., host computer/server directors) to perform a write command. One of the front-end directors 14 replies to the request and asks the host computer 12 for the data. After the request has passed to the requesting one of the front-end directors 14, the director 14 determines the size of the data and reserves space in the cache memory 18 to store the data. The front-end director 14 then produces control signals on one of the address memory busses B1, B2 connected to such front-end director 14 to enable the transfer to the cache memory 18. The host computer/server 12 then transfers the data to the front-end director 14. The front-end director 14 then advises the host computer/server 12 that the transfer is complete. The front-end director 14 looks up in a Table, not shown, stored in the cache memory 18 to determine which one of the back-end directors 20 (i.e., disk directors) is to handle this request. The Table maps the host computer/server 12 addresses into addresses in the bank 22 of disk drives. The front-end director 14 then puts a notification in a “mail box” (not shown and stored in the cache memory 18) for the back-end director 20 which is to handle the request, the notification indicating the amount of the data and the disk address for the data. The back-end directors 20 poll the cache memory 18 when they are idle to check their “mail boxes”.
If the polled “mail box” indicates a transfer is to be made, the back-end director 20 processes the request, addresses the disk drive in the bank 22, reads the data from the cache memory 18 and writes it into the addresses of a disk drive in the bank 22.
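The write path described above can be sketched as a short behavioral model. This is a sketch only, assuming a shared cache with a simple mail-box list; the names `CacheMemory`, `front_end_write`, and `back_end_poll` are illustrative and do not appear in the patent.

```python
# Simplified model of the described write path: a front-end director
# stages host data in the shared cache memory and posts a "mail box"
# entry; an idle back-end director polls the mail box and copies the
# data to the addressed disk drive.  All names are illustrative.

class CacheMemory:
    def __init__(self):
        self.blocks = {}      # cache address -> staged data
        self.mailbox = []     # pending work items for back-end directors

def front_end_write(cache, host_data, disk_address):
    """Front-end director: reserve cache space, store data, post mail."""
    cache_addr = len(cache.blocks)            # trivial space reservation
    cache.blocks[cache_addr] = host_data
    cache.mailbox.append({"cache_addr": cache_addr,
                          "size": len(host_data),
                          "disk_addr": disk_address})

def back_end_poll(cache, disk):
    """Idle back-end director: poll the mail box, service one request."""
    if not cache.mailbox:
        return False          # nothing to do
    req = cache.mailbox.pop(0)
    disk[req["disk_addr"]] = cache.blocks[req["cache_addr"]]
    return True
```

The model makes the cost visible: the back-end director learns of the request only when its polling loop happens to find the mail-box entry, which is the latency the invention later removes.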
When data is to be read from a disk drive in bank 22 to the host computer/server 12, the system operates in a reciprocal manner. More particularly, during a read operation, a read request is instituted by the host computer/server 12 for data at specified memory locations (i.e., a requested data block). One of the front-end directors 14 receives the read request and examines the cache memory 18 to determine whether the requested data block is stored in the cache memory 18. If the requested data block is in the cache memory 18, the requested data block is read from the cache memory 18 and is sent to the host computer/server 12. If the front-end director 14 determines that the requested data block is not in the cache memory 18 (i.e., a so-called “cache miss”), the director 14 writes a note in the cache memory 18 (i.e., the “mail box”) that it needs to receive the requested data block. The back-end directors 20 poll the cache memory 18 to determine whether there is an action to be taken (i.e., a read operation of the requested block of data). The one of the back-end directors 20 which polls the cache memory 18 mail box and detects a read operation reads the requested data block and initiates storage of such requested data block in the cache memory 18. When the requested data block is completely written into the cache memory 18, a read complete indication is placed in the “mail box” in the cache memory 18. It is to be noted that the front-end directors 14 are polling the cache memory 18 for read complete indications. When one of the polling front-end directors 14 detects a read complete indication, such front-end director 14 completes the transfer of the requested data, which is now stored in the cache memory 18, to the host computer/server 12.
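The reciprocal read path, with its cache-hit and cache-miss cases, can be sketched the same way. Again a sketch under assumed names (`host_read`, a dict-backed cache and disk); the asynchronous back-end polling is collapsed into an inline step for brevity.

```python
# Simplified model of the described read path: the front-end director
# checks the cache; on a miss it posts a mail-box note, a back-end
# director stages the block from disk into the cache and marks the
# read complete, and the front-end director returns the block to the
# host.  Names are illustrative; polling is modeled synchronously.

def host_read(cache, disk, disk_addr):
    """Front-end director servicing a host read request."""
    if disk_addr in cache:                    # cache hit
        return cache[disk_addr]
    mailbox = [("read", disk_addr)]           # cache miss: post a note
    # Back-end director polling loop (normally asynchronous):
    op, addr = mailbox.pop(0)
    if op == "read":
        cache[addr] = disk[addr]              # stage block into cache
    # "read complete" detected; front-end completes the transfer:
    return cache[disk_addr]
```

Note that a miss takes two mail-box round trips (the note, then the read-complete indication) before any data reaches the host, which is the bandwidth penalty the next paragraph identifies.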
The use of mailboxes and polling requires time to transfer data between the host computer/server 12 and the bank 22 of disk drives, thus reducing the operating bandwidth of the interface.
In accordance with the present invention, a direct memory access (DMA) transmitter is provided that is adapted to transfer data from a random access memory to an output. The data is generated by a central processing unit coupled to a bus. The generated data is initially stored in a local cache memory connected to the central processing unit. The DMA transmitter is also coupled to the bus. The DMA transmitter includes: (a) a data register; and (b) a transmitter state machine. Requested data at an address provided by a source is read from the random access memory and then transferred for storage in the data register. The central processing unit also sends a control signal to the transmitter state machine. The control signal indicates to the transmitter state machine whether the read data is the most recent copy of the requested data in the random access memory or whether the most recent copy of the requested data is still resident in the local cache memory. In response to the control signal, if the most recent data is in the local cache memory, the transmitter state machine inhibits the data that was read from the random access memory, and is now stored in the data register, from passing to the transmitter output. The transmitter state machine then performs a second data transfer request at the same address, the second requested data being transferred from the local cache memory to the random access memory. The transmitter state machine reads the second requested data from the random access memory. The second requested data is the most recent data available in the random access memory. The transmitter state machine then stores such second requested data into the data register, such stored second requested data then being transferred to the transmitter output.
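The read-inhibit-reread sequence summarized above can be sketched as a behavioral model. This is an illustration only, under assumed names: `dma_transmit` stands in for the transmitter state machine, a dict membership test stands in for the CPU's control signal, and `cpu_flush` models the CPU writing the cached copy back to RAM.

```python
# Behavioral sketch of the DMA transmitter: the requested data is read
# from RAM into the data register, but if a CPU control signal reports
# that the most recent copy still resides in the CPU's local cache,
# the first read is inhibited, the cached copy is flushed to RAM, and
# the same address is read a second time.  Names are illustrative.

def dma_transmit(ram, local_cache, address, cpu_flush):
    """Return the data driven to the transmitter output for `address`.

    cpu_flush(ram, local_cache, address) models the CPU transferring
    the cached copy to RAM on the second data transfer request.
    """
    data_register = ram[address]              # first read from RAM
    stale = address in local_cache            # CPU control signal
    if stale:
        # Inhibit the stale data; have the CPU flush, then re-read.
        cpu_flush(ram, local_cache, address)
        data_register = ram[address]          # second read: fresh copy
    return data_register
```

The design point the sketch captures is that coherency is resolved at the moment of transfer, by a hardware re-read, rather than by mail-box handshaking between directors.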
In accordance with one embodiment, a direct memory access (DMA) transmitter is provided that is adapted to transfer data from a random access memory to an output, such data having been generated by a central processing unit coupled to a bus. Such generated data is initially stored in a local cache memory connected to the central processing unit. The transmitter is operative in response to a transmit write enable signal produced by an external source requesting the data. The external source also sends to the DMA transmitter a random access memory address. The DMA transmitter is coupled to the bus. The transmitter includes an address register, a data register, and a transmitter state machine. The transmitter state machine places the address in the address register in response to the write enable signal. The requested data at such address is read from the random access memory and then transferred for storage in the data register. The central processing unit also sends a control signal to the transmitter state machine. The control signal indicates to the transmitter state machine whether the read data is the most recent copy of the requested data in the random access memory or whether the most recent copy of the requested data is still resident in the local cache memory. In response to the control signal, if the most recent data is in the local cache memory, the transmitter state machine inhibits the data that was read from the random access memory, and is now stored in the data register, from passing to the transmitter output. The transmitter state machine then performs a second data transfer request at the same address. The second requested data is transferred from the local cache memory to the random access memory. The transmitter state machine reads the second requested data from the random access memory, such second requested data being the most recent data available in the random access memory, and the transmitter state machine then stores such second requested data into the data register.
The stored second requested data is then transferred to the transmitter output.
In accordance with another embodiment, a method is provided for operating a direct memory access (DMA) transmitter to transfer data from a random access memory to an output. The method includes generating the data in a central processing unit coupled to a bus, such generated data being initially stored in a local cache memory connected to the central processing unit. A random access memory address is received from an external source for storage in an address register. A bus request signal is sent to a bus arbiter connected to the bus. The arbiter performs a bus arbitration and, when appropriate, grants access to the bus. When the bus is granted, the address currently in the address register is placed on an address portion of the bus for the random access memory, along with appropriate read control signals on a control signal portion of the bus. A control signal indicates whether the read data is the most recent copy of the requested data in the random access memory or whether the most recent copy of the requested data is still resident in the local cache memory. The data at the address on the bus is read from the random access memory and passed, via a data portion of the bus, to a data register. If the control signal indicates that the most recent data is in the local cache memory, the data that was just read from the random access memory, and which has been stored in the data register, is inhibited from passing to the transmitter output. A second data transfer request at the same address location is made. The second requested data is transferred by the central processing unit from the local cache memory to the random access memory. A re-arbitration is made for the bus; after the bus is granted, the second requested data is read from the random access memory. The second requested data is the most recent data available in the random access memory.
The second requested data is loaded into the data register in response to the assertion of a signal produced by the transmitter state machine. The stored second requested data is then transferred to the transmitter output.
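The method of this embodiment, including the arbitration and re-arbitration steps, can be sketched as follows. The `Arbiter` class and all identifiers are illustrative assumptions (a trivial arbiter that grants every request), not structures taken from the patent.

```python
# Behavioral sketch of the method embodiment: each RAM access first
# requests and is granted the bus from an arbiter, and a stale first
# read forces a cache flush and a re-arbitration before the second
# read.  The Arbiter and all names are illustrative.

class Arbiter:
    def __init__(self):
        self.grants = 0
    def request(self):
        self.grants += 1      # model: every request is eventually granted
        return True

def dma_transfer(arbiter, ram, local_cache, address_register):
    """One DMA transfer; returns the data passed to the output."""
    granted = arbiter.request()               # arbitrate for the bus
    assert granted
    data_register = ram[address_register]     # read over the granted bus
    if address_register in local_cache:       # most recent copy is cached
        # CPU flushes the cached copy to RAM, then the bus is re-won.
        ram[address_register] = local_cache.pop(address_register)
        assert arbiter.request()              # re-arbitrate for the bus
        data_register = ram[address_register] # second read: current data
    return data_register
```

Counting grants makes the cost explicit: a coherent transfer needs one bus tenure, while a stale first read costs two.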