1. Technical Field
The present invention relates, in general, to an improved data storage method and system to be utilized with heterogeneous computing systems. In particular, the present invention relates to an improved data storage method and system to be utilized with heterogeneous computing systems and which allows the heterogeneous computing systems direct access to the same data storage areas in a shared data storage subsystem. Still more particularly, the present invention relates to an improved data storage method and system to be utilized with heterogeneous computing systems and which allows the heterogeneous computing systems direct access to the same data storage areas in a shared data storage subsystem by creating a shared data storage subsystem controller which ensures that any request for data is interpreted and responded to in a manner appropriate to the operating system of the computing system making the request.
2. Description of Related Art
Traditionally, every computer system vendor (e.g., IBM, SUN, CONVEX) has made or utilized vendor-specific data storage subsystems specifically designed to be compatible with the computing equipment and operating system utilized by that vendor. Consequently, when the computing equipment read or wrote data to its assigned data storage subsystem, the operation was generally successful because the storage subsystem was specifically designed to work with the computing equipment; in effect, they "spoke the same language."
The advantage of such vendor-specific data storage subsystems was that the computing equipment and data storage subsystems were generally compatible and worked well together. There were, however, multiple disadvantages, a few of which were: (1) users were effectively barred from buying computing equipment from other vendors utilizing different operating systems due to compatibility problems; (2) smaller suppliers experienced difficulty due to an inability to manufacture data storage subsystems for the various different vendor hardware-operating system configurations; and (3) data managers often had difficulty explaining the foregoing compatibility issues to non-technical managers.
In an effort to alleviate the foregoing disadvantages, vendors have responded, and today the industry is moving to an environment where the open system concept (meaning, among other things, the option of attaching other than vendor-specific data storage devices to a given vendor's systems) is becoming critical. That is, today customers are buying different pieces of hardware from different vendors, and they do not necessarily wish to "throw away" their hardware if they decide to obtain new computing equipment from a vendor different from the one used previously. Instead, today's customers expect and desire that purchased hardware "work together" irrespective of its vendor of origin. In practical terms, this means that a vendor stands a better chance of remaining viable if it can provide compatibility with multiple different systems.
With respect to data storage subsystems, the foregoing means that if a customer has an IBM system, a SUN system, and a Hewlett-Packard system, and if the customer only has data storage needs requiring the capacity of ten data storage disks, the customer does not want to have to have ten disks for the IBM system, ten disks for the SUN system, and ten disks for the Hewlett-Packard system merely because the data accessing schemes of the systems are incompatible. Instead, the customer prefers to have one set of storage devices which can be accessed by all systems and which allows the sharing of data on the data storage subsystem by all attached computing systems.
In the abstract sense, the foregoing can be posed as the problem of sharing data storage subsystems between heterogeneous computing systems. The problem of sharing data storage subsystems can be viewed as three separate "cases of sharing."
Case 1 is where there is physical sharing of a data storage device but no actual data sharing. This is generally the current state of the industry. This case is demonstrated by the previous example of three systems each requiring ten disks. In actuality, all the storage that is needed could theoretically be supplied by ten disks. However, since the different computing systems use different operating systems, it is necessary to buy a separate storage subsystem for each computing system. These three different storage subsystems are then put into one cabinet, or one physical location. Thus, externally (that is, viewing the data storage subsystem cabinet) it may look "as if" the three different systems are using one data storage subsystem, and it may appear "as if" all systems can access and are sharing the same data, but from a logical and storage-requirement point of view the arrangement still uses data storage resources equal to thirty separate disks. The reason for this is that the three different systems still are not reading and writing to the same data storage device. Thus, although from the outside it may look like the three systems are using the same storage system, in reality three times the capacity actually required is being used (since separate data storage subsystems still exist for each heterogeneous computing system), so this case (Case 1) does not really solve, or address, the problem of sharing data storage subsystems.
Case 2 is where all applications on every system share all data within a common data storage subsystem. This is theoretically the ideal case. However, due to the difficulties in managing the meta-data required for different operating systems and the middleware required for different vendor systems, this case is practically unachievable. There is no solution for this case in the industry today.
Case 3 is a subset of Case 2. Case 3 is the situation where one application program (such as a database program) on every system has direct access to all data in a shared data storage subsystem. In this level of sharing, the same application running on two or more heterogeneous systems can directly access all data within the shared data storage subsystem. This means that an application running on two or more heterogeneous systems can actually read and write to the same physical locations within a shared data storage subsystem, such that when the application on one system requests data from logical Linear Block Address (LBA) twenty it will retrieve the same data that a copy of the same application running on another heterogeneous system will retrieve if the same application on the other system requests the data from logical Linear Block Address twenty (and likewise for writing to particular Linear Block Addresses). There is no solution for this level of sharing in the industry today.
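The essential property of Case 3 sharing — that a given LBA resolves to the same physical data no matter which attached host issues the request — can be illustrated with a minimal sketch. The class, method names, and block size below are illustrative assumptions for exposition only, not part of any actual storage subsystem interface:

```python
# Minimal sketch of a shared block store keyed by Logical Block Address (LBA).
# All names and the 512-byte block size are illustrative assumptions.

class SharedBlockStore:
    """One physical store; every attached host addresses the same blocks."""

    BLOCK_SIZE = 512  # bytes per block (an assumed, conventional value)

    def __init__(self, num_blocks):
        # One shared array of blocks, not one copy per attached system.
        self._blocks = [bytes(self.BLOCK_SIZE) for _ in range(num_blocks)]

    def write(self, lba, data):
        assert len(data) == self.BLOCK_SIZE
        self._blocks[lba] = data

    def read(self, lba):
        return self._blocks[lba]


store = SharedBlockStore(num_blocks=64)

# A database application on "system A" writes to LBA 20 ...
store.write(20, b"A" * SharedBlockStore.BLOCK_SIZE)

# ... and a copy of the same application on "system B" reading LBA 20
# sees exactly the data system A wrote, because both hosts address the
# same physical storage rather than per-vendor copies.
assert store.read(20) == b"A" * SharedBlockStore.BLOCK_SIZE
```

Contrast this with Case 1 above, where each heterogeneous system would hold its own separate copy of the blocks.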
The reason that there is no solution for this (Case 3) level of sharing within the industry today is that, while the same application program running on heterogeneous computing systems might be requesting access to the same LBA, the data storage subsystem is receiving those requests via the various different operating system commands of the heterogeneous computing systems. At present, there are no known solutions within the prior art which empower the data storage subsystem to allow heterogeneous computing systems direct access to the same storage areas within a data storage subsystem. Thus, it is apparent that a need exists for a method and system which will allow heterogeneous computing systems direct access to the same data storage areas in a shared data storage subsystem.
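One way to picture the kind of controller contemplated above — purely as an illustrative sketch, since nothing here prescribes an implementation — is a front end that recognizes which operating system issued each request, translates that system's command dialect into a common LBA operation against one shared store, and replies accordingly. The operating system names, request formats, and translation rules below are hypothetical examples only:

```python
# Illustrative sketch of a shared-subsystem controller that interprets each
# request according to the requesting host's operating system. The OS names
# ("OS-A", "OS-B"), request formats, and helpers are hypothetical.

COMMON_STORE = {}  # LBA -> data; stands in for the shared physical disks


def translate_request(os_name, raw_request):
    """Map an OS-specific request into a common (op, lba, data) form."""
    if os_name == "OS-A":
        # Hypothetical dialect: {"cmd": "READ"/"WRITE", "block": n, "payload": ...}
        return (raw_request["cmd"].lower(),
                raw_request["block"],
                raw_request.get("payload"))
    elif os_name == "OS-B":
        # Hypothetical dialect: ("rd", n) or ("wr", n, data)
        op = {"rd": "read", "wr": "write"}[raw_request[0]]
        data = raw_request[2] if len(raw_request) > 2 else None
        return op, raw_request[1], data
    raise ValueError(f"unknown operating system: {os_name}")


def handle(os_name, raw_request):
    """Controller entry point: translate, then act on the common store."""
    op, lba, data = translate_request(os_name, raw_request)
    if op == "write":
        COMMON_STORE[lba] = data
        return "OK"
    return COMMON_STORE.get(lba)


# A write issued in OS-A's dialect ...
handle("OS-A", {"cmd": "WRITE", "block": 20, "payload": b"shared"})
# ... is visible to a read issued in OS-B's dialect, because the controller
# maps both dialects onto the same LBA in the same shared store.
assert handle("OS-B", ("rd", 20)) == b"shared"
```

The design point this sketch captures is that the per-OS interpretation lives in the subsystem controller, so each heterogeneous host can continue to speak its own command dialect while all hosts read and write the same physical storage areas.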