Data processing systems typically contain a central processing unit, or host processor, to manage the movement of information, or data, between different peripheral data storage devices. The host processor is often connected to the data storage devices through a storage controller. The storage controller and attached storage devices comprise a data storage subsystem within the data processing system. The host processor typically executes control programs, or host applications, which generate data records to be stored on the data storage devices, and access data records previously stored on the data storage devices. The storage controller manages the data storage subsystem, and directs the transfer of the data records between the host processor and the data storage devices.
Data processing systems may consist of a single data storage subsystem, or may include multiple data storage subsystems. Each data storage subsystem typically includes a single group of data storage devices, such as direct access storage devices (DASDs), magnetic tape drives, or optical disk drives. Each group of data storage devices is connected to the host processing system through a separate storage controller. The storage devices are connected to the storage controller along data and communication links. A string of storage devices can be connected serially along the same data and communication link. Data storage subsystems currently connect storage devices with the same characteristics, such as storage capacity, storage density, transfer speed, and data rate, along the same string.
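The subsystem arrangement described above, one storage controller per group of devices with several devices chained serially on a shared data and communication link, can be sketched as a simple data model. The following Python sketch is purely illustrative; every class and field name is an assumption, not terminology from this application:

```python
from dataclasses import dataclass, field

@dataclass
class StorageDevice:
    """One storage device, e.g. a DASD, with its characteristics."""
    device_id: int
    capacity_gb: float      # storage capacity
    data_rate_mb_s: float   # transfer speed / data rate

@dataclass
class DeviceString:
    """Devices connected serially along one data and communication link."""
    devices: list = field(default_factory=list)

    def attach(self, device: StorageDevice) -> None:
        self.devices.append(device)

    def uniform_data_rate(self) -> bool:
        # Conventional subsystems attach only devices with identical
        # characteristics to one string, so this check would hold true
        # for every string in such a subsystem.
        rates = {d.data_rate_mb_s for d in self.devices}
        return len(rates) <= 1

@dataclass
class StorageController:
    """Manages one group of devices via one or more device strings."""
    strings: list = field(default_factory=list)
```

Under this model, the constraint stated above is that `uniform_data_rate()` holds for every string; the intermixing discussed next relaxes exactly that constraint.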
When the data storage subsystem needs to be updated with more advanced data storage devices, either because more storage capacity is required or a faster data rate is desired, an entire string of storage devices currently must be replaced. In some instances, a string can consist of up to sixty-four storage devices, making the modification to the data storage subsystem expensive and time-consuming. It is therefore desired to intermix storage devices of the same group, such as DASDs, having different device characteristics, particularly DASDs with differing transfer speeds and data rates, along the same data and communication link in a data storage subsystem. Intermixing these storage devices allows a subset of the devices along the same device string, connected to the same data and communication link, to be replaced with devices having a faster data rate and transfer speed, thereby reducing the expense and resources involved in increasing the performance of the data storage subsystem. This affords greater flexibility in configuring data storage devices within the data storage subsystem.
Information is typically transferred between the storage controller and the data storage devices by sending data through the data and communication link, which consists of transmission lines. Information consists of communication signals and/or data bytes and may be transferred bi-directionally over the transmission lines. Synchronization signals, or clocking signals, often accompany the data bytes sent from the storage controller to the data storage device to validate the data on the transmission lines and to coordinate the timing of the transmitted information between the storage controller and the data storage device. These signals notify the receiving device that the data bus contains valid information and cause the receiving device to latch the information from the data bus. The clocking signals correspond to the transfer speed and data rate of the data storage device. Thus, the storage controller must generate different clocking, or timing, signals when communicating with data storage devices having differing data rates.
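The latch-on-strobe behavior described above can be modeled in a few lines. This Python sketch is illustrative only, assuming a single strobe line and a rising-edge latch; none of the names here come from the application:

```python
class Receiver:
    """Receiving device that latches the data bus on a clocking signal."""

    def __init__(self):
        self.latched = []       # bytes captured from the bus
        self._last_strobe = 0   # previous level of the strobe line

    def sample(self, data_bus: int, strobe: int) -> None:
        # Latch on the rising edge of the strobe: the transition tells
        # the receiver that the data bus now carries valid information.
        if strobe == 1 and self._last_strobe == 0:
            self.latched.append(data_bus)
        self._last_strobe = strobe

def transmit(receiver: Receiver, data_bytes) -> None:
    """Controller side: drive each byte onto the bus, then strobe it."""
    for byte in data_bytes:
        receiver.sample(byte, 0)   # bus settles while the strobe is low
        receiver.sample(byte, 1)   # strobe rises: receiver latches the byte
```

The rate at which the controller raises the strobe is what must match each device's data rate, which is why a controller serving devices with differing rates must generate differing timing signals.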
Recently, advances have been made in the techniques used to clock the communication signals and data bytes between the storage controller and the data storage devices. A copending, commonly-assigned patent application, Method for Enhancing Data Transmission in Parity Based Data Processing Systems, Ser. No. 08/780,570, filed Jan. 8, 1997, describes using a parity bit to clock the data byte when sending information from a transmitting device to a receiving device. Using the parity bit location to strobe, or clock, the data byte transmitted between the storage controller and the data storage device minimizes the time skew between the clock signal and the data occurring during data transmission, allows the data transfer to run at a faster transfer speed more closely approaching the system clock in the storage controller, and accommodates longer transmission lines between the storage controller and the data storage device.
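The parity-bit clocking technique can likewise be sketched in simplified form. In this illustrative Python model (all names are assumptions, not part of the referenced application), the transmitter toggles the parity-bit line once per byte, so that line acts as a strobe traveling alongside the data it clocks, and the receiver latches on either edge:

```python
class ParityClockedReceiver:
    """Latches a byte on every transition of the parity-bit line."""

    def __init__(self):
        self.latched = []
        self._last_parity = 0

    def sample(self, data_bus: int, parity_bit: int) -> None:
        # Either edge of the parity line clocks the byte in; because the
        # parity bit travels with the byte it validates, skew between
        # clock and data is minimized.
        if parity_bit != self._last_parity:
            self.latched.append(data_bus)
            self._last_parity = parity_bit

def send_parity_clocked(receiver: ParityClockedReceiver, data_bytes) -> None:
    """Transmitter side: toggle the parity line once per byte as a strobe."""
    parity = 0
    for byte in data_bytes:
        parity ^= 1                    # toggle, rather than compute parity
        receiver.sample(byte, parity)  # the bit no longer encodes parity
```

Note that in this scheme the parity bit no longer reflects the contents of the byte, which is precisely why a separate error-detection mechanism becomes necessary, as discussed next.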
Accordingly, an apparatus is needed to allow devices with different characteristics, such as differing data rates, to be connected to the same data and communication link, and to share the same transmission lines, within a data storage subsystem. The present invention provides a storage controller with a data transfer component that uses conventional timing schemes to clock information to a first device, and also uses a parity bit location within a data bus to clock information to a second device, having a data rate substantially greater than that of the first device, along the same data and communication link. The data transfer component also uses an alternate means for detecting data transmission errors to the second device, since the parity bit is no longer used to detect these errors.
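This section does not specify which alternate error-detection means replaces per-byte parity. One conventional possibility, shown here purely as an assumption for illustration, is a cyclic redundancy check computed over the transferred bytes and compared at the receiving end:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """CRC-8 over a byte sequence (polynomial x^8 + x^2 + x + 1, MSB-first,
    zero initial value). Shown only as one plausible error-detection scheme."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc
```

Because such a check value covers an entire transfer rather than each individual byte, the parity-bit line remains free to serve as the clocking signal while transmission errors are still detectable.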