Data processing systems rely upon stored information to process data and perform the applications they are required to perform. The stored information can include an application program, which instructs the data processing system, for example, on how to manipulate data, or it can be the data itself. In any event, in all but the smallest applications, it is usually necessary to move the information from a storage medium into the data processing system's internal memory before processing can proceed. Many different forms of storage media exist for storing the information needed by data processing systems.
Three typical forms of storage media include magnetic disks, optical disks and magnetic tape. A common form of magnetic disk storage media is direct access storage devices (DASD). DASD offers high storage capacity while providing relatively fast access to the information stored thereon. Optical disks are capable of storing larger amounts of information than DASD but require more time to access the stored information. Magnetic tape presently has the ability to store the largest amount of information but data access is much slower than DASD or optical disks. Magnetic tape's combination of high storage capacity, slow access speed, and low cost make it well suited for information back-up purposes.
Regardless of the storage medium used, the stored information must be read therefrom and converted into a form the data processing system can recognize. In the case of DASD and optical storage media, the information is stored on rigid disks that spin at a given speed while a read head, stationed just above the storage medium, detects the binary 1's and 0's that make up the stored information and converts them into electrical signals. Magnetic tape is typically stored on a reel or in a cartridge, and the tape is forced to travel across a tape read head at a given speed while the read head detects the information stored on the tape as it passes by. When very large amounts of information must be retrieved from the storage medium, the time required to access that information is an important factor. The access time is a function of the speed at which the storage medium travels past the read head. However, increasing the storage medium speed generally results in increased read errors.
A common read error problem occurs as a result of time domain failures. Each bit of information read from the storage medium must occur within a specified time window. If the position of the storage medium, for a given bit, is outside the expected time window, the data processing system will receive erroneous information. Hence, it is vital that the storage medium be synchronized to a data clock. Synchronization requires that the data clock be adjusted frequently by estimating a phase error between the bits of information read from the storage medium and the data clock. The phase error estimate is then used to adjust the data clock.
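The adjustment loop described above, in which each phase error estimate is used to nudge the data clock, can be illustrated with a minimal first-order timing loop. The class name, the loop gain value, and the sign convention below are assumptions for illustration, not part of the described system:

```python
class TimingLoop:
    """Illustrative first-order timing-recovery loop (a sketch, not
    the patented method).

    Each cycle, a phase error estimate, expressed as a fraction of
    one clock period, nudges the sampling phase toward alignment
    with the data read from the storage medium."""

    def __init__(self, gain=0.1):
        self.gain = gain    # loop gain (assumed value)
        self.phase = 0.0    # current sampling-phase offset

    def update(self, phase_error):
        # Apply only a fraction of the measured error each cycle so
        # that noise in individual estimates is averaged out while
        # the phase still converges toward the true offset.
        self.phase -= self.gain * phase_error
        return self.phase
```

Feeding the loop a sequence of error estimates drives the sampling phase toward the data timing; the gain trades convergence speed against noise sensitivity.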
Numerous other signal applications require that phase adjustments be made between a first fixed signal and a second signal subject to drifting from the timing of the first fixed signal (i.e., phase changes between the two signals). For example, Muratani, et al., in U.S. Pat. No. 4,862,104, describe a system for receiving a first information signal having a carrier that is compared to both a first carrier of a fixed oscillator and a second carrier of the fixed oscillator which is orthogonal to the first carrier for determining phase and frequency errors. Scordo, in U.S. Pat. No. 4,633,193, describes a system for synchronizing a local timing signal to a reference timing signal. The local and reference timing signal frequencies are estimated by comparing each to a fixed oscillator signal. Another common application requiring phase estimation includes recovering clock timing from a transmitted signal. Roux teaches a clock recovery circuit in U.S. Pat. No. 4,744,096 which relies on an in-phase and a quadrature phase received signal for recovering clock timing. Roux's clock recovery circuit includes sampling the received signals, converting the samples to digital samples, and determining a phase error therefrom.
A method for adjusting sample-timing phase in storage systems, including samples taken from magnetic tape, is described in "Fast Timing Recovery For Partial-Response Signaling Systems," F. Dolivo, W. Schott, and G. Ungerboeck, IEEE International Conference on Communications, June 1989, pp. 573-577. The described method uses a hysteresis effect to reduce the length of the synchronization burst required for initially synchronizing the data samples to a data clock. The hysteresis effect is provided by using past data-signal estimates to set present decision threshold levels.
Several methods of writing data to a storage medium are available. For example, the data may be encoded according to many well known methods. The ability to accurately read data from a magnetic storage medium is affected by the method chosen for writing the data to the storage medium. Information stored on magnetic tape, for example, may be modulation coded to improve the accuracy of reading the data from the magnetic tape. A (1,k) modulation code describes a data format requiring that each binary 1 be followed by at least one but not more than k binary 0's. Therefore, a (1,1) modulation code consists of alternating binary 1's and 0's. Furthermore, it is common to require that successive peaks representing binary 1's alternate in polarity. A (1,1) modulation code is often used as a synchronization burst for initially synchronizing the storage medium and the data clock. The synchronization burst works well given its predictability.
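The (1,k) run-length constraint stated above can be sketched as a simple checker. The function name and the treatment of a trailing, incomplete run are assumptions made for this illustration:

```python
def satisfies_1k(bits, k):
    """Check the (1,k) constraint described in the text: every
    binary 1 must be followed by at least one but not more than
    k binary 0's. A run of 0's cut off by the end of the sequence
    is treated as incomplete and is not checked (an assumption)."""
    i, n = 0, len(bits)
    while i < n:
        if bits[i] == 1:
            # Count the 0's that follow this 1.
            j = i + 1
            while j < n and bits[j] == 0:
                j += 1
            zeros = j - (i + 1)
            # Only enforce the bound on complete runs.
            if j < n and not (1 <= zeros <= k):
                return False
            i = j
        else:
            i += 1
    return True


def sync_burst(num_bits):
    """A (1,1) pattern of alternating 1's and 0's, as used for a
    synchronization burst."""
    return [1 - (i % 2) for i in range(num_bits)]
```

Under this constraint the (1,1) code admits only the alternating pattern, which is exactly why it is predictable enough to serve as a synchronization burst.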
A prior method of synchronizing data read from magnetic tape to a data clock includes sampling the data on each clock cycle and storing three successive samples. When a peak is detected, the samples on each side of the peak are compared to the values the modulated data would have given perfect synchronization, i.e., the ideal values. Based on the slope of the modulation code at the sampled times and the differences in magnitudes, a phase error can be estimated and the data clock adjusted accordingly. While this method is satisfactory for a synchronization burst having a known data pattern, it is less effective when the data patterns are arbitrary, such as would occur, for example, in a (1,7) modulation code. When a data pattern is arbitrary, the slope may change depending upon whether more than one binary 0 follows a binary 1. Using an incorrect slope in the calculation of a phase error, of course, leads to an erroneous result, since the ideal values cannot be known. A more accurate determination of the slope requires that additional data be considered on each side of a detected peak.
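The three-sample prior method described above can be sketched as follows. The function, the ideal side-sample value, and the slope parameter are illustrative assumptions; the sketch also shows why a wrong slope, as occurs with arbitrary data patterns, corrupts the estimate:

```python
def estimate_phase_error(prev_sample, next_sample, ideal_side, slope):
    """Illustrative sketch of the prior three-sample method: estimate
    the phase error from the two samples flanking a detected peak.

    prev_sample, next_sample: samples one clock period before and
        after the peak.
    ideal_side: the value a side sample would have under perfect
        synchronization (assumed known for a fixed sync pattern).
    slope: the signal slope, in amplitude per clock period, at the
        side-sample instants (assumed known for a fixed pattern).
    """
    # The waveform rises into the peak and falls after it, so the
    # two side samples see slopes of opposite sign. Each deviation
    # from the ideal value, divided by the local slope, gives one
    # phase-error estimate; averaging the two reduces noise.
    err_before = (prev_sample - ideal_side) / slope
    err_after = (next_sample - ideal_side) / (-slope)
    return 0.5 * (err_before + err_after)
```

With a known synchronization burst, `ideal_side` and `slope` are fixed and the estimate is reliable. With an arbitrary (1,7) pattern the true slope depends on how many 0's follow each 1, so supplying the wrong `slope` scales the estimate incorrectly; this is the failure the text attributes to the prior method.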
Thus, what is needed is a phase error estimator, for synchronizing data read from a storage medium to a data clock, that uses data information from two time periods on each side of a detected peak, thereby making possible an accurate determination of the respective slopes and ideal magnitudes of a modulation code and hence accurate phase error estimation for arbitrary data patterns.