The Information Technology (IT) industry currently faces a serious problem as the year 2000 approaches. Programs and applications that represent the year with only two digits (e.g., `96` rather than `1996`) will fail when they attempt arithmetic operations, comparisons, or sorting on dates that lie beyond the year 2000. Incorrect results are obtained whenever such programs process dates outside the range of Jan. 1, 1900 to Dec. 31, 1999. The scope of the year 2000 date change is very wide: it includes both hardware and software, from microcode up to application program code, on both mainframes and desktop computers; it encompasses both programs and database files; and it spans all computer platforms.
The year 2000 problem stems mostly from application programs, and their corresponding data, that use two digits rather than four to represent the year. A large number of these programs were written by application programmers in the 1960s and 1970s who never expected their software to still be in use 20 or 30 years later, in the year 2000. In addition, these programs were written at a time when every byte of storage was valuable, and every opportunity to reduce the storage requirements of programs and databases was taken. As a result, a two-digit field for year data was widely used and quickly became the customary practice of the trade. Rather than being discarded as time passed, these legacy applications, mostly written for mainframes and minicomputers, received new features and updates to keep them current.
The year 2000 problem is compounded when one considers the ramifications of using two digits to store the year. Any arithmetic calculation that subtracts year data risks producing incorrect results for years 2000 and beyond. Two-digit date information cannot be compared correctly when the dates fall in different centuries, and sorting algorithms will not correctly order records that include years 2000 and beyond. For example, the year 2003, represented by 03, sorts before the year 1996, represented by 96. Leap year and other day-of-week and month calculations are also affected. For example, the year 2000 is a leap year whereas 1900 is not, and the days of the week on which given dates fall are not the same in, for example, the years 2001 and 1901.
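The failure modes described above can be demonstrated with a brief illustrative sketch (not part of the original disclosure); it shows how two-digit year strings mis-sort and mis-compare across the century boundary, and how the 1900/2000 leap-year rules differ:

```python
# Two-digit year strings from records spanning the century boundary.
two_digit_years = ["96", "03", "99", "00"]

# Lexicographic sort puts "03" (2003) before "96" (1996).
print(sorted(two_digit_years))  # ['00', '03', '96', '99']

# Comparison across the century boundary is also wrong:
print("03" > "96")  # False, although 2003 is later than 1996

# Leap-year logic: 2000 is a leap year, 1900 is not.
def is_leap(year):
    """Gregorian leap-year rule, which requires the full 4-digit year."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(1900), is_leap(2000))  # False True
```

Note that the leap-year rule cannot even be evaluated from a two-digit year alone, since 00 must be distinguished as 1900 or 2000.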
Several solutions to this problem have already been proposed. One solution is to expand the date field so that it stores a 4-digit year. This appears to be the appropriate solution for making a software system year-2000 compliant. No change to the logic of the program is required, only obvious changes to date-related portions. Although elegant, a disadvantage of this approach is that changing the data field length requires simultaneous conversion of both programs and data files. All software programs that reference a converted data file must also be converted to handle the 4-digit year format. This is either impossible or extremely difficult to implement, considering that the typical data processing shop may have a combined number of modules and database files in the range of 15,000 to 30,000. Thus, the link between programs and data files makes it necessary to build encapsulated clusters of programs and data files that can be converted and then implemented, independent of other programs and files in the system. Due to the interrelated nature of most systems, these clusters tend to become extremely large and may contain many hundreds of programs and data files.
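The field-expansion approach can be sketched for a hypothetical fixed-width record layout (the record format and field offset here are invented for illustration, not taken from the source): a 6-character YYMMDD date is widened to 8-character YYYYMMDD, which is exactly why every program and file sharing the layout must convert in lockstep.

```python
def expand_record(record, date_offset=0):
    """Widen the 2-digit year at date_offset into a 4-digit year.

    Assumes the record's dates are all 19xx; a real batch conversion
    would have to decide the century for each record individually.
    """
    return record[:date_offset] + "19" + record[date_offset:]

# A hypothetical fixed-width record: YYMMDD date followed by a key.
print(expand_record("960315CUSTOMER001"))  # '19960315CUSTOMER001'
```

Because the converted record is two characters longer, any unconverted program that still reads the old offsets would misinterpret every field after the date.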
Another solution is to use a windowing technique which externalizes the 2-digit year into a 4-digit representation. This technique uses a fixed or rolling window of 100 years to determine whether the 2-digit year is 19xx or 20xx. A range of 2-digit values is assigned to each century. For example, the numbers 00 to 20 might represent the years 2000 to 2020, while the numbers 21 to 99 represent the years 1921 to 1999. An advantage of this technique is that there is no need to expand the 2-digit year to a 4-digit format. A disadvantage is that a problem arises if years beyond the 100-year range are needed. In addition, any databases that use the 2-digit year as an index will not work because of the discontinuous steps from 20 to 21 and from 99 to 00.
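A fixed-window expansion using the example pivot above (00-20 mapping to 2000-2020, 21-99 to 1921-1999) can be sketched as follows; the pivot value is the one from the example, and real systems choose their own window:

```python
PIVOT = 20  # example cutoff from the text; systems choose their own

def expand_year(yy):
    """Expand a 2-digit year using a fixed 100-year window."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a 2-digit year")
    return 2000 + yy if yy <= PIVOT else 1900 + yy

print(expand_year(5))   # 2005
print(expand_year(96))  # 1996
print(expand_year(20), expand_year(21))  # 2020 1921 -- the discontinuity
```

The jump from `expand_year(20)` to `expand_year(21)` makes the discontinuity concrete: stored 2-digit values no longer increase monotonically with the actual year, which is why indexes keyed on the raw 2-digit field break.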
A third solution is to encode and compress the 2-digit date field so that it holds a 4-digit year. An advantage of this solution is that there is no need to expand the 2-digit year data format to a 4-digit year format. A disadvantage of this solution is that all programs that access the data file must be modified to handle the compressed/encoded 4-digit year format.
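One hypothetical encoding of this kind (the specific scheme here is an illustration, not one named in the source) writes the tens position of the year offset from 1900 in base 36, so the decades 190x-199x remain '0'-'9' and the 2000s become 'A', 'B', and so on. Existing 19xx records are unchanged, the field stays two characters wide, and ASCII ordering still sorts correctly:

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_year(year):
    """Pack a 4-digit year into a 2-character field (1900-2259)."""
    offset = year - 1900
    if not 0 <= offset < 360:
        raise ValueError("year out of encodable range")
    return DIGITS[offset // 10] + DIGITS[offset % 10]

def decode_year(field):
    """Recover the 4-digit year from the encoded 2-character field."""
    return 1900 + DIGITS.index(field[0]) * 10 + DIGITS.index(field[1])

print(encode_year(1996))  # '96'  (legacy records unchanged)
print(encode_year(2003))  # 'A3'
print(decode_year("A3"))  # 2003
print(sorted([encode_year(y) for y in (2003, 1996)]))  # ['96', 'A3']
```

The catch the text identifies is visible here: every program touching the file must now call `decode_year` instead of reading the field as a plain number.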
Illustrated in prior art FIG. 1 is a diagram showing the potential problems involved in using some of the prior art solutions discussed above. Depicting a small group of programs, Programs A 12, B 14, C 16 and D 18 are shown accessing one or more related data Files A 20, B 22, C 24 and D 26. Program A, for example, is shown with links to Files A, B and C. If Program A is converted to the 4-digit year format, then before it can be released into production, data Files A, B and C must also be converted; otherwise Program A will not work. Program B, however, accesses Files A, C and D, so changing Program A also affects the operation of Program B. Therefore, it can be seen that in a typical scenario, the conversion of one program or data file cannot be done independently of other programs and data files, since they are typically interdependent.
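The cascading effect can be sketched by computing the transitive closure over the program-to-file links that the text gives for Programs A and B (the links for Programs C and D are not stated, so they are omitted here); everything reachable from Program A must be converted together:

```python
# Program-to-file links stated in the discussion of FIG. 1.
links = {
    "Program A": {"File A", "File B", "File C"},
    "Program B": {"File A", "File C", "File D"},
}

def conversion_cluster(start):
    """All programs and files that must be converted together with `start`."""
    cluster, frontier = set(), {start}
    while frontier:
        node = frontier.pop()
        cluster.add(node)
        for prog, files in links.items():
            if node == prog:                     # a program pulls in its files
                frontier |= files - cluster
            elif node in files:                  # a file pulls in its programs
                frontier |= {prog} - cluster
    return cluster

print(sorted(conversion_cluster("Program A")))
```

Starting from Program A alone, the closure pulls in Files A, B and C, then Program B (which shares Files A and C), and finally File D, showing how a single conversion decision grows into a six-element cluster.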