Conventionally, mission-critical systems, typified by online systems for financial institutions and transportation, are expected to achieve both “high reliability” and “high speed”.
Of these two requirements, high reliability is generally achieved by a database-duplexing technology, such as cluster technology or replication technology.
FIG. 6 is a diagram of a conventional database cluster technology. As depicted in FIG. 6, the cluster technology is a technique of using cluster software to make a database redundant. With the hardware arranged in a redundant structure (“node #1”, “node #2”, “node #3”, . . . depicted in FIG. 6), the database is made highly reliable. This technology is adopted in, for example, social infrastructure systems. In this cluster technology, data integrity is ensured by a shared disk (“disk” depicted in FIG. 6).
FIG. 7 is a diagram of a conventional database replication technology. As depicted in FIG. 7, the replication technology is a technique of conveying only the update results of a copy-source database (“node #1” in FIG. 7) to a copy-destination database (“node #2” in FIG. 7), where they are applied, thereby making a replica of the database. This technology is adopted in, for example, a disaster control system. In this replication technology, File Transfer Protocol (FTP) transfer is used as a technique of transferring data to the copy destination.
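By way of illustration, the replication flow of FIG. 7 can be sketched as follows. This is a minimal sketch, not any actual product's implementation; the class and method names are hypothetical. Only the update results are conveyed from the copy source to the copy destination, where they are applied to produce a replica.

```python
class SourceNode:
    """Copy-source database ("node #1"): records its update results."""

    def __init__(self):
        self.data = {}
        self.update_log = []  # update results pending transfer to the replica

    def update(self, key, value):
        self.data[key] = value
        self.update_log.append((key, value))

    def ship_updates(self):
        """Hand over pending update results (transferred, e.g., via FTP)."""
        updates, self.update_log = self.update_log, []
        return updates


class ReplicaNode:
    """Copy-destination database ("node #2"): applies shipped updates."""

    def __init__(self):
        self.data = {}

    def apply(self, updates):
        # Applying only the update results reproduces the source contents.
        for key, value in updates:
            self.data[key] = value


source = SourceNode()
replica = ReplicaNode()
source.update("account", 100)
replica.apply(source.ship_updates())
```

Note that only the update results, not the whole database, cross the network; this is what distinguishes replication from copying the database wholesale.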
On the other hand, as a technology for achieving high speed, the in-memory database has attracted attention in recent years. An in-memory database achieves faster data access from applications, and also achieves load distribution, by storing data not on a disk but in main memory (for example, refer to “Oracle TimesTen In-Memory Database” retrieved on Feb. 15, 2005 from the Internet <URL: http://otn.oracle.co.jp/products/timesten/>).
FIG. 8 is a diagram for explaining an in-memory database. As depicted in FIG. 8, in an in-memory database, the user data stored on a hard disk is made resident in memory. A task application updates the user data in the memory. Since accesses are made to memory, high speed can be achieved.
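The arrangement of FIG. 8 can be sketched as follows. This is a minimal, hypothetical sketch (the file name and data format are illustrative): user data persisted on disk is loaded at startup so that it resides in memory, and the task application thereafter reads and updates it there, without touching the disk on the access path.

```python
import json
import os
import tempfile

# Simulate user data persisted on a hard disk.
path = os.path.join(tempfile.mkdtemp(), "userdata.json")
with open(path, "w") as f:
    json.dump({"data": "A"}, f)

# At startup, the user data is made resident in main memory.
with open(path) as f:
    memory_db = json.load(f)

# The task application updates the user data in memory only;
# no disk access occurs on this path, which is the source of the speedup.
memory_db["data"] = "B"
```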
In such an in-memory database, in addition to high speed, reliability as a database can be achieved by writing a log for ensuring transaction processing to a disk. Such a technique of achieving reliability by using a log is used not only for the in-memory database but also for a conventional database with a disk as the storage medium. Logs used in this technique generally include a Before Image (BI) log and an After Image (AI) log.
The BI log retains the contents of the database before an update, and is used mainly at the time of rollback, for restoring the contents of the database to the state before the transaction's updates. By contrast, the AI log retains the contents of the database after an update, and is used mainly for ensuring, at the time of down recovery, the updated contents of the database for a completed transaction.
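The roles of the two logs can be sketched as follows. This is a minimal sketch under simplifying assumptions (a single in-flight transaction, a key-value store); the names are hypothetical. The BI log keeps the pre-update contents so a transaction can be rolled back; the AI log keeps the post-update contents so committed updates can be redone.

```python
db = {"x": "A"}
bi_log = []  # before images: contents prior to each update, for rollback
ai_log = []  # after images: contents after each update, for redo at recovery


def update(key, value):
    bi_log.append((key, db[key]))  # record the contents before the update
    ai_log.append((key, value))    # record the contents after the update
    db[key] = value


def rollback():
    """Undo the current transaction by re-applying before images in reverse."""
    while bi_log:
        key, old = bi_log.pop()
        db[key] = old
    ai_log.clear()  # the transaction's after images are discarded as well


update("x", "B")
rollback()  # restores db["x"] to "A", the state before the transaction
```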
Here, down recovery is explained. In conventional down recovery, each transaction is identified at the time of rebooting the database after the system goes down, and whether the transaction is valid or invalid is determined depending on the state of the transaction when the system went down.
Specifically, in the conventional down recovery, a transaction in which a commit process had not yet been completed when the system went down is taken as invalid, and a data update performed during that transaction is also taken as invalid. On the other hand, a transaction in which a commit process had been completed when the system went down is taken as valid, and a data update performed during that transaction is also taken as valid.
FIG. 9 is a diagram for explaining conventional database down recovery. For example, as depicted in (1) of FIG. 9, it is assumed that a task application performs a commit process after updating user data “A” to “B”. At this time, as depicted in (2) of FIG. 9, a log indicating that the user data “A” has been updated to “B” is retained in a hard disk.
Then, as depicted in (3) of FIG. 9, it is assumed that a server goes down. In this case, when the database is rebooted, the user data in the database is restored to the latest state based on the information in the log. For example, as depicted in (4) of FIG. 9, the user data in the database is restored from “A” to “B”.
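The recovery procedure of FIG. 9 can be sketched as follows. This is a minimal sketch with a hypothetical log record format: a transaction whose commit record appears in the log is taken as valid and its updates are redone, while a transaction with no commit record is taken as invalid and its updates are discarded.

```python
# Hypothetical log contents at reboot, mirroring FIG. 9:
log = [
    ("update", "tx1", "data", "B"),  # user data "A" was updated to "B"
    ("commit", "tx1"),               # commit completed before the crash
    ("update", "tx2", "data", "C"),  # no commit record: tx2 is invalid
]


def recover(log, db):
    """Restore the database to its latest valid state from the log."""
    # A transaction is valid only if its commit record reached the log.
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    # Redo updates of committed transactions; discard all others.
    for rec in log:
        if rec[0] == "update" and rec[1] in committed:
            _, _, key, value = rec
            db[key] = value
    return db


db = recover(log, {"data": "A"})  # tx1 is redone; tx2's update is discarded
```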
As such, in the conventional technology, high speed is achieved by using an in-memory database. Furthermore, a log indicating the updated contents of data (hereinafter, referred to as “update log”) is retained in a hard disk so as to allow data to be restored at the time of occurrence of a failure, thereby achieving high reliability.
In the conventional technology explained above, an update log is written to a disk so as to achieve high reliability. However, accessing the disk at the time of writing the update log disadvantageously impairs high-speed access to the database. To solve this problem, access to the disk in the in-memory database has to be completely avoided.
However, to completely avoid access to the disk, a new technology that achieves high reliability without using a disk is required in place of the conventional disk-based technology. This requirement poses a serious problem for pursuing higher speed in next-generation mission-critical systems.