A logical standby database is a logical replica of a source or primary database that is kept synchronized with the source database. Synchronization may be accomplished in a variety of ways. For example, SQL statements may be extracted from the redo log stream generated by the source database, and the extracted SQL statements may be re-executed in the same order in which they were executed in the source database. Other techniques for synchronizing a logical standby database are also possible.
In order to synchronize a logical standby database using statements extracted from a redo stream, two main functions must be performed. First, a log analysis component must analyze the redo stream to reconstruct the transactions that were executed in the source database and their order of execution. Second, an apply component must re-execute these extracted transactions in the given order to synchronize the logical standby database with the primary database. The log analysis component processes the redo records to extract the equivalent of the original data manipulation language (DML) statements that produced the records; DMLs belonging to the same transaction are grouped together, and committed transactions are returned to the apply component.
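The two functions described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the record fields (`xid`, `op`, `sql`) and the generator/apply interfaces are hypothetical stand-ins for the redo record format and the component boundary.

```python
from collections import OrderedDict

def analyze_redo_stream(redo_records):
    """Log analysis component (sketch): group redo records by transaction
    and release a transaction to the apply component only when its COMMIT
    record is seen, in commit order.

    Each record is assumed to be a dict with keys 'xid' (transaction id),
    'op' ('dml' or 'commit'), and 'sql' (the reconstructed DML statement).
    """
    pending = OrderedDict()  # uncommitted transactions, keyed by xid
    for rec in redo_records:
        if rec['op'] == 'dml':
            pending.setdefault(rec['xid'], []).append(rec['sql'])
        elif rec['op'] == 'commit':
            # Only committed transactions are returned; uncommitted
            # ones remain buffered in `pending`.
            yield rec['xid'], pending.pop(rec['xid'], [])

def apply_transactions(transactions, execute):
    """Apply component (sketch): re-execute each committed transaction's
    DMLs in the order the transactions committed on the source."""
    for xid, statements in transactions:
        for stmt in statements:
            execute(stmt)
```

Note that transactions held in `pending` at any moment are exactly the uncommitted transactions discussed next: they exist only in memory and would have to be rebuilt from the redo logs after a crash.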
Only committed transactions can be re-executed by the apply component on the standby database, whereas the redo stream contains data relating to both committed and uncommitted transactions. Thus, even when all committed transactions have been extracted and applied on the standby database, there can also be uncommitted transactions that originated quite far back in time. This means that data relating to these uncommitted transactions may be present in a large number of redo log files. A system crash at this point will require re-processing all of these redo log files in order to extract the data related to the uncommitted transactions, which can cause a long delay in crash recovery. Even worse, if there is a "rogue" transaction that has only a small number of DMLs associated with it but is long in duration, such re-processing will cause a long and inefficient delay in crash recovery.
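The effect of a long-running uncommitted transaction on recovery can be shown with a short sketch. The record format and the notion of a per-record log file number are illustrative assumptions; the point is that the oldest open transaction pins the point from which the redo logs must be re-read.

```python
def recovery_start_log(log_records):
    """Without checkpoints, crash recovery must restart from the redo log
    file containing the first record of the oldest still-uncommitted
    transaction. Records are assumed to carry 'xid', 'op', 'log_file'."""
    first_seen = {}  # xid -> log file number of the transaction's first record
    for rec in log_records:
        if rec['op'] == 'dml':
            first_seen.setdefault(rec['xid'], rec['log_file'])
        elif rec['op'] == 'commit':
            first_seen.pop(rec['xid'], None)
    # The oldest open transaction determines the recovery start point;
    # None means all transactions committed and no re-reading is pinned.
    return min(first_seen.values()) if first_seen else None
```

For example, a "rogue" transaction that wrote a single DML in log file 1 and never committed forces recovery to re-process from file 1 onward, even if every later file contains only committed work.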
One solution is to checkpoint the uncommitted transactions periodically, so that a system crash will not necessitate re-processing of the redo logs. However, a problem arises in determining how often such a checkpoint needs to occur. There is a cost/benefit tradeoff between the frequency of checkpointing, the steady-state performance of the system, and the crash recovery time: the higher the frequency of checkpointing, the lower the time to recover from a crash, but the worse the steady-state performance.
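The tradeoff can be made concrete with a toy cost model. The model and its parameters are assumptions introduced for illustration, not part of the source: checkpointing every `interval` time units incurs a fixed per-checkpoint cost, while the expected recovery cost grows with the interval because, on average, half an interval of redo must be re-processed after a crash.

```python
def total_overhead(interval, checkpoint_cost, crash_rate, replay_cost_per_unit):
    """Toy cost model (illustrative assumption): expected total overhead
    per time unit as a function of the checkpoint interval.

    interval             -- time between checkpoints
    checkpoint_cost      -- cost of writing one checkpoint
    crash_rate           -- expected crashes per time unit
    replay_cost_per_unit -- cost to re-process one time unit of redo
    """
    steady_state = checkpoint_cost / interval          # more frequent -> higher
    expected_recovery = crash_rate * replay_cost_per_unit * interval / 2
    return steady_state + expected_recovery
```

Shrinking the interval raises the steady-state term while shrinking the expected recovery term, and vice versa, which is exactly the cost/benefit tension a checkpoint-frequency policy must resolve.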
Thus, a need arises for a technique to determine the frequency of checkpointing of transactions based on a cost/benefit tradeoff analysis.