1. Field of the Invention
The present invention relates to database caching and more particularly to in-memory data grid (IMDG) caching in a database management system.
2. Description of the Related Art
Memory cache technologies have formed an integral part of computer engineering and computer science for well over two decades. Initially embodied as part of the underlying hardware architecture of a data processing system, data caches and program instruction caches store often-accessed data and program instructions in fast memory for subsequent retrieval in lieu of retrieving the same data and instructions from slower memory stores. Consequently, substantial performance advantages have been obtained through the routine incorporation of cache technologies in computer designs.
An in-memory data grid (IMDG) is a data structure that resides entirely in random access memory (RAM), and is distributed amongst multiple servers. Recent advances in 64-bit and multi-core systems have made it practical to store terabytes of data completely in RAM, obviating the need for electromechanical mass storage media such as hard disks. Of note, an IMDG can support hundreds of thousands of in-memory data updates per second, and an IMDG can be clustered and scaled in ways that support large quantities of data. Specific advantages of IMDG technology include enhanced performance because data can be written to, and read from, memory much faster than is possible with a hard disk. Further, an IMDG can be easily scaled, and upgrades can be easily implemented.
To achieve optimal performance, database driven applications typically institute a caching layer between the end user and the underlying database. Further, to achieve particular performance advantages, the caching layer can utilize an IMDG. The IMDG can be hydrated—that is, populated with data—in one of two ways: by preload or at runtime.
In a preload scenario, prior to runtime it can be determined which data in the database is most likely to be requested, and that data can be preloaded into the IMDG. For instance, the preloaded data can be based upon a selection of columns and tables that are considered most active by the administrator configuring the caching policy. This selection, however, is based upon estimated usage and not upon any verifiable data.
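The preload approach described above can be sketched as follows. This is a minimal, hypothetical illustration using a plain dictionary as a stand-in for the grid; a real deployment would use an IMDG client API, and the table and column names shown are assumptions, not part of the original text.

```python
# Sketch of preload hydration for an IMDG-style cache.
# Administrator-selected "most active" tables and key columns: this
# selection reflects estimated usage, not measured access patterns.
PRELOAD_SELECTION = [("orders", "order_id"), ("customers", "customer_id")]

def preload(grid, db_rows, selection):
    """Populate the grid before runtime with rows from the selected tables."""
    for table, key_col in selection:
        for row in db_rows.get(table, []):
            # Key entries by (table, primary-key value) for direct lookup.
            grid[(table, row[key_col])] = row
    return grid

# Example: a tiny in-process stand-in for the underlying database.
db = {
    "orders": [{"order_id": 1, "total": 9.99}],
    "customers": [{"customer_id": 7, "name": "Ada"}],
}
grid = preload({}, db, PRELOAD_SELECTION)
# grid now holds entries for the selected tables before any request arrives.
```

Because the selection is fixed ahead of time, any row outside the selected tables and columns will miss the cache at runtime regardless of how often it is actually requested.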
In contrast, a runtime cache policy is typically applied when an item is requested from the cache and the entry is not found. Once the item is located in the database, the cache is populated with the entry, and the entry is evicted at some later point based upon a preset policy. Of course, in the runtime scenario, the entry in the cache may never be requested again, yet it will remain in the cache until evicted.
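The runtime scenario above can be sketched as a cache-aside lookup with a simple eviction policy. This is a hypothetical illustration, not an actual IMDG implementation: the class name, the capacity-bounded least-recently-used eviction, and the lookup callback are all assumptions chosen to make the preset-policy behavior concrete.

```python
from collections import OrderedDict

class RuntimeCache:
    """Cache-aside sketch: populate on a miss, evict per a preset policy.

    Here the preset policy is least-recently-used bounded by a capacity;
    real IMDGs offer configurable eviction (e.g., by time-to-live or size).
    """

    def __init__(self, lookup, capacity=2):
        self.lookup = lookup        # fallback query against the database
        self.capacity = capacity    # preset policy: evict beyond this size
        self.entries = OrderedDict()

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)       # cache hit: mark as recent
            return self.entries[key]
        value = self.lookup(key)                # miss: fetch from database
        self.entries[key] = value               # populate cache with entry
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)    # evict least recently used
        return value

# Usage: a dictionary stands in for the underlying database.
db = {"a": 1, "b": 2, "c": 3}
cache = RuntimeCache(db.__getitem__, capacity=2)
cache.get("a")
cache.get("b")
cache.get("c")  # exceeds capacity, so "a" is evicted
```

Note the drawback stated above: an entry such as "a" may never be requested again, yet it occupies grid memory until the eviction policy happens to remove it.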