In data grids, caching data in memory across multiple machines is intended to eliminate performance bottlenecks and minimize data access latency. To meet increasingly demanding requirements for high data availability and reliable data storage and processing in mission-critical environments, applications must be robust and agile, making timely cache distribution decisions and compensating for machine and network failures.
The traditional approach to dynamic cache distribution in in-memory data grids relies on heuristics. The building blocks that constitute the problem (e.g., assignment, load balancing, clustering, and resiliency) have not been modeled and solved together in any data grid problem. Furthermore, only static versions of the problem have been solved, in the hope that the solutions would remain correct in a dynamic setting. However, when cache sizes change over time (a dynamic aspect of the problem), solution quality degrades drastically and the approach becomes obsolete.
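The degradation described above can be illustrated with a small sketch. The following Python example (a hypothetical illustration, not any specific system's algorithm; the greedy placement rule and the size values are assumptions) computes a static cache-to-machine assignment from a snapshot of cache sizes, then shows how the same assignment becomes skewed once the sizes drift:

```python
def greedy_assign(cache_sizes, num_machines):
    """Statically place each cache on the currently least-loaded machine."""
    loads = [0] * num_machines
    assignment = {}
    # Place the largest caches first for a tighter balance.
    for cache, size in sorted(cache_sizes.items(), key=lambda kv: -kv[1]):
        target = loads.index(min(loads))
        assignment[cache] = target
        loads[target] += size
    return assignment

def imbalance(cache_sizes, assignment, num_machines):
    """Max machine load divided by the ideal (perfectly even) load."""
    loads = [0] * num_machines
    for cache, machine in assignment.items():
        loads[machine] += cache_sizes[cache]
    ideal = sum(loads) / num_machines
    return max(loads) / ideal

# Sizes at planning time: the static solution is perfectly balanced.
t0 = {"a": 40, "b": 40, "c": 40, "d": 40}
plan = greedy_assign(t0, 2)
print(imbalance(t0, plan, 2))                    # 1.0 -- balanced

# Sizes drift over time; the frozen assignment is now badly skewed.
t1 = {"a": 120, "b": 10, "c": 120, "d": 10}
print(imbalance(t1, plan, 2))                    # ~1.85 -- one machine overloaded

# Re-solving against the current sizes restores balance.
print(imbalance(t1, greedy_assign(t1, 2), 2))    # 1.0 again
```

The sketch shows only the load-balancing building block; a solution that also captures assignment constraints, clustering, and resiliency, and that reacts to size changes as they happen, is what the dynamic formulation targets.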