Various database applications exist, such as data warehouses used for reporting and analysis. In a database application, an aggregation of data states may be generated, and may need to be updated from time to time. When the aggregated data is a snapshot of the final state of some detailed data, the detailed data is updated on an ongoing basis, and a detailed log of changes to the detailed data is not fully available, typical techniques recalculate the aggregation of the data states from scratch, which wastes resources. As the volume of detailed data grows, this calculation takes more and more time to perform. In some cases, servers performing the calculation may run out of memory resources, and the calculation fails.
Another technique for determining the aggregation of data states is to periodically update the aggregated data state by some delta values. For instance, delta values may be determined by counting data state changes (e.g., −10, 8, 5, etc.) that occur during some time period, and the counted data state changes may be added to an older version of the aggregation of data states to generate the final version of the aggregation of data states. However, this technique applies only where a complete log of data changes is available. Such a complete log fully tracks every change that the user(s) made, including new events, deleted events, and updated events. In some situations, such a complete log cannot be maintained, or is not accessible, and therefore cannot be used to determine the aggregation of data states.
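The delta-update technique described above can be sketched as follows. This is a minimal illustration, assuming the change log is simply a list of signed counts observed during the period; the function name and data shapes are hypothetical, not taken from the source.

```python
def apply_deltas(previous_aggregate: int, deltas: list[int]) -> int:
    """Add the net change for the period to the prior aggregate."""
    return previous_aggregate + sum(deltas)

# Example: a prior aggregate of 100, with changes -10, 8, and 5 logged
# during the period, yields an updated aggregate of 103.
updated = apply_deltas(100, [-10, 8, 5])
print(updated)  # 103
```

As the paragraph notes, this only works when every change in the period is captured; a missing or inaccessible entry in the log makes the computed delta, and hence the updated aggregate, incorrect.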