Computer software for financial forecasting and scientific modelling is well known. Typically, software of this type involves calculating a series of formulas over a large number of data sets. Often, extremely sophisticated models and formulas are used, and the calculation of the resultant formulas by such modelling software typically constitutes the bulk of the memory requirements of the software or computer model.
For example, consider a model calculating one hundred quantities (columns) over six hundred iterations of the model, with the result for each iteration being stored in a cell of the column. If each cell requires ten bytes to store its entry, then the memory requirements of the modelling software will be around 600 KB.
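The memory estimate above can be sketched as a simple calculation; the variable names here are illustrative only and not part of the original description:

```python
# Illustrative estimate of the model's memory footprint described above.
columns = 100        # quantities tracked by the model (columns)
iterations = 600     # iterations of the model (one stored result each)
bytes_per_cell = 10  # bytes needed to store each cell's entry

total_bytes = columns * iterations * bytes_per_cell
print(total_bytes)          # 600000 bytes
print(total_bytes // 1000)  # 600 kilobytes, matching the ~600 KB figure
```

The product of columns, iterations, and per-cell storage grows linearly in each factor, which is why storage requirements can increase dramatically with problem size.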
Although memory is inexpensive these days, optimising memory usage when running such models or using large data sets remains a priority. This is especially the case when model complexity or storage requirements increase dramatically with problem size.
Clearly, there is a need for an alternative method of memory optimisation for use by modelling packages, especially since memory access times can be orders of magnitude slower when the data values are not available in primary memory.