Machine-to-machine scenarios typically comprise one or more M2M applications which require M2M data at regular time intervals. The M2M application therefore registers M2M data subscriptions with an M2M backend, which in turn contacts an M2M frontend to obtain the data. The M2M backend then provides the M2M data received from the M2M frontend to the M2M application. However, the communication channel between the M2M frontend and the M2M backend may have limited bandwidth, which in turn limits the ability of the M2M backend to obtain fresh data on behalf of the M2M application.
To address this problem, data is usually cached in the M2M backend. In M2M scenarios the backend systems usually run on powerful machines in a cloud environment and have enough memory available, so that the cache size is not a practical problem. As shown in FIG. 1, the bottleneck in such M2M systems is rather the M2M frontend and its corresponding communication channels. Due to communication bandwidth, energy constraints and/or processing constraints, the rate at which data can be retrieved from the M2M frontend by the M2M backend is limited. This is even more of a problem since typical M2M data values become outdated after some time period and thus cannot be served from the cache forever.
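To make this constraint concrete, the following is a minimal sketch (hypothetical class and parameter names, not taken from the source) of a backend cache in which every entry becomes outdated after a time-to-live, and requests to the frontend are limited to a fixed budget per time interval rather than by cache size:

```python
import time

class RateLimitedTTLCache:
    """Backend cache: memory is plentiful, but frontend accesses are scarce."""

    def __init__(self, ttl_seconds, max_frontend_requests, interval_seconds):
        self.ttl = ttl_seconds
        self.budget = max_frontend_requests   # allowed frontend requests per interval
        self.interval = interval_seconds
        self.entries = {}                     # key -> (value, fetched_at)
        self.window_start = time.monotonic()
        self.window_requests = 0

    def _can_contact_frontend(self):
        now = time.monotonic()
        if now - self.window_start >= self.interval:
            self.window_start = now           # start a new accounting window
            self.window_requests = 0
        return self.window_requests < self.budget

    def get(self, key, fetch_from_frontend):
        now = time.monotonic()
        entry = self.entries.get(key)
        if entry is not None and now - entry[1] <= self.ttl:
            return entry[0]                   # fresh: serve from cache
        if self._can_contact_frontend():
            self.window_requests += 1
            value = fetch_from_frontend(key)  # scarce frontend access
            self.entries[key] = (value, now)
            return value
        if entry is not None:
            return entry[0]                   # budget exhausted: serve stale value
        return None                           # no data available at all
```

When the per-interval budget is exhausted, the backend must either serve stale data or nothing, which illustrates why a limited retrieval rate, not a limited cache size, is the governing constraint here.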
Conventional caching strategies are dedicated to the situation where, under a limited cache size, the hit ratio, i.e. the relative number of data requests that can be served from the cache, is maximized. When the requested data cannot be served from the cache, the data needs to be retrieved from elsewhere, for example from main memory, a disk, network resources or the like. This increases the waiting time for the application in need of the data and thus makes the application slower. However, all these conventional caching strategies do not capture the situation in an M2M scenario. As described above, the cache size is not the problem, but rather the rate at which data can be retrieved from the M2M frontend.
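The hit-ratio objective of such conventional strategies can be illustrated with a minimal least-recently-used (LRU) cache, a common representative of this class (the class and helper names below are illustrative, not from the source):

```python
from collections import OrderedDict

class LRUCache:
    """Conventional cache: maximize hit ratio under a fixed capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # preserves recency order
        self.hits = 0
        self.requests = 0

    def get(self, key, fetch):
        """Return the cached value; on a miss, fetch from the slower source."""
        self.requests += 1
        if key in self.entries:
            self.hits += 1
            self.entries.move_to_end(key)     # mark as most recently used
            return self.entries[key]
        value = fetch(key)                    # slow path: memory/disk/network
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used entry
        return value

    def hit_ratio(self):
        return self.hits / self.requests if self.requests else 0.0
```

The eviction step exists only because capacity is scarce; in the M2M setting described above this step is essentially irrelevant, while the cost of the `fetch` slow path is the real limit.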
In the non-patent literature of Liaquat Kiani, Saad, et al., "Context caches in the clouds," Journal of Cloud Computing: Advances, Systems and Applications 1.7 (2012), a method is shown for caching context data in the cloud, taking into account that data becomes outdated. However, there it is still assumed that the cache size is the limiting factor rather than the connection between the M2M backend and the M2M frontend.
In the non-patent literature of Sazoglu, Fethi Burak, et al., "Strategies for setting time-to-live values in result caches," Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, ACM, 2013, a method is shown for caching results from search engines. There, the number of accesses to the core search engine per unit time is considered to be the bottleneck rather than the cache size. However, queries to search engines are issued by humans and are therefore sporadic one-time queries. M2M applications, in contrast, require up-to-date real-world information at regular time intervals and therefore exhibit totally different access patterns. Another difference is that humans take no notice of outdated search engine query results, whereas M2M applications need to rely on receiving up-to-date information.