Whilst large software systems often appear to work efficiently at a fine granularity, sequences of calls made across software interfaces are frequently very inefficient. This is particularly problematic in distributed systems, where each such call can incur network communication delays that seriously degrade performance.
Currently, it is up to the programmer to avoid this kind of inefficiency, for example by batching a series of calls together into a single call, or by re-using the result of a first call so as to avoid making a second, equivalent call. However, these techniques can require breaking abstraction boundaries in a program in order to manually re-factor code, and in some cases this may not be possible (e.g. where the repeated work is a security check that client code cannot be trusted to perform). Additionally, it is often necessary to understand how an application will be deployed before trying to optimize its performance. As a result, optimization opportunities may be lost where it is not known exactly how a set of modules will be used together; alternatively, an application may be optimized for a first deployment, only for the performance of subsequent, different deployments to be impaired because the optimizations made are no longer appropriate.
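The batching technique described above can be illustrated with a minimal sketch. The `RemoteService` class below is hypothetical (it is not from the text); it simulates a service where every call costs one network round trip, so fetching N values one at a time pays that cost N times, whereas a batched interface pays it once.

```python
class RemoteService:
    """Simulates a service where every call incurs one network round trip."""

    def __init__(self, data):
        self._data = data
        self.round_trips = 0  # count of simulated network calls

    def get(self, key):
        # One round trip per key: the fine-grained interface.
        self.round_trips += 1
        return self._data[key]

    def get_many(self, keys):
        # One round trip for many keys: the batched interface.
        self.round_trips += 1
        return {k: self._data[k] for k in keys}


service = RemoteService({"a": 1, "b": 2, "c": 3})

# Naive client code: one round trip per key.
naive = {k: service.get(k) for k in ("a", "b", "c")}
trips_naive = service.round_trips

service.round_trips = 0

# Batched client code: a single round trip for all keys.
batched = service.get_many(("a", "b", "c"))
trips_batched = service.round_trips

assert naive == batched          # same results...
assert trips_batched < trips_naive  # ...at a fraction of the communication cost
```

Note that performing this transformation in real client code is exactly where the abstraction problem arises: the caller must know that a batched operation exists behind the interface, and must restructure its own logic to collect keys before calling.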