In running a computer program, a piece of computing work may take an indefinite amount of time to complete. For example, completion of a network request depends on various factors, including latency, available bandwidth, whether the network communication link is down, whether the server is down or operating slowly, and so on.
In a synchronous programming model, if a call is made to something that takes a while, the program code blocks and waits for the call to complete (although the program code eventually may time out if too much time elapses). Waiting for completion is generally undesirable because other parts of the program code may be able to perform useful work while waiting for the slow task to complete.
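A minimal sketch of the synchronous model, where a hypothetical slowTask (a stand-in for, e.g., a blocking network request) is simulated with a busy-wait; the caller is blocked until the result is returned and can do no other useful work on that code path in the meantime:

```typescript
// slowTask is hypothetical: it simulates a long-running, blocking
// piece of work by busy-waiting before returning its result.
function slowTask(): number {
  const start = Date.now();
  while (Date.now() - start < 50) {
    // blocked: nothing else on this code path can run
  }
  return 42;
}

// Execution waits here until slowTask completes.
const result = slowTask();
console.log(result);
```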
One way to solve this problem is to use multiple threads of execution. However, multiple threads of execution are not always available; e.g., certain programming environments are single threaded. Further, using multiple threads of execution is not always efficient, e.g., because switching between threads consumes resources.
Another way to solve the waiting problem is to make the call and get the result back asynchronously, at a later time when the work is complete. There are many different variations of this basic asynchronous idea (e.g., Futures, Promises, Tasks, Channels . . . , referred to herein as “async tasks” or “async work”). Among the benefits of asynchronous calling is that it is thread agnostic. However, there are also potential complications with running async tasks that need to be considered in order to provide robust program code.
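The asynchronous variant can be sketched as follows using a Promise (one of the async task variations named above). Here fetchValue is a hypothetical stand-in for a slow operation; the call returns immediately with a pending Promise, other work may proceed, and the result is consumed later when it becomes available:

```typescript
// fetchValue is hypothetical: it starts an async task and returns
// a Promise immediately, delivering its result at a later time.
function fetchValue(): Promise<number> {
  return new Promise((resolve) => {
    setTimeout(() => resolve(42), 10); // completes later
  });
}

async function main(): Promise<void> {
  const pending = fetchValue(); // returns at once; no blocking
  // ...other useful work can run here while the task is pending...
  const value = await pending;  // suspend only when the result is needed
  console.log(value);
}

main();
```

Note that await suspends only the enclosing async function, not the thread; this is what makes the approach usable even in single-threaded environments.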