Before the days of multi-core processors, programming for a single-processor environment was fairly simple and straightforward. At that time, a programmer may not have needed to worry about processing order, since the single core completed all of the work. With the advent of multi-core processors, however, programmers had to manage the use of the cores and became more concerned with the processing order of application threads, since the threads could be processed in parallel on separate cores. The need to track the threads was partially due to a lack of multi-core runtimes that could manage the processing order of jobs, and this lack of order management results in high overhead.
Early multi-core systems used parallel and logical queues for scheduling jobs. Scheduling in this fashion incurs overhead because the application may be required to track the order in which jobs are processed, as well as to control access to shared resources through mutex locks. Such locks consume further overhead when a thread spins on a lock. Additionally, threads that require processing in a precise order may be processed out of order if they are not queued or tracked properly, which may result in deadlock. Therefore, a multi-core runtime that allows programmers to write code as if programming for a single-core processor, while still utilizing the full parallel processing capability of the multi-core processor, may be desirable.
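The overhead described above can be illustrated with a minimal Python sketch (all names here are hypothetical, not from the source): several worker threads draw jobs from a single shared queue guarded by one mutex. Ordering is preserved only because the lock serializes every dequeue, so every job pays the locking cost, and contended threads wait at the lock acquisition.

```python
import threading

# Hypothetical sketch of the mutex-guarded job queue scheme described
# above: one shared list of pending jobs, one lock, several workers.
jobs = list(range(100))     # pending job IDs, in the required order
done = []                   # completed job IDs
lock = threading.Lock()     # every dequeue pays this locking cost

def worker():
    while True:
        with lock:          # contended: other threads wait here
            if not jobs:
                return
            job = jobs.pop(0)   # take the next job in order
            done.append(job)    # record it while still holding the lock

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Order is preserved only because the lock serializes every dequeue,
# which is exactly the per-job overhead at issue.
print(done == list(range(100)))   # True
```

Note that if the workers dequeued outside the lock, or held the lock only for the pop but not the append, jobs could complete out of order, which is the improper-tracking hazard the passage describes.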