In a conventional sequential computer, processing is channelled through one physical location. The success, rapid development and widespread use of sequential computers can be attributed to the existence of a central unifying model, namely the von Neumann computer. Even with rapidly changing technology and architectural ideas, hardware designers can still share the common goal of realizing efficient von Neumann machines, without the need for too much concern about the software that is going to be executed. Similarly, the software industry in all its diversity can aim to write programs that can be executed efficiently on this model, without explicit consideration of the hardware. Thus the von Neumann model is the connecting bridge that enables programs from the diverse and chaotic world of software to run efficiently on machines from the diverse and chaotic world of hardware. By providing a standard interface between the two sides, it encourages their separate, rapid development.
In a parallel machine, processing can occur simultaneously at many locations, and consequently many more computational operations per second should be achievable. Because of the rapidly decreasing cost of processing, memory, and communication, it has appeared inevitable for at least two decades that parallel machines will eventually displace sequential ones in computationally intensive domains. This, however, has not yet happened.