A computer program that has been loaded into memory and prepared for execution is called a "process." A process comprises the code, data, and other resources, such as files, that belong to the computer program. Each process in a data processing system has at least one thread, known as the main thread. A thread comprises a pointer to a set of instructions, related central processing unit (CPU) register values, and a stack. A process can have more than one thread, with each thread executing independently and maintaining its own stack and register values. When a process has more than one thread, the process is said to be multi-threaded.
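The relationship described above can be sketched briefly in code. The following is a minimal illustration, assuming Python's standard `threading` module; the names `worker`, `results`, and the thread labels are illustrative only. The main thread spawns two additional threads, each of which executes independently with its own stack (its local variables) before the main thread waits for both to finish.

```python
import threading

def worker(name, results):
    # Local variables live on this thread's own stack.
    local_value = sum(range(10))
    results[name] = local_value

results = {}
threads = [threading.Thread(target=worker, args=(n, results))
           for n in ("thread-1", "thread-2")]
for t in threads:
    t.start()
for t in threads:
    t.join()  # the main thread waits for the other threads to complete
print(results)  # each thread recorded its own result
```

Because both threads run within a single process, they can write into the same `results` dictionary without crossing a process boundary.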
When a process performs multi-threaded processing, the threads can each perform a discrete unit of functionality. One advantage of multi-threaded processing is that threads can transfer information back and forth without having to cross process boundaries, which is expensive in terms of CPU processing time. Another advantage of multi-threaded processing is that when transferring data between threads of a single process, a reference to the data can be transferred instead of a copy of the data. By contrast, if each discrete unit of functionality were implemented as a separate process, the data would typically have to be copied before being transferred to the destination process, which requires a significant amount of CPU processing time.
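The reference-passing advantage can be demonstrated with a short sketch, again assuming Python's `threading` module; the names `consumer` and `shared` are illustrative. The second thread receives a reference to the list, not a copy, so a mutation made by that thread is visible to the main thread with no copying cost.

```python
import threading

# Two threads of one process exchange data by reference: no copy is
# made, and both threads see the same underlying list object.
shared = [1, 2, 3]

def consumer(data):
    # 'data' refers to the same object the main thread created,
    # so mutations are visible across threads.
    data.append(4)

t = threading.Thread(target=consumer, args=(shared,))
t.start()
t.join()
print(shared)  # → [1, 2, 3, 4]: the main thread sees the change
```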
Two models used for performing multi-threaded processing by conventional systems are the free threading model and the lightweight process model.
As shown in FIG. 1, the free threading model comprises a process 100 containing a number of objects 104, 108, 112, 116 and shared data 102. The term "object" refers to a combination of code ("methods") and data. The code of the object typically acts upon the data of the object. The shared data 102 is data that can be accessed by any object within the process 100. Each object in the free threading model has a lock 106, 110, 114, 118. The locks 106, 110, 114, 118 on the objects 104, 108, 112, 116 serialize access to each object so as to prevent contention problems. The shared data 102 has a semaphore 103 that serializes access to the shared data. In the free threading model, there are multiple threads, and each thread can access any object 104, 108, 112, 116. Thus, the locks 106, 110, 114, 118 are necessary to prevent contention problems that may arise when more than one thread attempts to access an object.
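The arrangement of FIG. 1 can be sketched as follows, assuming Python's `threading` module; the class `LockedObject` and the function names are illustrative only. Each object carries its own lock (corresponding to locks 106, 110, 114, 118) to serialize access to that object, and the shared data is guarded by a semaphore (corresponding to semaphore 103), while any thread may invoke a method on any object.

```python
import threading

class LockedObject:
    def __init__(self):
        self._lock = threading.Lock()   # per-object lock (cf. locks 106-118)
        self.value = 0

    def method(self):
        with self._lock:                # serialize access to this object
            self.value += 1

shared_data = []                        # cf. shared data 102
shared_sem = threading.Semaphore(1)     # cf. semaphore 103

def thread_body(obj):
    obj.method()                        # any thread may access any object
    with shared_sem:                    # serialized access to shared data
        shared_data.append(obj.value)

obj = LockedObject()
threads = [threading.Thread(target=thread_body, args=(obj,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(obj.value)  # → 4: the lock serialized all four increments
```

Without the per-object lock, concurrent increments could interleave and lose updates, which is exactly the contention problem the locks are said to prevent.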
One problem with the free threading model is that it is difficult to provide concurrency management within the free threading model. The term "concurrency management" in this context refers to managing the objects so that each object may concurrently execute in a reliable and robust manner. Providing concurrency management is difficult since a lock must be implemented for each object. In addition, the implementation of each lock is further complicated since each lock must play a part in preventing process-wide deadlock. "Deadlock," in this context, refers to a situation in which two or more objects are blocked while each waits for the other to perform an operation. For example, if a first object is blocked while waiting to invoke a method on a second object and the second object, in turn, is blocked while waiting to invoke a method on the first object, each object is waiting on the other and deadlock has occurred. Since each object must have a lock, additional code is necessary for each object and this code must be developed and tested. Further, if an object is created and a lock is inadvertently omitted, the object and the process can behave in an undesirable manner. A second problem with the free threading model occurs when more than one object within a process is performing user interface operations. This problem arises because when an object is performing an operation on a portion of the user interface, such as a window, the window is usually locked. Consequently, if a second object then tries to perform an operation on the window while the window is locked, the second object will receive an error and will not be able to perform the operation. Thus, it is difficult to perform user interface operations with the free threading model.
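The circular-wait deadlock described above can be reproduced in a small sketch, assuming Python's `threading` module; the lock names and thread bodies are illustrative, and timeouts are used only so that the example terminates rather than hanging. Each thread holds one lock and then attempts to acquire the lock held by the other, so at least one acquisition must time out.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
barrier = threading.Barrier(2)   # ensure both threads hold their first lock
outcome = {}

def first():
    with lock_a:
        barrier.wait()
        got = lock_b.acquire(timeout=0.5)   # blocked: second() holds lock_b
        outcome["first"] = got
        if got:
            lock_b.release()

def second():
    with lock_b:
        barrier.wait()
        got = lock_a.acquire(timeout=0.5)   # blocked: first() holds lock_a
        outcome["second"] = got
        if got:
            lock_a.release()

t1 = threading.Thread(target=first)
t2 = threading.Thread(target=second)
t1.start(); t2.start()
t1.join(); t2.join()
print(outcome)  # at least one acquisition times out: a circular wait
```

In a real free-threaded system there is no timeout; the two objects simply block forever, which is why each lock's implementation must help prevent process-wide deadlock (for example, by always acquiring locks in a fixed global order).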
FIG. 2 depicts a diagram of the lightweight process ("LWP") model for performing multi-threaded processing. In the LWP model, a process 200 contains a number of lightweight processes 202, 204, 206. Each lightweight process executes independently and contains procedures, data and variables. Therefore, a lightweight process is very similar to a thread. In the LWP model, each lightweight process can perform a discrete unit of functionality and can thus take advantage of the benefits of multi-threaded processing. However, the LWP model does not provide shared data, and therefore when one lightweight process wants to communicate with another lightweight process, the data is typically copied before it can be sent. Copying the data before sending it requires a significant amount of CPU processing time. In addition, since there is no shared data, data that all lightweight processes would like to access has to be maintained by a lightweight process. Such information includes location information for the procedures in the lightweight processes. Therefore, for example, whenever a lightweight process wants to determine the location of a procedure so that it may invoke the procedure, the lightweight process must communicate with the lightweight process that maintains the location information. This communication is costly in terms of CPU processing time.
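The copy-before-send cost of the LWP model can be sketched as follows. This is an illustration only, simulating lightweight processes as threads that share nothing and communicate solely through a mailbox queue; the names `sender`, `receiver`, and `mailbox` are assumptions, and `deepcopy` stands in for the copy the LWP model requires before data crosses between lightweight processes.

```python
import threading
import queue
from copy import deepcopy

mailbox = queue.Queue()

def sender(payload):
    # With no shared data, the payload must be copied before it is
    # sent; this copy is what makes LWP communication costly.
    mailbox.put(deepcopy(payload))

def receiver():
    received = mailbox.get()
    received.append("modified by receiver")
    return received

original = ["record-1", "record-2"]
t = threading.Thread(target=sender, args=(original,))
t.start()
t.join()
result = receiver()
print(original)  # unchanged: the receiver operated on an independent copy
print(result)
```

Contrast this with the earlier shared-reference example: here the receiver's modification never reaches the sender's copy, so every exchange pays the full cost of duplicating the data.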