Data processing systems, including computers and data routing/forwarding devices, typically implement multiple threads that operate upon multiple shared resources. Each thread may comprise an independent thread of execution, such as an independent, concurrently running task, that may utilize one or more of the shared resources. Each resource may include any type of software or hardware resource that either performs a function or can be used by a thread to perform a function. In a computer system, for example, a resource may include a region of memory or an object stored in the memory. In a data routing system, for example, a resource may include a packet (e.g., a packet header and packet payload stored in memory).
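As an illustration only (the names and structure here are assumptions, not part of the system described above), a minimal Python sketch of multiple threads of execution operating on a single shared memory resource, with a lock serializing access:

```python
import threading

# A shared resource: a region of memory (here, a simple counter object)
# used by several independent, concurrently running threads of execution.
class SharedCounter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self, times):
        for _ in range(times):
            with self._lock:  # serialize access to the shared resource
                self.value += 1

counter = SharedCounter()
threads = [threading.Thread(target=counter.increment, args=(1000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # → 4000 (4 threads x 1000 increments each)
```

So long as every thread runs to completion and releases the lock, the shared resource remains in a well-defined state.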
In systems where multiple threads of execution share resources, a crash of one of those threads may leave the resources managed or owned by that thread in an undefined state. This often results in a loss of those resources (e.g., a memory leak) or requires a larger system re-start (e.g., a re-boot) to return the system and all of its resources to a known state. Such a larger system re-start increases system down time from the system user's standpoint. As the number of threads (or process instances sharing resources) in execution grows, this becomes an ever-increasing problem. Furthermore, the use of multi-core architectures in existing data processing systems may necessitate increased use of different threads of execution to take full advantage of the available Central Processing Units (CPUs). This increased use of different threads of execution increases the risk of down time due to any one of the threads crashing. Larger platforms having more CPUs, to handle more traffic, will only make the problem worse in the future.
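The failure mode described above can be sketched in a few lines of Python (an illustrative assumption, not the systems at issue): a thread takes ownership of a resource, crashes before releasing it, and the resource is left held by a dead thread, so no surviving thread can recover it without a larger restart.

```python
import threading

# Suppress the default traceback printed when a thread dies; illustrative only.
threading.excepthook = lambda args: None

resource_lock = threading.Lock()  # models ownership of a shared resource

def worker():
    resource_lock.acquire()      # thread takes ownership of the resource
    raise RuntimeError("crash")  # thread dies before releasing it

t = threading.Thread(target=worker)
t.start()
t.join()  # the crashed thread has terminated...

# ...but the resource it owned is still marked as held. No other thread
# can acquire it, and its state at the time of the crash is unknown: the
# resource is effectively leaked until the system is restarted.
print(resource_lock.locked())  # → True
```

In a real system the "resource" might be a memory region, an object, or a packet buffer, but the effect is the same: the crash leaves it in an undefined state that only a re-start returns to a known state.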