Processes executed in a computer system may be configured to execute different parts of the process concurrently. Where these different parts of the process may access the same data concurrently, the accesses to the data are typically synchronized. For example, when an execution context (e.g., a thread, fiber, or child process) of a process accesses data, it generally acquires a lock or uses another synchronization technique to ensure that no other execution context of the process performs a conflicting access to the data. The synchronization prevents the data from being corrupted but adds processing overhead to each data access and may serialize access to the data by different execution contexts. This serialization may inhibit the performance and scalability of a process, particularly where many independent processing resources are available to execute the execution contexts.
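The lock-per-access pattern described above can be sketched as follows. This is a minimal Python illustration, not part of the original description; the shared counter, thread count, and iteration count are illustrative assumptions. Every access acquires the lock, which keeps the data consistent but serializes the execution contexts:

```python
import threading

counter = 0               # shared data accessed by multiple execution contexts
lock = threading.Lock()   # synchronization guarding every access

def worker(iterations):
    global counter
    for _ in range(iterations):
        # Each access acquires the lock, so conflicting accesses by
        # other threads are excluded -- at the cost of serialization.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: correct, because every increment was synchronized
```

Without the lock, the unsynchronized read-modify-write on `counter` could lose updates; with it, each increment is correct but the four threads contend for the same lock on every access.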
A process may wish to perform concurrent operations on a collective set of data. In doing so, different execution contexts of the process may add data to, or remove data from, the collective set of data in an arbitrary order. The process may also wish to remove elements of the set of data at some point in the execution. While various synchronization mechanisms may be used to allow elements of the set of data to be removed, these mechanisms may inhibit the performance and scalability of the process.
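A lock-based version of such a collective set can be sketched as below. This is a hypothetical Python illustration under the same lock-per-access assumption as before; the element ranges and the `drain` helper are illustrative, not part of the original description. Multiple execution contexts add elements in an arbitrary order, and the process later removes all elements:

```python
import threading

shared = set()            # the collective set of data
lock = threading.Lock()   # guards every add/remove on the set

def add_range(lo, hi):
    for x in range(lo, hi):
        with lock:        # each addition is individually synchronized
            shared.add(x)

def drain():
    # Remove all elements of the set: snapshot and clear under the lock
    # so no concurrent addition or removal can interleave.
    with lock:
        items = list(shared)
        shared.clear()
    return items

adders = [threading.Thread(target=add_range, args=(0, 100)),
          threading.Thread(target=add_range, args=(100, 200))]
for t in adders:
    t.start()
for t in adders:
    t.join()

drained = drain()
print(len(drained))  # 200
```

The single lock makes both the per-element additions and the bulk removal correct, but every operation on the set, by any execution context, funnels through the same lock, which is exactly the serialization that can limit scalability.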