In a traditional object storage system, a large amount of data is typically stored and handled, which makes storage space efficiency critical. The data in an object storage system may contain duplicate copies, and conventional object storage systems generally remove this redundancy through deduplication techniques. In a traditional deduplication process, unique chunks of data, or byte patterns, are identified and stored during an analysis process. As the analysis continues, other chunks of data are compared to the stored copies of unique data and/or byte patterns. Whenever a match occurs, the redundant chunk of data is replaced with a reference that points to the stored chunk of data.

The data may be stored on different storage devices based on the capacity of the storage devices, the bandwidth of servers, network traffic, etc. A deduplication system generally analyzes the data on the various storage devices to identify and store the unique chunks of data, and then analyzes the data on the various storage devices to locate data that matches the stored unique chunks.

In a distributed object storage system, most deduplication techniques require each storage device to look up a global content indexing table to find duplicated content, and to update the global content indexing table to reflect any change in ownership of duplicated content. Since both the table lookup and the table update generally must be exclusive and atomic, the lookup and update operations performed by each storage device significantly slow down I/O (input/output) performance, consume a large amount of resources, and can limit the scalability of an object storage system.
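The chunk-and-reference scheme described above can be illustrated with a minimal sketch. The fixed 4 KB chunk size, the SHA-256 hashing, and the in-memory dictionaries standing in for the content index and object metadata are all illustrative assumptions, not features of any particular system:

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed chunk size (assumption)


class DedupStore:
    """Minimal content-addressed store: unique chunks are keyed by their
    hash, and each object is stored as a list of chunk references."""

    def __init__(self):
        self.chunks = {}   # hash -> chunk bytes (stands in for the content index)
        self.objects = {}  # object name -> ordered list of chunk hashes

    def put(self, name, data):
        refs = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            key = hashlib.sha256(chunk).hexdigest()
            # Store the chunk only if its byte pattern is new; a redundant
            # chunk becomes a reference to the already-stored copy.
            if key not in self.chunks:
                self.chunks[key] = chunk
            refs.append(key)
        self.objects[name] = refs

    def get(self, name):
        # Reassemble the object by following its chunk references.
        return b"".join(self.chunks[k] for k in self.objects[name])


store = DedupStore()
store.put("obj-a", b"x" * 8192)
store.put("obj-b", b"x" * 8192)  # identical content: no new chunks stored
```

After both puts, the store holds a single 4 KB chunk while both objects remain fully reconstructible, which is the space saving that deduplication targets. In a distributed system, `self.chunks` would be the shared global content indexing table, and the `if key not in self.chunks` check followed by the insert would have to execute as one exclusive, atomic operation across devices, which is the coordination cost noted above.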