Data deduplication is applied during data backup operations in order to conserve storage space. A data segment that is shared by many files need only be stored once in backup storage. Typically, a data deduplication system maintains a list of fingerprints of stored data segments. The fingerprint of each newly arriving data segment is compared against the list to determine whether a copy of that segment is already stored in the backup storage. If the deduplication system does not find a matching fingerprint, the newly arriving data segment is stored in the backup storage, and its fingerprint is added to the list. If the deduplication system does find a matching fingerprint, the newly arriving data segment is discarded rather than stored again, and a reference is added to the corresponding existing segment.

One critical function of a deduplication system is to track how segments are referenced by different files and backup images. Some data segments in the backup storage are popular and are widely referenced by many files and backup images. These so-called “hot” segments may come from system files, virtual machines, static files, database blocks, etc. Over time, the popularity of a segment may change; for example, file system patch updates on backup clients may render once-popular segments obsolete (no longer hot).

In some systems, the list of fingerprints used in deduplication is updated frequently, so that unreferenced data segments can be deleted from the backup storage to free up storage space. However, frequent updates to the fingerprint list consume system time and slow down reference processing. Therefore, there is a need in the art for a solution which overcomes the drawbacks described above.
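The ingest and reference-tracking flow described above can be sketched as follows. This is a minimal illustrative sketch, not the implementation of any particular deduplication system; the class name, the use of SHA-256 as the fingerprint function, and the in-memory dictionaries standing in for backup storage and the fingerprint list are all assumptions made for clarity.

```python
import hashlib


class DedupStore:
    """Illustrative sketch of fingerprint-based deduplication with reference counts."""

    def __init__(self):
        self.segments = {}   # fingerprint -> segment bytes (stands in for backup storage)
        self.refcounts = {}  # fingerprint -> number of file/image references

    @staticmethod
    def fingerprint(segment: bytes) -> str:
        # A cryptographic hash of the segment serves as its fingerprint.
        return hashlib.sha256(segment).hexdigest()

    def write(self, segment: bytes) -> str:
        """Ingest a newly arriving segment; return its fingerprint."""
        fp = self.fingerprint(segment)
        if fp in self.segments:
            # Match found: discard the duplicate data and add a reference
            # to the existing stored segment.
            self.refcounts[fp] += 1
        else:
            # No match: store the new segment and add its fingerprint to the list.
            self.segments[fp] = segment
            self.refcounts[fp] = 1
        return fp

    def release(self, fp: str) -> None:
        """Drop one reference; delete the segment once it is unreferenced."""
        self.refcounts[fp] -= 1
        if self.refcounts[fp] == 0:
            del self.segments[fp]
            del self.refcounts[fp]
```

Writing the same segment twice stores it only once while its reference count rises to two; releasing all references frees the storage, which mirrors the space-reclamation behavior the fingerprint-list updates are meant to enable.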