Counters are used to keep track of various parameters in a computer system. For instance, counters count the number of packets entering a system, the number of packets dropped due to errors, the number of sessions established to a particular website, and so on. The counter values serve different purposes. Values such as the number of packets received or dropped may be used to define and verify the service-level agreement of a service contract, because they provide factual evidence that the service provider is billing for the quality of service agreed upon and that the consumer is not using services for which he is not billed. For example, if company X charges $10 for 10G of data per month, counters provide a way to verify both that the consumer is not consuming more than 10G of data per month and that X is actually providing 10G of data. As another example, counter values are commonly used for debugging: by adding counters at the entry and exit points of key modules within a system, it is relatively easy to narrow a bug down to a single module and then track the issue within it.
In hardware-based networking solutions and certain software-based solutions, the memory allocated for counters is hard-carved: updating a counter involves just writing to a fixed memory location. This approach is not feasible when the number of counters to be updated is not known in advance, as with the number of interfaces in a virtual networking appliance.
Some software implementations use atomic_adds or locks when incrementing counters (if there is a possibility of parallel access). A software lock is a synchronization mechanism for enforcing limits on access to a resource in an environment with many threads of execution. Any lock consumes extra resources: the memory allocated for the lock itself, the CPU time to initialize and destroy it, and the time spent acquiring and releasing it. The more locks a program uses, the more overhead it incurs.
Atomic operations (atomic_add here), on the other hand, provide the same functionality in hardware. On a multiprocessor system, however, this means preventing other processors from accessing the variable while the update completes.
Although these implementations are functionally straightforward, they are typically not centralized: there is usually no central infrastructure to query, update, or save all the counters, so this must be done manually for each counter. They also suffer degraded performance under heavy access. For example, locking access to a counter that is updated many times per second severely degrades performance, and using atomic_adds invalidates the cache line, nullifying any performance gain obtained through locality of reference.