Typically, during the development of a software application, performance tests are performed on one or more parts of the application. This testing often involves measuring an amount of time a processor spends (i.e., consumes) executing one or more portions of code (e.g., a function, a procedure, or other logical component) of the software application. For example, such an amount of time may be determined by recording a time at which execution of the code portion begins and a time at which execution of the code portion ends. These times are often recorded by including probes at locations within the software application (e.g., the beginning and end of the code portion), the execution of which results in the time values being recorded.
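As a minimal sketch of such probes, the following Python fragment records a begin time and an end time around a code portion (the timer function and the code portion itself are illustrative, not part of the original):

```python
import time

def code_portion():
    # Hypothetical code portion under test: sum the first 100,000 integers.
    return sum(range(100_000))

# Probe at the beginning of the code portion: read and record the begin time.
begin = time.perf_counter_ns()
result = code_portion()
# Probe at the end of the code portion: read and record the end time.
end = time.perf_counter_ns()

elapsed_ns = end - begin
print(f"code portion consumed {elapsed_ns} ns")
```

Here the recorded values are nanosecond timestamps rather than processor cycles, but the structure is the same: the two probes bracket the code portion, and their difference is taken as its execution time.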
Determining the amount of execution time based solely on the time at which execution of the code portion begins and ends is not an accurate representation of the actual amount of time a processor consumes in executing the code portion. Recording the time values itself consumes time, including the time required to read the time value and the time required to write (i.e., record) the time value to a recording medium such as, for example, a volatile memory or a non-volatile storage medium. The time consumed to acquire (including reading and recording) time values is referred to herein as “overhead” or “overhead time.”
For example, begin and end time measurements may indicate that a code portion consumed 800 processor cycles. However, it may have taken three processor cycles to record the begin time. Accordingly, the actual amount of time the processor consumed in executing the code portion (assuming no other variables such as context switches discussed below) is 800−3=797 processor cycles. It should be appreciated that the processor cycles consumed in acquiring the end time do not impact the accuracy of the measured execution time of the code portion. This is because the acquisition of the end time occurs after the recorded end time itself, and so falls outside the measured interval.
Another problem arises in multi-tasking operating systems (OS). A multi-tasking OS simulates concurrent operations of different processing threads on a processor (e.g., a central processing unit (CPU) or microprocessor) by alternating or interleaving execution of the different threads. After one thread has executed for a relatively short period of time (often referred to as a “quantum”), the OS interrupts the processor and adjusts its context to a different thread. Adjusting or switching the context of a processor from one thread to another is an event referred to herein as a “context switch.” The time values recorded for the begin and end time of the execution of a code portion do not take into account whether one or more context switches have occurred between the begin time and the end time. If one or more context switches occurred during this interval, then the interval is not an accurate representation of how much time the processor spent executing the code portion. That is, the interval will reflect a longer period of time than was actually consumed by the processor in executing the code portion itself.
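One way to detect whether the interval was inflated in this manner is to read the process's context-switch counters before and after the measurement. The sketch below uses the Unix-only `resource` module (the code portion and threshold logic are assumptions; on Windows a different facility would be needed):

```python
import resource
import time

def context_switches():
    # Total voluntary + involuntary context switches charged to this
    # process so far (Unix only).
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_nvcsw + ru.ru_nivcsw

switches_before = context_switches()
begin = time.perf_counter_ns()
total = sum(range(100_000))          # hypothetical code portion under test
end = time.perf_counter_ns()
switches_after = context_switches()

interval_ns = end - begin
if switches_after > switches_before:
    # One or more context switches occurred during the interval, so the
    # interval overstates the time the processor spent in the code portion.
    print(f"interval {interval_ns} ns is inflated by context switches")
else:
    print(f"interval {interval_ns} ns measured without a context switch")
```

Detecting a context switch this way does not by itself recover the time lost to the other thread's quantum; it only flags measurements whose intervals cannot be trusted.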