Measuring the performance of computer systems and computer system components, including both hardware and software, is often not performed under ideal circumstances. Although a performance characteristic of a computer system is non-stochastic, the characteristic is often very difficult to determine. For example, memory performance of a computing system can be measured in a laboratory or scientific setting with dedicated software (i.e., the dedicated software is the only software running on the computer). This type of testing makes it easy to obtain very accurate measures of memory performance. Such testing, however, is rarely practical.
Performance measurements of computer systems or components must often be taken in a computer's normal operational setting, or in a setting or scenario that is close to “real world.” In such scenarios, there are often environmental and operational activities or events that can interfere with the performance measurements. For example, interrupts, deferred procedure calls, high-priority threads, user activity (such as movement of the mouse), and network traffic are all environmental and operational activities or events that can interfere with the performance measurements.
Further, many computer system performance measurements differ from traditional measurements in that the goal of the performance measurements is to determine the fundamental best possible performance of the system or a system component (hardware or software). This type of determination is in contrast to a simple average, mean, or other measure of central tendency. With respect to memory performance, for example, memory has a maximum throughput rate, often expressed in megabytes or gigabytes per second. Memory can easily be driven at this maximum rate and this maximum rate is a key determinant of system performance. Determining this maximum rate is typically accomplished by taking many different measurements. These different measurements are most often averaged to produce the sole or final measurement.
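The conventional approach described above can be sketched as follows. This is a minimal illustration, not an implementation from the source: the buffer size, trial count, and use of an in-memory copy as the measured operation are all illustrative assumptions.

```python
import time

def measure_copy_throughput(size_bytes=64 * 1024 * 1024, trials=10):
    """Sketch of the conventional technique: time the same memory
    operation many times, then average the per-trial throughputs.

    size_bytes and trials are illustrative choices, not values from
    the source document.
    """
    src = bytearray(size_bytes)
    rates = []
    for _ in range(trials):
        start = time.perf_counter()
        dst = bytes(src)                          # one full memory copy
        elapsed = time.perf_counter() - start
        rates.append(size_bytes / elapsed / 1e9)  # GB/s for this trial
    # Conventional: the arithmetic mean of all trials becomes the
    # sole or final measurement.
    return sum(rates) / len(rates)
```

Any trial that happens to be interrupted (by a context switch, interrupt, or other activity) reports a lower rate, and that low rate is folded into the mean along with the unperturbed trials.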
One problem with this type of measurement is that environmental and operational activities can perturb individual measurements, sometimes greatly. Such perturbations can cause measurements to be “slower” or “longer” than the effective maximum and can, thus, drastically skew the results, producing high measurement error.
Other measurement techniques depend on taking some number of measurements and then averaging them using the arithmetic mean (commonly called the average). One problem with this calculation is that it, too, can be greatly affected by even a small number of highly perturbed samples, even when the total number of samples is large. Another problem with this technique is that the number of measurements is often large, which makes the actual measurement time longer than necessary.
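The sensitivity of the arithmetic mean to a small number of perturbed samples can be shown with a short numeric example. The timing values below are synthetic and purely illustrative, not data from the source.

```python
# 20 timing samples of the same operation, in milliseconds.
# 18 are unperturbed (10.0 ms); 2 were perturbed by environmental
# activity (50.0 ms and 120.0 ms).
samples_ms = [10.0] * 18 + [50.0, 120.0]

# Arithmetic mean: skewed upward by just 2 of 20 samples.
mean_ms = sum(samples_ms) / len(samples_ms)   # 17.5 ms

# Minimum: recovers the unperturbed (best-case) time, since
# perturbations only make a sample slower, never faster.
best_ms = min(samples_ms)                     # 10.0 ms
```

Here the mean overstates the best-case time by 75% even though 90% of the samples are unperturbed, illustrating why averaging is a poor estimator of the fundamental best possible performance.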
In view of the foregoing, there is a need for systems and methods that overcome the limitations and drawbacks of the prior art.