Delay loops are a software-based method of forcing an execution path (code path) to take at least a specified amount of time. In previous approaches, a delay loop is implemented as a loop that reads the current time and calculates the difference between the current time and the start time (entry time). The loop is repeated (executed) until the calculated difference is greater than or equal to the requested delay. Ideally, the loop would exit when the difference between the current time and the entry time exactly equals the requested delay amount; in practice, the loop overshoots. On average, the extra time spent in the delay loop beyond the requested delay is one-half of the time for a single iteration of the delay loop. This average overshoot of one-half of a delay loop iteration produces an amount of jitter that may cause errors and loss of synchronization.
A further disadvantage of this previous approach is that the time taken to perform a single pass through the delay loop is incurred no matter how close the most recently calculated difference is to the requested delay. For example, if the calculated delay is just one (1) cycle short of the requested delay, then the entire delay loop will be executed again. This additional loop execution causes the actual time spent in the delay loop to be the requested delay plus the number of cycles for a single pass of the delay loop minus one (1). For some processors, this additional time for a single pass of the delay loop can exceed approximately 30 cycles in the worst case scenario.
Therefore, the current technology is limited in its capabilities and suffers from at least the above constraints and deficiencies.