In computer systems there may exist a wish to investigate system utilization of the cores. Reasons for this may e.g. be resource pool control in a cloud or facilitating power saving in embedded systems. In the first case, knowledge may be gained regarding when to increase the system resource pool, and in the latter, when to enter a lower-performance/lower-power mode of execution. Typically, some kind of average load is measured as a percentage of system execution time, and a hysteresis is used to regulate system frequency and resources in time, before passing the border beyond which it may no longer be possible to meet real-time requirements. That is, system idle time is typically measured in order to determine how much more resources the system has available to execute a task.
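The regulation scheme described above may be sketched as follows. This is a minimal illustrative example, not the method of this disclosure; the class name, thresholds, and number of resource levels are assumptions chosen for illustration.

```python
# Hypothetical sketch: idle-time-based utilization combined with a
# hysteresis governor. All names and threshold values are illustrative
# assumptions, not part of the disclosed method.

def utilization(busy_time: float, total_time: float) -> float:
    """Fraction of the measurement window spent executing tasks
    (i.e. one minus the measured idle fraction)."""
    return busy_time / total_time if total_time > 0 else 0.0

class HysteresisGovernor:
    """Raise the resource level when utilization exceeds `high`,
    lower it when utilization drops below `low`. The gap between
    the two thresholds is the hysteresis that prevents oscillating
    between levels near a single threshold."""

    def __init__(self, low: float = 0.3, high: float = 0.8, levels: int = 4):
        self.low, self.high = low, high
        self.levels = levels   # number of frequency/resource steps
        self.level = 0         # current step (0 = lowest power)

    def update(self, util: float) -> int:
        if util > self.high and self.level < self.levels - 1:
            self.level += 1    # add resources before real-time margins erode
        elif util < self.low and self.level > 0:
            self.level -= 1    # save power when the system is mostly idle
        return self.level
```

For example, a governor starting at level 0 steps up after a window with utilization 0.9, holds its level for a window at 0.5 (inside the hysteresis band), and steps back down after a window at 0.1.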
However, as the number of cores grows into the hundreds or even thousands, it typically becomes increasingly complex to accurately determine the hysteresis of the system. This in turn may lead to erroneous regulation of system frequency and resources.
Therefore, there is a need for arrangements and methods for many-core systems which enable accurate measurement of system utilization and may perform adaptive resource control.