The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, is neither expressly nor impliedly admitted as prior art against the present disclosure. Unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in the present disclosure and are not admitted to be prior art by inclusion in this section.
Applications using systems-on-chip (SOCs) or other integrated circuits often seek to consolidate multiple real-time workloads. These workloads typically must maintain a minimum level of determinism, with bounded latency and jitter, in order to meet real-time requirements. However, a multi-core SOC with separate memory caches may experience some degree of performance degradation due to excessive input/output (I/O) activity or memory-access contention when the SOC attempts to perform multiple concurrent system operations.
As an example, the SOC may attempt to perform a process related to low-priority workloads at the same time as processes related to high-priority, latency-sensitive workloads. The low-priority process could be a system maintenance operation. By contrast, the high-priority, latency-sensitive process could involve real-time requirements, such as audio or video streaming. In this instance, the low-priority process may interfere with the high-priority, latency-sensitive process, resulting in increased latency or some other form of performance degradation in the high-priority process.