The present invention relates to a data processing system, to a memory controller, and to a method of memory arbitration.
In data processing systems which comprise a plurality of data processing units, such as a central processing unit CPU and several dedicated processing units PU, communication is usually performed via a bus or an interconnect network, and data is stored in a central memory. The central processing unit CPU may implement programmable processing functions. As multiple processing units in such a data processing system share memory resources, an arbitration of the shared resources must be implemented in order to determine which data processing unit is granted access to the shared memory. Such an arbitration schedules the requests for access to the shared resources to ensure that the memory only needs to handle a single request at a time and that requests from data processing units with high priority are handled more often than requests from other data processing units. Accordingly, the available memory bus capacity is divided into a bandwidth limit for each data processing unit. If the arbitration is not performed properly, some data processing units may have to wait for a long time to access the bus. On the other hand, data processing units that have to implement real-time processing may not be able to perform their requested real-time processing, resulting in severe degradation or even failure of system performance. Standard methods for arbitration include TDMA, fixed-priority access, round-robin and the like. Combinations of these standard arbitration schemes are also used.
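For illustration only, the round-robin scheme mentioned above can be sketched as follows; the unit names and the grant() interface are hypothetical and do not correspond to any particular memory controller:

```python
from collections import deque

class RoundRobinArbiter:
    """Toy round-robin arbiter over a fixed set of requesters.

    Illustrative sketch only: a real memory arbiter would be a hardware
    circuit, and the unit names here are assumptions for the example.
    """

    def __init__(self, units):
        self._order = deque(units)

    def grant(self, requesting):
        """Return the next unit in round-robin order that is requesting."""
        for _ in range(len(self._order)):
            unit = self._order[0]
            self._order.rotate(-1)  # move the inspected unit to the back
            if unit in requesting:
                return unit
        return None  # no unit is requesting this cycle

arb = RoundRobinArbiter(["CPU", "PU0", "PU1"])
print(arb.grant({"PU0", "PU1"}))  # PU0 is granted first
print(arb.grant({"PU0", "PU1"}))  # then PU1, so no requester starves
```

Because a granted unit is rotated to the back of the order, no requesting unit can be starved indefinitely; however, plain round-robin alone does not enforce the per-unit bandwidth limits described above, which is why combinations of schemes are used in practice.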
Due to the increased integration of several programmable or dedicated processing units PU on a single chip, i.e. a system-on-chip SoC, on-chip traffic with different kinds of traffic constraints may be present. Such traffic constraints may include hard real-time HRT, soft real-time SRT, best-effort BE, latency-critical LC or the like. As the amount of memory that is implemented on a system-on-chip is a significant factor in the overall costs, usually a shared memory is provided. Such memory may also be an external memory like an SDRAM memory. Therefore, a dedicated processing unit implementing real-time processing must share the interconnect and the shared memory with a programmable processing unit implementing latency-critical processing. The challenge in such a system is to distribute the memory bandwidth over the agents of the data processing units performing hard real-time processing and the agents of the data processing units performing latency-critical processing. The arbitration must be performed such that a low-latency access is provided for the agents requesting low latency, while the guarantees necessary for real-time processing are met.
One way to ensure these guarantees is to provide fixed windows for hard real-time traffic, during which other low-latency traffic is blocked and the agents associated with the hard real-time processing are given a higher priority. Although this may ensure that the hard real-time guarantees are maintained, it will produce significant latency for the low-latency traffic during the fixed window reserved for hard real-time processing.
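A minimal sketch of such fixed-window arbitration, with purely illustrative window sizes and a hypothetical select_request() helper, might look as follows:

```python
def select_request(cycle, hrt_pending, ll_pending, window=8, hrt_slots=3):
    """Fixed-window arbitration sketch (all parameters are illustrative).

    Within each window of `window` cycles, the first `hrt_slots` cycles
    are reserved for hard real-time (HRT) traffic; low-latency (LL)
    traffic is blocked there, which is exactly the latency penalty the
    text describes.
    """
    in_hrt_window = (cycle % window) < hrt_slots
    if in_hrt_window:
        # Reserved window: only HRT may be served; LL waits even if pending.
        return "HRT" if hrt_pending else None
    # Outside the reserved window, LL is served first; HRT may use leftovers.
    if ll_pending:
        return "LL"
    return "HRT" if hrt_pending else None
```

Note that in cycle 1 of the example window a pending low-latency request is stalled even when no hard real-time request exists, illustrating why the fixed-window approach produces significant latency for low-latency traffic.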
A further method to meet these requirements is to limit the bandwidth that may be used by low-latency traffic, such that the latency-critical traffic is blocked as soon as it uses its bandwidth allocation excessively. However, such an arbitration scheme may cause violations of the hard real-time requirements, as the efficiency of the access to the memory may differ between traffic types. In addition, such an arbitration scheme requires extensive fine-tuning.
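A bandwidth-limiting scheme of this kind could be sketched as a token-bucket account per low-latency agent; the BandwidthLimiter class and its numeric parameters are assumptions made for illustration:

```python
class BandwidthLimiter:
    """Token-bucket style budget for a low-latency agent (sketch only).

    The budget and refill values are illustrative tuning parameters; as
    the text notes, such a scheme requires extensive fine-tuning.
    """

    def __init__(self, budget, refill):
        self.budget = budget        # maximum tokens (e.g. words per period)
        self.refill = refill        # tokens restored each cycle
        self.tokens = float(budget)

    def tick(self):
        # Restore bandwidth credit over time, capped at the budget.
        self.tokens = min(self.budget, self.tokens + self.refill)

    def try_access(self, cost):
        """Grant the access only while the agent still has budget left."""
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # over budget: the low-latency traffic is blocked
```

The weakness noted in the text shows up in the `cost` parameter: the number of memory cycles a transfer actually consumes varies with the traffic type and access pattern, so a fixed token budget may still let low-latency traffic crowd out hard real-time traffic unless the parameters are carefully tuned.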