It is known to provide data processing systems including a general purpose programmable processor (e.g. a multi-core processor) and an accelerator processor (e.g. a graphics processing unit). Such systems can achieve a good degree of efficiency: the general purpose programmable processor is flexible in the processing tasks it is able to perform, while the accelerator processor can be targeted at a subset of processing operations, such as computationally intensive graphics processing operations, and so performs these high-volume operations with an improved degree of efficiency, thereby justifying its provision. Within such systems it is often desirable for the general purpose programmable processor and the accelerator processor to share some data. As an example, the general purpose programmable processor may generate data which at a high level defines the processing operations to be performed by the accelerator processor (e.g. start point data and/or control data), and the accelerator processor then reads this data in order to determine the processing operations it is to perform.
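The sharing arrangement described above may be sketched, purely by way of illustration, as follows. The descriptor fields and names here are assumptions chosen for clarity, not taken from any particular system: the general purpose programmable processor writes start point and control data into shared storage, and the accelerator processor reads that data to determine its task.

```python
from dataclasses import dataclass


@dataclass
class CommandDescriptor:
    # Hypothetical high-level control data written by the general purpose
    # processor and read by the accelerator processor.
    start_address: int   # start point: where the accelerator begins reading
    item_count: int      # how many data items to process
    operation: str       # which processing operation to perform


def host_prepare(shared_memory, start_address, item_count, operation):
    """General purpose processor: place a descriptor in shared storage."""
    shared_memory["descriptor"] = CommandDescriptor(
        start_address, item_count, operation
    )


def accelerator_fetch(shared_memory):
    """Accelerator processor: read the descriptor to learn its task."""
    d = shared_memory["descriptor"]
    return (d.operation, d.start_address, d.item_count)


shared = {}
host_prepare(shared, 0x1000, 256, "draw_triangles")
task = accelerator_fetch(shared)
```

Here the dictionary stands in for a region of memory visible to both processors; in a real system the descriptor would occupy an agreed memory layout rather than a Python object.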
It is also known within data processing systems to provide hierarchical memory systems including a cache memory and at least some further memory. The cache memory provides rapid access to time-critical or frequently accessed data, while the further memory provides typically slower but larger capacity storage able to meet the overall storage requirements of the system. Within such systems a problem arises in maintaining the coherence of data which may be stored at various places within the memory hierarchy. This is particularly the case when more than one processor, such as a general purpose programmable processor and an accelerator processor, is able to access the same data. If a cache memory stores a local copy of a data item and another copy of that data item is held in the further memory, then coherency control mechanisms are provided to ensure that the up-to-date version of the data item is used at all times and that changes made to one copy are in due course propagated to the other copies. Such coherency control mechanisms are complex and represent a significant resource overhead. Furthermore, their capacity to deal with large volumes of data accesses may be limited, and this can constrain overall system performance. As an example, an accelerator processor may access large volumes of data at high speed, and a coherency control mechanism able to keep such data coherent with a general purpose programmable processor will have a disadvantageously high level of complexity and require a disadvantageous amount of circuit overhead.
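The coherency requirement described above, that the up-to-date version of a data item is used at all times, can be illustrated with a deliberately simplified sketch. This is not a model of any particular protocol or of the mechanisms contemplated here; it is a toy write-through-with-invalidation scheme, with all names and structure assumed, showing only why some mechanism is needed once two processors cache the same data.

```python
class CoherentSystem:
    """Toy two-processor system: per-processor caches over a shared
    further memory. A write updates the writer's cache, invalidates the
    other processor's copy, and writes through to the further memory,
    so a subsequent read by either processor never returns stale data."""

    def __init__(self):
        self.memory = {}                 # backing "further memory"
        self.caches = {0: {}, 1: {}}     # local cache copies per processor

    def write(self, proc, addr, value):
        self.caches[proc][addr] = value  # update the writer's local copy
        # Coherency action: invalidate every other processor's copy.
        for p, cache in self.caches.items():
            if p != proc:
                cache.pop(addr, None)
        self.memory[addr] = value        # write through to further memory

    def read(self, proc, addr):
        cache = self.caches[proc]
        if addr not in cache:            # miss: fetch the current value
            cache[addr] = self.memory[addr]
        return cache[addr]


system = CoherentSystem()
system.write(0, 0x40, 1)     # processor 0 writes the data item
seen_by_1 = system.read(1, 0x40)
system.write(1, 0x40, 2)     # processor 1 updates it; copy at 0 invalidated
seen_by_0 = system.read(0, 0x40)
```

Even in this toy form, every write must consult every other cache, which hints at why a mechanism scaled to the high-speed, high-volume accesses of an accelerator processor incurs the complexity and circuit overhead noted above.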