As the range of functions provided by computing devices increases, more hardware blocks within a device require access to memory resources. Particularly in the case of mobile devices, memory resources should be used efficiently. Connecting each hardware block to a separate memory unit allows the hardware blocks to access memory in parallel. However, a large amount of rarely used memory has to be provided to fulfil the varying memory requirements of each hardware block at any time.
Sharing memory between the hardware blocks requires less memory, since a peak memory allocation to one hardware block can be compensated by the lower memory requirements of another hardware block. Additionally, shared memory provides a means for exchanging large amounts of data between the hardware blocks. However, accesses to a shared memory unit are delayed, since a memory access by one hardware block suspends memory accesses by the other hardware blocks. Consequently, conventional shared memory is efficient in terms of memory size, but its performance decreases as the number of hardware blocks increases.
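The serialization of accesses to a single shared memory port can be illustrated with a small sketch. The function name, the round-robin grant order, and the one-access-per-cycle model are illustrative assumptions, not part of any particular memory controller described here.

```python
def serialize_accesses(requests):
    """Grant shared-memory accesses one per cycle (illustrative sketch).

    `requests` maps hardware-block names to the number of accesses
    each block needs. With a single shared memory port, the total
    number of cycles grows with the combined number of accesses,
    since an access by one block suspends the other blocks.
    """
    schedule = []
    pending = dict(requests)
    while any(pending.values()):
        for block in list(pending):
            if pending[block]:
                schedule.append(block)  # this block owns the port this cycle
                pending[block] -= 1
    return schedule

# Three blocks sharing one port: six accesses take six cycles in total.
print(serialize_accesses({"A": 2, "B": 2, "C": 2}))
# → ['A', 'B', 'C', 'A', 'B', 'C']
```

The schedule length equals the sum of all requested accesses, which is why shared-memory performance degrades as the number of hardware blocks grows.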
A common memory comprising a plurality of separate memory units, which are accessed in an interleaved manner, combines two advantages: parallel memory accesses, provided by the plurality of separate memory units, and efficient memory utilization, because unequal memory requirements of the hardware blocks are balanced by the interleaved access.
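The interleaved mapping of addresses to separate memory units can be sketched as follows. The modulo scheme, the bank count, and the function name are illustrative assumptions; an actual interleaved memory controller may use a different mapping.

```python
NUM_BANKS = 4  # hypothetical number of separate memory units

def interleave(address, num_banks=NUM_BANKS):
    """Map a logical address to a (bank, offset) pair.

    Consecutive logical addresses land in different banks, so
    sequential accesses by several hardware blocks can proceed in
    parallel whenever they target different banks, while the total
    address space is spread evenly over all banks.
    """
    return address % num_banks, address // num_banks

# Consecutive addresses rotate through the banks:
print([interleave(a) for a in range(8)])
# → [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
```

Because each hardware block's accesses are scattered over all banks, no single block monopolizes one memory unit, which is how the interleaved access balances unequal memory requirements.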
While interleaved control of a shared memory provides an example of an efficient memory resource allowing simultaneous memory accesses, such memory control does not fulfil the increasing demand for streaming data efficiently between hardware blocks. Memory space for the whole data block to be exchanged needs to be reserved in the memory. The data block then has to be written completely to the memory before it can be read by the receiving hardware block, which increases latency.
A conventional link for streaming data between a pair of hardware blocks is provided by a two-port First In First Out (FIFO) channel. The memory requirements of the FIFO channel are significantly lower than those of a shared memory, because only the small part of the streamed data that has been written at one port but not yet read by the receiving hardware block at the other port has to be stored. The FIFO channel further has the advantage of saving data transfer time compared to the shared memory concept, because the receiving hardware block does not have to wait until the data block has been completely written to the memory but can start reading while the data block is still being written. A further advantage of the FIFO channel is the possibility of data-driven control of the receiving hardware block. A drawback of the FIFO channel compared to the shared memory approach is its lack of flexibility in data routing: the decision as to which of the plurality of hardware blocks are to be connected via the two-port FIFO channel has to be made at design time, whereas the shared memory approach allows transferring data between all hardware blocks connected to the shared memory.
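A minimal sketch of such a two-port FIFO channel is given below. The class name, the bounded-capacity buffer, and the stall-on-full behaviour are illustrative assumptions under a software model; a hardware FIFO would realise the same behaviour with registers and full/empty flags.

```python
from collections import deque

class FifoChannel:
    """Illustrative two-port FIFO channel between two hardware blocks.

    Only data written at the producer port but not yet read at the
    consumer port is buffered, so the capacity can be far smaller
    than the full data block being streamed.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self._buf = deque()

    def write(self, word):
        """Producer port: returns False when full (producer must stall)."""
        if len(self._buf) >= self.capacity:
            return False
        self._buf.append(word)
        return True

    def read(self):
        """Consumer port: returns None when empty (data-driven control)."""
        return self._buf.popleft() if self._buf else None

# The consumer can start reading while the block is still being written:
ch = FifoChannel(capacity=4)
ch.write(1)
ch.write(2)
first = ch.read()   # read begins before the whole block has been written
ch.write(3)
```

The `read` port returning `None` on an empty channel models the data-driven control mentioned above: the receiving block proceeds exactly when data is available.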