Digital signal processors (DSPs) are widely employed to process data streams (frames) and/or tasks. A DSP in a voice gateway often processes data streams with various frame rates (e.g. five milliseconds (ms) to thirty milliseconds) and with a wide range of processing requirements (e.g. different echo canceller tail lengths, different codecs, etc.).
FIG. 1 illustrates the general approach to processor task scheduling. All tasks are placed in an execution queue 102 as they are generated. Each processing unit 104 (e.g. a processor), when available to process, checks the execution queue 102 to retrieve any available task and then executes the task. A task may be represented as an identifier corresponding to one or more data frames, a data block, or another unit of information to be processed by the processing unit 104.
With this approach, there is no concept of processing priority between queued tasks. Tasks are merely processed in the order in which they are received by the execution queue 102. This type of task execution scheme may cause data flow bottlenecks under certain conditions.
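The first-in, first-out behavior described above can be sketched as follows. This is a minimal illustrative model, not an actual DSP implementation; the class and method names (`ExecutionQueue`, `enqueue`, `retrieve`) are hypothetical.

```python
from collections import deque

class ExecutionQueue:
    """Hypothetical sketch of the shared execution queue of FIG. 1:
    tasks are held in arrival order, with no notion of priority."""

    def __init__(self):
        self._tasks = deque()

    def enqueue(self, task_id):
        # A task is just an identifier for a frame or block to process.
        self._tasks.append(task_id)

    def retrieve(self):
        # A processing unit, when free, takes the oldest queued task.
        return self._tasks.popleft() if self._tasks else None

queue = ExecutionQueue()
for task in ("frame_A", "frame_B", "frame_C"):
    queue.enqueue(task)

# Tasks come back strictly in arrival order, regardless of how
# urgent each frame actually is.
order = [queue.retrieve() for _ in range(3)]
```

Because `retrieve` always returns the oldest task, a time-critical frame queued behind a long-running one simply waits its turn, which is the root of the bottleneck described next.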
For example, FIG. 2 illustrates a shared execution queue 202 receiving two concurrent data streams 204 and 206. A first data stream of thirty-millisecond data frames (frame stream A 204) and a second data stream of five-millisecond data frames (frame stream B 206) are queued in the shared execution queue 202 by a frame processing scheduler 208 as each data frame arrives. (Thirty milliseconds and five milliseconds here denote the time between successive data frames in each stream.) If thirty-millisecond data frame A, which arrives first, is processed before five-millisecond data frame B, which arrives second, the data flow of frame stream B may bottleneck. For instance, if data frame A takes five milliseconds or more to process, then data frame B would not be processed until after data frame C (the next five-millisecond data frame in frame stream B) arrives.
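The timing of this bottleneck can be checked with a small single-server FIFO model. The arrival and service times below are illustrative assumptions chosen to match the scenario (frame A takes six milliseconds to process, i.e. "five milliseconds or more"); the function name `fifo_start_times` is hypothetical.

```python
def fifo_start_times(arrivals, service_times):
    """Single processing unit, FIFO order: return the time at which
    each task begins processing, given arrival and service times (ms)."""
    starts = []
    free_at = 0.0
    for arrive, service in zip(arrivals, service_times):
        start = max(arrive, free_at)   # wait for the unit to be free
        starts.append(start)
        free_at = start + service
    return starts

# Frame A (30 ms stream) arrives at t=0 and takes 6 ms to process;
# frame B (5 ms stream) also arrives at t=0 but is queued second;
# frame C, the next frame of the 5 ms stream, arrives at t=5.
arrivals = [0.0, 0.0, 5.0]   # A, B, C
service  = [6.0, 1.0, 1.0]

starts = fifo_start_times(arrivals, service)
# Frame B cannot start until frame A finishes at t=6 ms, which is
# after frame C has already arrived at t=5 ms: the five-millisecond
# stream is backlogged behind the thirty-millisecond frame.
```

Under these assumptions, frame B starts processing only after frame C has arrived, reproducing the congestion described above.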
FIG. 2 also illustrates that the shared execution queue 202 may be accessed by multiple processing units 212 and 214 to process queued tasks.
As more data streams are processed through a single shared execution queue, the likelihood of data flow bottlenecks or congestion increases.