In data processing, such as signal processing, a large amount of data (for example, stream data of a specific signal) is dealt with as a subject of operational processing. In some cases, a plurality of types of operational processing (for example, FFT (Fast Fourier Transform) and various types of filtering) is executed sequentially on such a large amount of data.
In such data processing, the operational processing may be individually achieved by dedicated hardware, a processor (processing device) such as a DSP (Digital Signal Processor), and the like. Alternatively, the operational processing may be achieved by a combination of general-purpose hardware, such as a computer, and software (a computer program). Hereinafter, various components that can execute operational processing as described above may be collectively referred to as “processing blocks” or “processing units”.
There is a known configuration in which, when the processing device executes data processing as described above, an upper-level control module (controller), which is able to control the processing blocks, arranges a series of operational processing executed in one or more of the processing blocks into a task and controls execution of the task in the respective processing blocks. Such a task includes operational processing on data executed by one or more processing blocks.
For example, there is a case where a configuration exemplified in FIG. 20 is used when constructing a processing flow (a series of operational processing) by connecting a plurality of processing blocks so that processing results (outputs) from one processing block are supplied to another processing block. In this case, a plurality of processing blocks (processing blocks 2001 to 2003) is connected through data buffers (data buffers 2004 and 2005). Operational processing executed by the plurality of processing blocks is collectively controlled as the task. That is, the plurality of processing blocks and data buffers are connected in a pipeline form, and the task is executed by the pipeline. The data buffers are storage areas: a processing block outputs various data to the data buffer connected to it, and various data are input to a processing block from the data buffer connected to it.
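The pipeline form described above can be illustrated with a minimal sketch. The three block functions, their operations, and the buffer names below are hypothetical, chosen only to mirror the reference numerals of FIG. 20; the actual blocks would perform operational processing such as FFT or filtering.

```python
from collections import deque

def block_2001(x):          # first processing block (hypothetical: scale)
    return x * 2

def block_2002(x):          # second processing block (hypothetical: offset)
    return x + 1

def block_2003(x):          # third processing block (hypothetical: square)
    return x * x

def run_task(samples):
    """Execute the whole pipeline as one task: the controller starts the
    task once, and the blocks hand data over through the data buffers."""
    buffer_2004 = deque()   # storage area between blocks 2001 and 2002
    buffer_2005 = deque()   # storage area between blocks 2002 and 2003
    results = []
    for s in samples:
        buffer_2004.append(block_2001(s))
    while buffer_2004:
        buffer_2005.append(block_2002(buffer_2004.popleft()))
    while buffer_2005:
        results.append(block_2003(buffer_2005.popleft()))
    return results
```

For example, `run_task([1, 2, 3])` yields `[9, 25, 49]`: each sample passes through all three blocks, with the intermediate results held in the two buffers, so the controller needs to issue only one task rather than one command per operation.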
Since the configuration exemplified in FIG. 20 enables the granularity of task control to be enlarged, the processing load for controlling execution of a task can be reduced. Since latency (delay) in memory access in the respective processing blocks can be reduced by use of the data buffers, an improvement in the efficiency of operational processing can be expected.
Relating to techniques for executing data processing using a plurality of processing blocks, the following patent literature is disclosed.
PTL1 discloses a technique for reducing the overhead of access to buffers used for handing over data between tasks, in an information processing apparatus that executes a plurality of tasks having dependencies on each other. The technique disclosed in PTL1 reduces delay in buffer access by exchanging data through an internal memory arranged inside a processor (processing block). The technique disclosed in PTL1 also controls the order of execution of tasks so that a plurality of tasks having dependencies on each other is processed continuously.
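The ordering idea attributed to PTL1 above can be sketched as follows. This is not the disclosed implementation; the task names and the single-parent dependency map are hypothetical, and the sketch only shows one way to keep each dependency chain contiguous so that a dependent task runs immediately after the task it depends on (allowing intermediate data to remain in a processor-internal memory).

```python
def schedule(deps):
    """deps maps each task to the task it depends on (or None for no
    dependency).  Returns an execution order in which each dependency
    chain is contiguous, so dependent tasks are processed continuously."""
    children = {}
    roots = []
    for task, parent in deps.items():
        if parent is None:
            roots.append(task)
        else:
            children.setdefault(parent, []).append(task)
    order = []

    def walk(task):             # depth-first: a child directly follows its parent
        order.append(task)
        for child in children.get(task, []):
            walk(child)

    for root in roots:
        walk(root)
    return order
```

For example, `schedule({"A": None, "B": "A", "C": "B", "D": None})` returns `["A", "B", "C", "D"]`: the chain A → B → C is kept contiguous, with the independent task D scheduled afterwards.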
PTL2 discloses a technique, for a printing apparatus configured with a multiprocessor, of interpreting a print command including a plurality of pages and successively assigning image-drawing processing and printing processing for the respective pages to a plurality of separate processors.
PTL3 discloses a technique for increasing the processing efficiency of processing blocks while maintaining real-time processing, by controlling the execution sequence of tasks and the allocation of execution time for each task, based on connections between tasks.