With the advancement of information processing technology, analysis engines that analyze a variety of data have recently been developed. Examples of such analysis engines include an analysis engine that generates position information used to trace a human flow line from video data, an analysis engine that identifies a person from still-image data, and an analysis engine that generates text data from voice data.
Analysis control systems have also been developed that can obtain various analysis processing results from input data by combining a plurality of similar or dissimilar analysis engines. For example, systems have been developed in which image data inputted from a camera is processed in parallel or serially by a personality extraction engine, a flow line extraction engine, a face extraction engine, a face collation engine, and the like, in order to identify a person exhibiting a predetermined behavior.
In practice, an analysis control system including these analysis engines can be executed by a plurality of information processing devices. In this type of analysis control system with a plurality of analysis engines, it is desirable both that a certain or higher level of accuracy of the analysis results be guaranteed and that the processing be performed at high speed, for example, so that analysis results are outputted in real time.
However, in an analysis system that combines a plurality of analysis engines as mentioned above, the load on each of the analysis engines varies substantially depending on the details of the data to be analyzed.
As a result, the analysis processing load may concentrate locally on a particular analysis engine. This may cause a delay in the analysis processing or data loss due to buffer overflow, resulting in failure to guarantee the processing performance or accuracy of the entire analysis system.
In other words, the load on an analysis engine may exceed the throughput of the information processing device executing that engine, resulting in an unintended delay in the analysis processing or unintended data loss.
The above problems may be addressed by, for example, the following actions: reducing data intentionally, for analysis in which the reduced data is unlikely to degrade the accuracy; intentionally delaying the processing via data buffering, for analysis in which a delay of a certain extent is permitted; and distributing data frames to other information processing devices or other processors having an adequate margin of throughput, for analysis in which data frames segmented in terms of time can be executed on a plurality of analysis engine instances without affecting the results.
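The three mitigation actions above can be sketched as a simple dispatch policy. This is only an illustrative sketch, not any actual system; the class name, the frame flags (`droppable`, `delay_tolerant`), and the fixed per-cycle capacity are all assumptions introduced for the example.

```python
from collections import deque


class LoadMitigator:
    """Illustrative dispatcher for the three mitigation actions:
    data reduction, intentional buffering delay, and offloading
    time-segmented frames to another device (all names assumed)."""

    def __init__(self, capacity=2):
        self.buffer = deque()     # frames whose processing is intentionally delayed
        self.capacity = capacity  # frames the local engine can handle per cycle

    def dispatch(self, frames, remote_workers):
        """Decide, per frame, whether to process locally, drop,
        buffer for later, or offload to a remote worker."""
        decisions = []
        budget = self.capacity
        for frame in frames:
            if budget > 0:
                decisions.append((frame, "local"))
                budget -= 1
            elif frame.get("droppable"):
                # data reduction: dropping is unlikely to degrade accuracy
                decisions.append((frame, "drop"))
            elif frame.get("delay_tolerant"):
                # intentional delay via buffering
                self.buffer.append(frame)
                decisions.append((frame, "buffered"))
            elif remote_workers:
                # offload a time-segmented frame to a device with spare throughput
                decisions.append((frame, f"offload:{remote_workers[0]}"))
            else:
                decisions.append((frame, "local"))  # no alternative: process locally
        return decisions
```

In a real system, the per-frame flags would come from the analysis configuration (which engines tolerate data loss or delay), and the offload target would be chosen from load measurements rather than taken as the first entry.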
Patent Literature 1 discloses an exemplary computer implementation method for processing a data stream in the above analysis control system.
The computer implementation method described in Patent Literature 1 includes the following steps. A first step receives a tuple of attributes from the data streams to be processed. A second step calculates an estimated processing time for the tuple. A third step, when the estimated processing time is determined to exceed a threshold time, selectively removes the tuple, delays it, or executes it on an alternative information processing device, in order to reduce the influence of the processing delay.
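The three steps above can be sketched roughly as follows. This is a hedged illustration of the kind of logic described, not the actual method of Patent Literature 1; the function signature, the `discardable` flag, and the preference order among the three fallback actions are assumptions made for the example.

```python
def handle_tuple(tup, estimate_time, threshold, alt_devices):
    """Illustrative three-step handling of a stream tuple:
    1. receive the tuple, 2. estimate its processing time,
    3. if the estimate exceeds the threshold, offload, remove, or delay it."""
    est = estimate_time(tup)          # step 2: estimated processing time
    if est <= threshold:
        return ("execute_locally", tup)
    if alt_devices:                   # step 3a: execute on an alternative device
        return (f"execute_on:{alt_devices[0]}", tup)
    if tup.get("discardable"):        # step 3b: selective removal
        return ("remove", tup)
    return ("delay", tup)             # step 3c: delay the tuple
```

For instance, a tuple whose estimated time is below the threshold is executed locally, while an oversized tuple is offloaded when an alternative device is available.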
Patent Literature 2 discloses an exemplary stream data processing system that attains optimization by equalizing the load between a plurality of information processing devices in charge of executing analysis engines.
The stream data processing system described in Patent Literature 2 first calculates, as a cost, the time required to transfer query processing. Secondly, based on the calculated cost, the stream data processing system transfers the query processing to a predetermined information processing device so as to minimize the transfer time and equalize the load of the query processing on each server computer.
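The cost-based transfer described above can be sketched as choosing the target server that minimizes a combined score of transfer time and current load. This is an assumed simplification for illustration, not the actual cost model of Patent Literature 2; the `transfer_cost` callback, the `load` field, and the additive scoring are all hypothetical.

```python
def choose_transfer_target(query, servers, transfer_cost):
    """Pick the server that minimizes transfer time while equalizing load.
    `transfer_cost(query, server)` returns the estimated transfer time;
    adding the server's current load acts as an equalization penalty."""
    def score(server):
        return transfer_cost(query, server) + server["load"]
    return min(servers, key=score)
```

With equal transfer costs, the policy reduces to picking the least-loaded server; with unequal costs, a lightly loaded but distant server can lose to a nearer one.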