1. Field of the Invention
The present invention relates to an image processing device for performing video image processing, and relates in particular to the implementation of an image processing function that is consonant with the needs of users and the applications they employ.
2. Description of the Related Art
Recently, with the development of advanced digital techniques and network infrastructures, video distribution systems that permit various browsers, such as portable terminals and personal computers, to browse video contents on demand or in real time have come into widespread use. Such systems are designed so that a server can simultaneously supply multiple video images and can change the resolution or the bit rate of the video content in accordance with the conditions encountered on a network, or the specifications of a terminal, and can thereby ensure that continuous, smoothly displayed video is provided.
However, as a consequence of user demands, the range of video distribution systems has broadened, so that they now extend from a system for “merely distributing video”, to one for “adjusting a bit rate or converting a format in order to use a relay node to distribute video to another network”, to one for “realizing high-level video surveillance in cooperation with an image recognition device”. Therefore, the construction of a system is desired that permits functions to be easily extended or customized, so that a system that can quickly cope with and satisfy various needs can be developed within a short period of time.
As a countermeasure, a distributed object technique has been proposed whereby a format converter, a resolution converter and an image recognition unit are treated as independent objects, and the input/output interfaces and the state transitions shared by all the objects are defined and managed in order to improve the reuse of the objects. Representative examples of this technique are CORBA (Common Object Request Broker Architecture), devised by the OMG (Object Management Group), and COM (Component Object Model), developed by Microsoft Corp.
CORBA is a mechanism whereby the exchange of data between objects is implemented in a distributed environment; objects created according to the CORBA standards can cooperate, through a network, to provide a single function, even in a remote environment. COM implements substantially the same function as CORBA; in addition, to improve development efficiency, COM automatically extracts objects that can be coupled with other selected objects and sequentially links these objects to automatically generate a software function. For example, when an object for reading compressed video from a file is selected, an object for decoding the compressed video and an object for displaying the decoded video on a monitor can be automatically extracted, thereby facilitating the automatic generation of software having a video replay function.
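The automatic linking described above can be understood as a type-driven search over a registry of objects. The following is a minimal sketch of that idea only; the registry entries, names and types are illustrative assumptions and do not represent the actual COM interfaces:

```python
# Each registered object: (name, input_type, output_type);
# None marks a pure source (no input) or a pure sink (no output).
# All names and types here are hypothetical illustrations.
REGISTRY = [
    ("file_reader", None, "compressed_video"),
    ("decoder", "compressed_video", "raw_video"),
    ("monitor_display", "raw_video", None),
]

def auto_generate(source_name):
    """Greedily extend the chain with the first registered object whose
    input type matches the current output type, stopping at a sink."""
    by_name = {name: (inp, out) for name, inp, out in REGISTRY}
    chain = [source_name]
    out_type = by_name[source_name][1]
    while out_type is not None:
        candidates = [n for n, inp, _ in REGISTRY if inp == out_type]
        if not candidates:
            raise ValueError(f"no object accepts type {out_type!r}")
        chain.append(candidates[0])
        out_type = by_name[candidates[0]][1]
    return chain
```

Selecting the file-reading object alone is then enough to assemble the whole replay chain, since the decoder and display objects are pulled in by their matching types.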
A method for automatically generating software having a video processing function is also disclosed in JP-A-2000-56958. As shown in FIG. 27, the device that employs this automatic generation method comprises: a storage device 93, for storing multiple software components having input/output arguments used to process image data; and an automatic generator 90, for extracting the software components from the storage device 93 in accordance with an outline procedure in which process procedures are written, and for automatically generating image processing software. The automatic generator 90 includes: a coupling choice determination unit 91, for examining the input/output relationship between the argument of the software component to be executed first and the argument of the software component to be executed next, and for determining that these software components are coupling choices when the argument of the first component indicates an output and the argument of the next component indicates an input; and a coupling unit 92, for examining the arguments of the software components determined to be coupling choices by the coupling choice determination unit 91, for determining whether the types of image data to be processed match, and for coupling the arguments of the software components when the data types match.
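The two determinations made by units 91 and 92 can be sketched as follows. This is a simplified illustration under assumed data structures, not the implementation disclosed in JP-A-2000-56958:

```python
from dataclasses import dataclass

@dataclass
class Argument:
    name: str
    direction: str   # "in" or "out"
    data_type: str   # e.g. "gray_image", "binary_image" (illustrative)

@dataclass
class Component:
    name: str
    args: list

def can_couple(first: Component, second: Component) -> bool:
    """Coupling choice determination (unit 91) plus type check (unit 92):
    the pair is couplable when an output argument of the component
    executed first matches, by data type, an input argument of the
    component executed next."""
    for out_arg in first.args:
        if out_arg.direction != "out":
            continue
        for in_arg in second.args:
            if in_arg.direction == "in" and in_arg.data_type == out_arg.data_type:
                return True
    return False

def build_chain(outline: list) -> list:
    """Couple consecutive components listed in the outline procedure,
    rejecting the chain if any adjacent pair cannot be coupled."""
    couplings = []
    for first, second in zip(outline, outline[1:]):
        if not can_couple(first, second):
            raise ValueError(f"cannot couple {first.name} -> {second.name}")
        couplings.append((first.name, second.name))
    return couplings
```

For instance, an alignment component whose output is a gray image can be coupled to a binarization component whose input is a gray image, but not the other way around, since the binary output does not match the gray input.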
FIG. 28 is a diagram showing the coupling relationships of image processing software components used for examining semiconductor defects. The image processing software components, i.e., alignment, binarization, mask processing, expansion/shrinking, characteristic value extraction and determination components, are automatically coupled.
When this automatic generation function for image processing software is provided for the image processing device of the video distribution system, and a function to be implemented is written in the outline procedure so as to arbitrarily combine video format conversion, resolution conversion and image recognition software components, a function consonant with the needs of users and the applications they employ can be implemented.
However, the following problems arise when the conventional method is employed to automatically generate the image processing software constituting an image processing device for processing images, such as distributed video images.
(Problem 1)
Even for the same parts, behaviors differ depending on the function the image processing device is to implement. For example, as shown in FIG. 23A, when a resolution is to be changed in real time at a video relay node for relaying live video, it is requested that, while regarding real-time performance as important, the resolution be changed by thinning frames in a best-effort manner. On the other hand, as shown in FIG. 23B, when video is to be provided on demand by accumulating camera video in a file and changing the length-to-width ratio and the resolution, it is requested that, while regarding maintenance of content quality as important, the resolution be changed without thinning frames.
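The two contrasting requirements can be made concrete as two buffer policies placed between the same components. The classes below are a hypothetical sketch of the idea, not part of the disclosed device:

```python
from collections import deque

class BestEffortBuffer:
    """Real-time relay policy (FIG. 23A): keep at most `capacity`
    frames; when full, thin by discarding the oldest frame so that
    live latency stays bounded."""
    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)

    def put(self, frame):
        self.frames.append(frame)   # silently evicts the oldest when full

    def get(self):
        return self.frames.popleft() if self.frames else None

class QualityBuffer:
    """On-demand policy (FIG. 23B): never thin; refuse input when full
    so the producer must wait, preserving every frame of the content."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = deque()

    def put(self, frame):
        if len(self.frames) >= self.capacity:
            return False            # caller retries later; no frame is lost
        self.frames.append(frame)
        return True

    def get(self):
        return self.frames.popleft() if self.frames else None
```

The components on either side of the buffer are identical in both cases; only the data exchange policy between them differs, which is exactly why the policy must be defined per function rather than per component.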
In addition, for the image processing device in the video distribution system, a detailed design, prepared while taking into account the delay and fluctuation of the process for each part, is required for each function to be implemented. In the example in FIG. 23A, consideration must be given to the size of a buffer to be arranged between the video reception component and the resolution change component, so that fluctuation of the network bandwidth is absorbed. This buffer size also varies, depending on the network environment.
These problems are not inherent to the individual components; they result from the functions implemented by the image processing device. For each of these functions, a data exchange method, such as the size of a buffer arranged between components or a requirement that frames be thinned, must be defined, and a mechanism must be provided for the image processing device that can be controlled in accordance with the definition.
(Problem 2)
Assume that multiple video contents are to be provided for multiple users by changing the resolutions or the formats. According to the conventional technique, as shown in FIG. 24, the image processing device changes, in consonance with a user terminal 1, the resolution or the format of a video image input by a camera and provides the obtained image to the user terminal 1, and likewise changes, in consonance with a user terminal 2, the resolution or the format of a video image input by the camera and provides the obtained image to the user terminal 2. However, since the conversion of the resolution or the format requires a large number of calculations, the processing must be optimized through the common use of the processes indicated by portions X (video input process), Y (resolution conversion process) and Z (format conversion process) in FIG. 24.
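The intended sharing of portions X, Y and Z can be sketched as computing each distinct intermediate result once and fanning it out to every subscribing terminal. The conversion functions below are placeholders introduced only for illustration:

```python
calls = {"X": 0, "Y": 0, "Z": 0}            # how often each stage runs

def capture_frame():                         # X: video input (placeholder)
    calls["X"] += 1
    return "raw_frame"

def convert_resolution(frame, resolution):   # Y: resolution conversion (placeholder)
    calls["Y"] += 1
    return f"{frame}@{resolution}"

def convert_format(frame, fmt):              # Z: format conversion (placeholder)
    calls["Z"] += 1
    return f"{frame}.{fmt}"

def distribute(terminals):
    """terminals maps a terminal id to its required (resolution, format).
    Each distinct intermediate result is computed once and fanned out,
    instead of running the whole pipeline once per terminal."""
    frame = capture_frame()                  # X is shared by every terminal
    resized, encoded, out = {}, {}, {}
    for tid, (res, fmt) in terminals.items():
        if res not in resized:               # Y shared per resolution
            resized[res] = convert_resolution(frame, res)
        if (res, fmt) not in encoded:        # Z shared per (res, fmt) pair
            encoded[(res, fmt)] = convert_format(resized[res], fmt)
        out[tid] = encoded[(res, fmt)]
    return out
```

With two terminals requesting the same resolution and format, the conversion stages run once rather than twice, which is the optimization the conventional per-user pipeline cannot provide.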
(Problem 3)
In addition, a video processing function requires an integrating process, such as picture synthesis or video/audio multiplexing, an isolating process, such as inverse multiplexing, and a process for changing a parameter of a component. Furthermore, to optimize these processes, the image processing device must include a control/management function.
(Problem 4)
Further, when the video replay function is constituted by the same component group, two replay modes are provided that a user can select: a rate replay mode, in which video images are sequentially replayed along a time axis while frames are thinned in a best-effort manner, and a frame-feeding replay mode, in which a video image is replayed frame by frame without frames being thinned. Therefore, the image processing device must include a mechanism for switching replay modes in accordance with a user request.
(Problem 5)
Furthermore, when a replay mode is changed while a large number of components is employed to constitute the replay function, either an extended time is required to change the mode or, from the viewpoint of the user, the order in which frames are replayed changes unexpectedly.
A specific example of this phenomenon is shown in FIGS. 25A to 25C. In FIG. 25A, frames are read from a disk into a buffer in the forward direction, and frame-feeding replay is performed. Frames 1 and 2 have been read from the disk and have already been output from the buffer, while frames 3 and 4 are still retained therein. Suppose that, as shown in FIG. 25B, a user issues, at the third frame, a request for inverse frame-feeding replay. Based on this request, frame 7 is read from the disk after frame 8. However, since frame 7 is output from the buffer after frames 4 and 8, the change to inverse frame-feeding replay is delayed by a length of time equivalent to the number of frames retained in the buffer.
At this time, even when, as shown in FIG. 25C, a mechanism is provided for clearing the buffer when the command is changed, the user doubts that the command has been accepted because, after frames 1, 2 and 3 are fed in that order, the frames are fed in reverse in the order 7, 6 and 5, instead of in the order 3, 2 and 1, which the user expects.
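The frame order the user observes in FIG. 25C can be reproduced with a small simulation. This is only a sketch: the read-ahead depth of five frames is an assumption chosen so the simulated output matches the order described above, and it does not represent the device's actual buffering logic:

```python
from collections import deque

def replay(switch_after, total=8, lookahead=5):
    """Simulate frame-feed replay in which the user requests inverse
    replay after the frame numbered `switch_after` is displayed. The
    reader keeps `lookahead` frames buffered ahead of the display; the
    buffer is cleared at the mode change, and reverse reading resumes
    from the reader's position on disk, not from the frame last shown."""
    shown = []
    buf = deque(range(1, lookahead + 1))       # frames already read ahead
    next_read = lookahead + 1
    while buf and len(shown) < switch_after:
        shown.append(buf.popleft())            # forward display
        if next_read <= total:
            buf.append(next_read)
            next_read += 1
    last_read = next_read - 1                  # reader position on disk
    buf.clear()                                # mode change clears buffer
    for f in range(last_read - 1, 0, -1):      # reverse from the reader,
        shown.append(f)                        # so a jump from 3 to 7 appears
    return shown
```

Switching at the third frame yields the order 1, 2, 3, 7, 6, 5, ..., showing that clearing the buffer alone removes the delay but not the discontinuity the user perceives.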
(Problem 6)
According to the conventional method for automatically generating image processing software, software components having input/output arguments (software components that input the data required for a process and that output the processing results) are employed together. Therefore, the software component “for receiving a non-compressed image and outputting recognition results”, which performs an “image recognition process”, can be employed for the function shown in FIG. 26A (because the type of data output by the “image recognition” software component matches the type of data input to the “recognition results recording” software component), but can not be employed for the function shown in FIG. 26B (because the type of data output by the “image recognition” software component does not match the type of data input to the “image compression” software component). Thus, so that these components can be widely employed, an appropriate method by which all software components can be employed together is requested.