1. Field of the Invention
The present invention relates to a video conference system, a processing method used in the same, and a machine-readable medium, and more particularly relates to a video conference system, a processing method of processing at least one area of concern in the video conference system, and a machine-readable medium having machine-executable instructions for realizing the processing method.
2. Description of the Related Art
Since a network-based video conference can drastically cut down a conference organizer's time and cost, network-based video conferencing has, along with the development of related technologies, been spreading fast (particularly in the business field) in recent years. As a result, various proposals for improving sound quality, image quality, etc., have been made so far.
For example, in the below cited reference No. 1, a technical proposal for simulating a shallow depth-of-field effect by processing a captured image or video is disclosed. In this method, the background of the image is separated out, and then a focus area (for example, a talker) is emphasized by applying fuzzy (blurring) processing to the background. It is possible to carry out the fuzzy processing by utilizing a convolution filter (for example, a median filter, an averaging filter, a Gaussian filter, etc.) in the spatial domain; it is also possible to carry out the fuzzy processing by utilizing a frequency filter (for example, a low-pass filter) in the frequency domain.
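The spatial-domain variant of the fuzzy processing described above may be sketched as follows. This is a minimal illustration, not the method of cited reference No. 1 itself: it assumes a binary foreground mask is already available and uses a simple averaging convolution filter on the background while the focus area is left untouched.

```python
import numpy as np

def blur_background(image, mask, ksize=5):
    """Keep the focus area (mask == 1) sharp and apply an averaging
    (convolution) filter to the rest of the 2-D grayscale image,
    illustrating the spatial-domain fuzzy processing."""
    pad = ksize // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    blurred = np.zeros(image.shape, dtype=float)
    # Sum the ksize x ksize neighborhood of every pixel, then normalize.
    for dy in range(ksize):
        for dx in range(ksize):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= ksize * ksize
    # Foreground pixels keep their original values; background is blurred.
    return np.where(mask == 1, image.astype(float), blurred)
```

A Gaussian or median kernel could be substituted for the averaging kernel without changing the structure of the sketch.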
In the below cited reference No. 2, another technical proposal is made. In this proposal, a method and a device having a function of transformation along the time line of a video are disclosed. For an area of concern, this method uses a frame rate higher than that of an area of unconcern; for an extended area of concern, it applies a fuzzy filter.
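The region-dependent frame rates described above can be emulated by refreshing only the area of concern on every frame while the area of unconcern is refreshed at a lower rate. The following is a simplified sketch under that assumption; the function name and the `bg_interval` parameter are illustrative, not taken from cited reference No. 2.

```python
import numpy as np

def send_frame(new_frame, last_sent, roi_mask, frame_index, bg_interval=3):
    """Refresh the area of concern (roi_mask == 1) on every frame, while
    the area of unconcern is refreshed only every `bg_interval` frames,
    emulating a lower effective frame rate for the background."""
    out = last_sent.copy()
    out[roi_mask == 1] = new_frame[roi_mask == 1]
    if frame_index % bg_interval == 0:
        out[roi_mask == 0] = new_frame[roi_mask == 0]
    return out
```

Between background refreshes, the area of unconcern simply repeats the last transmitted pixels, which is what a reduced frame rate amounts to at the receiving side.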
Furthermore, in the below cited reference No. 3, a processing technique able to pay attention to an area of concern in a video teleconference is provided. In this processing technique, a receiving unit of a local terminal device transmits information on an area of concern to a sending unit of a remote terminal device. The sending unit of the remote terminal device applies high-priority encoding to the area of concern in a scene of a video by employing the information on the area of concern transmitted by the receiving unit of the local terminal device. Therefore the receiving unit of the local terminal device can remotely control the encoding of the area of concern in the video of the sending unit of the remote terminal device.
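At the sending unit, the high-priority encoding described above typically amounts to assigning better encoding parameters to blocks overlapping the reported area of concern. The following sketch illustrates this idea; the rectangle format `(x, y, w, h)` and the quantization values are hypothetical, chosen only for illustration and not taken from cited reference No. 3.

```python
def assign_block_qp(width, height, block, roi, roi_qp=10, bg_qp=40):
    """Given an area of concern (x, y, w, h) reported by the receiving
    unit, assign a lower (higher-quality) quantization step to each
    macroblock that overlaps it, and a higher step elsewhere."""
    x0, y0, rw, rh = roi
    table = []
    for by in range(0, height, block):
        row = []
        for bx in range(0, width, block):
            # A block overlaps the area of concern if the two rectangles
            # intersect in both the x and y directions.
            overlaps = (bx < x0 + rw and bx + block > x0 and
                        by < y0 + rh and by + block > y0)
            row.append(roi_qp if overlaps else bg_qp)
        table.append(row)
    return table
```

The resulting table could then drive a block-based encoder so that the area of concern consumes the larger share of the available bit rate.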
However, the above-mentioned techniques still have various problems. For example, the method of the below cited reference No. 1 intends to realize privacy protection in a camera-equipped cell phone by carrying out the fuzzy processing, but is not intended for a scene in a video conference. Therefore the features of a video conference scene are not properly handled by the method. For example, the method does not consider the participants, a focus area of concern common to all the participants in the video conference at a given time, the optimum allocation and utilization of network bandwidth, etc.
Furthermore, for example, the technical proposal of the below cited reference No. 2 only considers applying its method at one end (i.e., one terminal device) of a video conference system, but considers neither the two ends of the video conference system nor the optimum allocation and utilization of the available network bandwidth. In addition, the area of concern is defined by using only a single rule; that is, a person in the video is defined as an area of concern.
Furthermore, the processing technique of the below cited reference No. 3 does not set an area of concern of a video conference. In addition, when separating an area of concern and an area of unconcern, the processing technique only considers the video at one end of a video conference system, but does not consider the two ends of the video conference system.
Cited Reference No. 1: US Patent Application Publication No. 2008/0259154 A1
Cited Reference No. 2: International Publication No. WO 2007/007257 A1
Cited Reference No. 3: US Patent Application Publication No. 2006/0215753 A1