1. Field of the Invention
The present invention concerns a method of validating parameters defining an image.
The invention also concerns a method of seeking images, amongst a plurality of stored images, which includes a step of validating search parameters as mentioned above.
The present invention also concerns a device able to implement such methods of validating parameters and seeking images.
2. Description of the Related Art
The increase in exchanges of multimedia information has given rise to requirements for seeking and/or sequencing digital images. Amongst the recently developed technologies on the use of digital images, one of the most important is certainly the indexing of visual information. This is because, in order to be able to manipulate such information, it is, amongst other things, essential to have tools which will make it possible to organize these images, so as to be able to access them rapidly, but also to be able to find a certain number of them, with similar contents, amongst a multitude of images which may be stored locally or in a distributed fashion.
In a traditional system for seeking digital images such as are currently found on the Internet, the users seek images using keywords. In such a system, the creator of the database associates, with each of these items of visual information, a set of keywords which describe in general its visual content. For this, he must interpret the content of the image and transform the perception which he has of this content into words which he associates with the image thus described. However, these textual descriptors are often inadequate for describing an image, quite simply because the same image can be described in different ways by different creators. It can also be remarked that it is easier, for a user, to seek an image according to its content by specifying an example image rather than using keywords with which it is often difficult or even impossible to describe what an image contains.
It can therefore be seen that the traditional systems for seeking images are limited and that it is essential to define a system which makes it possible to extract a description of the visual content of these images in an automatic or semi-automatic fashion. These systems are known as systems for the indexing of visual information, based on the content.
The aim of a system for seeking digital images based on the content is to extract, from amongst a set of images stored in a database, a subset of images which best respond to a request from a user. This user can be a human being or any other means capable of specifying a request understandable by a machine.
Man/machine interfaces are essential to such systems, since they make it possible to transform a request from a user into a language which is understandable to the machine and to present the result of the request in a user-friendly fashion. The graphical interfaces of a system for indexing/seeking images can be broken down into two parts. The first consists of giving means to a user for formulating a request, that is to say to choose, for example, the parameters defining a digital image, to which the search will relate. These parameters can be obtained automatically from an image or in the form of textual annotations which the user associates with each stored image. The second part is a window which displays a set of images classified according to their degree of similarity to the request. In general, the image at the top left is the most similar whilst the one at the bottom right is the least similar.
The indexing of images, based on the content, is a recent research field. This is because it is only since the start of the 80s that the need has been felt to be able to find audio-visual information according to its semantic content rather than only according to its non-semantic characteristics such as the name, size or format of the file which contains it or a set of keywords which is associated with it.
The first image indexing systems are beginning to see the light of day and some companies are awaiting the establishment of the MPEG-7 standard in order to finalize their prototypes and give birth to commercial products.
It is possible to cite, for example, QBIC (“Query By Image Content”) from IBM, described in U.S. Pat. No. 5,579,471, which consists of characterizing the content of an image using the distribution of the colors and/or the texture in this image. Thus, with each image stored in the interrogated database there is associated an index composed of a component representing the color, and/or a component representing the texture, of the image.
During the search phase, the user has the possibility of defining a request through a graphical interface composed essentially of two parts. The first consists of choosing an example image or creating a synthetic example image using a palette of colors and texture models. Next, where the user has chosen to base the search on both the color and the texture, he allocates a numerical value to each of these parameters. This value characterizes the relative importance of the two parameters used for calculating the similarity between the example image and an image stored in the database.
Once this request has been defined, the search method is as follows. First of all, the process identifies whether the search is to be based on one or more parameters. For this, the process is based on the numerical values associated with each of the search parameters. Secondly, it identifies whether the search can be based on the parameters specified by the user (color and/or texture) by analyzing the content of the index associated with the image in the database currently being processed. According to the result of this analysis, a measurement of similarity is associated with this current image. It may be based on the parameter or parameters specified at the time of the request or be based only on a subset of these parameters. Finally, once each image in the database has been processed, the images are sorted according to their degree of similarity with the example image.
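The search method described above can be sketched as follows. This is an illustrative simplification, not the patented implementation: the feature vectors, the distance measure, and the fallback rule for an index lacking a component are all assumptions made for the example.

```python
# Illustrative sketch of a weighted color/texture similarity search.
# Feature vectors and the Euclidean distance used here are hypothetical
# simplifications; the actual indexing scheme is not specified this way
# in the source text.

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rank_images(example_index, database, color_weight, texture_weight):
    """Sort database entries by similarity to the example image.

    Each index is a dict that may hold a 'color' and/or a 'texture'
    component; an image whose index lacks a requested component is
    scored only on the components it shares with the request.
    """
    results = []
    for name, index in database.items():
        total, weight_sum = 0.0, 0.0
        for component, weight in (("color", color_weight),
                                  ("texture", texture_weight)):
            # Use a component only if the user weighted it and the
            # stored index actually contains it.
            if weight > 0 and component in index and component in example_index:
                total += weight * distance(example_index[component],
                                           index[component])
                weight_sum += weight
        # Normalize by the weights actually used; an image sharing no
        # component with the request is ranked last.
        score = total / weight_sum if weight_sum else float("inf")
        results.append((score, name))
    results.sort()  # smallest distance first, i.e. most similar first
    return [name for _, name in results]

db = {
    "beach.jpg":  {"color": [0.8, 0.1, 0.1], "texture": [0.2, 0.3]},
    "forest.jpg": {"color": [0.1, 0.8, 0.1], "texture": [0.6, 0.1]},
    "text_only":  {"color": [0.1, 0.7, 0.2]},  # no texture component indexed
}
query = {"color": [0.7, 0.2, 0.1], "texture": [0.25, 0.3]}
print(rank_images(query, db, color_weight=0.6, texture_weight=0.4))
# → ['beach.jpg', 'forest.jpg', 'text_only']
```

The numerical weights here play the role described at the time of the request: they express the relative importance of color and texture in the overall similarity, and an image whose index lacks one of the requested components falls back to a similarity based on the smaller set of shared parameters.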
The request thus given requires, on the part of the user, a significant knowledge of the visual content of the image and of the way of characterizing the search parameters. In addition, this system does not make it possible to associate a parameter with a functionality which could be enabled when the parameter is validated by the user.
Current systems offer possibilities for the user to define a request on image search parameters. Most often, this user requires sufficient knowledge of the field of the digital image in order to be able to define a request. In addition, the systems of the state of the art do not make it possible to associate, with a parameter, a functionality which could be enabled at the time of selection or validation of that parameter by the user. Indeed, the user may wish to define a request by means of a non-visual parameter, for example audio. He may also wish to define a request relating to both visual and non-visual parameters of the image.