As their capacities increase, computers can handle not only fixed images but also animated images or films. It therefore becomes advantageous for a content creator to be able to integrate animations into his productions.
As he does not necessarily have available videos or animated sequences corresponding to the content he wishes to include in one of his productions, it is important to offer him tools for presenting a fixed image dynamically. Thus, instead of directly presenting a panoramic photograph of a landscape, he will be able to make this photograph pass through a smaller frame in order to simulate the movement of a camera facing the landscape. In a similar fashion, in order to present a painting, he can first present an overall view of the work, then zoom in on a first interesting detail before moving the view to another detail of the work.
There already exist several software packages for producing such animations from a fixed image. For example, the MovingPicture software from StageTools makes it possible to perform this task, as explained in the MovingPicture QuickStart manual v4.4. These software packages use two different techniques for storing the animation thus produced with a view to transmitting it to a user who will be able to look at it. The first technique consists of generating a fixed image for each frame of the sequence and then combining all these images and saving them as a film using a conventional film format, such as MPEG2 for example. A second technique consists of saving the image and all the commands necessary for recreating the animation (for example “show the complete image, zoom in on such and such a region, make a translation to such and such another region”).
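The second storage technique mentioned above can be sketched as follows: the animation is reduced to a single source image plus an ordered list of viewing commands, from which the client reconstructs the frames. This is a minimal illustration only; the command names, regions and durations are assumptions and are not taken from any actual software package.

```python
from dataclasses import dataclass

@dataclass
class Command:
    action: str        # "show", "zoom" or "pan" (illustrative actions)
    region: tuple      # (x, y, width, height) within the source image
    duration_s: float  # time over which the command is played

# The whole animation fits in one image plus a few bytes of commands.
animation = {
    "image": "landscape.jpg",
    "commands": [
        Command("show", (0, 0, 4000, 1000), 2.0),    # complete panorama
        Command("zoom", (500, 200, 800, 600), 3.0),  # zoom in on a detail
        Command("pan",  (2600, 300, 800, 600), 3.0), # move to another detail
    ],
}

total_duration = sum(c.duration_s for c in animation["commands"])
print(total_duration)  # 8.0
```

The compactness of such a description explains why the data loaded can be smaller than a conventional film, while the single large image must still be received before playback can begin.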
These two techniques each have their drawbacks, in particular for broadcasting these creations over the Internet. In the first case, the file generated is of large size, which requires a large bandwidth in order to be able to display the animation correctly. Thus a user having only a modem will not be able to see the animation at the correct quality. In the second case, before the animation can be played, the entire image must be loaded. Thus, even if the total size of the data loaded is less than in the first case, a significant time may elapse before the user can begin displaying the animation.
There is also known, through the published American patent application US-A-2002/0118224, a system for dynamically displaying images through a network. The client system comprises a subsystem for requesting data packets and a subsystem for displaying data packets. The subsystem for requesting data packets places each request in a queue and transfers any image data packet received to the data packet display subsystem. Thus the system described makes it possible to download an image in small pieces. However, the aforementioned document gives no indication as to how the requests are to be created in order to obtain these image pieces, nor as to how they are to be ordered. This is because the system described aims not to present a predefined animation to a user but to transmit the image pieces according to the user's requests for particular effects: this system is intended in particular for the display of high-resolution geographical maps, of which the user sees only a part at a given moment.
In the same connection, the JPIP protocol has been developed in order to access image content in the JPEG2000 standard from a client terminal, as described for instance in “Architecture, Philosophy and Performance of JPIP: Internet Protocol Standard for JPEG2000”, by D. Taubman and R. Prandolini in Visual Communications and Image Processing 2003, Proceedings of SPIE Vol. 5150. This article notably explains how the server may optimise its response to a given request received from the client terminal, that request being determined on the basis of the terminal user's selection.
On the other hand, techniques have already been proposed for creating a downloadable file containing an animation which may be played on a computer, for example the technique known as “Flash” developed by the company Macromedia. Such an animation typically includes one or more images in the JPEG2000 format.
According to techniques of this type, which use keyframes and interpolation data to define the images between two keyframes, the display quality of the animation directly depends on the number of keyframes. Thus, the more keyframes there are, the smaller the number of interpolations and consequently the higher the visual quality of the animation. However, the more keyframes there are to transmit, the higher the transmission rate and the higher the calculation cost for processing the keyframes.
It is therefore necessary to find a compromise between the quality of the animation (i.e. the quality of images forming the animation) on the one hand, and the calculation capacities and bandwidth of the network on the other hand.
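The keyframe-and-interpolation principle described above can be sketched as follows: each intermediate frame is computed from the two surrounding keyframes, so fewer keyframes means more interpolated (hence lower-quality) frames. The view rectangles and the linear interpolation used here are illustrative assumptions.

```python
def interpolate(key_a, key_b, t):
    """Linearly interpolate two view rectangles (x, y, w, h), 0 <= t <= 1."""
    return tuple(a + t * (b - a) for a, b in zip(key_a, key_b))

key0 = (0.0, 0.0, 4000.0, 1000.0)    # keyframe: complete panorama
key1 = (500.0, 200.0, 800.0, 600.0)  # keyframe: zoom on a detail

# Generate three intermediate frames between the two keyframes.
frames = [interpolate(key0, key1, i / 4) for i in range(1, 4)]
print(frames[1])  # midpoint: (250.0, 100.0, 2400.0, 800.0)
```

Transmitting additional keyframes between `key0` and `key1` would replace some of these computed frames with exact ones, illustrating the quality/bandwidth compromise at issue.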
In document U.S. Pat. No. 6,081,278, two versions of an animation are stored on a server. One of the versions comprises a higher number of keyframes than the other.
When a client terminal requests the animation, it sends its features to the server which determines which version is the best adapted before sending that version.
If many client terminals request an animation simultaneously, the calculation workload linked to seeking the version adapted to each client terminal may become too great for the server.
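The server-side selection described in this document can be sketched as follows: several pre-stored versions of the animation differ in their number of keyframes, and the server picks one according to the features sent by the client. The thresholds and field names are illustrative assumptions, not taken from the patent.

```python
# Pre-stored versions, from poorest to richest (illustrative figures).
VERSIONS = [
    {"keyframes": 10,  "min_bandwidth_kbps": 56},    # modem client
    {"keyframes": 50,  "min_bandwidth_kbps": 512},   # DSL client
    {"keyframes": 200, "min_bandwidth_kbps": 2048},  # LAN client
]

def select_version(client_bandwidth_kbps):
    """Return the richest version the client can receive.  This runs once
    per request, which is why many simultaneous clients can overload the
    server."""
    best = VERSIONS[0]
    for v in VERSIONS:
        if client_bandwidth_kbps >= v["min_bandwidth_kbps"]:
            best = v
    return best

print(select_version(600)["keyframes"])  # 50
```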
Another solution of this kind is also described in U.S. Pat. No. 6,476,802 with similar drawbacks.
Document U.S. Pat. No. 6,442,603 deals with the case of a document containing several types of content such as text, audio, image, and video. As in the preceding document, when a client terminal requests the animation, it sends its characteristics to the server. The latter determines what types of data are compatible with the client terminal.
This system has the same drawback as that of the preceding document. Furthermore, it does not enable the quality to be adjusted within the same type of data.
The document entitled “Low complexity video coding for receiver-driven layered multicast”, by S. McCanne, V. Jacobson and M. Vetterli, which appeared in IEEE Journal on Selected Areas in Communications, Vol. 15, No. 6, August 1997, pages 983-1001, proposes a system in which the server has no active role in the choice of the level of quality of the data sent to a client terminal.
In this system, the server sends several levels of hierarchy of a video stream to separate multipoint addresses. A client terminal subscribes to a certain number of levels. It begins by subscribing to the lowest hierarchical level then attempts to subscribe to the higher levels so long as it experiences no loss.
There is no overload of the server. On the other hand, the traffic related to the management of the multipoint tree structures is high. Furthermore, it is necessary for there to be multipoint-compatible routers in the transmission networks; at the present time, very few of these routers exist on the Internet. Finally, the time for convergence towards the optimum level of quality is high, which is hardly compatible with animations of the Flash type.
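The receiver-driven adaptation described above can be sketched as follows: the client starts at the lowest hierarchical level and subscribes to one more level after each interval without loss, backing off when loss is observed. The probe intervals and loss observations are simulated; all names are illustrative.

```python
def adapt_subscription(loss_observed, max_level):
    """Return the sequence of subscribed levels for a run of per-interval
    loss observations (one boolean per probe interval)."""
    level = 1            # begin with the lowest hierarchical level
    history = [level]
    for lost in loss_observed:
        if lost:
            level = max(1, level - 1)  # unsubscribe a level on loss
        elif level < max_level:
            level += 1                 # try the next higher level
        history.append(level)
    return history

# Three loss-free intervals, then a loss: gradual climb, then back-off.
print(adapt_subscription([False, False, False, True], max_level=4))
# [1, 2, 3, 4, 3]
```

The step-by-step climb visible in the output is precisely the slow convergence towards the optimum quality level criticised above.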
Document U.S. Pat. No. 6,185,625 proposes a method in which the user of the client terminal sets the quality parameters which will be used to play the animation.
This information is transmitted to the server, and the animation is encoded in a specific manner depending on the parameters set by the user. The encoded animation is next transmitted to the client terminal.
This method implies a specific encoding of the animation before its transmission. This may delay the transmission.
Furthermore, the user of the client terminal must have technical knowledge to set the quality parameters appropriately.
In document U.S. Pat. No. 6,442,658, an animation is considered as a set of segments. The segments are classified according to their probability of being played after the current segment and also according to an estimated transmission cost.
Thus, pre-loading is carried out depending on the probability and the cost. The available memory of the client terminals may, furthermore, be taken into account.
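The pre-loading strategy described above can be sketched as follows: the segments that may follow the current one are ranked by playback probability against estimated transmission cost, and pre-loaded in that order while the client's memory allows. The scoring formula and the figures are illustrative assumptions, not taken from the patent.

```python
# Candidate next segments (illustrative probabilities and costs).
candidate_segments = [
    {"id": "A", "probability": 0.6, "cost_kb": 300},
    {"id": "B", "probability": 0.3, "cost_kb": 50},
    {"id": "C", "probability": 0.1, "cost_kb": 400},
]

def preload_order(segments, memory_kb):
    """Rank segments by probability per unit of cost, then keep those
    fitting in the client terminal's available memory."""
    ranked = sorted(segments,
                    key=lambda s: s["probability"] / s["cost_kb"],
                    reverse=True)
    chosen, used = [], 0
    for s in ranked:
        if used + s["cost_kb"] <= memory_kb:
            chosen.append(s["id"])
            used += s["cost_kb"]
    return chosen

print(preload_order(candidate_segments, memory_kb=400))  # ['B', 'A']
```

Note that this ordering only decides *which* data to send first; it says nothing about the quality at which each segment is encoded, which is the gap pointed out below.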
The transmission time of the animation is reduced. However, this document does not propose a compromise between the quality of the animation on the one hand, and the calculation capacities and bandwidth of the network on the other hand.
The user guide for Flash MX entitled “Flash MX”, published by Dunod, describes the “ActionScript” language which enables a user to dynamically interact with an animation.
The control which is thereby rendered possible is independent of the capacities of the client terminals or the bandwidth.