FLUTE is a protocol developed under the auspices of the Internet Engineering Task Force (IETF) for the unidirectional delivery of files over the Internet. The protocol is particularly suited to multicast networks, although the techniques are equally applicable to unicast addressing. The FLUTE specification builds on Asynchronous Layered Coding (ALC), the base protocol designed for massively scalable multicast distribution. ALC defines the transport of arbitrary binary objects, and is laid out in Luby, M., Gemmell, J., Vicisano, L., Rizzo, L. and J. Crowcroft, “Asynchronous Layered Coding (ALC) Protocol Instantiation”, RFC 3450, December 2002. For file delivery applications, the mere transport of objects is not enough: the end systems need to know what the objects actually represent. FLUTE therefore provides a mechanism for signalling and mapping the properties of files to the concepts of ALC, in a way that allows receivers to assign those properties to received objects. In FLUTE, a ‘file’ corresponds to an ‘object’ as discussed in the above-mentioned ALC specification.
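The mapping mechanism referred to above is realised in FLUTE as an XML table that binds file properties (such as a URI, content type and length) to ALC transport object identifiers. The following is a hedged sketch only: the element and attribute names follow the File Delivery Table format of the FLUTE specification, while the particular file entries and values shown are purely illustrative.

```python
import xml.etree.ElementTree as ET

def build_fdt_instance(files, expires="331129600"):
    """Build a minimal FDT-like XML element mapping file properties to
    ALC transport object identifiers (TOIs). Illustrative sketch only."""
    fdt = ET.Element("FDT-Instance", {"Expires": expires})
    for f in files:
        ET.SubElement(fdt, "File", {
            "TOI": str(f["toi"]),              # ALC transport object identifier
            "Content-Location": f["uri"],      # the file's name/URI
            "Content-Type": f.get("type", "application/octet-stream"),
            "Content-Length": str(f["length"]),
        })
    return fdt

# Hypothetical example: one large software-update file carried as object 1.
fdt = build_fdt_instance([
    {"toi": 1, "uri": "http://example.com/update.bin",
     "type": "application/octet-stream", "length": 1048576},
])
xml_text = ET.tostring(fdt, encoding="unicode")
```

A receiver that has parsed such a table can recognise an incoming ALC object by its TOI and assign the corresponding file name, type and length to the received data.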
In a FLUTE file delivery session, there is a sender, which sends the session, and a number of receivers, which receive the session. A receiver may join a session at an arbitrary time. The session delivers one or more abstract objects, such as files. The number of files may vary. Any file may be sent using more than one packet. Any packet sent in the session may be lost.
FLUTE has the potential to be used for the delivery of files of any kind and any size. FLUTE is applicable to the delivery of files to many hosts, using delivery sessions of several seconds or more. For instance, FLUTE could be used for the delivery of large software updates to many hosts simultaneously. It could also be used for continuous, but segmented, data such as time-lined text for subtitling, thereby using the layering nature inherited from ALC and LCT to scale the richness of the session to the congestion status of the network. It is also suitable for the basic transport of metadata, for example SDP files which enable user applications to access multimedia sessions. It can be used with radio broadcast systems, and is expected to find particular use in relation to IPDC (Internet Protocol Datacast) over DVB-H (Digital Video Broadcast—Handheld), for which standards are currently being developed.
A programming language for choreographing multimedia presentations, in which audio, video, text and/or graphics can be combined in real time, has been developed. The language, called Synchronised Multimedia Integration Language (SMIL, pronounced in the same way as ‘smile’), is documented at www.w3c.org/audiovideo. SMIL allows a presentation to be composed from several components that are accessible from URLs, such as files stored on a web server. The begin and end times of the components of a presentation are specified relative to events in other media components. For example, in a slide show, a particular slide (a graphic component) is displayed when a narrator in an audio component begins to discuss it.
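The slide-show example above can be sketched as a minimal SMIL fragment in which the display of a graphic is timed relative to an audio component. This is an illustrative sketch only: the element names and timing syntax follow SMIL, but the file names, identifiers and timing values are hypothetical.

```python
import xml.etree.ElementTree as ET

# Build a minimal SMIL-like document: an audio narration and a slide
# graphic played in parallel, with the slide's begin time expressed
# relative to an event (the start) of the audio component.
smil = ET.Element("smil")
body = ET.SubElement(smil, "body")
par = ET.SubElement(body, "par")  # children of <par> play in parallel
ET.SubElement(par, "audio", {"src": "http://example.com/narration.au",
                             "id": "narr"})
# The slide appears 5 s after the narration begins and is shown for 10 s.
ET.SubElement(par, "img", {"src": "http://example.com/slide1.png",
                           "begin": "narr.begin+5s", "dur": "10s"})
doc = ET.tostring(smil, encoding="unicode")
```

The point of interest is the `begin="narr.begin+5s"` attribute: the graphic component's timeline is declared relative to an event in another media component, rather than as an absolute time, which is the synchronisation model described above.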
The inventor has considered the possibility of using a file delivery protocol such as FLUTE for the remote provision of multimedia presentations, such as those written in SMIL. However, combining file delivery and multimedia presentation technologies is not a straightforward matter. The present invention seeks to provide solutions to problems encountered in the combination of FLUTE and SMIL technologies, but also has broader applicability, in so far as it can relate to any file delivery and need not be limited to SMIL presentations.