To hold a meeting among participants not located in the same area, a number of technological systems are available. These include videoconferencing, web conferencing and audio conferencing.
The most realistic substitute for in-person meetings is a high-end videoconferencing system. Conventional videoconferencing systems comprise a number of end-points communicating real-time video, audio and/or data streams over WAN, LAN and/or circuit-switched networks. Each end-point includes one or more monitors, cameras, microphones and/or data capture devices, and a codec, which encodes outgoing streams and decodes incoming streams. In addition, a centralized unit, known as a Multipoint Control Unit (MCU), is needed to link the multiple end-points together. The MCU performs this linking by receiving the multimedia signals (audio, video and/or data) from the end-point terminals over point-to-point connections, processing the received signals, and retransmitting the processed signals to selected end-point terminals in the conference.
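The MCU's linking role described above can be sketched as a simple receive-process-retransmit loop. This is a purely illustrative model, assuming toy classes (Endpoint, Mcu) and string "streams" in place of real encoded media; it is not an actual MCU API.

```python
# Hypothetical sketch of an MCU: it receives media from each endpoint over a
# point-to-point connection, processes (here: merely combines) the signals,
# and retransmits the processed result to the endpoints in the conference.

class Endpoint:
    def __init__(self, name):
        self.name = name
        self.received = []

    def send(self):
        # In a real system this would be an encoded audio/video/data stream.
        return f"stream from {self.name}"

    def receive(self, mixed):
        self.received.append(mixed)


class Mcu:
    def __init__(self):
        self.endpoints = []

    def connect(self, endpoint):
        # Each endpoint has its own point-to-point connection to the MCU.
        self.endpoints.append(endpoint)

    def relay(self):
        # Receive from every endpoint, process, and retransmit to all.
        streams = [ep.send() for ep in self.endpoints]
        mixed = " + ".join(streams)
        for ep in self.endpoints:
            ep.receive(mixed)


a, b, c = Endpoint("A"), Endpoint("B"), Endpoint("C")
mcu = Mcu()
for ep in (a, b, c):
    mcu.connect(ep)
mcu.relay()
```

A real MCU would mix audio, compose video layouts and transcode between formats rather than concatenate strings; the sketch only captures the star topology of point-to-point links.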
By using a videoconferencing system, a PowerPoint presentation or any other PC presentation can be shared while all the other participants can still be seen and heard.
Another common way of presenting multimedia content is to stream data to computers through a web interface. The data stream may be transmitted in real time, or played back from archived content through a content server. Conventional streaming data is adapted for storage and distribution, and the multimedia content is therefore represented in a different format than for videoconferencing. Hence, to allow a conventional videoconference to be streamed and archived, a system for converting the multimedia data is needed. One example of such a system is described in the following.
A content server (CS) is preferably provided with a network interface for connecting the server to a computer network, audio/video and presentation data interfaces for receiving conference content, a file conversion engine for converting presentation content into a standard image format for distribution, and a stream encoder for encoding the content into a streaming format for distribution. The CS is further equipped with a stream server for transmitting the encoded audio/video content and a web server for transmitting web pages and converted presentation content to terminals located at nodes of the network. The CS is also adapted to create an archive file comprising the encoded stream data, residing on local storage media or in a server/database, to enable later on-demand distribution to requesters at remote terminals over the computer network.
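The conversion and archiving steps performed by the CS can be sketched as a small pipeline. All function names, formats and file identifiers below are assumptions for illustration; the actual CS components are only described functionally above.

```python
# Illustrative sketch of the content server's processing chain: a file
# conversion engine turns presentation content into standard images, a
# stream encoder produces streaming-format output, and the result is
# stored as an identifiable archive file for on-demand distribution.

def convert_presentation(slides):
    # Convert each presentation slide into a standard image format
    # (".jpg" is an assumed example format).
    return [f"{slide}.jpg" for slide in slides]

def encode_stream(av_frames, fmt="streaming-format"):
    # Encode the raw audio/video content into a streaming format.
    return {"format": fmt, "frames": list(av_frames)}

def archive(encoded, storage):
    # Store the encoded stream as an identifiable file in local storage
    # or a database, enabling later on-demand playback.
    file_id = f"conference-{len(storage) + 1}"
    storage[file_id] = encoded
    return file_id


storage = {}
images = convert_presentation(["slide1", "slide2"])
encoded = encode_stream(["frame1", "frame2"])
file_id = archive(encoded, storage)
```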
According to a typical mode of operation, the conference is initiated by including the CS as a participant in the conference. The CS accepts or places H.323 video calls as point-to-point (only one H.323 system in the call, typically used to record training material from a single instructor) or multipoint (two or more H.323 systems in the call, typically used to stream or archive meetings). A viewer at a remote terminal can access a conference by directing a conventional web browser to a URL (Uniform Resource Locator) associated with the distribution device. After completion of validation data interchanges between the viewer and the distribution device, the viewer is able to view the personal interchange, i.e. the conversation and associated behaviour, occurring between the participants at the conference presenter site, as well as view the presentation content being presented at the conference site. The multimedia content is viewed in a multiple-window user interface through the viewer's web browser, with the audio/video content presented by a streaming media player and the presentation content displayed in a separate window. When requested by the head of the conference or by the conference management system, encoded stream data is stored in a server as an identifiable file.
The Content Server is based on a line (or port) structure, meaning that each CS has one or more lines, each functioning as either a transcoding line or an archiving line. Each of these lines is assigned a specific recording/streaming template as defined by the system administrator. The content of a template determines how output from a videoconference is handled by the CS to produce the desired output. The template defines different settings for the call, e.g.:
- which codec or combination of codecs is needed (Windows Media, Real Media, QuickTime, etc.);
- bandwidth/streaming rates (56K, 256K, 384K, etc.);
- video, audio or both;
- still images or not;
- dual-stream video and slides;
- picture-in-picture presentation layout;
- encryption on/off;
- password on/off;
- etc.
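The settings enumerated above can be modelled as a simple data record. The field names and default values below are illustrative assumptions, not the CS's actual template schema.

```python
# A minimal sketch of a recording/streaming template holding the kinds of
# settings listed above. One such template is attached to each CS line.

from dataclasses import dataclass, field

@dataclass
class Template:
    codecs: list = field(default_factory=lambda: ["Windows Media"])
    bandwidth_kbps: int = 256          # e.g. 56, 256, 384
    media: str = "audio+video"         # "audio", "video" or "audio+video"
    still_images: bool = False
    dual_stream: bool = True           # video plus slides (e.g. H.239)
    layout: str = "picture-in-picture"
    encryption: bool = False
    password: bool = False

default_template = Template()
```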
Every call made to or from a line is recorded/streamed using this template. Any change to the template used on a line is a system-wide change, and all conferences from the time of the change will use the new template.
However, this method of attaching a template to a line in a one-to-one relationship is very limiting, since all users must use the currently selected template or change the template before making a call. Therefore, a more dynamic selection of recording/streaming templates is needed.
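The limitation of the one-to-one coupling can be made concrete with a short sketch. The class and template names are illustrative; only the coupling itself is taken from the description above.

```python
# Sketch of the one-to-one line/template relationship criticized above:
# every call on a line uses whatever template is currently attached, so
# changing it for one user changes it for everybody, system-wide.

class Line:
    def __init__(self, template):
        self.template = template  # exactly one template per line

    def record_call(self, caller):
        # Every call made to or from this line uses the line's template.
        return (caller, self.template)


line = Line(template="high-bandwidth")
first = line.record_call("alice")

# A second user who wants different settings must change the template
# for the whole line, affecting all conferences from that point on.
line.template = "low-bandwidth"
second = line.record_call("bob")
third = line.record_call("alice")  # alice now gets bob's template too
```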
One prior art recording and streaming system allows a user to own a “line”, meaning that an alias refers to a specific line on the recording system, and incoming calls get the functionality of that line. For outgoing calls, the same user can use a “call template” (or address book entry) in which further information can be defined, such as the call description, basic call options and some security options. However, these settings have to be made for every endpoint added as an address book entry and are somewhat limited. This model has very little flexibility and requires much duplication of effort when multiple users are to follow the same configuration.
Another prior art recording and streaming system stores parameters for recordings based on an endpoint identifier, which is either the IP address or the endpoint's alias. However, the parameters that can be set are limited to: display name, preferred video size, preferred bandwidth to and from the endpoint, and H.239 video contribution. Further settings for recordings are made on a site-wide basis, which is very inflexible. If your recording parameters are associated with your desktop endpoint, you will not be able to record a call with your preferred parameters when you make a call from a meeting room endpoint.
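The endpoint-keyed scheme just described can be sketched as a lookup table keyed by endpoint identifier. The parameter names follow the list above; the identifiers, values and site defaults are illustrative assumptions.

```python
# Sketch of the prior-art scheme: recording parameters are keyed by an
# endpoint identifier (IP address or alias), so the same user calling
# from a different endpoint loses their preferred settings and falls
# back to the inflexible site-wide defaults.

recording_params = {
    "desktop.example.com": {          # the user's own desktop endpoint
        "display_name": "Alice",
        "video_size": "CIF",
        "bandwidth_to_kbps": 384,
        "bandwidth_from_kbps": 384,
        "h239_contribution": True,
    },
}

SITE_DEFAULTS = {
    "display_name": "Guest",
    "video_size": "QCIF",
    "bandwidth_to_kbps": 128,
    "bandwidth_from_kbps": 128,
    "h239_contribution": False,
}

def params_for(endpoint_id):
    # Parameters are tied to the endpoint, not the user: an unknown
    # endpoint (e.g. a meeting room system) gets only the site defaults.
    return recording_params.get(endpoint_id, SITE_DEFAULTS)


desk = params_for("desktop.example.com")
room = params_for("meetingroom.example.com")  # Alice's preferences are lost
```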