In the IETF (Internet Engineering Task Force) and the W3C (World Wide Web Consortium) there is an ongoing effort aiming to enable support for conversational audio, video and data in a so-called “web browser”. Within this effort, WebRTC (Web Real-Time Communication) is an API definition being drafted by the W3C to enable browser-to-browser applications for voice calling, video chat and P2P (Peer-to-Peer) data sharing without plugins. That is, WebRTC may be used by a web application in order to provide conversational services to users. By use of WebRTC and the associated APIs (Application Programming Interfaces), a web browser is able to send and receive e.g. RTP (Real-time Transport Protocol) and SCTP (Stream Control Transmission Protocol) data, to obtain input samples from microphones and cameras connected to a user device, and to render media, such as audio and video. A web browser which is to provide conversational services by use of WebRTC is controlled by a web (HTML) page via JavaScript APIs and by HTML elements for rendering.
Examples of defined APIs associated with WebRTC are:
1. navigator.getUserMedia() (defined in [1]): gives a web application, after user consent, access to media generating devices such as microphones and cameras.
2. RTCPeerConnection (defined in [2]): an object that enables the web application to stream data from media generating devices to a peer.
3. RTCSessionDescription (defined in [2]): objects that are used to control the RTCPeerConnection objects.
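As a minimal sketch of the first of these APIs, the function below requests microphone and camera access. It assumes a browser environment; the legacy callback-based navigator.getUserMedia() named in the text is tried first, with the newer promise-based navigator.mediaDevices.getUserMedia() as a fallback.

```javascript
// Hedged sketch: requesting microphone/camera access. The user is shown a
// consent prompt; onStream fires only after the user grants access.
function requestUserMedia(onStream, onError) {
  // Ask for both audio (microphone) and video (camera).
  const constraints = { audio: true, video: true };
  if (typeof navigator !== 'undefined' && navigator.getUserMedia) {
    // Legacy, callback-based form named in the text.
    navigator.getUserMedia(constraints, onStream, onError);
  } else if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
    // Newer promise-based form.
    navigator.mediaDevices.getUserMedia(constraints).then(onStream, onError);
  } else {
    // Not running in a browser: report the absence of the API.
    onError(new Error('getUserMedia is not available in this environment'));
  }
}
```

The resulting media stream can then be handed to an RTCPeerConnection, as described in the steps below.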
The default use of the APIs described above in a web page or web application will be described below. When an end user has initiated a communication session in a web application associated with a communication service, the following is performed:
1. Using navigator.getUserMedia() to obtain permission (active consent) from the user to use the microphone and the camera of the user's device.
2. Creating an RTCPeerConnection (PC) object to handle a connection to a remote peer, and the transmission/reception of audio and video to/from the remote peer.
3. Instructing the PC to use the audio and video data created by the microphone and camera, accessed with user consent via navigator.getUserMedia(), as the source for media to be sent to the peer.
4. Instructing the PC object to create, and use, an RTCSessionDescription object that describes the intended session.
5. Signaling the RTCSessionDescription data to the remote peer, which in its turn uses navigator.getUserMedia() to obtain user consent, creates a PC, instructs the PC to use audio/video from its microphone/camera, applies the received RTCSessionDescription, and generates a new RTCSessionDescription in response.
6. Receiving the RTCSessionDescription from the remote peer and applying it locally.
7. Starting the session, in which audio and video can flow between the peers.
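The caller's side of the steps above can be sketched as follows. This is a sketch under assumptions, not a definitive implementation: the `signaling` object is assumed to be any application-provided channel (e.g. a WebSocket) carrying the RTCSessionDescription data to the remote peer, and the promise-based variants of the APIs are used.

```javascript
// Hedged sketch of the offer/answer flow (caller's side), assuming a
// browser environment and an application-defined `signaling` channel.
async function startCall(signaling) {
  // Steps 1 and 3: obtain user consent for microphone/camera and use the
  // resulting tracks as the media source for the peer connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const pc = new RTCPeerConnection();                       // step 2
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Step 4: create an RTCSessionDescription (the offer) describing the
  // intended session, and apply it locally.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Step 5: signal the description to the remote peer.
  signaling.send(JSON.stringify({ description: pc.localDescription }));

  // Step 6: apply the peer's answering RTCSessionDescription when it
  // arrives; after that (step 7) media can flow between the peers.
  signaling.onmessage = async (msg) => {
    const { description } = JSON.parse(msg.data);
    await pc.setRemoteDescription(new RTCSessionDescription(description));
  };
  return pc;
}
```

The remote peer performs the mirror-image steps, applying the received offer with setRemoteDescription() and answering with createAnswer().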
In addition to these APIs having the functions described above, extensions have recently been proposed (see [3]). One of the extensions enables a web application to create unattached media stream tracks, i.e. tracks for which there is no real source delivering data. Such an unattached media stream track could be made “real”, e.g. in order to be the source for transmission of real media data, by using the navigator.getUserMedia() API to connect a media generating device to the unattached stream track, which then ceases to be an unattached media stream track.
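The text does not give the exact API surface proposed in [3], so the sketch below is purely illustrative: both `createUnattachedTrack()` and `attachSource()` are hypothetical names standing in for whatever constructor and attachment mechanism the extension defines.

```javascript
// Illustrative sketch only. createUnattachedTrack() and attachSource() are
// HYPOTHETICAL stand-ins for the extension API proposed in [3]; they are
// not real browser APIs.
function attachRealSource(unattachedTrack, onError) {
  // Make the track "real": ask for user consent and connect a media
  // generating device (here the camera) to the formerly unattached track.
  navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
    // Hypothetical: the extension would let the device-backed source feed
    // the existing track object, which then ceases to be unattached.
    unattachedTrack.attachSource(stream.getVideoTracks()[0]);
  }, onError);
}
```

The point of the extension is ordering: the unattached track can be negotiated and wired into an RTCPeerConnection before any device access is granted, with the real source connected later.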