In the field of video and audio data transmission, many solutions exist to handle the loss of information during delivery. Typical delivery problems are addressed today by retransmitting data or by re-synchronizing the video and audio streams. Buffering at the receiver is also used: a small playback delay allows the receiver to tolerate some data loss and slow delivery. It has further been conceived that, by splitting a video transmission across multiple paths, a simultaneous transmission failure on all paths becomes much less likely. If each path carries enough data on its own to reconstruct a video transmission, there will generally always be data available to display video information; when all paths are fully working, the video quality increases. These traditional methods continue to be used today when transferring data over networks of all kinds.
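The multipath scheme described above can be sketched as a simple temporal interleaving: each path carries every n-th frame, so any single surviving path still yields a reduced-frame-rate but playable stream, and all paths together restore full quality. This is a minimal illustrative model only; the frame representation and function names are invented for illustration and do not reflect any particular implementation.

```python
def split_frames(frames, num_paths):
    """Interleave (index, payload) frames across paths.

    Path i carries frames i, i + num_paths, i + 2*num_paths, ...
    Any single surviving path alone yields a playable stream at
    1/num_paths of the original frame rate.
    """
    return [frames[i::num_paths] for i in range(num_paths)]


def merge_paths(paths):
    """Merge whatever arrived (failed paths are None), restoring
    temporal order by frame index."""
    received = [f for p in paths if p is not None for f in p]
    return sorted(received, key=lambda f: f[0])


# Simulate one of three paths failing: the merged stream still spans
# the whole timeline, at two-thirds of the original frame rate.
frames = [(i, f"frame-{i}") for i in range(9)]
paths = split_frames(frames, 3)
paths[1] = None  # transmission failure on path 1
merged = merge_paths(paths)
```

When all paths arrive intact, `merge_paths` reproduces the full-rate stream, matching the observation that video quality increases as more paths deliver successfully.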
In a closely related area, recording and transmitting live video from a mobile device requires solving two distinct problems: a fixed hardware encoding rate, and a recording delay caused by limited device resources.
For example, the hardware-based H.264 encoder used in an iPhone™ has a fixed encoding rate: once an encoding session has started, the application cannot change the session parameters (encoding rate, frame rate, etc.) until the session is stopped and restarted. Yet if conventional wireless mobile technologies are used for audio or video transmission, variations in bandwidth (both bit rate and latency) must be accommodated; otherwise frames are dropped by the receiver, resulting in scrambled output (a lost video signal or black screen).
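Because the session parameters are fixed for the life of an encoding session, the only way to adapt the rate to a bandwidth change is to stop the session and start a new one with different parameters. The following is a minimal sketch of that stop-and-restart pattern; the `HardwareEncoderSession` class, the `adapt_bitrate` function, and the 80% headroom factor are all invented for illustration and do not represent any actual encoder API.

```python
class HardwareEncoderSession:
    """Toy model of a hardware encoder whose parameters are fixed
    per session (as with the fixed-rate hardware H.264 encoder
    described above). Hypothetical class, for illustration only."""

    def __init__(self, bitrate_kbps, fps):
        self.bitrate_kbps = bitrate_kbps  # fixed once the session starts
        self.fps = fps                    # likewise fixed per session
        self.running = True

    def stop(self):
        self.running = False


def adapt_bitrate(session, measured_bandwidth_kbps, headroom=0.8):
    """React to a bandwidth measurement.

    Parameters cannot be changed mid-session, so if the measured
    bandwidth can no longer sustain the current rate, the session is
    stopped and a new one is started at a lower, sustainable rate.
    """
    target = int(measured_bandwidth_kbps * headroom)
    if target >= session.bitrate_kbps:
        return session  # current session is sustainable; keep it running
    session.stop()
    return HardwareEncoderSession(target, session.fps)


# Bandwidth drops from ample to 1000 kbps: the 2000 kbps session must
# be torn down and replaced with an 800 kbps session.
session = HardwareEncoderSession(bitrate_kbps=2000, fps=30)
session = adapt_bitrate(session, measured_bandwidth_kbps=1000)
```

The restart is not free: frames produced between the bandwidth drop and the new session's first keyframe may still be dropped by the receiver, which is precisely the problem the variations in bandwidth create.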