There is an intensive research and development effort ongoing to make slim hosts, e.g., PDAs, set-top boxes, and personal communicators, Internet-aware. Such hosts have limited processing capability but are being used to support increasingly sophisticated applications. Such slim hosts are herein referred to as terminals, to distinguish them from hosts such as PCs or workstations having greater processing capability and memory. The next generation of terminals is expected to support continuous-media ("CM") applications such as audio and video streams, along with other traditional applications. Currently such CM applications are supported on powerful hosts, with some of these applications implemented and delivered using the Real-Time Transport Protocol (RTP) transport framework, such as described in H. Schulzrinne, S. Casner, R. Frederick and V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications", RFC 1889, Audio-Video Transport Working Group, January 1996, the contents of which are incorporated by reference herein. It is expected that such adaptive CM applications will eventually be deployed on terminals. Under such a scenario, the processing delay on end terminals is expected to become a significant part of the end-to-end QoS. Processing delay on slim terminals is thus an important QoS measure.
Computing the processing delay presents several challenges:
(1) Applications can exhibit run-time, data-dependent behavior. That is, the processing demanded by applications such as audio-video encoding and decoding, web-based processing, etc., can depend on the run-time behavior of the applications and is thus "dynamic". The processing demands of such applications are not known "deterministically" at compile time.
(2) Applications such as audio and video are being enhanced to "adapt" to rapidly changing network conditions so that they do not congest the network. For instance, with reference to FIG. 1 illustrating terminals 20a, b in an interconnected network 10, applications can use RTP-based adaptation algorithms 15a, b driven by end-to-end QoS measures such as packet loss. In such algorithms, the adaptation algorithm moves the application to a lower adaptation level when the packet loss in the network rises above certain threshold levels. A lower adaptation level typically corresponds to a reduced bit rate and hence lower network packet traffic. In general, changing adaptation levels changes the processing demand on the terminals 20a, b. The reference to J.-C. Bolot and A. Vega-Garcia entitled "Control Mechanisms for Packet Audio in the Internet", Proc. IEEE Infocom '96, San Francisco, Calif., April 1996, pp. 232-9, describes an example illustrating how the CPU execution time of a packet audio application increases at lower adaptation levels. For instance, LPC encoding (lower adaptation level) in an audio application incurs approximately 110 times the execution time of PCM encoding (higher adaptation level). In summary, since the adaptations of such applications continually fluctuate based on network conditions, the processing demands imposed by an application on the terminal also fluctuate accordingly.
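The threshold-driven adaptation described above can be sketched as follows. This is a minimal illustrative sketch, not the algorithm of the cited Bolot/Vega-Garcia work: the loss thresholds, level table, bit rates, and relative CPU costs are assumed values chosen only to reflect the qualitative behavior described (lower adaptation level, lower bit rate, higher processing demand, with LPC roughly 110 times the cost of PCM).

```python
# Adaptation levels ordered from highest (cheapest to encode) to lowest
# (most compressed, most CPU-intensive). All numbers are illustrative.
LEVELS = [
    {"name": "PCM",   "bit_rate_kbps": 64.0, "relative_cpu": 1},
    {"name": "ADPCM", "bit_rate_kbps": 32.0, "relative_cpu": 10},
    {"name": "LPC",   "bit_rate_kbps": 4.8,  "relative_cpu": 110},
]

LOSS_HIGH = 0.10  # above this packet-loss rate, drop to a lower level
LOSS_LOW = 0.02   # below this rate, move back to a higher level

def adapt(level_index, packet_loss):
    """Return the new adaptation-level index given the observed packet loss."""
    if packet_loss > LOSS_HIGH and level_index < len(LEVELS) - 1:
        return level_index + 1  # congestion: lower level, less traffic, more CPU
    if packet_loss < LOSS_LOW and level_index > 0:
        return level_index - 1  # network recovered: higher level, less CPU
    return level_index

# Sustained loss drives the sender from PCM toward LPC, raising its
# processing demand on the terminal; recovery moves it back up.
level = 0
for loss in [0.01, 0.12, 0.15, 0.01]:
    level = adapt(level, loss)
```

The point of the sketch is that the application's CPU cost (`relative_cpu`) is a side effect of a decision driven purely by network state, which is why the terminal's processing demand fluctuates even when the application mix is fixed.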
(3) Multiple applications may run concurrently on a terminal. The total processing demand on the terminal as an end-system therefore also varies with the instantaneous number and type of applications running at any given time.
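Point (3) can be made concrete with a small sketch. The application names and per-period CPU demands below are hypothetical placeholders; the sketch only shows that the aggregate demand changes as the set of running applications changes.

```python
# Illustrative per-application CPU demand, in ms of processing per period.
DEMAND_MS_PER_PERIOD = {
    "audio_decode": 2.0,
    "video_decode": 15.0,
    "web_browse": 5.0,
}

def total_demand(running_apps):
    """Aggregate CPU demand (ms per period) of the currently running applications."""
    return sum(DEMAND_MS_PER_PERIOD[app] for app in running_apps)

# The terminal's total demand jumps as applications start and stop:
# total_demand(["audio_decode"])                  -> 2.0
# total_demand(["audio_decode", "video_decode"])  -> 17.0
```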
For the above three reasons, it is clear that processing delay is highly variable and that computing it is a challenging task. Currently, no systematic methods or apparatus exist for computing it within available frameworks. There is a critical need for a method and tool that can analytically quantify processing delay by capturing, in a unified modeling framework, the sources of variation described in the three points above.