The HTTP protocol allows a client and a server to communicate by exchanging requests and responses. Requests are usually sent by the client, while responses are returned by the server. HTTP requests and responses are transmitted over a TCP connection between the client and the server. Several versions of the HTTP protocol exist.
Early versions of HTTP are limited in how they use TCP connections: HTTP/1.0 requires a new TCP connection for each request-response pair, and although HTTP/1.1 (together with HTTP/1.0, also referred to as HTTP/1.x) introduced persistent connections, requests and responses on a given connection are still exchanged sequentially.
More recent versions allow a TCP connection to be reused for exchanging several request-response pairs concurrently. In the most recent version (HTTP/2, or its predecessor SPDY), a single TCP connection is used between a client and a server, such that all the requests and responses are exchanged through this single connection.
Moreover, HTTP/2 provides full multiplexing of requests and responses: the client can send parts of several requests in any order and, correspondingly, the server can respond to the requests in any order and can send parts of different responses in any order. Thus, there is no need to establish a new TCP connection for each new request and its corresponding response.
In practice, this multiplexing is implemented by means of streams. A stream is a bidirectional channel between a client and a server for exchanging a request-response pair. A stream is characterized by an identifier (stream ID), an integer value; in HTTP/2, client-initiated streams use odd identifiers while server-initiated streams use even ones. The client and the server exchange HTTP/2 frames (e.g. DATA frames or HEADERS frames) for managing streams and for transmitting data and resources over them. Each time a new request is sent by a client, a new stream is created. Such a request may be sent through one or more HTTP/2 frames, each of them carrying the corresponding stream ID. As a result, even though a single TCP connection is established, several HTTP/2 streams can be open concurrently. Generally, a stream is closed once the response associated with a given request has been received by the client that originated said request.
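As an illustration, the multiplexing described above can be sketched with a toy model in which each frame is reduced to a (stream ID, frame type, payload) tuple. Real HTTP/2 frames are binary and carry additional length and flag fields, so this is only a conceptual sketch; the paths and payloads are made up.

```python
# Toy illustration of HTTP/2-style stream multiplexing over one connection.
# Frames are simplified to (stream_id, frame_type, payload) tuples.

# Two concurrent client-initiated streams (odd IDs) share the connection;
# parts of different responses may arrive interleaved, in any order:
wire = [
    (1, "HEADERS", ":path=/index.html"),
    (3, "HEADERS", ":path=/style.css"),
    (3, "DATA", "body{...}"),
    (1, "DATA", "<html>..."),
    (1, "DATA", "</html>"),
]

def demultiplex(frames):
    """Reassemble each stream's DATA payload by its stream ID."""
    streams = {}
    for stream_id, frame_type, payload in frames:
        if frame_type == "DATA":
            streams.setdefault(stream_id, []).append(payload)
    return {sid: "".join(parts) for sid, parts in streams.items()}

print(demultiplex(wire))
# stream 1 reassembles to '<html>...</html>', stream 3 to 'body{...}'
```

Because every frame carries its stream ID, the receiver can reconstruct each response independently of the order in which the frames were interleaved on the wire.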
Web resources such as web pages or streaming contents generally contain links to other data or resources, which themselves may contain links to other resources. To fully load a web resource requested by a client, all the linked and sub-linked resources generally need to be retrieved by the client. This incremental discovery may lead to a slow loading of the web resource, especially on high latency networks such as mobile networks.
For that reason, servers implement push features (HTTP/2 Push) that allow sending unsolicited resources to clients. Thus, when receiving a request for a given web resource, the server sends the requested resource and, under certain circumstances, also pushes linked resources.
In practice, in order to push a resource to a client, a server must first indicate which resource it intends to push, for instance using PUSH_PROMISE frames. Next, the server can create a new stream in order to transmit said resource. One may note that the client has the possibility of rejecting pushed resources, for instance when the corresponding resources are already available in its cache, typically by closing the server-initiated stream used to transmit said pushed resource. Contrary to pushed resources themselves, the above-mentioned frames, such as PUSH_PROMISE frames, can only be sent on streams that have been initiated by clients.
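The client-side decision just described can be sketched as follows. This is a simplified model, not a real HTTP/2 implementation: the promise is reduced to a small dictionary, and the paths and stream numbers are illustrative.

```python
# Toy sketch of client-side handling of an HTTP/2 PUSH_PROMISE.
# The promise announces the server-initiated (even) stream on which the
# resource will be pushed; the client may reset that stream to reject it.

def handle_push_promise(promise, cache):
    """Accept or reject a promised resource based on the local cache."""
    promised_stream = promise["promised_stream"]
    if promise[":path"] in cache:
        # Resource already cached: reject the push by closing (resetting)
        # the announced server-initiated stream.
        return ("RST_STREAM", promised_stream)
    # Otherwise accept and wait for HEADERS/DATA frames on that stream.
    return ("ACCEPT", promised_stream)

promise = {":path": "/style.css", "promised_stream": 2}
print(handle_push_promise(promise, cache={"/style.css": "body{...}"}))
# ('RST_STREAM', 2)
print(handle_push_promise(promise, cache={}))
# ('ACCEPT', 2)
```

Note that the PUSH_PROMISE itself travels on the client-initiated stream; only the pushed resource uses the new server-initiated stream that the client may reset.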
In the exemplary case of a live broadcast (streaming) of a video resource (Dynamic Adaptive Streaming over HTTP, or DASH) requested by a client, such push features may reduce latency, since the server can push a new video segment as soon as it obtains said segment, and may also reduce the client-to-server traffic by decreasing as much as possible the need for the client to request each segment.
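A minimal sketch of such a live push loop is given below. The segment paths and the `push()` helper are hypothetical illustrations, not part of any standard: the point is only that the server pushes each segment as soon as it is obtained, without waiting for a client request.

```python
# Hypothetical sketch of a live-DASH push loop: each new media segment is
# pushed as soon as the server obtains it, with no per-segment request.

def push_live_segments(segments, push):
    """Push each segment in order, as soon as it becomes available."""
    for index, segment in enumerate(segments, start=1):
        push(f"/live/segment-{index}.m4s", segment)

# With a stub push() that merely records the pushed paths:
sent = []
push_live_segments([b"seg1", b"seg2", b"seg3"],
                   push=lambda path, data: sent.append(path))
print(sent)
# ['/live/segment-1.m4s', '/live/segment-2.m4s', '/live/segment-3.m4s']
```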
In this context, the usage of HTTP-based protocols to enable the pushing of resources by a DASH server has been discussed.
Solutions based on specific usages of HTTP/1.0 and HTTP/1.1 are known, but they are generally discarded due to their limitations in the context of DASH. Indeed, according to these protocols, partial responses sent by a server to requests from clients may be buffered by intermediaries between the server and the client, thus having an impact on live broadcasting.
In order to avoid such drawbacks of HTTP-based solutions, it has been proposed to use the so-called WebSocket protocol, which allows establishing a bidirectional channel between a client and a server. Consequently, once such a connection is available, the server can push any resource to the client even in the absence of a request for said resource. In addition, the WebSocket channel can also be used by the client, for instance to request a different quality for the pushed resources.
However, being based on WebSocket, this solution may not work properly with HTTP caching. For instance, a resource pushed to a client running in a Web Browser may not be available in said Web Browser's cache.
Also, in case several intermediaries, such as proxies, are present between the server and the client, said intermediaries are generally configured to cache HTTP data, but not WebSocket data. This is a significant issue with regard to the usage of WebSocket, as cache memories are critical to ensure a good quality of service for end clients.
Pipelines for delivering HTTP responses to video displays are power-hungry and have been extensively optimized by platforms such as iOS. A WebSocket-based alternative would probably be less optimized than these solutions in terms of battery life and processing cost.
Another advantage of HTTP over WebSocket is that the client may help the server order the pushed data according to the client's needs, typically by using an HTTP/2 mechanism called priorities. WebSocket does not offer such a possibility, which may be useful, for instance, when the client's buffer is almost empty and the server may thus send subtitles before video data.
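The reordering effect of priorities can be sketched with a toy model. In real HTTP/2, the client sends PRIORITY frames carrying stream weights and dependencies; here this is simplified to a weight per pending stream, and all stream numbers, weights, and contents are illustrative.

```python
# Illustrative sketch of HTTP/2-style prioritisation: the client raises the
# weight of one stream, and the server reorders its pending pushed data so
# that higher-weight streams are served first.

pending = [
    {"stream": 2, "content": "video segment", "weight": 16},
    {"stream": 4, "content": "subtitles", "weight": 16},
]

def apply_client_priority(pending, stream, weight):
    """Simulate a PRIORITY frame updating one stream's weight."""
    for entry in pending:
        if entry["stream"] == stream:
            entry["weight"] = weight
    # Serve higher-weight streams first:
    return sorted(pending, key=lambda e: e["weight"], reverse=True)

# The client raises the subtitles stream so it is served before video data:
order = apply_client_priority(pending, stream=4, weight=256)
print([e["content"] for e in order])
# ['subtitles', 'video segment']
```

A WebSocket channel offers no equivalent standard mechanism, so such client-driven reordering would have to be implemented in an ad hoc, application-specific manner.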
In addition, a client running in a Web Browser that relies on WebSocket to obtain resources has to process said resources at the JavaScript level, which may not be efficient. As a matter of fact, JavaScript is appropriate for handling small amounts of data, but not so appropriate for handling large resources such as, for instance, video segments.
Other solutions, as described in the document "Low Latency Live Video Streaming over HTTP 2.0" by Sheng Wei and Viswanathan Swaminathan (Adobe), allow the client to request larger amounts of resources, for instance a number K of video segments. While this solution provides good results regarding latency and the reduction of the number of requests, the resources sent to the client are fully dependent on the client's requests. Indeed, after sending the requested K segments, the server cannot send another segment without a new request from the client. In addition, these solutions rely on non-standard versions of SPDY or HTTP/2.
Consequently, there is a need for a method enabling a client to allow a server to push any resource at any time (i.e. to perform continuous push, or push with an indefinite end) while remaining compatible with HTTP/2, SPDY, and clients using HTTP/2 libraries through HTTP/1.x APIs.