In the following, background information about the Node ID (NID) Architecture and the Service Aware Transport Overlays (SATO), both developed within the Ambient Networks project, is given. This is an example of a network system comprising an underlying network and an overlay network.
In the so-called Ambient Networks (AN) project, an EU project under the 6th Framework Programme, mechanisms have been developed to ensure global reachability across different locator domains. ANs are expected to provide a common control layer spanning multiple routing domains, which are assumed to exploit different mechanisms and technologies to transport data in their domains. In AN, these domains are referred to as locator domains. The nodes contained in these locator domains are assumed to possess locators, such as specific addresses arranged according to a given addressing scheme, which have only local significance and facilitate communication within the local domain only. Still, it should be understood that these local domains can also be rather large, as the global IPv4 Internet is, for example, one such locator domain.
To ensure communication across the boundaries of such locator domains, the AN project has developed a concept referred to as the NodeID architecture. In this concept, nodes wishing to communicate globally (i.e. beyond their own locator domain) register with a so-called NodeID router (NR) present in their local domain. Each node registers a node identifier called a NodeID together with its local locator, which is valid in the domain it is currently roaming in. The NR stores this information and propagates the NodeIDs of registered nodes upwards in the hierarchical topology of interconnected locator domains. The final step is to store the NodeID in a so-called Distributed Hash Table (DHT) present in the top-level locator domain. FIG. 2 shows an example of a high-level picture of the NodeID architecture, where three locator domains LD1, LD2 and LD3 are shown, each having its own Domain Name Service (DNS). In this example, LD1 is assumed to be the Internet Protocol version 4 Core (IPv4 Core).
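The registration and upward propagation described above can be illustrated with the following sketch. It is not taken from the AN specification; the class, locator values and the use of a plain dictionary for the DHT are assumptions made purely for illustration.

```python
TOP_LEVEL_DHT = {}  # illustrative stand-in for the DHT in the top-level domain

class NIDRouter:
    """Hypothetical NID router: stores registrations and propagates them upwards."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent        # NR one level up in the hierarchy, or None at the top
        self.registrations = {}     # NodeID -> local locator, or child NR leading towards the node

    def register(self, node_id, via):
        # At the serving NR, 'via' is the node's LD-local locator; at higher
        # levels it is the child NR that leads towards the node.
        self.registrations[node_id] = via
        if self.parent is not None:
            self.parent.register(node_id, self)
        else:
            # Top-level NR: store the entry point of the subtree in the DHT.
            TOP_LEVEL_DHT[node_id] = via

nr1 = NIDRouter("NR1")                # NR in the top-level domain (e.g. the IPv4 core)
nr2 = NIDRouter("NR2", parent=nr1)    # NR serving a lower locator domain

# A node registers its NodeID and its LD-local locator (both values hypothetical):
nr2.register("NodeID-B", "10.0.0.5")
```

After the registration, the serving NR knows the local locator, while the top-level DHT only maps the NodeID to the NR through which the node's subtree can be entered.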
Communication across different locator domains between two nodes “A” and “B” (see FIG. 2 and FIG. 3, which shows the communication steps within the example of FIG. 2) may be achieved as follows:
1. Node “A”, wishing to communicate with Node “B”, resolves Node “B”'s Universal Resource Identifier (URI) by contacting the DNS, as it would already do in today's Internet. The DNS response will include Node “B”'s NodeID.
2. Node “A” creates a connection setup message or a first data packet that contains the NodeID of Node “B”.
3. The message is passed up the hierarchy of NID routers, which are identified per domain as default NID gateways for locally unknown NodeIDs. In LD2 the default NID router (NR) is NR2.
4. When the message reaches the top-level domain (LD1 in the example), the DHT provides a mapping to the NID router representing an entry to the locator domain subtree to which Node “B” belongs. The message is forwarded to this NID router (NR3 in the example of FIG. 2).
5. The NID routers present in the subtree to which Node “B” belongs store information about the next-hop NR, or already have information about the locator of Node “B” if they happen to be the NR serving the locator domain to which Node “B” belongs. Based on the information present in the NRs, the message is passed down the hierarchical topology of locator domains.
6. The NR serving the locator domain in which Node “B” is present has knowledge of the locator of Node “B”. This locator is used to finally deliver the message to Node “B”.
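The six steps above can be condensed into the following sketch. All names, table contents and locator values are illustrative assumptions; the DNS, DHT and per-NR routing tables are modelled as plain dictionaries.

```python
# Hypothetical lookup tables (all entries are illustrative):
DNS = {"uri:node-b": "NodeID-B"}           # step 1: URI -> NodeID
DHT = {"NodeID-B": "NR3"}                  # step 4: NodeID -> entry NR of B's subtree
DOWN_TABLE = {                             # steps 5-6: per-NR next hop towards B
    "NR3": "NR3a",                         # intermediate NR knows the next-hop NR
    "NR3a": ("locator", "192.0.2.7"),      # serving NR knows B's LD-local locator
}

def deliver(uri):
    node_id = DNS[uri]                     # step 1: resolve the URI to a NodeID
    msg = {"dst": node_id}                 # step 2: setup message carrying the NodeID
    nr = DHT[node_id]                      # steps 3-4: message reaches the top level,
    path = [nr]                            #           DHT yields the subtree entry NR
    while True:                            # step 5: pass the message down the subtree
        hop = DOWN_TABLE[nr]
        if isinstance(hop, tuple):         # step 6: serving NR delivers via the locator
            return path, hop[1], msg
        nr = hop
        path.append(nr)

path, locator, msg = deliver("uri:node-b")
```

The upward climb of step 3 is abstracted away here: the sketch starts directly at the top-level DHT lookup and then follows the downward routing state held in the NRs.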
A NodeID architecture may provide a form of mobility support. The topology of interconnected locator domains as depicted in FIG. 2 and FIG. 3 is subject to change whenever networks physically move (network and device mobility) or cooperation agreements between networks change. In AN, these cooperation agreements are referred to as composition agreements, and an automated process takes care of negotiating, agreeing and implementing them dynamically to react to changed user demands or network offerings.
It is thus expected that changes in the network topology happen rather frequently. This requires efficient mechanisms for updating the distributed routing information stored in the NRs and the DHT. This may also apply to other information in the network, e.g. the information contained in the so-called SPI, which will be introduced in the next section.
The support for mobility in the NodeID architecture is implemented by a set of signaling procedures that allow moving sub-trees within the topology from one point of attachment to another one. This ensures that the global tree always contains up-to-date information.
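The effect of such a subtree move can be sketched as follows. The data model is an assumption: locator domains are represented as simple tree nodes, and the signalling procedures themselves are abstracted into a single operation that re-attaches a subtree.

```python
class Domain:
    """Hypothetical locator domain node in the interconnection tree."""

    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent is not None:
            parent.children.append(self)

def move_subtree(subtree_root, new_parent):
    # Only the attachment point of the subtree root changes; the nodes and
    # domains inside the subtree keep their existing entries, which is why
    # such updates can usually be handled locally.
    if subtree_root.parent is not None:
        subtree_root.parent.children.remove(subtree_root)
    subtree_root.parent = new_parent
    new_parent.children.append(subtree_root)

top = Domain("LD1")                # top-level domain
ld2 = Domain("LD2", top)
ld3 = Domain("LD3", top)
ld4 = Domain("LD4", ld2)           # LD4 initially attached under LD2

move_subtree(ld4, ld3)             # network mobility: LD4 re-attaches under LD3
```

Note that the top-level domain is untouched by the move; only the routing state along the old and new attachment paths would need updating.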
The tree structure formed by the interconnected locator domains also has the advantage that mobility updates can be kept local and usually do not require propagating the change up to the top-level domain.
The NodeID architecture is also described in Bengt Ahlgren, Jari Arkko, Lars Eggert and Jarno Rajahalme, “A Node Identity Internetworking Architecture”, IEEE INFOCOM 2006 Global Internet Workshop Apr. 28-29, 2006, such that a further description is not necessary here.
Ambient Networks may provide media delivery concepts. Within the Ambient Networks project, the concept of Service Aware Transport Overlays (SATO) has been developed. With a SATO overlay network, overlay nodes (called SATO Overlay Nodes or SON) are interconnected. Such overlay nodes will host the so-called SATOPorts (SP). Typically, the SATOPorts will perform functions in the user plane of a service. The user plane SATOPorts may be broadly classified into three main classes and a number of different sub-classes based on functionality, although some may fall into more than one class. The first major class is ‘routers’, which perform plain data forwarding at the overlay level based on dynamically configured overlay routing tables. This class of SATOPort is primarily employed to enhance QoS by mitigating the risk of sub-optimal or sub-standard network-level routing. The second major class of SATOPorts is ‘processors’, which perform a given processing on an incoming data stream, for example virus-scan, integrity checking, transcoding, resizing, synchronisation, etc. The third major class of SPs is ‘caches’, which are capable of storing data flows for time-shifted delivery.
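The three SATOPort classes can be illustrated with the following sketch. The class and method names are assumptions for illustration only and are not part of the SATO specification.

```python
class SATOPort:
    """Hypothetical base class for a user-plane SATOPort."""

    def handle(self, packet):
        raise NotImplementedError

class RouterPort(SATOPort):
    """'Router' class: plain forwarding based on a configured overlay routing table."""

    def __init__(self, table):
        self.table = table                      # destination -> next overlay hop

    def handle(self, packet):
        return ("forward", self.table[packet["dst"]], packet)

class ProcessorPort(SATOPort):
    """'Processor' class: applies a processing function (e.g. transcoding) to the stream."""

    def __init__(self, fn):
        self.fn = fn

    def handle(self, packet):
        return ("processed", self.fn(packet))

class CachePort(SATOPort):
    """'Cache' class: stores data flows for time-shifted delivery."""

    def __init__(self):
        self.store = {}

    def handle(self, packet):
        self.store[packet["flow"]] = packet["data"]
        return ("cached", packet["flow"])

# Illustrative usage (names and values are hypothetical):
router = RouterPort({"SC-1": "ONode-2"})
processor = ProcessorPort(lambda p: {**p, "codec": "H.264"})
cache = CachePort()
```

A single overlay node may host several such SPs, and, as noted above, a given SP may combine more than one of these roles.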
One or multiple end devices will fulfil the role of clients, and one or multiple that of a server. In some service scenarios, however, the roles are not as clearly distinguishable, e.g. in pure peer-to-peer services, where any party is client and server at the same time. The clients are called SATOClients (SC) and the servers are called SATOServers (SS).
A very simple configuration of a SATO is the combination of one SATOServer, one SATOPort, and one SATOClient. However, more complex configurations may involve multiple SATOServers (e.g., two media content sources), multiple SATOPorts (e.g., transcoder, caches, synchronizer), and multiple SATOClients (e.g., receiving multicast content).
All of these elements form a SATO on top of the underlying network infrastructure, as shown in FIG. 4. As can be seen, an overlay network (such as for example a SATO) may comprise at least a part of the nodes of an underlying network (e.g. the actual physical network), and may thereby form a virtual network on top of the underlying network.
The lookup of media processing functions located in the network is performed by a directory function called SATOPort Informationbase (SPI). Two possible means of implementing this function are with a database-like directory service, or with an ad-hoc search. Considering the database approach and taking into account that a centralised architecture could have scalability limitations, the SPI could be designed as a distributed database where each Overlay Node (ONode) hosts a part of the database service.
Several levels of SATOPort descriptions have been identified. The higher level refers to the general properties of the SATOPort, such as the kind of service it can provide, e.g. caching or adaptation. In the latter case, additional information about supported codecs has to be given. It is also relevant for routing decisions to have information about the currently available capabilities of the MP, as present in the lower level of description. The current processor load of a transcoding device or the remaining memory capacity of a cache are examples of this. When up and running, each ONode registers the availability of the SPs that it hosts in the SPI. After this registration, the information is updated if the status of any indexed SP changes.
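The two description levels and the register-then-update lifecycle can be sketched as follows. The entry layout and field names are assumptions; the SPI is modelled as a single in-process dictionary rather than the distributed database foreseen in the project.

```python
class SPI:
    """Hypothetical SATOPort Informationbase: static capabilities plus dynamic status."""

    def __init__(self):
        self.entries = {}

    def register(self, port_id, capabilities):
        # Higher-level description: general, mostly static properties.
        self.entries[port_id] = {"capabilities": capabilities, "status": {}}

    def update_status(self, port_id, **status):
        # Lower-level description: current load, remaining capacity, etc.,
        # updated whenever the status of an indexed SP changes.
        self.entries[port_id]["status"].update(status)

    def lookup(self, service):
        return [pid for pid, e in self.entries.items()
                if e["capabilities"].get("service") == service]

spi = SPI()
# An ONode registers a hosted SP when up and running (values illustrative):
spi.register("SP-7", {"service": "adaptation", "codecs": ["H.264", "MPEG-2"]})
# Later status changes trigger updates of the dynamic part only:
spi.update_status("SP-7", cpu_load=0.35)
```

A routing decision could then first filter on the static service description and afterwards rank the candidates by their dynamic status.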
There are problems with such existing solutions. Overlay networks create a logical network of nodes, which cooperate to implement a service of common interest. Typical examples of such services are file sharing and enterprise networks (VPNs). Overlays can be set up for any purpose, including the distribution of media content. In the Ambient Networks project, overlay networks have been a topic since the start of project phase 1 in January 2004 (see above).
Known solutions discussed inside and outside the AN project face the difficulty of considering information about the underlying network topology in the routing decisions made in the overlay. The motivation for considering such information is to avoid inefficient routing decisions in the overlay that lead to unnecessarily long data paths. Typical solutions to this problem rely on the exploitation of IP path metrics, which can easily be discovered or measured. Typical examples of such metrics are the number of hops and ping delays.
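A typical metric-based selection of this kind can be sketched as follows. The weighting of hop count against ping delay, and all candidate names and measurements, are illustrative assumptions.

```python
def path_cost(hops, ping_ms, w_hops=1.0, w_delay=0.1):
    # Combine the two easily measurable IP path metrics into one cost.
    # The weights are arbitrary choices for illustration.
    return w_hops * hops + w_delay * ping_ms

# Hypothetical measured metrics towards candidate overlay next hops:
candidates = {
    "ONodeX": (4, 20.0),   # 4 hops, 20 ms ping delay -> cost 6.0
    "ONodeY": (2, 80.0),   # 2 hops, 80 ms ping delay -> cost 10.0
    "ONodeZ": (3, 25.0),   # 3 hops, 25 ms ping delay -> cost 5.5
}

best = min(candidates, key=lambda n: path_cost(*candidates[n]))
```

As the text notes, such metrics are only proxies for the underlying topology: they reveal nothing about why a path is long, and they must be re-measured whenever the topology changes.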
The overlay concept discussed in the AN project is additionally challenged because the underlying network topology is assumed to change dynamically over time due to user and network mobility and the creation and termination of network composition agreements. This leads to a varying availability of communication paths.
Apart from the need to make information about the underlying network topology available to the overlay nodes, information about the capabilities of individual nodes is required, in particular the capabilities that go beyond the mere forwarding of data (e.g. media manipulation, caching). As outlined above, a distributed database (the SPI) is foreseen to store such data. This database is consulted to discover available and suitable overlay nodes when an overlay network needs to be set up or adapted. The current approach assumes that the nodes register to this database and store and update the information about their capabilities. A second possibility discussed in the AN project is to search in the network for suitable nodes during the set-up of an overlay network.
All these approaches require considerable effort to collect and maintain such data, and they also generate signaling traffic in order to maintain the database or execute the search functions. At the same time, topology information and support for mobility are already present in the network. In addition, the consistency of different databases (e.g. consistency with the NID DHT) has to be ensured. This is the area where this invention is supposed to improve efficiency.
The existing approach in AN (the NID architecture) guarantees reachability of nodes across locator domains. Limited knowledge of the topology is present in the NID architecture as distributed information (an NID router knows its leaf routers and the router one layer higher in the hierarchy). Procedures to update the attachment of an LD to the NID tree exist.
FIG. 5 shows the so far separated concepts of overlay routing and overlay information stored in the SPI on the overlay layer, and addressing and routing in locator domains in the Physical network. In the example, the underlying network is a physical network and the overlay network is a SATO. There may be a relation insofar as physical nodes in the Physical network (thus participating in NID addressing and routing) may also be part of an Overlay network on the Overlay layer. This is shown through the solid lines in FIG. 5. Also, the SPI database available on the overlay layer may be implemented in one or more physical boxes/nodes in the Physical network (if the SPI is implemented in several nodes, it is a distributed SPI; see the dotted lines). The nodes may be in different locator domains (LDs).
FIG. 6 shows the mechanisms of Overlay and Locator Domain registration in the prior art example of FIG. 5, which are independent of one another.
In the shown case, registration, addressing, and routing on the Physical layer, and registration and routing on the Overlay layer, are unrelated, as also shown in FIG. 6. On the physical layer, the node registers itself when attaching to the network, by registering its LD-independent FQDN (Fully Qualified Domain Name), its LD-independent node identity (NID), and its LD-local address. On the overlay layer, the overlay node A (which physically coincides with the just-described physical node A) separately registers its FQDN and its overlay node capabilities (such as transcoding capabilities, or supported codecs if it is a client) with the SPI in the overlay network.
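The two unrelated registrations can be sketched as follows. The record layouts, identifiers and capability values are assumptions for illustration; the point is that the two databases are maintained independently and share no state beyond the FQDN both happen to record.

```python
physical_registry = {}   # per-locator-domain registration state (physical layer)
spi = {}                 # overlay-layer SPI (here a plain dictionary)

def register_physical(fqdn, nid, local_address):
    # Physical-layer attachment: LD-independent FQDN and NID, LD-local address.
    physical_registry[nid] = {"fqdn": fqdn, "locator": local_address}

def register_overlay(fqdn, capabilities):
    # Overlay-layer registration with the SPI: FQDN and node capabilities.
    spi[fqdn] = {"capabilities": capabilities}

# Node A performs both registrations independently of each other
# (all values are hypothetical):
register_physical("a.example.org", "NodeID-A", "10.1.2.3")
register_overlay("a.example.org", {"transcoding": True, "codecs": ["H.264"]})
```

Neither database is aware of the other, so a topology change recorded on the physical layer leaves the overlay-layer SPI stale until it is updated separately, which is precisely the redundancy discussed above.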