Over the next decade, most devices connected to the Internet or other global networks will not be used by people in the familiar way that personal computers, tablets and smartphones are. Billions of interconnected devices will be monitoring the environment, structures, transportation systems, factories, farms, forests, utilities, soil and weather conditions, oceans and resources. Many of these sensors and actuators will be networked into autonomous sets, with much of the information being exchanged machine-to-machine, directly and without human involvement.
Machine-to-machine communications are typically terse. Most sensors and actuators will report or act upon small pieces of information—“chirps.” Burdening these devices with current network protocol stacks is inefficient, unnecessary and unduly increases their cost of ownership.
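To make the size difference concrete, a chirp can be sketched as a few packed bytes. The field layout below is purely hypothetical, invented for illustration; the application does not define a chirp wire format:

```python
import struct

# Hypothetical chirp layout (an illustration, not a defined standard):
# 1-byte device class, 3-byte device id, 2-byte sensor reading, 1-byte checksum.
CHIRP_FMT = ">B3sHB"  # big-endian, 7 bytes total

def pack_chirp(device_class: int, device_id: bytes, reading: int) -> bytes:
    body = struct.pack(">B3sH", device_class, device_id, reading)
    checksum = sum(body) % 256  # trivial integrity check for the sketch
    return body + bytes([checksum])

chirp = pack_chirp(0x01, b"\x00\x00\x2a", 512)
print(len(chirp))  # 7 bytes, versus roughly 40+ bytes of TCP/IP headers alone
```

Even this toy encoding shows why a full protocol stack is disproportionate: the headers of a conventional packet would dwarf the chirp payload many times over.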
The architecture of the Internet of Things necessarily entails a widely distributed topology incorporating simpler chirp protocols toward the edges of the network. Intermediate network elements perform information propagation, manage broadcasts, and provide protocol translation. Another class of devices houses integrator functions providing higher-level analysis, for both near-edge analytics and broader-scope analysis. Small chirp data will feed “big data” systems.
The propagation of pollen and the interaction of social insects are relevant to the emerging architecture of the Internet of Things described in the instant application. Pollen is lightweight by nature, to improve its reach. It is inherently secure: only the receiver can decode its message. Nature's design is very different from today's traditional large-packet, sender-oriented network traffic.
This application describes reasons why we must rethink current approaches to the Internet of Things. Appropriate architectures are described that will coexist with existing incumbent networking protocols. An architecture comprising integrator functions, propagator nodes, and end devices, along with their interactions, is explored. Example applications are used to illustrate concepts and draw on lessons learned from Nature.
Certain aspects of the embodiments disclosed in the present application are extensions or additional uses of the methods and systems disclosed in the referenced earlier applications and patents. For instance, in the referenced patent applications, a method to change the network topology by employing multiple radios is described in U.S. application Ser. No. 10/434,948, filed May 8, 2003 in FIGS. 1, 2, 3, 4, 5, 6, 7, 8.
FIG. 18 in that same application depicts a two-radio mesh network, with one radio for the backhaul and another servicing clients and providing the backhaul to other nodes of the network. As described in that application and in the instant system, extensions of the logical two-radio approach include three and four radios.
There is increasing interest in employing one network to support video, voice and data traffic. Currently, the video, voice and data networks are often kept on distinct networks with either physical or logical separation since each addresses differing latency and bandwidth requirements. The challenge lies in providing—within the same network—the ability to address potentially conflicting latency and throughput needs of diverse applications.
For example, voice needs to be transmitted with low delay (latency). Occasionally lost voice packets, while undesirable, are not fatal for voice transmissions. Conversely, data transmissions mandate delivery of all packets, and while low latency is desirable it is not essential. In essence, transmission across the wireless network should (ideally) be driven by the needs of the application.
Building a reliable wireless network comes with other constraints specific to wireless. Some routing paths may be best for voice, others for data. In wired LAN applications, separate routing paths are more easily accomplished since each port on the LAN is connected to one client machine. Each node may be configured to provide the performance characteristics required by its application. If all computing devices were wired, each could have different Quality of Service (QoS) settings.
This level of granularity is not possible in wireless networks. Radio is a shared medium. It is prone to interference from other radio transmissions in the vicinity. A direct repercussion of radio interference is that a separate Access Point (AP) for each client machine is not practical. An AP can interfere with other APs, and there are not enough non-interfering channels to go around. Further, while each additional radio may increase bandwidth capacity, it may also cause more interference between radios, perhaps even reducing the overall capacity of the network.

Controlling Network Topology

The challenge lies in enabling each Access Point node to support differing application requirements and ensuring that the aggregate demand of each Access Point is addressed without an appreciable loss in performance for individual clients. Additionally, if the network configuration needs to change, then changes to network topology must occur in a stable and scalable manner.
Aggregate demand may be expressed as a range of acceptable latency and throughput values. Note that latency and throughput are often conflicting objectives. Low latency (least number of hops) may cause low throughput. High throughput may require increased latency.
In the patent application Ser. No. 10/434,948, filed May 8, 2003, a method to change the network topology by employing multiple radios is described, and the changes in mesh topology are illustrated by FIGS. 1, 2, 3, 4, 5, 6, 7, 8. FIG. 1 shows how the latency/throughput gradually changes with network topology.
FIG. 1 is made up of four individual sections, labeled 1 through 4. In each of these sections, the main area shows a number of radio devices configured in a specific mesh topology. The radio devices are part of the backhaul—each of them is therefore both an Access Point (AP) and a bridge to the backhaul, through other APs. Each node in the figure represents a 2-radio system where one interface faces “down,” providing connectivity to client stations and to other APs that connect to the backhaul through it. The second radio provides the backhaul path “up” to the wired backbone.
The AP/bridge connected to the wired backbone is labeled the “Root”. (There is only one root in this topology, though that is not a requirement. All that is required is that the number of roots be greater than or equal to one.) The other nodes must transmit their packets to the root in order to have them placed onto the wire. The solid lines between nodes and the root represent the mesh topology.
Each of the four sections is also labeled with the “Backhaul throughput”—which for the simulation is measured as an inverse relationship to proximity. The relationship between throughput and proximity is modeled as an inverse-square law based on experimental data. The curve is shown in the lower left hand corner of section 4 in FIG. 2. The simulation environment includes the ability to change the throughput-distance relationship for differing radios and wireless cards.
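The inverse-square throughput model can be sketched as below. The peak rate and reference distance are illustrative parameters, not values from the simulation; as the text notes, the curve can be swapped out for differing radios and wireless cards:

```python
def throughput(distance: float, peak_rate: float = 54.0, d0: float = 1.0) -> float:
    """Model link throughput as an inverse-square function of distance.

    peak_rate (Mbps) and d0 (reference distance) are assumed values
    chosen for illustration only.
    """
    if distance <= d0:
        return peak_rate  # within the reference distance, the full rate is available
    return peak_rate * (d0 / distance) ** 2

print(throughput(1.0))  # 54.0 at the reference distance
print(throughput(2.0))  # 13.5: doubling the distance quarters the rate
```

The key property the simulation relies on is visible here: a long direct link can carry far less traffic than two short links covering the same span.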
Each section is also labeled with the “backhaul number of hops”, which represents the average number of hops that a packet in that network will have to make in order to reach the root. The sections should be examined beginning in the upper left, and proceeding clockwise. The important results are:
In section 1, the network is configured in order to optimize latency, that is, in order to minimize the total number of hops that packets will need to make. All nodes transmit their packets directly to the root. However, of all the possible configurations this has the lowest total throughput, because some of these one-hop links will be of low data rate due to physical separation between the nodes.
In section 2, a tradeoff is starting to be made between latency (hops) and throughput. As the network is directed to emphasize throughput, it begins to make changes to the topology such that a larger number of hops is used in order to make sure that each mesh connection is at a higher data rate. A single change has been made in this case, as shown by the solid red line. Data from this node must now pass through an intervening node before reaching the root.
Section 3 shows even more of an emphasis on throughput, with an additional node now using a two hop path to the root, and the throughput rate increasing from 55 to 59.
Section 4 shows a mesh topology with a high emphasis on throughput, less on latency. Five of the nodes are now using two-hop paths to reach the root, increasing the throughput to 64, but increasing the latency as well, since the average number of hops is now 1.6.
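The tradeoff the four sections walk through can be sketched numerically. The five-node topologies and link rates below are hypothetical and chosen only to illustrate the direction of the tradeoff; they are not the simulation's actual values:

```python
# Each entry is (hops_to_root, path_rate_mbps), where path_rate is the rate of
# the weakest link on the node's path. In the "star" topology every node reaches
# the root in one hop, but distant nodes get weak, low-rate links. In the "tree"
# topology those distant nodes relay through closer neighbors over better links.
star = [(1, 3), (1, 3), (1, 12), (1, 12), (1, 24)]
tree = [(2, 12), (2, 12), (1, 12), (1, 12), (1, 24)]

def avg_hops(topology):
    # Average number of hops a packet makes to reach the root (latency proxy).
    return sum(h for h, _ in topology) / len(topology)

def aggregate_rate(topology):
    # Crude aggregate throughput: sum of per-node path rates (ignores contention).
    return sum(r for _, r in topology)

print(avg_hops(star), aggregate_rate(star))  # 1.0 54 -- low latency, low throughput
print(avg_hops(tree), aggregate_rate(tree))  # 1.4 72 -- more hops, more throughput
```

As in the figure, trading a longer average hop count for stronger individual links raises the aggregate throughput at the cost of latency.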
Logical 2-Radio Mesh Backhauls
The network topology control system described in U.S. application Ser. No. 10/434,948, filed May 8, 2003 is based on a 2-Radio system shown in FIG. 18 in that application and included as FIG. 4 in this application. There are two radios in each mesh node, for the uplink and downlink support. Radio 010 is upward facing and connects to the downlink (labeled 020) of its parent radio. Thus, a chain of connectivity is formed as shown by labels 040-050-060. In addition to providing a chain of connectivity, the downward facing radios (020) also provide connectivity to clients (such as laptops) shown as triangles. One such client is labeled 030.
There is a cloud surrounding each mesh node. This is the coverage area of the radio signal for the downward facing radio. They are colored differently to depict that each is operating on a different channel than other radios in its vicinity. Thus each radio belongs to a different Basic Service Set (BSS) or sub domain of the network. As such the system resembles a wired network switch stack. A wired network switch stack also has a similar tree structure with similar uplinks, and downlink connections. See FIG. 4. Labels 040-050-060 form a functionally identical chain of connectivity. Also, each switch in a network switch stack operates on a separate sub-domain of the network.
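The per-node channel separation depicted by the different-colored clouds can be sketched as a round-robin assignment down the chain of connectivity. The 2.4 GHz channel set used here is an assumption for illustration; any set of mutually non-interfering channels would serve:

```python
NON_OVERLAPPING = [1, 6, 11]  # e.g. the three non-overlapping 2.4 GHz channels

def assign_channels(chain_length: int) -> list:
    """Give each node's downward-facing radio a channel that differs from
    its parent's, cycling through the non-overlapping set so that adjacent
    nodes in the chain never share a channel (each forms its own BSS)."""
    return [NON_OVERLAPPING[i % len(NON_OVERLAPPING)] for i in range(chain_length)]

print(assign_channels(5))  # [1, 6, 11, 1, 6] -- no two adjacent nodes share a channel
```

This mirrors the switch-stack analogy: each downward-facing radio operates its own sub-domain, just as each switch in a stack serves a separate sub-domain of the network.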
Why Logical 2-Radio Mesh Backhauls are Needed.
Serious bandwidth degradation effects occur with single radio mesh networks. The LHS diagram in FIG. 2 depicts a typical 2-radio mesh network. One radio (010) provides services to clients while another radio (020) is part of an Ad Hoc Mesh, where all radios are operating on the same channel as depicted by the same-color clouds (030).
In contrast FIG. 2 RHS depicts a logical 2-Radio where each mesh radio (025) is part of a distinct sub domain of the network, depicted by different color clouds (035).
Returning to the LHS of FIG. 2, all the backhaul radios (020) are on the same channel and thus are all part of the same network. In essence they form the wireless equivalent of a network hub.
Network hubs are not scalable because there is too much interference between all the members of the hub as the hub becomes larger. Exactly the same problem exists with conventional mesh networks. After 1-2 hops the co-channel interference between the mesh nodes (020) no longer allows high bandwidth transmissions.
There is another issue with single radio mesh backhauls which prevents scalability. Bandwidth degradation occurs with each hop—typically 50% per hop with single radio mesh backhauls. Refer to FIG. 3. On the left hand side is a single radio backhaul. If it is part of a relay path, then every packet it receives must be re-transmitted on the same radio (label 010). Thus with each hop the effective throughput is reduced by 50% from the previous hop, leaving the bandwidth available at the end of the 3rd hop at ⅛ of the original bandwidth. This is unacceptable for high performance requirements in either enterprise infrastructure networks or mission critical applications, e.g. emergency response systems.
On the RHS of FIG. 3, labeled 020, there are two radios—one to receive data and another to retransmit. Now the effective throughput is not compromised, because the two radios operate on non-interfering channels. Simultaneous send and receive is now possible.
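The per-hop degradation contrast in FIG. 3 reduces to simple arithmetic. The 48 Mbps base rate is an assumed figure; the 50%-per-hop halving and the idealized loss-free dual-radio relay follow the text (real deployments would still see some overhead):

```python
def single_radio_throughput(base_rate: float, hops: int) -> float:
    # A single-radio relay must receive and retransmit on the same radio,
    # halving the effective throughput at every hop (per the text's 50% figure).
    return base_rate * 0.5 ** hops

def dual_radio_throughput(base_rate: float, hops: int) -> float:
    # With separate receive and transmit radios on non-interfering channels,
    # simultaneous send/receive preserves the rate across hops (idealized).
    return base_rate

print(single_radio_throughput(48.0, 3))  # 6.0 -- one eighth of 48, as in the text
print(dual_radio_throughput(48.0, 3))    # 48.0 -- rate preserved across the relay
```

Three halvings yield the ⅛ figure cited above, which is why single-radio backhauls stop scaling after only a few hops.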
Single radio mesh backhauls do not present a scalable solution to addressing high bandwidth requirements for a mission critical network.