This invention relates generally to the blocking of information, and more particularly, to the blocking of inbound information from a LAN to a multiprocessor system in which multiple hosts run on a single computer and share a connection to the LAN.
In computing, a logical partition, commonly called an LPAR, is a subset of a computer's hardware resources, virtualized as a separate computer. In effect, a physical machine can be partitioned into multiple LPARs, each hosting a separate operating system. Each LPAR may be thought of as, and is referred to herein as, a separate host.
Logical partitioning is performed mostly at the hardware layer. Two LPARs may access memory from a common memory chip, provided that the ranges of addresses directly accessible to each do not overlap. One partition may indirectly control memory of a second partition, but only by commanding a process of the second partition to operate directly on that memory. Initially, each CPU was dedicated to a single LPAR, but with the introduction of micro-partitioning, one CPU can be shared among separate LPARs.
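The non-overlap requirement on directly accessible address ranges can be illustrated with a minimal sketch. The function and parameter names below are assumptions for illustration only; each LPAR is represented by a half-open (start, end) address range.

```python
def ranges_overlap(a, b):
    """Return True if two half-open (start, end) address ranges overlap."""
    return a[0] < b[1] and b[0] < a[1]

def validate_lpar_memory(lpars):
    """Check that no two LPARs have overlapping directly accessible memory.

    lpars: dict mapping LPAR name -> (start_addr, end_addr), end exclusive.
    Raises ValueError on any overlap; returns True otherwise.
    """
    names = list(lpars)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if ranges_overlap(lpars[names[i]], lpars[names[j]]):
                raise ValueError(f"LPARs {names[i]} and {names[j]} overlap")
    return True

# Adjacent but non-overlapping ranges are valid:
validate_lpar_memory({"A": (0x0000, 0x4000), "B": (0x4000, 0x8000)})
```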
When running a virtualization layer on an embedded system with multiple processors and a limited number of resources, special algorithms are needed to guarantee that all hosts being serviced receive inbound LAN traffic in a timely manner. These algorithms must manage the host buffers, local buffers, inbound LAN packets, latency, and inbound blocking efficiency. The term “blocking” as used herein refers to the process by which data packets are packaged into memory blocks.
The algorithms need to adjust dynamically to changes in traffic patterns generated by different host applications. Some of these host applications are latency sensitive, while others are throughput sensitive. The arrival rate of packets for specific connections needs to be monitored to determine how and when packets should be forwarded to the specific hosts.
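One plausible way to monitor per-connection arrival rate is an exponential moving average of packet inter-arrival time; a sparse, slowly arriving connection can then be treated as latency sensitive and forwarded promptly rather than held for packing. This is a sketch under stated assumptions; the class name, `alpha` smoothing factor, and 10 ms threshold are illustrative, not taken from the source.

```python
class ArrivalRateMonitor:
    """Tracks an exponential moving average (EMA) of packet inter-arrival
    time for a single connection. All names and defaults are illustrative."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha        # EMA smoothing factor
        self.avg_gap = None       # smoothed inter-arrival time, seconds
        self.last_arrival = None  # timestamp of previous packet

    def record(self, now):
        """Record a packet arrival at timestamp `now` (seconds)."""
        if self.last_arrival is not None:
            gap = now - self.last_arrival
            if self.avg_gap is None:
                self.avg_gap = gap
            else:
                self.avg_gap = self.alpha * gap + (1 - self.alpha) * self.avg_gap
        self.last_arrival = now

    def is_latency_sensitive(self, threshold=0.010):
        """Sparse traffic (large average gap) is forwarded immediately;
        dense traffic is a candidate for blocking into larger units."""
        return self.avg_gap is not None and self.avg_gap > threshold
```

A forwarding loop could consult `is_latency_sensitive()` per connection to choose between immediate presentation and continued block assembly.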
If inbound Local Area Network (LAN) packets are routed to the hosts too quickly and the blocking is inefficient, then a shortage of local buffer space or host buffer space can occur, resulting in a loss of inbound LAN packets. This occurs mainly because of inefficient use of the available buffer space: the host buffer space is underutilized and packing is very low, causing most of the host buffer space to be wasted. Only a small amount of LAN traffic is placed into the larger host buffer space.
The converse occurs when inbound LAN packets are routed to the hosts too slowly and a latency issue arises. In this scenario, the host buffer space is efficiently packed with LAN packets, but the time between presentations of these large host blocks can cause serious latency problems.
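The tension between packing efficiency and latency described above can be sketched as a simple block-assembly policy: a block is forwarded either when it is full (good packing) or when its oldest packet has waited too long (bounded latency). The class, its parameters, and the size/age thresholds are hypothetical illustrations, not the claimed invention's algorithm.

```python
class BlockAssembler:
    """Assembles inbound packets into a memory block for one host.
    Flushes when the block is full or when the oldest queued packet
    has aged past max_wait seconds. Names are illustrative."""

    def __init__(self, block_size, max_wait):
        self.block_size = block_size  # bytes that constitute a "full" block
        self.max_wait = max_wait      # seconds the oldest packet may wait
        self.buffer = []              # packets accumulated so far
        self.used = 0                 # bytes accumulated so far
        self.oldest = None            # arrival time of oldest queued packet

    def add_packet(self, packet, now):
        """Queue a packet; return a completed block if full, else None."""
        if self.oldest is None:
            self.oldest = now
        self.buffer.append(packet)
        self.used += len(packet)
        if self.used >= self.block_size:
            return self._flush()
        return None

    def poll(self, now):
        """Called periodically; flush a partial block that has aged out."""
        if self.oldest is not None and now - self.oldest >= self.max_wait:
            return self._flush()
        return None

    def _flush(self):
        block, self.buffer, self.used, self.oldest = self.buffer, [], 0, None
        return block
```

With a large `max_wait`, the policy favors throughput-sensitive traffic; with a small `max_wait`, it bounds the presentation delay that causes the latency problem described above.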
In an embedded system with multiple processors, how the inbound functions and the inter-processor communications are split between the processors is very important. This division directly affects the efficiency of the system, and the design must identify all possible bottlenecks that could impede system performance.
It would therefore be desirable and advantageous to efficiently block incoming LAN packets while ensuring that the information is presented to the destination host in a timely manner.