Providing robust separation between different realms of information processing remains a vexing challenge in complex computing equipment. A variety of strategies for achieving separation in such equipment have been employed. Unfortunately, while these strategies frequently achieve their separation properties, they fail to support the user's need for a rich computing experience with high-performance computation of real-time graphics for three-dimensional (3D) imaging and high-definition video.
Evolution in the allocation of computational resources between local and remote computing devices for end-user computing has significantly advanced the ability of much smaller, lower-power end-user devices to present the user with the capabilities of a vast array of computational resources resident in a remote computing center (e.g., cloud computing).
Early computers were large mainframe computers that were shared by users via shared peripherals such as punch card readers for batch processing of computational work. Subsequent mainframes allowed users to connect and use the mainframe computer using a rudimentary non-graphical terminal. These mainframe computers were called “time sharing” systems. Mainframe computers were typically housed in raised floor data centers with power redundancy, large (for the time) disk storage systems, and connections into fast (for the time) communication conduits. Since mainframes were so expensive, only the largest organizations had access to these computing resources.
The development of a single chip computer central processing unit (CPU) called a microprocessor enabled the market success of small, and initially slower, personal computers. Personal computers gave much wider access to computing capabilities. The microprocessors in personal computers eventually surpassed the computational speeds of large, single processor mainframe computers.
With the large number of distributed personal computers, the requirement for connectivity became paramount. The communications market responded with improvements in high-speed data links, connecting most of the world's personal computers at data rates thousands of times faster than early communications over telephone lines.
The next major movement in computational allocation came with the rise of personal computing devices, including smart phones, tablet computers, and other very small, lightweight computers. Microprocessors had advanced so significantly that the processor in a smart phone was faster than early supercomputers that cost millions of dollars.
The problem with personal computers and personal computing devices was that a user's data was stored in many different computers and devices, and these computers and devices were not good at allowing access to data stored on another device without significant preplanning and preparation. In addition, end users' voracious appetite for processing and fast communications kept growing.
This resulted in a recentralization of computing, storage, and communications resources once again into large data centers. Centralizing the computing in a very large data center at the nexus of vast data storage and high-speed communications enabled new information system possibilities. But the challenge remained in presenting the user with a rich computing experience when the processing for that computing was remote.
Graphical user access to remote computing was developed in a constrained communications bandwidth environment. These constraints drove the design of remote graphics protocols that operate at the drawing-instruction level; clients of such protocols are typically called “thin clients”. Thin clients imposed relatively low bandwidth requirements for simple word processing and spreadsheet graphical user interfaces. A very popular early implementation of a thin-client protocol was the X Window protocol that originated at the Massachusetts Institute of Technology (MIT) in 1984. Paradoxically, the X Window realization of a thin client was called an X server.
“Thick” clients (also called “fat” or “heavy” clients) are full-featured computers connected to a network. One or more remote servers connected to thick clients over the network provide programs and files that are not stored on a computer's hard drive. Thick clients usually include relatively powerful CPUs that access a hard drive and RAM for executing software applications under the control of a full-featured operating system, such as Microsoft Windows. Thick clients typically contain Graphics Processing Units (GPUs) that interface with the CPU via a high-bandwidth interface, such as PCI Express, for rendering graphical presentations on an internal or external monitor without sharing graphical processing with remote servers.
“Thin” clients (also called “lean” or “slim” clients) are computers that execute computer programs that depend heavily on one or more servers to fulfill computational roles. Thin clients retain a GPU while exchanging graphical drawing instructions with software running on a server CPU. Since the graphical subsystems of typical personal computer operating systems already sent drawing instructions from the CPU to the GPU, a simple way of separating graphical presentation from application computation was to have the operating system's graphical subsystem send the drawing instructions over the network to a thin client. For simple graphics, sending drawing instructions was reasonably conservative of network bandwidth. However, more complex graphics, such as three-dimensional (3D) graphics or high-definition (HD) video, consumed so much additional bandwidth as to make the communication between the CPU and GPU impractical over network latencies and bandwidths. Even within a high-performance personal computer, the highest-bandwidth interconnect, such as PCI Express, with 80 to 800 times the bandwidth available to the thinnest clients, is used for communication between the CPU and the GPU. Thin clients usually include relatively less powerful CPUs under the control of an operating system that interfaces with a GPU optimized for simple lines, curves, and text, rapidly drawn by the client using predefined logic and cached bitmap data. In this regard, thin clients work well for basic office applications such as spreadsheets, word processing, and data entry, but are not suited for rendering high-definition graphics.
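The bandwidth contrast behind the thin-client approach can be sketched as follows. This is a minimal illustration in Python: the drawing-instruction format and the screen update are made up for the example and do not reflect any real thin-client protocol, but the orders of magnitude track the discussion above.

```python
import json

def instruction_stream_bytes(instructions):
    """Serialize a list of drawing instructions (as a thin-client
    protocol conceptually might) and return the payload size in bytes."""
    return len(json.dumps(instructions).encode("utf-8"))

def raw_frame_bytes(width, height, bytes_per_pixel=3):
    """Size of one uncompressed frame -- what would cross the network
    if rendered pixels were sent instead of drawing instructions."""
    return width * height * bytes_per_pixel

# A simple office-style screen update: a few primitives and a text run.
ui_update = [
    {"op": "line", "from": [0, 0], "to": [100, 0]},
    {"op": "rect", "at": [10, 10], "size": [80, 20]},
    {"op": "text", "at": [12, 14], "s": "Total: 42"},
]

instr = instruction_stream_bytes(ui_update)  # on the order of 10^2 bytes
frame = raw_frame_bytes(1920, 1080)          # 6,220,800 bytes per HD frame
print(instr, frame, frame // instr)
```

For office-style updates, the instruction stream is a few hundred bytes, while a single uncompressed HD frame is over six megabytes; at video frame rates that gap is what makes instruction-level remoting impractical for 3D graphics and HD video.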
“Zero” clients (also known as “ultrathin” clients) are applications or logic operating on devices that communicate with server-based computers. A typical zero client (ZC) device connects remote servers to a keyboard, a mouse, and a graphics display interface for an internal or external display via a wired or wireless network connection. The remote servers host the operating system (OS) for running the client's software applications. A common implementation of this approach is to host multiple desktop operating system instances on a server hardware platform running a hypervisor. The hypervisor comprises computer software, firmware, or hardware that creates and runs virtual machines. This strategy of virtualizing the OS or applications for a desktop is generally referred to as “Virtual Desktop Infrastructure” or “VDI”.
Various types of separating operating systems (SOSs) are known. For example, Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies that can be configured to separate multiple processing concerns. SELinux is a set of kernel modifications and user-space tools that separate the enforcement of security decisions from the security policy itself. SELinux implements a configurable policy engine that allows for separate processing of different information domains.
The United States National Security Agency (NSA), the original primary developer of SELinux, released the first version to the open-source developer community under the GNU GPL on Dec. 22, 2000. Another example of a separating operating system is a Separation Kernel (SK) operating system specified by an NSA Protection Profile entitled “U.S. Government Protection Profile for Separation Kernels in Environments Requiring High Robustness” (SKPP). Examples of SKs are Lynx Software's LynxSecure, Wind River's VxWorks MILS, and Green Hills' INTEGRITY-178B. An SK implements a safety or security policy that partitions processing workloads on nodes. Each node has one or more processing units that can run applications as well as virtualize one or more OS images. The SK's primary function is to partition or otherwise separate resources into policy-based equivalence classes and to control information flows between subjects and resources assigned to the partitions according to the SK's configuration data.
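The partition-and-flow-control idea described above can be sketched as a small policy check. This is only an illustration in Python with hypothetical subject, resource, and partition names; an actual SK enforces such a policy in kernel or hypervisor logic, not in application code.

```python
# Subjects and resources are assigned to partitions (equivalence classes).
ASSIGNMENT = {
    "radar_app": "P1",
    "nav_app": "P2",
    "shared_log": "P3",
}

# The configuration data authorizes flows only between listed
# (source partition, destination partition) pairs.
ALLOWED_FLOWS = {
    ("P1", "P3"),  # radar may write to the log partition
    ("P2", "P3"),  # nav may write to the log partition
}                  # note: no P1 <-> P2 flow is authorized

def flow_permitted(src, dst):
    """Return True only if the configured policy authorizes an
    information flow from src's partition to dst's partition."""
    return (ASSIGNMENT[src], ASSIGNMENT[dst]) in ALLOWED_FLOWS

print(flow_permitted("radar_app", "shared_log"))  # True
print(flow_permitted("radar_app", "nav_app"))     # False
```

The key property is that separation is the default: any flow not explicitly present in the configuration data is denied, so two partitions with no authorized channel between them cannot exchange information.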
Virtualization is an abstraction layer that decouples the physical hardware from the operating system to deliver greater resource utilization and flexibility. A hypervisor is a set of software logic, potentially augmented by hardware logic, sometimes known as a host, that executes one or more operating systems, sometimes known as guests. A hypervisor enhanced to separate the guest operating systems is a form of separating operating system. An example of a hypervisor qualifying as a separating operating system is the ESX hypervisor by VMware, which allows multiple guest virtual machines with heterogeneous operating systems (e.g., Windows and Linux) and applications to run in isolation, side by side on the same physical machine. A guest virtual machine has its own set of virtual hardware (e.g., RAM, CPU, NIC, hard disks, etc.) upon which an operating system and applications are loaded. The operating system sees a consistent, normalized set of hardware regardless of the actual physical hardware components. PikeOS from Sysgo allows virtualization of operating systems (OSs), Application Programming Interfaces (APIs), and real-time embedded (RTE) systems in separate partitions.
Remote Desktop Protocol (RDP) is a proprietary protocol developed by Microsoft, which provides ZC devices with graphical interfaces to connect to another computer over a network connection. Known remote display protocols from Microsoft (RemoteFX), Teradici (PCoIP) and Citrix (HDX) provide interfaces between VDI and the ZC devices. One such ZC device connects peripheral input-output devices, e.g., a keyboard, mouse, and display interfaces, audio interface, USB interface, to a Microsoft Windows desktop virtual machine, where a remote controller, e.g., at a site, runs VDI on a hypervisor server.
Also known are virtual graphics processing units (vGPUs), which enable sharing of graphics processing among virtual desktops. When vGPUs are utilized, graphics commands from virtual machines are passed directly to the GPU, with or without hypervisor translation. Under this arrangement, a GPU is virtualized, with virtual machines running native video drivers. For example, NVIDIA's GRID™ technology comprises both hardware and software that enable hardware virtualization of GPUs. Dell PowerEdge R720 servers are examples of servers that can accommodate NVIDIA GRID™ cards to enable high-end graphics applications at ZC devices, which do not themselves have the processing hardware for high-definition graphics.
Networks for the transport or delivery of end-to-end information flows, such as video flows or other graphical streams, are known. One such network employs a single overlay network or parallel overlay networks built on top of an underlying IP network. Overlay networks execute overlay processes, e.g., PCS processes or middleware, in nodes connected by logical links, each of which corresponds to a logical flow channel formed through many physical links on the underlying network. One known overlay network that delivers live flows is disclosed by Yair et al. in U.S. Pat. No. 8,619,775 B2, titled “Scalable Flow Transport And Delivery Network And Associated Methods And Systems.”
It is known to use communication controllers in overlay networks to separate flow communication amongst nodes. One system that separates flows transported on flow channels is disclosed by Beckwith et al. in U.S. Pat. No. 8,045,462, titled “A Partitioning Communication System” (PCS), which is assigned to Objective Interface Systems Inc., the assignee of the present application. The PCS implements a resource management policy for sharing the one or more resources, where the resource management policy also defines how the one or more channels influence each other. The PCS comprises a communication controller within a node that communicates data with another node over separated channels. The communication controller deploys overlay processes, which provide inter-node communications amongst nodes running under the control of an SK.
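One simple form that such a resource management policy can take is a per-channel bandwidth quota, so that traffic on one channel cannot influence another beyond its configured share. The following Python sketch is only an illustration of that idea, with made-up channel names and numbers; the PCS of U.S. Pat. No. 8,045,462 is more general than this.

```python
class ChannelController:
    """Toy communication controller: each separated channel receives a
    fixed fraction of the node's link capacity per accounting interval."""

    def __init__(self, link_capacity_bps, shares):
        # shares: channel name -> fraction of link capacity; must not
        # oversubscribe the link, or channels could influence each other.
        assert sum(shares.values()) <= 1.0
        self.quota = {ch: frac * link_capacity_bps for ch, frac in shares.items()}
        self.used = {ch: 0 for ch in shares}

    def send(self, channel, nbytes):
        """Admit nbytes on channel only if the channel stays within its
        quota for the current interval; otherwise deny per policy."""
        bits = nbytes * 8
        if self.used[channel] + bits > self.quota[channel]:
            return False  # would exceed this channel's share
        self.used[channel] += bits
        return True

# 1 Mbit/s link, split 80/20 between a video channel and a control channel.
ctl = ChannelController(1_000_000, {"video": 0.8, "control": 0.2})
print(ctl.send("control", 20_000))  # True: 160 kbit within the 200 kbit quota
print(ctl.send("control", 10_000))  # False: 240 kbit would exceed the quota
```

Because each channel's admission decision depends only on its own quota, a flood on one channel is denied at that channel's boundary rather than starving the others, which is the separation property the controller is meant to provide.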
With advances in the power of processing units, systems have provided scalable delivery of flows. However, there exists a need for a flow delivery network that supports any-to-any, high-quality flows to ZC devices that support multiple sessions.