The teachings of all of the foregoing application and patents are incorporated herein by reference. The invention pertains to digital data processing and, specifically, to apparatus providing platforms and methods for enterprise information integration (EII) and enterprise resource interoperability (ERI), and methods of using those apparatus for that purpose. The invention has application in public health & bioterrorism, border and port security, public and community safety, and (state and local) government data integration, the travel & transportation industry, and the financial services industry, to name a few.
The reality today is that information technology is best characterized as a constellation of “system clusters.” Each of these clusters—sometimes called “stovepipes”—contains multiple applications, databases, servers, storage devices, and network infrastructure. Each is typically allocated to a specific business unit, cost center, or division—that is, a cluster is bought and paid for out of a specific budget and its resources are devoted to a specific business function (e.g., finance has its own systems and manufacturing has its own separate systems). The result has been a perennial, intractable set of problems:

Data Stovepipes—Aggregating data from the various stovepipes is often virtually impossible. Gartner, Inc. estimates that 70 percent of corporate data resources are hosted on various mainframe systems. Butler Group found that 80 percent of corporate data is in non-relational data sources, while IDC's research shows that 40 percent of all application development effort is devoted simply to accessing existing data. What's more, Gartner also found that for every dollar spent on e-commerce implementation, somewhere between $5 and $20 must be spent developing the necessary integration systems.

Conflicting Standards—It has been said many times that the great thing about IT standards is that there are so many to choose from. But those standards often compete and are a significant barrier to integrating applications and data. Whether it's Linux vs. Windows 2000, Oracle vs. SQL Server, PeopleSoft vs. SAP, or structured vs. unstructured data, enterprises must support a wide variety of incompatible standards and protocols, representing a substantial hidden cost to the enterprise.

Real-Time Visibility—Even if the data can be extracted from its sources in a usable format, it often arrives too late to be of value. Enterprises want real-time visibility into their information.

Disruptive Changes—Given the critical nature of most of these systems, corporate IT is loath to introduce any changes that could trigger business-halting system instabilities.
As a result of these problems, technology infrastructure deployments have tended to be primarily “point solutions” rather than more useful, enterprise-wide implementations that provide a more cohesive, universal view of the business.
Today, the infrastructure for enterprise computing continues to evolve, mature, and expand at spectacular rates. The general increase in computing performance and capacity—as well as rapidly declining costs—plays directly to our advantage. Consider the trend lines of the major infrastructure components:

Processing Power—Moore's Law, which holds that processor power (the number of transistors) doubles roughly every 18 months, continues to be proven true. We've gone from CPUs with 1,000 transistors to CPUs with 100 million transistors in a little more than 30 years—all while prices (in absolute and relative terms) have continued to decline dramatically. Memory speeds, capacities, and associated prices have followed similar curves. Further, the miniaturization of these components has spawned unprecedented innovation in device sizes, form factors, and usage—as well as a surfeit of spare CPU cycles that often sit untapped and idle.

Bandwidth—The cost curve for network bandwidth continues to drop into a deep trough. Massive infrastructure investments during the “Internet boom” and by telecom companies in the past several years have created an unprecedented glut in capacity, driving costs down significantly. Analysts believe that as little as 8 percent of deployed fiber-optic lines are “lit” and that only two percent of that capacity is being utilized. The result: historically low prices.

Storage—In the storage sector, the adoption and cost curves are similar to those described by Moore's Law for microprocessors. Whether it's disk- or chip-based persistent memory, storage capacities continue to climb while costs and form factors continue to shrink. Recently, manufacturers began showing prototypes of a hard-disk drive the size of a coin that can store 3 GB of data.
The “Next-Generation World Wide Web”—In only a few years, the Web has witnessed a remarkable evolution: from a simple communications forum for scientists and academics, to a rich source of information and research, to an interactive platform supporting e-commerce and other transactions. Today, Web applications have grown from simple static publishing to dynamic pages, transactional commerce sites and now, with the Semantic Web, interoperable, interconnected platforms upon which software and application providers are developing entirely new generations of innovative products and services—and standards-based systems will drive their interoperability. This evolution is driven by open standards and protocols, open-source software, Service-Oriented Architectures (SOA), including Web Services, and the Semantic Web, which uses the Resource Description Framework (RDF) to bring together disparate data sources.

Service-Oriented Architectures—The sharing of modularized components (“services”) through standard interfaces has had a dramatic impact on enterprise applications. SOAs such as Web Services have helped simplify point-to-point integration. However, this use of Web Services only scratches the surface of its deeper potential to enable companies to assemble dynamic, agile business processes and IT systems that can respond easily to change. Analysts believe that SOA will be the dominant approach to distributed computing by 2006 and that 69 percent of the enterprise software market will be service-oriented by 2010, representing an overall market of $98 billion.

Grid Computing—Another major developing trend lies in the area of on-demand grid computing.
Like the familiar electrical grid we rely on for near-ubiquitous electrical service, grid computing assembles all of an enterprise's available networked resources—from servers down to desktop computers—into a single, seamless, virtual pool of processing power, network bandwidth, storage capacity, and data that is available to users on demand.

Computational Grid—Viewed as a collection of connected machines, all of the members of a grid contribute some combination of resources to it. The resource most commonly contributed to the traditional grid is computing cycles (CPU horsepower). These resources are used to:

run CPU-intensive applications across a grid, rather than on a single machine (or single set of machines);

run application modules (assuming the application is designed to divide its work); and

run applications that need to run many times, on different machines on the grid.

Storage Grid (Storage Network Management)—In the case of storage grids, the machines on the grid provide some transparent and scalable quantity of storage for grid use. (It is safe to assume that some resources on a grid contribute to both computational grids and storage grids.) The concept of storage is not limited to long-term (non-volatile) storage, but also includes cache storage available within any given machine or resource. Storage grids provide the foundation for the information-based grid computing found in data grids.

Data Grid—Data grids permit applications to use data from anywhere (assuming the appropriate security permissions). A federated approach to data grids enables users to maintain full control of their own data and information systems while contributing data to data grid-based applications. One can view this as distributed data access.
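By way of illustration, the division of a CPU-intensive job across computational grid resources can be sketched as follows. This is a minimal sketch in Python only: it models grid nodes as local worker threads (a real grid would dispatch work to networked machines), and the function names and chunking scheme are illustrative assumptions rather than features of any particular grid product.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_chunk(chunk):
    # Hypothetical CPU-intensive analysis; on a real grid this module
    # would execute on whichever machine receives the chunk.
    return sum(x * x for x in chunk)

def run_on_grid(data, n_workers=4, chunk_size=250):
    # Divide the work so each "node" processes one chunk, mirroring how
    # a computational grid runs application modules across many
    # machines in parallel (assuming the application can divide its work).
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(analyze_chunk, chunks))
    # Aggregate the partial results into a single answer.
    return sum(partials)

if __name__ == "__main__":
    print(run_on_grid(list(range(1000))))
```

The same divide/dispatch/aggregate pattern applies whether the workers are threads, processes, or remote grid nodes; only the dispatch mechanism changes.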
As realized by the inventors hereof, the largely unaddressed challenge here lies in creating the ability not only to access data from disparate and geographically distributed data systems, but to create information from that data in a single, unified view. And, as evident in the discussion below, grid computing applications in accord with the invention take on the requirements of EII in support of comprehensive information integration, aggregation and interaction.
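The notion of a single, unified view over federated sources can be sketched as follows. This is a minimal Python illustration under stated assumptions: the source names, record fields, and the "id" merge key are invented for the example and do not describe the invention's actual implementation.

```python
def unified_view(*sources):
    """Fold records from disparate sources into one view per entity.

    Each source is a list of dicts carrying an "id" key; fields from
    every source are merged into a single record per id, so distributed
    data is presented as one logical whole.
    """
    view = {}
    for source in sources:
        for record in source:
            view.setdefault(record["id"], {}).update(record)
    return view

# Two hypothetical stovepipe systems holding complementary data
# about the same patient.
hospital = [{"id": "p1", "name": "Doe", "bed": "4B"}]
lab = [{"id": "p1", "test": "culture", "result": "negative"}]

merged = unified_view(hospital, lab)
```

Here each source retains control of its own records; only the merged projection is materialized, which is the essence of the federated approach described above.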
The continuing improvements in cost, capacity, power, and size in these infrastructure elements have enabled enterprises to undertake ambitious computing initiatives that reach the farthest corners of their organization, and increasingly outside their four walls as well. As we know, however, this relatively inexpensive computing infrastructure has led to its own set of daunting challenges. The proliferation of disparate, isolated, physically distributed, and technologically incompatible databases and applications has created intractable problems and costs for enterprise IT professionals.
On another front, national, state, and local governments are challenged to achieve unprecedented levels of cooperation in and among agencies and organizations charged with protecting the safety of communities. Many of these organizations use either proprietary or incompatible technology infrastructures that need to be integrated in order to provide real-time, critical information for effective event monitoring and coordinated emergency response. Information must be shared instantaneously and among numerous entities to effectively identify and respond to a potential threat or emergency-related event.
Significant efforts are underway along these lines, for example, in the public health and bioterrorism arena. The Centers for Disease Control and Prevention (CDC) of the U.S. Department of Health and Human Services has launched several initiatives toward forming nation-wide networks of shared health-related information that, when fully implemented, will facilitate the rapid identification of, and response to, health and bioterrorism threats. The CDC plans the Health Alert Network (HAN), for example, to provide infrastructure supporting the distribution of health alerts, disease surveillance, and laboratory reporting. The Public Health Information Network (PHIN) is another CDC initiative that will provide detailed specifications for the acquisition, management, analysis and dissemination of health-related information, building upon the HAN and other CDC initiatives, such as the National Electronic Disease Surveillance System (NEDSS).
While these initiatives, and others like them in both health and non-health-related fields, define functional requirements and set standards for interoperability of the IT systems that hospitals, laboratories, government agencies and others will use in forming the nationwide networks, they do not solve the problem of finding data processing equipment capable of meeting those requirements and standards.
It is not uncommon for a single enterprise, such as a hospital, for example, to have several separate database systems to track medical records, patient biographical data, hospital bed utilization, vendors, and so forth. The same is true of the government agencies charged with monitoring local, state and national health. In each enterprise, different data processing systems might have been added at different times throughout the history of the enterprise and, therefore, represent differing generations of computer technology. Integration of these systems at the enterprise level is difficult enough; it would be impossible on any grander scale. This is a major impediment to surveillance, monitoring and real-time event processing in public health and bioterrorism. Similar difficulties in consolidating data from disparate databases and other sources give rise to parallel problems in border and port security, public and community safety, and government data integration.
An object of this invention is to provide improved methods and platforms for enterprise information integration (EII) and enterprise resource interoperability (ERI).
A related object is to provide such methods and platforms as can be applied across a range of fields, including public health & bioterrorism, border and port security, public and community safety, (state and local) government data integration, the travel & transportation industry, and the financial services industry, to name a few.
A further object of the invention is to provide apparatus for effecting ready installation and integration of the aforementioned methods and platforms in an enterprise.