An enterprise network is defined as a network that supports the communications of a relatively large organization, such as a business or a government agency. Generally, an enterprise network involves communication equipment at multiple user (enterprise) sites interconnected through at least one common carrier network. This equipment may be owned and operated by either the user or by a common carrier.
In the United States, communication networks are geographically subdivided into different local access transport areas (LATAs). Communications within a LATA (intra-LATA communications) are provided by a local exchange carrier (LEC). Communications between users in different LATAs (inter-LATA communications) generally involve handoffs between a LEC in each LATA and an inter-exchange carrier (IXC). FIG. 1 illustrates communications between enterprise nodes in different LATAs. Each enterprise node is connected to the LEC serving its local area by an access line. End-to-end connections that pass through the LECs and IXC are established between the enterprise nodes.
Networks of certain large common carriers in the U.S. include a number of LEC networks and an IXC network. Due to the size and complexity of these common carrier networks, changes to the infrastructure of the network are very difficult to implement. Similarly, the operation and support of a large common carrier network are very complex, and the current network operation and support systems tend to be inflexible. Consequently, changes to the infrastructure and/or to the operation and support of a common carrier network that are required by enterprise networks are difficult and costly to implement.
Nevertheless, common carrier networks provide communication services for enterprise users. These services involve equipment at a user premises, access lines, and transport through the common carrier networks. Currently, communication services for enterprises are often provided on a piecemeal basis, with different user premises equipment and different modes of transport for different services.
FIG. 2 is an example schematic of an enterprise network. FIG. 2 shows a number of edge nodes connected through common carrier networks to a data center. The data center contains a centralized database with associated storage devices such as disks, servers, and networking equipment. The centralized database contains the critical data that the enterprise needs to carry out its business. In the process of performing the day-to-day business transactions of the enterprise, the edge nodes access and contribute to the contents of the centralized database. Following the completion of a transaction, any additions and changes are stored in the centralized database.
While there is a perceived advantage to centralization, there is also a potentially critical vulnerability associated with the approach shown in FIG. 2. If there is a failure at the data center, then operations for the entire enterprise can be disrupted. Redundancy at a single physical location is only partially effective in preventing this disruption. Factors outside of the control of the data center, including natural and man-made disasters, make it clear that more comprehensive protection mechanisms are needed, particularly mechanisms for protecting critical enterprise data and communications among enterprise nodes.
Remote disk mirroring, which is illustrated by FIG. 3, performs the same write operations on two disk systems, one at the primary data center, and the other at a secondary data center. To ensure survivability in the event of a failure related to physical location, the secondary data center is typically at a different geographic location than the primary data center. The two data centers are connected through a common carrier network forming part of a pipeline between the primary data center and the secondary data center. If the two data centers are geographically proximate to one another, then it may be practical to perform write operations on the local disk system (at the primary data center) and on the remote disk system (at the secondary data center) nearly simultaneously, which is referred to as synchronous disk mirroring. If the data centers are geographically separated, then it becomes impractical to perform write operations on the local and remote disk systems at approximately the same time. In this case, it becomes more practical to buffer data in the pipeline and to allow the write operations at the remote disk to lag behind the write operations on the local disk by a significant amount of time. This is referred to as asynchronous disk mirroring.
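The distinction between the two mirroring modes described above can be sketched as follows. This is a minimal illustrative model, not any particular vendor's implementation: the `MirroredDisk` class, its in-memory "disks," and the queue standing in for the inter-site pipeline are all hypothetical simplifications.

```python
import queue
import threading

class MirroredDisk:
    """Illustrative sketch of synchronous vs. asynchronous disk mirroring.
    Lists stand in for the local and remote disk systems; a queue stands
    in for the pipeline through the common carrier network."""

    def __init__(self, mode="synchronous"):
        self.mode = mode
        self.local = []                 # disk system at the primary data center
        self.remote = []                # disk system at the secondary data center
        self.pipeline = queue.Queue()   # buffer between the data centers
        if mode == "asynchronous":
            # A background thread drains the pipeline, so remote writes
            # may lag the local writes by a significant amount of time.
            self._drainer = threading.Thread(target=self._drain, daemon=True)
            self._drainer.start()

    def write(self, block):
        self.local.append(block)        # write on the local disk system
        if self.mode == "synchronous":
            # Remote write is performed nearly simultaneously, before
            # the write operation is acknowledged as complete.
            self.remote.append(block)
        else:
            # Asynchronous: data is buffered in the pipeline and the
            # remote write lags behind the local write.
            self.pipeline.put(block)

    def _drain(self):
        while True:
            block = self.pipeline.get()
            self.remote.append(block)
            self.pipeline.task_done()

# Synchronous mirroring: both copies agree after every write.
sync = MirroredDisk("synchronous")
sync.write(b"txn-1")
assert sync.local == sync.remote

# Asynchronous mirroring: the remote copy catches up only after the
# pipeline is drained; data still in the pipeline would be lost if a
# disaster struck the primary site at that moment.
async_mirror = MirroredDisk("asynchronous")
async_mirror.write(b"txn-1")
async_mirror.pipeline.join()            # wait for the lagging remote write
assert async_mirror.local == async_mirror.remote
```

The sketch makes the trade-off discussed below concrete: in the asynchronous case, any blocks still sitting in `pipeline` at the moment of a primary-site failure never reach the secondary data center.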
With both synchronous and asynchronous disk mirroring, certain types of disasters or system failures may result in the failure of the primary data center and the disruption of business operations. With asynchronous mirroring, a disaster may cause a significant loss of critical data still in transit within the pipeline to the secondary data center. With synchronous mirroring, the loss of data is minimized if the disaster is localized. However, if the secondary data center is geographically close to the primary data center, then it may be affected by the same disaster. For example, a weather-related disaster, such as a hurricane, may adversely affect both the primary and secondary data centers if they are geographically proximate. Preserving critical data is necessary to ensure the ability to recover following a disaster, but it is not sufficient to ensure that business operations will not be disrupted. To enable business continuity following a disaster that affects the primary data center, a method must be provided for enterprise nodes to rapidly access the data at the secondary data center and to process business transactions.
The applicant has recognized the types of problems associated with existing disaster recovery and business continuity mechanisms and has developed an approach that enhances these methods.