In the field of communications network management, communications networks are made up of a collection of managed communications network equipment. Communications services are provisioned over the managed communications network equipment.
In a competitive marketplace, owing to recent explosive technological development, the network management and service provisioning task is complicated by many factors including: multiple communications network equipment vendors having multiple approaches to implementing the communications network equipment; a multitude of data transport technologies, with each vendor specializing in a sub-group of the multitude of data transport technologies; a multitude of network management and service provisioning protocols, with each vendor implementing only a sub-group of the multitude of network management and service provisioning protocols; a multitude of auxiliary network management and service provisioning equipment employing yet another multitude of network management and service provisioning technologies; etc.
Carriers and service providers of communications services face a large operational overhead in operating multi-vendor equipment, while at the same time necessarily employing multi-vendor equipment to mitigate the investment risk associated with the installed communications infrastructure.
Communications network equipment includes, but is not limited to: switching equipment, routers, bridges, access nodes providing a multiplexing function, remote access servers, distribution nodes providing a demultiplexing function, customer premises equipment, etc., with next generation communications equipment in development. Communications networks include data transport networks as well as circuit-switched networks.
With regards to communications network equipment, for example switching nodes schematically shown in FIG. 1, a vendor may choose to implement an integral device 110 having a switching processor and a group of ports 112, while another vendor may choose a customizable implementation of a switching node 120 including: a switching fabric, an equipment rack divided into shelves, each shelf 122 having slot connectors for connection with interface cards, each interface card 124 having at least one port 112. Although conceptually the two switching nodes 110 and 120 provide the same switching function, each implementation is adapted for a different environment: the former switching node 110 is better adapted to provide enterprise solutions as a private communications network node, perhaps being further adapted to enable access to public communications services; while the latter switching node 120 is better adapted for high data throughput in the core of public communications networks. Typically the former (110) implements a small number of data transport protocols, while for the latter (120), data transport protocols are implemented on interface cards 124 and/or ports 112, providing for a flexible deployment thereof. All communications network equipment is subject to design choices which are bound to differ from vendor to vendor.
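The structural contrast between the integral device 110 and the customizable rack-based node 120 can be illustrated with a minimal sketch. All class names here are hypothetical illustrations, not part of any vendor's actual equipment model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Port:
    # A port 112: the smallest addressable unit in either implementation.
    port_id: int

@dataclass
class IntegralSwitch:
    # Integral device 110: a switching processor with a flat group of ports.
    ports: List[Port] = field(default_factory=list)

@dataclass
class InterfaceCard:
    # Interface card 124: carries at least one port 112.
    ports: List[Port] = field(default_factory=list)

@dataclass
class Shelf:
    # Shelf 122: slot connectors accepting interface cards.
    slots: List[InterfaceCard] = field(default_factory=list)

@dataclass
class ModularSwitch:
    # Customizable node 120: a switching fabric plus a rack of shelves.
    shelves: List[Shelf] = field(default_factory=list)

    def all_ports(self) -> List[Port]:
        # Ports are reached only through the shelf/card hierarchy,
        # unlike the flat port list of the integral device.
        return [p for shelf in self.shelves
                  for card in shelf.slots
                  for p in card.ports]
```

The two classes expose the same conceptual switching inventory (a set of ports) through structurally different object models, which is precisely what complicates writing one management application against both.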
Data transport technologies include: electrical transmission of data via copper pairs, coaxial cable, etc.; optical transmission of data via optical cables, free space optical interconnects, etc.; wireless transmission of data via radio modems, microwave links, wireless Local Area Networking (LAN), etc.; with next generation data transport technologies under development.
Data transport protocols used to convey data between data transport equipment include: Internet Protocol (IP), Ethernet technologies, Token-Ring technologies, Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM), Synchronous Optical NETwork (SONET) transmission protocol, Frame Relay (FR), X.25, Time Division Multiplexing (TDM) transmission protocol, Packet-Over-SONET (POS), Multi-Protocol Label Switching (MPLS), etc., with next generation data transport protocols under development.
The physical communications network equipment alluded to above is part of a larger body of managed communications network entities enabling the provision of communications services. The communications network entities also include, but are not limited to: virtual routers, logical ports, logical interfaces, end-to-end (data) links, paths, virtual circuits, virtual paths, etc.
Network management and service provisioning enabling technologies include, but are not limited to, protocols: Simple Network Management Protocol (SNMP), Common Management Information Protocol (CMIP), Command Line Interface (CLI), etc.; as well as devices: special function servers, centralized databases, distributed databases, relational databases, directories, network management systems, etc., with next generation devices and technologies under development.
Network management and service provisioning solutions include Network Management Systems (NMS) 140 enabled via special purpose software applications coded to configure and control the above mentioned communications network entities. Such software applications include functionality, not limited to: inventory reporting, configuration management, statistics gathering, performance reporting, fault management, network surveillance, service provisioning, billing & accounting, security enforcement, etc.
It is a daunting task to provide network management and service provisioning solutions taking into account the permutations and combinations of the elements presented above. Prior art approaches to providing network management and service provisioning solutions include the coding of hundreds of software applications with knowledge of hundreds of data networking entities using tens of data transmission and network management protocols. Some prior art solutions attempt to code all-encompassing large monolithic network management and service provisioning software applications.
Coding, deploying, maintaining, and extending such software applications for network management and service provisioning has been and continues to be an enormous undertaking as well as an extremely complex procedure. Such software applications require a large number of man-hours to create, frequently are delivered with numerous problems, and are difficult to modify and/or support. The difficulty in creating and supporting large applications is primarily due to the inability of existing software development paradigms to provide a simplification of the software development process. In accordance with current coding paradigms, the complexity of the software applications has been shown to increase as an increasing function of the number of different operations that are expected to be performed. Large programming efforts suffer in terms of performance, reliability, cost of development, and length of development cycles.
In the field of data network management, an attempt towards automating configuration and control tasks is being made through the establishment of the SNMP protocol mentioned above. Typically data network elements have an element management interface complying with the SNMP protocol. Although the SNMP protocol has been established, there are data network elements which do not support the SNMP protocol either by design or because these devices have been deployed prior to the standardization of the SNMP protocol. Of the data network elements which do support the SNMP protocol, some do not support all SNMP capabilities.
The ability to configure data network elements using a Command Line Interface (CLI) via a CLI element management interface is more common. Every communications network entity has configurable operational parameters associated therewith. Managed communications network entities are responsive to commands having associated attributes. The CLI commands are typically vendor specific. The Command Line Interface is a text-based human-machine mode of interaction responsive to issued text-based CLI commands and is typically complemented by textual information feedback. CLI interfaces are used by an analyst to manually enter CLI commands to configure and control a single data network element for management thereof and in provisioning of communications network services therethrough. The entry of CLI commands is considered to be a time-consuming and error-prone procedure and is therefore undesirable. Moreover, human-interaction-based response to communications network failures is inadequate given the ever increasing amount of throughput conveyed via the communications network equipment. The industry has been searching for methods to automate CLI command based configuration and control tasks.
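The command-and-feedback interaction pattern described above can be sketched as follows. This is a minimal illustration with hypothetical class and command names; a real session would run over telnet or SSH against a vendor's actual CLI vocabulary, and the responses here are simulated:

```python
class CliSession:
    """Stand-in for a text-based CLI session with one network element.

    Real equipment would return vendor-specific textual feedback for each
    command; here the feedback is simulated to show the interaction shape.
    """
    def __init__(self):
        self.log = []

    def send(self, command: str) -> str:
        # Record the issued command and return simulated textual feedback.
        self.log.append(command)
        return f"OK: {command}"

def configure_interface(session: CliSession, interface_id: str, address: str):
    # Automates the command sequence an analyst would otherwise type by
    # hand, one text command at a time (hypothetical Cisco-like syntax).
    responses = []
    responses.append(session.send(f"interface {interface_id}"))
    responses.append(session.send(f"ip address {address}"))
    responses.append(session.send("no shutdown"))
    return responses
```

Even this trivial automation removes the per-keystroke error opportunity of manual entry, which is the motivation the paragraph above describes.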
Various data network element manufacturers have provided interactive software applications to configure a data network element through the associated CLI interface. Such element management software applications tend to be proprietary and tend to address the configuration of one particular data network element type as the equipment vendor saw fit at the time of its development. Typically, such proprietary solutions are non-extensible and do not lend themselves to an integrated management of data network resources, rendering their usefulness very limited.
Known attempts at configuration and control of data network elements include a script-based technique proposed by CISCO Systems Inc. The methods used include the manual creation of batch-file scripts from CLI commands, where each script addresses a particular change in the configuration of a particular data network element. Such a CLI command script is downloaded to the particular data network element and issued for execution to carry out the desired changes. This attempt relies on an intended goal according to which all CISCO data network elements use a common CLI command syntax, also referred to as CLI vocabulary and grammar. Such solutions tend to be limited to a particular vendor's equipment, i.e. CISCO routers. Furthermore, such scripts tend to be issued with the expectation that the desired change is carried out.
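The batch-script approach can be sketched as follows. The function name and command templates are hypothetical; an actual script would be written in the vendor's real CLI vocabulary:

```python
def build_batch_script(changes):
    """Render (command_template, params) pairs into a batch script of CLI
    commands, one per line, ready to be downloaded to a network element.

    Note the fire-and-forget character criticized above: the script is
    emitted with no per-command error checking, so it is issued with the
    bare expectation that every desired change is carried out.
    """
    lines = []
    for template, params in changes:
        lines.append(template.format(**params))
    return "\n".join(lines)
```

Because each script hard-codes a particular change against a particular element's command syntax, any change to the CLI vocabulary or grammar invalidates the script, which is the maintenance burden the following paragraph describes.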
From time-to-time, as data network elements are updated, the update typically also introduces changes to the CLI vocabulary and/or grammar. The use of complicated scripts tends to hinder the configuration and control of the data network elements as the scripts also have to be updated to reflect changes in the CLI vocabulary and/or grammar. Even small changes to CLI command attributes necessitate changes to such scripts.
Other data network management software vendors have taken other approaches in implementing network management. Service Activator by Orchestream Holdings Plc. makes use of device driver software for CISCO data network element specific configuration. Each device driver includes specific application code for managing a specific data network element type. The device driver code is used to extract a current state of a particular data network element, compare the currently reported state against a virtual state held by the Service Activator software, generate a group of commands which are necessary to synchronize the virtual and real states, and send the group of commands to be executed by the data network element. The process iterates until the reported state matches the virtual state. This attempt does not address errors generated in issuing commands; rather, it derives alarms from discrepancies between the current state and the virtual state. This attempt makes use of hard-coded device drivers which contain machine-readable object code unintelligible to an analyst attempting to debug such a device driver.
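The iterate-until-synchronized driver loop described above can be sketched as follows. All helper and class names are hypothetical, and the in-memory `FakeDevice` stands in for a real network element; this illustrates the loop's shape, not Orchestream's actual implementation:

```python
def diff_to_commands(current: dict, desired: dict):
    # Emit one hypothetical "set" command per attribute that is missing
    # from, or differs in, the device's currently reported state.
    return [f"set {key} {value}"
            for key, value in desired.items()
            if current.get(key) != value]

def synchronize(device, virtual_state: dict, max_iterations: int = 10) -> bool:
    """Driver loop: read the device's current state, diff it against the
    virtual state, issue the converging commands, and repeat until the
    reported state matches the virtual state (or iterations run out).

    Note that per-command errors are never inspected; a persistent
    discrepancy would surface only as a state-mismatch alarm, mirroring
    the limitation noted above."""
    for _ in range(max_iterations):
        current = device.read_state()
        if current == virtual_state:
            return True
        device.execute(diff_to_commands(current, virtual_state))
    return False

class FakeDevice:
    # In-memory stand-in for a managed network element.
    def __init__(self, state=None):
        self.state = dict(state or {})

    def read_state(self) -> dict:
        return dict(self.state)

    def execute(self, commands):
        for cmd in commands:
            _, key, value = cmd.split(" ", 2)
            self.state[key] = value
```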
As communications network elements are updated, the use of drivers tends to hinder network element configuration and control as the drivers also have to be updated, re-compiled and re-deployed, to reflect changes in the CLI vocabulary and/or grammar. Even small changes to CLI command attributes necessitate updating such device drivers.
These efforts are all laudable; however, the productivity of the development and maintenance of such complex network management and service provisioning solutions suffers. In particular, support for new data network entities, updated CLI vocabularies and/or CLI grammar, requires re-compilation and re-deployment of such solutions. There is always a risk of incorporating further errors in existing code when dealing with such solutions, thereby requiring extensive regression testing to verify the integrity of the existing code. Even small changes to CLI command attributes necessitate updating such solutions.
Developments in the art also include co-pending commonly assigned U.S. patent application Ser. No. 10/115,900, filed Apr. 5, 2002, entitled “Command Line Interface Processor” and corresponding Canadian Patent Application 2,365,436, filed Dec. 19, 2001 which describe a CLI framework (220) adapted to create CLI command sequences for a particular vendor's equipment in accordance with the vendor's proprietary CLI command syntax and are incorporated herein by reference.
As presented in FIG. 2, an analyst provides input via a network management and service provisioning software application 210 executing on the NMS 140. The software applications 210 are shielded from intricacies of enabling technologies by interfacing 218 with a Managed Object Layer (MOL) 208 to request implementation of desired generic actions 262. The requests are event notified 500 to the CLI framework 220 which builds vendor-specific CLI commands to be sent to appropriate communications network elements (nodes). A mapping function 270 is used in shielding the software applications 210 from the intricacies of the CLI enabling technology.
Although network management and service provisioning concepts transcend vendor equipment, knowledge regarding vendor specific CLI command attribute dependencies is held in the MOL 208 for each managed communications network entity supported to enable the mapping function 270. Take the provisioning concept example of using CLI commands to configure a port. For a particular vendor's node, the building of the required CLI command may require the specification of two attributes: interface id, and network address; while for another vendor's node, the building of the required CLI command may require the specification of additional parameters such as interface card and/or shelf specification. In accordance with this solution, in building CLI commands specific to a particular vendor's CLI syntax, the mapping function 270 must have knowledge regarding which attributes are to be used in building CLI commands in order to provide them to the CLI framework 220. This knowledge is hard-coded in the MOL 208. Since the CLI attribute dependencies are subject to change, the MOL 208 must also be updated, recompiled and re-deployed.
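The port-configuration example above can be sketched as a hard-coded attribute-dependency table of the kind held in the MOL. The vendor names, action names, and table layout are all hypothetical illustrations:

```python
# For the same generic "configure port" action, each vendor's CLI requires
# a different attribute set; one vendor needs only interface id and network
# address, another also needs shelf and interface-card specification.
REQUIRED_ATTRIBUTES = {
    ("vendor_a", "configure_port"): ["interface_id", "network_address"],
    ("vendor_b", "configure_port"): ["shelf", "interface_card",
                                     "interface_id", "network_address"],
}

def map_generic_action(vendor: str, action: str, attributes: dict) -> dict:
    """Select the attribute subset this vendor's CLI needs for the generic
    action, failing if a vendor-required attribute was not supplied."""
    required = REQUIRED_ATTRIBUTES[(vendor, action)]
    missing = [name for name in required if name not in attributes]
    if missing:
        raise ValueError(f"{vendor} requires attributes: {missing}")
    return {name: attributes[name] for name in required}
```

Because the table is hard-coded, any change to a vendor's attribute dependencies forces the table (and hence the MOL) to be updated, recompiled and re-deployed, which is the shortcoming the passage above identifies.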
There therefore is a need to devise improved methods of software application code development and maintenance taking into account the above mentioned complexities.