Typical telecommunications systems include telecommunications data centers that have a large number of optical and electrical cable connections that operably connect various types of network equipment and components. Examples of network equipment and components include electrically powered (active) units such as optical transceivers, servers, switches and routers, and unpowered (passive) units such as fanout boxes and patch panels (collectively, “network equipment”). This network equipment is often installed within cabinets in standard (e.g., 19″) equipment racks. Each piece of equipment typically provides one or more adapters at which optical or electrical patch cables can be physically connected to the equipment. These patch cables are generally routed to other network equipment in the same cabinet or in another cabinet, and that equipment is in turn connected to still other network equipment.
A common problem in telecommunications networks is determining the most current configuration of all the optical and electrical links among all the network equipment. The “physical layer” configuration can be completely determined if the physical locations of all connected patch cable connectors on installed network equipment are known. Information about the physical location and orientation of the adapters and their parent patch panels in data center cabinets is presently manually recorded and added to the network management software database after the adapters and patch panels are installed. However, this process is labor-intensive and prone to errors. Additionally, any changes made to the physical configuration of any network equipment must be followed up with corresponding changes to the network management software database, which delays providing the most up-to-date information about the network configuration. Furthermore, errors from manual recording and entry of configuration data tend to accumulate over time, reducing the trustworthiness of the network management software database.
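The kind of physical-layer record that such a network management database must maintain can be sketched as follows. This is a minimal, illustrative data model, not taken from any particular product; all class and field names (e.g., `AdapterLocation`, `PatchLink`, the rack and panel identifiers) are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical record of where one patch-cable connector lands:
# rack, rack-unit position of the parent panel, panel identity, and port.
@dataclass(frozen=True)
class AdapterLocation:
    rack_id: str        # e.g., "DC1-ROW3-RACK07" (illustrative naming)
    rack_unit: int      # 1U position of the parent panel in the 19-inch rack
    panel_id: str       # parent patch panel or equipment identifier
    adapter_port: int   # adapter/port number on that panel

# A physical-layer link is fully determined by its two connector locations.
@dataclass(frozen=True)
class PatchLink:
    a_end: AdapterLocation
    b_end: AdapterLocation
    cable_id: str

# One manually recorded link; a transcription error in any field here
# persists in the database until a technician notices and corrects it.
link = PatchLink(
    a_end=AdapterLocation("DC1-ROW3-RACK07", 12, "PP-0007", 24),
    b_end=AdapterLocation("DC1-ROW3-RACK09", 40, "SW-0031", 3),
    cable_id="FIBER-123456",
)
print(link.a_end.rack_id)  # → DC1-ROW3-RACK07
```

Every field in such a record is today populated by hand after installation, which is the labor-intensive, error-prone step described above.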
Another problem in telecommunications data center management is determining or otherwise extracting identity and diagnostic information from network equipment, particularly for equipment that resides “upstream” of the physical layer. For example, small form-factor pluggable (SFP) optical transceivers (“transceivers”) are used extensively in telecommunications networks. SFP transceivers convert optical signals to electrical signals (O/E conversion) and vice versa (E/O conversion). Such transceivers provide an interface between electronics-based devices (e.g., switches, routers, server blades, etc.) and fiber optic cables (e.g., jumper cables). Likewise, SFP transceivers provide an interface between optical devices (e.g., light sources) and electrical devices and components (e.g., electrical cables, detectors, etc.).
SFP transceivers have a number of important operational (diagnostic) parameters such as the data rate (e.g., 4.25 Gb/s, 10 Gb/s, etc.), temperature, current, voltage, bit-error rate, security status, connectivity information/status, etc. SFP transceivers also have a number of important identity parameters, such as manufacturer, serial number, location, install date, etc. Consequently, SFP transceivers must be monitored by field technicians, who obtain identity and diagnostic information about the transceivers in order to assess network status and to diagnose network problems.
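Several of these diagnostic parameters are exposed by the transceiver itself through the digital diagnostics interface defined in the SFF-8472 specification, which places real-time measurement words at fixed byte offsets of the module's A2h memory page. The sketch below decodes those words from a raw page dump, assuming an internally calibrated module; the offsets and scale factors follow the commonly published SFF-8472 layout, and the register contents in the example are synthetic, not read from real hardware:

```python
import struct

def decode_sfp_diagnostics(a2h: bytes) -> dict:
    """Decode real-time diagnostic words from an SFP module's A2h page.

    Per the SFF-8472 layout, bytes 96-105 hold: temperature as a signed
    16-bit value in units of 1/256 deg C, then supply voltage, TX bias
    current, TX optical power, and RX optical power as unsigned 16-bit
    values in units of 100 uV, 2 uA, 0.1 uW, and 0.1 uW respectively.
    Assumes an internally calibrated module (big-endian words).
    """
    temp, vcc, tx_bias, tx_pwr, rx_pwr = struct.unpack_from(">hHHHH", a2h, 96)
    return {
        "temperature_C": temp / 256.0,
        "vcc_V": vcc * 100e-6,
        "tx_bias_mA": tx_bias * 2e-3,
        "tx_power_mW": tx_pwr * 1e-4,
        "rx_power_mW": rx_pwr * 1e-4,
    }

# Synthetic A2h page: 40 deg C, 3.3 V, 7 mA bias, 0.5 mW TX, 0.4 mW RX.
page = bytearray(256)
struct.pack_into(">hHHHH", page, 96, 40 * 256, 33000, 3500, 5000, 4000)
diag = decode_sfp_diagnostics(bytes(page))
print(diag["temperature_C"])  # → 40.0
```

In practice a field tool would obtain the A2h page over the module's two-wire (I2C) interface; the decoding step itself is independent of how the bytes are fetched.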
In addition to SFP transceiver identity and diagnostic information, it would also be desirable to obtain like information from the electronics equipment to which the SFP transceivers are connected or by which they are hosted, such as MAC address, IP address, and data from other network layers. Such information resides “upstream” of the physical layer and so is not otherwise readily accessible to field technicians who monitor the physical layer.