Whilst the origins of the Internet reach back to US Government research in the 1960s into robust, fault-tolerant communications via computer networks, it was only in the early to mid-1980s that funding of a new U.S. backbone, together with private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies and the merging of networks globally. By the 1990s the commercialization of what was now an international network, together with the reducing cost and increasing performance of microprocessors, resulted in its popularization and incorporation into virtually every aspect of modern human life. As of June 2012, more than 2.4 billion people, over a third of the world's human population, had used the services of the Internet, representing approximately a hundredfold increase since 1995.
Over the same period the Internet has grown to change not only the way individuals and businesses obtain and exploit information globally, but also how information is stored and moved, and how long it persists within the Internet. Over this period geographically distributed data centers have become the facilities that store and distribute the data on the Internet, replacing libraries as the repositories of human knowledge. With an estimated 100 billion plus web pages on over 100 million websites, and over 2 billion users accessing these websites, including a growing amount of high-bandwidth video in addition to data, the volume of data uploaded and downloaded every second on the Internet is staggering. At present the compound annual growth rate (CAGR) for global IP traffic between users is between 40%, based upon Cisco's analysis (see http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-481360_ns827_Networking_Solutions_White_Paper.html), and 50%, based upon the University of Minnesota's Minnesota Internet Traffic Studies (MINTS) analysis. By 2016 this user traffic is expected to exceed 100 exabytes per month, over 100,000,000 terabytes per month, or over 42,000 gigabytes per second. However, peak demand will be considerably higher, with projections of over 600 million users streaming Internet high-definition video simultaneously at peak times.
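The unit conversions behind these projections can be sketched as follows; this is purely an illustrative sanity check, assuming decimal (SI) units and a 30-day month, with 110 EB/month taken as a representative "over 100 exabytes per month" figure:

```python
# Illustrative check of the projected 2016 traffic figures.
# Assumptions (not from the text): a 30-day month and 110 EB/month
# as a representative value exceeding 100 EB/month.

EB_PER_MONTH = 110            # projected global IP traffic, exabytes/month
TB_PER_EB = 1_000_000         # 1 exabyte = 10^6 terabytes (decimal units)
SECONDS_PER_MONTH = 30 * 24 * 3600

tb_per_month = EB_PER_MONTH * TB_PER_EB
gb_per_second = tb_per_month * 1_000 / SECONDS_PER_MONTH

print(f"{tb_per_month:,} TB/month")   # over 100,000,000 TB/month
print(f"{gb_per_second:,.0f} GB/s")   # over 42,000 GB/s
```

The result, roughly 42,400 GB/s, matches the per-second figure quoted above.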
All of this data flowing to and from users passes through data centers, and accordingly it also flows between data centers and within data centers, so that these user IP traffic flows must be multiplied many times to establish total IP traffic flows. Data centers are filled with tall racks of electronics surrounded by cable racks, where data is typically stored on large, fast hard drives. Servers are computers that take requests and move the data, using fast switches to access the right hard drives, while routers connect the servers to the Internet. At the same time, as applications such as cloud computing grow, computing platforms are no longer stand-alone systems but homogeneous interconnected computing infrastructures hosted in massive data centers known as warehouse scale computers (WSCs), which provide ubiquitous interconnected platforms as a shared resource for many distributed services with requirements that differ from those of the traditional racks/servers of data centers.
Today, whilst a cost-effective yet scalable way of interconnecting data centers and WSCs, both internally and to each other, is required, most data center and WSC applications are provided free of charge, so that the operators of this infrastructure face the challenge of meeting exponentially increasing demands for bandwidth without dramatically increasing the cost and power consumption of their infrastructure. At the same time, consumers' expectations of download/upload speeds and latency in accessing content apply additional pressure. Accordingly, in a manner similar to the backbone and LAN/WAN evolutions which support consumers' demands for download/upload speeds and latency, photonic technology is advancing into data centers and WSCs. Currently photonic input/output (I/O) is deployed in what is generally referred to as “to the edge” applications; in other words, photonic technology is currently making its breakthrough at the blade edge interconnect. A blade server (known commonly as a blade) is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy.
Photonics to the edge today means photonic point-to-point connections between blades and between servers, replacing copper with optical fiber. Whilst early deployments employed discrete photonic transmitters and receivers, advances in photonic integrated circuits (PICs) have allowed, for example, the development of a CMOS optoelectronic technology platform providing a 650 mW, 4×10 Gb/s, 0.13 μm silicon-on-insulator integrated transceiver chip, co-packaged with an externally modulated laser, to enable high density data interconnects at <$1 per Gbps, see Narasimha et al. in “An Ultra Low Power CMOS Photonics Technology Platform for H/S Optoelectronic Transceivers at Less than $1 per Gbps” (OFC Conference, Paper OMV-4, 2010, ISBN 978-1-55752-885-8). Such a CMOS implementation allows the footprint to be reduced to the point where the transceiver (and hence the signal conversion) sits within the cable connector to the server.
Current photonic I/O developments are seeking to bring the opto-electronic (OE)/electro-optic (EO) interfaces closer to the microprocessors themselves, eliminating copper interconnects and their associated power requirements and parasitics. An example of this is the Reflex Photonics LightABLE module, providing twenty-four 10 Gb/s optical channels employing multimode fiber and Vertical Cavity Surface Emitting Lasers (VCSELs) to provide configurable transmitter/receiver (Tx/Rx) combinations interfacing to parallel optical fiber ribbons for point-to-point and point-to-multipoint communications, see for example Liboiron-Ladouceur et al. in “Optically Interconnected High-Performance Servers” (SPIE 8412, Photonics North, 2012).
However, this still leaves microprocessors interconnected by point-to-point photonic interconnections external to the microprocessors, such that within the prior art the next logical step is defined as the monolithic integration of CMOS based PICs with CMOS microprocessors and the establishment of optically interconnected Systems on a Chip (SOC), whereby physically large but functionally simple optical functions, such as an Optical Interconnection Network (OIN), may be replaced by a small PIC. However, despite being able to replace, for example, what was in 2008 a 12 port OIN exploiting semiconductor optical amplifiers and occupying a few million square millimeters, see Liboiron-Ladouceur et al. in “The Data Vortex Optical Packet Switched Interconnection Network” (J. Lightwave Tech., Vol. 26, No. 13, 2008), with a few square millimeters of silicon, see Mishafiei et al. in “A Silicon Photonic Switch for Optical Interconnects” (Photonics North, June 2013), we are still left with the fundamental physical limitation of diffraction for optical signals of the order of a micron in wavelength, such that photonic devices cannot scale to the dimensions of 40 nm, 22 nm, and 14 nm CMOS electronics.
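The scaling mismatch above can be made concrete with a rough calculation; this is only an illustrative sketch, assuming a typical 1550 nm telecom wavelength, a silicon refractive index of approximately 3.48, and the conventional λ/(2n) estimate for the smallest confinable optical feature:

```python
# Rough sketch of the diffraction-limit argument: photonic feature sizes
# are bounded near the optical wavelength, far above CMOS node dimensions.
# Assumptions (illustrative): 1550 nm operation, n(Si) ~ 3.48, limit ~ lambda/(2n).

WAVELENGTH_NM = 1550          # typical silicon photonics operating wavelength
SILICON_INDEX = 3.48          # refractive index of silicon near 1550 nm
CMOS_NODES_NM = [40, 22, 14]

# Approximate smallest optical feature size, lambda / (2n)
diffraction_limit_nm = WAVELENGTH_NM / (2 * SILICON_INDEX)

for node in CMOS_NODES_NM:
    ratio = diffraction_limit_nm / node
    print(f"{node} nm node: photonic feature limit is ~{ratio:.0f}x larger")
```

Even with the high index of silicon, the limit is of the order of 200 nm, an order of magnitude above a 14 nm CMOS feature, which is why monolithic scaling of photonics alongside advanced CMOS nodes is not straightforward.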
So whilst, logically, optics will evolve closer and closer to the processing element, and the prior art developments and huge investments in silicon photonics will continue, the initial idea that this integration will lead to monolithically integrated CMOS based processing elements and photonics is actually not that obvious. Rather, technical considerations lead to a different route, namely replacing the computer hubs/electrical bridges interconnecting the multiple core logic chipset elements with a photonic bridge. In this manner, high risk chip-to-chip photonic point-to-point links are replaced with photonic SOCs that leverage photonics' bandwidth density attribute rather than its bandwidth-distance attribute.
Accordingly, it would be beneficial to provide CMOS compatible SOC photonic bridges supporting OE and EO interfaces with space switching interconnection, such that throughput limiting state-of-the-art electronic bridges, such as for example the VIA Apollo P4X266 “North Bridge” and VIA VT8233 “South Bridge” providing 64-bit, 266 MHz bus connectivity, are replaced by photonic bridges supporting 16 channels at 40 Gb/s.
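The aggregate throughput gain implied by these figures can be sketched as follows; this is an illustrative comparison only, taking the raw bus figures cited above without accounting for protocol overhead or encoding:

```python
# Illustrative raw-throughput comparison: a 64-bit, 266 MHz electronic
# bridge bus versus a photonic bridge with 16 channels at 40 Gb/s.
# Overheads (protocol, encoding) are ignored for simplicity.

bus_width_bits = 64
bus_clock_hz = 266e6
electronic_gbps = bus_width_bits * bus_clock_hz / 1e9   # ~17 Gb/s

channels = 16
channel_rate_gbps = 40
photonic_gbps = channels * channel_rate_gbps            # 640 Gb/s

print(f"electronic bridge: {electronic_gbps:.1f} Gb/s")
print(f"photonic bridge:   {photonic_gbps} Gb/s")
print(f"ratio: ~{photonic_gbps / electronic_gbps:.0f}x")
```

On these raw numbers the photonic bridge offers roughly 640 Gb/s against about 17 Gb/s, an improvement of well over an order of magnitude.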
Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.