In recent years, data centers have become increasingly widespread. One problem of conventional data centers is that the large number of components requires a correspondingly large number of data connections between those components. Conventional network architectures for data centers generally use Ethernet switches connected via Ethernet cables as data connections. This results in a complicated network structure, as will be explained in the following.
FIG. 1 shows an example of a conventional network architecture 100 of a data center having three layers (only layers 2 and 3 are shown). As can be seen in FIG. 1, in network layer 2, a plurality of Top of Rack (ToR) switches are connected to a plurality of switches S, which are in turn connected to aggregation switches AS. The aggregation switches AS are connected to access routers AR located in layer 3, which are in turn connected to core routers CR, also located in layer 3. The core routers CR are connected to an external communication network such as the Internet.
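The cabling effort of such a tiered architecture grows with the product of the sizes of adjacent layers. The following sketch illustrates this; the layer sizes and the full-mesh assumption (every device wired to every device in the next layer up) are hypothetical and chosen only for illustration, not taken from FIG. 1.

```python
# Hypothetical device counts per layer of a tiered topology like FIG. 1.
layers = {"ToR": 8, "S": 4, "AS": 2, "AR": 2, "CR": 2}

# Assumption: full mesh between adjacent layers, so the cable count
# between two adjacent layers is the product of their device counts.
names = list(layers)
links = sum(layers[a] * layers[b] for a, b in zip(names, names[1:]))
print(links)  # total inter-layer cables in this small example
```

Even for this small example the link count grows quickly as layers widen, which is the cabling problem described above.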
In this architecture, as data flows up the switch hierarchy (from the Top of Rack switches towards the Internet), data bottlenecks may occur, resulting in high latency. These bottlenecks can be removed by adding more hardware to the data center, i.e. at the expense of increased hardware costs (“overprovisioning”). The amount of overprovisioning can be specified by an “oversubscription ratio”, which essentially indicates the maximum data traffic demand (hardware resource demand) in the data center network relative to the hardware capacity actually provisioned.
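As a minimal sketch of the oversubscription ratio just described: at a given switch layer, it can be computed as the worst-case aggregate demand arriving from below divided by the capacity provisioned towards the layer above. The port counts and speeds used here are hypothetical examples, not values from the architecture of FIG. 1.

```python
def oversubscription_ratio(downlink_gbps: float, uplink_gbps: float) -> float:
    """Worst-case aggregate demand from below divided by capacity provisioned above."""
    return downlink_gbps / uplink_gbps

# Hypothetical ToR switch: 48 server ports at 10 Gbps, 4 uplinks at 40 Gbps.
ratio = oversubscription_ratio(48 * 10, 4 * 40)
print(f"{ratio}:1")  # prints "3.0:1"
```

A ratio of 1:1 would correspond to full provisioning (no bottleneck); ratios above 1:1 trade potential congestion against lower hardware cost.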
The architecture 100 shown in FIG. 1 requires a large number of interfaces and cables in order to connect the components of the data center with each other. In addition, the cables often cross each other. This causes high costs and effort for maintaining the data center.