A network-on-chip (NoC) is a communication subsystem on an integrated circuit (IC), such as between logic cores in a system-on-chip. NoC technology applies networking methods to on-chip communication and provides improvements over conventional bus and crossbar interconnections. In a NoC system, modules such as processor cores and memories exchange data using a point-to-point data link subsystem that allows messages to be relayed from any source module to any destination module over several links, with routing decisions made at intermediate switches. NoC systems and similar VLSI-based chips have typically utilized mesh network architectures in which each node relays data for the network. Ring networks generally provide lower-power alternatives to mesh networks for on-chip communication, reducing the need for complex buffering and routing.
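The simplicity of ring routing relative to a buffered mesh router can be sketched as follows. This is an illustrative model only, not taken from any specification: on a bidirectional ring, the entire routing decision at a source node reduces to choosing the shorter of the two directions.

```python
# Illustrative sketch (assumed model, not from any standard): on a
# bidirectional ring of n nodes, routing a message is just a choice of
# direction plus a hop count -- no per-hop buffered routing tables.
def ring_hops(src: int, dst: int, n: int) -> tuple[str, int]:
    """Return (direction, hops) for the shorter way around the ring."""
    cw = (dst - src) % n          # hops travelling clockwise
    ccw = (src - dst) % n         # hops travelling counter-clockwise
    return ("cw", cw) if cw <= ccw else ("ccw", ccw)

# On an 8-node ring, node 1 reaches node 6 in 3 hops counter-clockwise,
# versus 5 hops the other way.
print(ring_hops(1, 6, 8))  # -> ('ccw', 3)
```

Each intermediate node then only relays the message onward in the chosen direction, which is why ring interconnects can dispense with the complex switch logic a mesh requires.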
To achieve higher densities, multiple-die stacking techniques have been developed that allow large modules to be manufactured using cheaper, lower-density wafers. Three-dimensional (3D) stacked memory modules typically contain two or more RAM chips stacked on top of each other and use through-silicon vias (TSVs) or other vertical electrical connections passing completely through the wafer to interconnect the stack. A new memory interconnect standard, the Hybrid Memory Cube (HMC) specification, defines an inter-chip communication protocol whereby certain I/O channels on the chip interconnect can communicate with other networked memory stacks. However, current multi-3D-stack memory network standards such as HMC use predefined, dedicated I/O links for pass-through traffic routed to networked memory stacks, which wastes I/O bandwidth when those channels are idle. Networks-on-chip likewise rely on complex, energy-hungry routers with buffering to achieve flexible communication.
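The cost of dedicating links to pass-through traffic can be quantified with simple arithmetic. All figures below are hypothetical assumptions for illustration, not values from the HMC specification:

```python
# Hypothetical back-of-the-envelope calculation (numbers are assumptions,
# not from the HMC spec): links reserved for pass-through traffic
# contribute nothing to the local stack when no traffic is forwarded.
def stranded_bandwidth(dedicated_links: int, gbps_per_link: float,
                       idle_fraction: float) -> float:
    """GB/s of capacity wasted, on average, by idle dedicated links."""
    return dedicated_links * gbps_per_link * idle_fraction

# Assumed example: 4 links reserved for pass-through at 16 GB/s each,
# idle 75% of the time.
print(stranded_bandwidth(4, 16.0, 0.75))  # -> 48.0 (GB/s stranded)
```

Because the reservation is static, this stranded capacity cannot be reclaimed for local traffic even when the networked stacks are quiescent.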
It would be advantageous to provide a memory interconnect architecture based on ring principles that can be more energy efficient than NoC topologies and more bandwidth-efficient than pass-through channel routing (as in the HMC specification). When applied within die-stacked devices, a ring interconnect maximizes internal data transfer bandwidth on a shared bus, facilitating inter-die memory transfers concurrent with external data transfers and thereby multiplying the available bandwidth. Also, because the through-silicon vias (TSVs) in 3D die stacks can carry significant current and have low resistance, they can potentially be clocked faster than off-device interconnects, allowing a high-frequency common ring to serve as the inter-die interconnect.
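The bandwidth-multiplying effect of overlapping internal ring transfers with external link traffic can be sketched numerically. Every figure here (lane counts, clock rates) is an assumption chosen for illustration, not a value from any device or specification:

```python
# Hypothetical sketch (all figures are assumptions): a wide TSV ring
# clocked at a modest rate can rival narrow, fast off-device links, and
# because the two paths operate concurrently their bandwidths add.
def bus_gbps(lanes: int, clock_ghz: float) -> float:
    """Raw bandwidth in GB/s for a given bus width and clock rate."""
    return lanes * clock_ghz / 8  # gigabits/s -> gigabytes/s

internal = bus_gbps(128, 2.0)   # assumed 128 TSV lanes at 2 GHz
external = bus_gbps(16, 10.0)   # assumed 16 external lanes at 10 GHz
print(internal, external, internal + external)
# Inter-die transfers ride the ring while external links serve the host,
# so the aggregate rate is the sum of the two when both are busy.
```

The point of the sketch is only that concurrent internal and external paths compound, whereas routing inter-die traffic over the external links would force the two to share one budget.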
The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches.