The invention relates generally to the design of memories and more specifically to the computer-based design of memories of various sizes and configurations.
With today's sub-micron CMOS technology, it is possible to put millions of transistors on a single chip. This has made the realization of systems on a chip (SOC) possible, but has also significantly increased the complexity of VLSI design. Design automation has become very important to efficiently manage SOC realizations. As most SOCs need various types of memories, the need for generator-based (or compiler-based) memories is rapidly growing to reduce time to market and development cost and to improve reliability.
Many SOCs also require large memories, often in the mega-bit range. Providers of application specific integrated circuit (ASIC) memory libraries are trying to address such mega-bit memory requirements by extending the upper capacity limit of their basic memory offerings. Usually, this approach suffers in terms of area, performance and power consumption. In general, merely scaling up an existing small capacity memory generator (e.g., a 64 Kbit ROM) to a large capacity memory generator (e.g., a 2 Mbit ROM) adversely affects memory performance.
More specifically, conventional approaches to generating a large capacity memory include interconnecting a plurality of complete memory blocks using software routing tools. In this approach, common signals (e.g., clock, address and data) are heavily loaded. This results in signal skews, upsets timing constraints and increases access time. Moreover, the software routing tools typically do not lead to regular, evenly spaced routing. This causes differences in the timing characteristics of the different memory blocks. Minimizing such differences is a tedious, iterative process which may or may not produce good results. In addition, such methods provide little flexibility in terms of layout configuration.
Accordingly, a memory generator is desired which provides good scalability over a variety of configurations. The memory generator should operate to minimize area, maximize speed and minimize power consumption. Moreover, to simplify design considerations, a memory generator which produces one functional model, and thus one timing model, to fully characterize the memory is preferred.
In one preferred embodiment, a method of designing a memory for a system on a chip application begins with a required memory capacity and determinable physical boundaries. The method selects a plurality of memory banks wherein each bank has a height, a width and a memory capacity. The method tiles the plurality of memory banks. Adjacent banks have matching dimensions along a common boundary and address signals are routed between adjacent banks. The method designs control circuitry to operationally couple with the plurality of memory banks. The control circuitry is configured to generate addressing signals for selecting address locations from within the plurality of memory banks.
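The bank-selection step of the method above can be illustrated with a short sketch. This is not the patented procedure itself; the function name, the greedy search and the "most nearly square" preference are hypothetical assumptions chosen only to show how a bank grid might be fit to a required capacity within determinable physical boundaries.

```python
import math

def tile_banks(required_kbits, bank_kbits, bank_w, bank_h, max_w, max_h):
    """Choose a (rows, cols) grid of identical banks that supplies the
    required capacity and fits within the physical boundary.

    required_kbits -- total memory capacity needed
    bank_kbits     -- capacity of a single bank
    bank_w, bank_h -- physical width and height of a single bank
    max_w, max_h   -- physical boundary available for the tiled array
    """
    n_banks = math.ceil(required_kbits / bank_kbits)
    best = None
    for cols in range(1, max_w // bank_w + 1):
        rows = math.ceil(n_banks / cols)
        if rows > max_h // bank_h:
            continue  # this arrangement exceeds the height boundary
        # Prefer the most nearly square tiling that fits (an assumed
        # heuristic); adjacent banks in a grid automatically share
        # matching dimensions along their common boundaries.
        imbalance = abs(rows * bank_h - cols * bank_w)
        if best is None or imbalance < best[0]:
            best = (imbalance, rows, cols)
    if best is None:
        raise ValueError("capacity does not fit within the boundary")
    return best[1], best[2]
```

For example, 2 Mbit (2048 Kbit) built from 64 Kbit banks of unit size within a 10-by-10 boundary yields a 6-by-6 grid under this heuristic, since 32 banks tile most evenly that way.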
In another preferred embodiment, a memory for an SOC application includes a plurality of memory banks configured in an array of at least one column and at least one row. Each of the plurality of memory banks has a plurality of memory locations. The memory includes a row decoder operationally coupled with the plurality of memory banks. The memory also includes a column decoder operationally coupled with the plurality of memory banks. The row decoder and column decoder are configured to select respective ones of the plurality of memory banks.
In yet another preferred embodiment of the invention, a memory for use with a system on a chip includes a plurality of banks arrayed into a plurality of rows and a plurality of columns. Each bank has a plurality of memory locations. The memory includes a bank row decoder operationally coupled with the plurality of banks. The memory includes a bank column decoder operationally coupled with the plurality of banks. The bank row decoder and bank column decoder are configured to select a respective one of the plurality of banks. The memory also includes a plurality of address row decoders each operationally coupled with a respective one of the plurality of rows. Finally, the memory includes a plurality of address column decoders each operationally coupled with a respective one of the plurality of columns. The address row decoders and address column decoders are configured to select respective ones of the plurality of memory locations in one of the plurality of banks.
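The two-level decoding in this embodiment can be sketched as a decomposition of a flat address into four fields: a bank row and bank column that select one bank, and a word row and word column that select a location within that bank. The field widths below are purely illustrative assumptions, not values from the embodiment.

```python
# Assumed field widths for illustration only:
BANK_ROW_BITS = 2   # 4 rows of banks
BANK_COL_BITS = 2   # 4 columns of banks
WORD_ROW_BITS = 6   # 64 word lines per bank
WORD_COL_BITS = 4   # 16 bit-line columns per bank

def decode(addr):
    """Split a flat address into (bank_row, bank_col, word_row, word_col).

    The bank row/column fields drive the bank row and column decoders
    that select one bank; the word row/column fields drive the address
    row and column decoders that select a location within that bank.
    """
    word_col = addr & ((1 << WORD_COL_BITS) - 1)
    addr >>= WORD_COL_BITS
    word_row = addr & ((1 << WORD_ROW_BITS) - 1)
    addr >>= WORD_ROW_BITS
    bank_col = addr & ((1 << BANK_COL_BITS) - 1)
    addr >>= BANK_COL_BITS
    bank_row = addr & ((1 << BANK_ROW_BITS) - 1)
    return bank_row, bank_col, word_row, word_col
```

Because every bank sees only its own address row and column decoders, the heavily loaded global lines of the conventional approach are replaced by short, local selects, which is consistent with the loading and skew concerns discussed above.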