1. Technical Field of the Invention
The present invention relates generally to semiconductor memories, and more particularly, to a compilable semiconductor memory architecture which allows simultaneous access and cache loading in a hierarchically organized memory circuit having multiple levels.
2. Description of Related Art
Silicon manufacturing advances today allow true single-chip systems to be fabricated on a single die (i.e., System-On-Chip or SOC integration). However, there exists a “design gap” between today's electronic design automation (EDA) tools and the advances in silicon processes, a gap arising from the fact that available silicon real estate has grown much faster than designers' productivity, leading to underutilized silicon. Unfortunately, the trends are not encouraging: the “deep submicron” problems of non-convergent timing, complicated timing and extraction requirements, and other complex electrical effects are making silicon implementation harder. This is especially acute when one considers that analog blocks, non-volatile memory, random access memories (RAMs), and other “non-logic” cells are increasingly required. The gap between available silicon capacity and design productivity means that, without some fundamental change in methodology, it will take hundreds of staff years to develop leading-edge integrated circuits (ICs).
Design re-use has emerged as the key methodology solution for successfully addressing this time-to-market problem in semiconductor IC design. In this paradigm, instead of re-designing every part of every IC chip, engineers can re-use existing designs as much as possible and thus minimize the amount of new circuitry that must be created from scratch. It is commonly accepted in the semiconductor industry that one of the most prevalent and promising methods of design re-use is through what are known as Intellectual Property (“IP”) components—pre-implemented, re-usable modules of circuitry that can be quickly inserted and verified to create a single-chip system. Such re-usable IP components are typically provided as megacells, cores, macros, embedded memories through generators or memory compilers, et cetera.
It is well known that memory is a key technology driver for SOC design. Further, successful integration of high speed memory has become a critical feature of today's high performance systems. This is especially true where extremely fast memories are employed for data caching in an attempt to fully harness the superior capabilities of today's processors.
Whereas recent advances in the ultra large scale integration of semiconductor devices have made it possible to design cache memories that are satisfactory for some typical applications, several deficiencies still exist in state-of-the-art memory solutions intended for use with ultra high speed processors. For example, because memory read/write operations still consume a large number of cycles, data fetches continue to create a performance bottleneck. Even where extremely fast cache memories are implemented, data to be cached must first be read from the slower memories and is subsequently loaded into a cache memory portion in discrete, independent write cycles, incurring a delay of several clock periods.
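The conventional two-step behavior described above can be illustrated with a minimal sketch. The following is not from the disclosure itself; it models, under assumed and purely hypothetical cycle counts, how a cache miss in the related-art scheme first incurs a read from the slower memory and only then fills the cache in a discrete, independent write cycle:

```python
# Illustrative model of the conventional (related-art) two-level access.
# All cycle counts are hypothetical assumptions, not values from the text.
SLOW_READ_CYCLES = 4    # assumed latency of the slower memory
CACHE_FILL_CYCLES = 2   # assumed cost of the separate cache write cycle
CACHE_HIT_CYCLES = 1    # assumed latency of a cache hit

def access(addr, cache, slow_mem):
    """Return (data, cycles) for one read in the conventional scheme."""
    if addr in cache:
        # Hit: served directly from the fast cache.
        return cache[addr], CACHE_HIT_CYCLES
    # Miss: data is first read from the slower memory...
    data = slow_mem[addr]
    cycles = SLOW_READ_CYCLES
    # ...and only afterwards written into the cache in a discrete,
    # independent write cycle, adding further delay.
    cache[addr] = data
    cycles += CACHE_FILL_CYCLES
    return data, cycles

slow = {0x10: 0xAB}
cache = {}
d1, miss_cost = access(0x10, cache, slow)  # first access: miss
d2, hit_cost = access(0x10, cache, slow)   # second access: hit
```

In this model the miss cost is the sum of the slow-memory read and the separate cache-fill cycle, which is precisely the serialization that a memory architecture allowing simultaneous access and cache loading seeks to avoid.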
Moreover, these and other deficiencies are compounded where memories are to be implemented in diverse applications (particularly embedded memory applications) with varying numbers of I/Os, densities, and so on, requiring advanced memory design methodologies such as the re-usable IP products, e.g., memory compilers, described hereinabove.