1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods and apparatus for arbitrating access to a shared memory of an information apparatus (e.g., a storage device), where the memory is shared as both a main memory accessed to drive a CPU (processor) and a buffer for data flows.
2. Description of Related Art
Power consumption and cost reduction are key issues for a storage device (e.g., a tape recorder or tape drive). A typical storage device is equipped with a Large Scale Integration ('LSI') device, such as an Application Specific Integrated Circuit (hereinafter "data-flow ASIC" or simply "ASIC"), serving a processor used by firmware and serving data flows that execute various processes on data. The ASIC uses a Dynamic Random Access Memory or 'DRAM' as a buffer or as a main memory. Reducing the number of DRAM memories contributes to reducing power consumption, cost, and footprint.
The peak power consumption of a single DRAM chip during operation is around 0.5 W, which accounts for a large proportion of the total power consumption of the device system. In mid-range products, memory cost is also not a negligible factor at present. Since there is demand for reducing the size of a mid-range tape drive, the number of components needs to be reduced to save circuit board space.
FIGS. 1A and 1B each illustrate a configuration of a circuit board including a processor and an ASIC that executes an arbiter operation to arbitrate among accesses to a DRAM memory from transfer blocks in a storage device. FIG. 1A shows a conventional configuration that includes dedicated memories for data flows and for the processor (a data flow buffer and a main memory, respectively) and an arbiter that arbitrates among accesses to the data flow buffer. The data-flow arbiter shown in FIG. 1A adjusts the required bandwidth by allowing an access from a block requiring a high transfer rate to use a large burst length per access acceptance.
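The burst-length mechanism described above can be sketched as follows. This is a minimal illustration, not taken from the patent: the block names and burst sizes are assumptions chosen only to show that, when every block is granted once per arbitration round, a block with a larger burst length receives a proportionally larger share of the bus bandwidth.

```python
# Hypothetical per-block burst lengths (words per access acceptance).
# A high-rate block (e.g., a host DMA) is assigned a large burst so it
# moves more data per grant; block names are illustrative assumptions.
BURST_WORDS = {
    "host_dma": 64,
    "media_dma": 64,
    "ecc": 16,
}

def bandwidth_share(bursts):
    """Fraction of bus bandwidth each block receives when every block is
    granted once per round (equal grant frequency, unequal bursts)."""
    total = sum(bursts.values())
    return {name: words / total for name, words in bursts.items()}
```

Under this sketch, the two 64-word blocks each receive 64/144 of the bandwidth while the 16-word block receives 16/144, even though all three are granted equally often.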
The integration of a processor and an LSI such as a data-flow ASIC has been in progress for the reduction of power consumption and cost of an information apparatus. Along with this, reducing the number of DRAM memories by sharing a memory has also been studied. Indeed, some personal computer (PC) systems employ a single memory used both as a main memory and as a graphics memory. Thus, it is also conceivable for a DRAM memory of a storage device to be accessed by the processor and by the transfer blocks in a shared manner.
FIG. 1B illustrates a DRAM memory shared between accesses from the processor and data flow transfers, and an ASIC including an arbiter enabling the sharing of the memory. In this shared memory configuration, a single DRAM memory chip is used both as a data buffer for data flows and as a main memory for the processor. In the shared memory, the usable areas of the one DRAM memory are physically separated from each other. In the conventional configuration, multiple blocks make access requests to the buffer memory, and the arbiter function of the ASIC controller grants the access requests sequentially in a round-robin fashion. Here, the following problems need to be considered when using a shared memory both as a main memory and as a data buffer in a storage device, such as a tape drive, having the above configuration.
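The round-robin grant scheme of the conventional arbiter can be sketched as below. This is a simplified illustration under assumed block names, not the patent's implementation: pending requests are granted in a fixed circular order, so every block is served in turn regardless of its bandwidth requirement.

```python
from collections import deque

class RoundRobinArbiter:
    """Sketch of a conventional round-robin arbiter: access requests
    from multiple blocks are granted sequentially in a fixed circular
    order. Block names are illustrative assumptions."""

    def __init__(self, blocks):
        self.order = deque(blocks)   # circular grant order

    def grant(self, requests):
        """Return the next requesting block in round-robin order,
        or None if no block is currently requesting."""
        for _ in range(len(self.order)):
            block = self.order[0]
            self.order.rotate(-1)    # advance past this block for fairness
            if block in requests:
                return block
        return None
```

Note that such an arbiter treats all requesters equally; it has no notion of the processor's latency sensitivity, which motivates the priority considerations discussed below.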
Firmware running on the processor controls hardware such as the data-flow ASIC to execute data transfers to and from media. Since the firmware code itself resides in the main memory, accesses to the main memory occur during firmware operation. The delay from when an access request to the main memory is made to when the request is completed is a processor process wait time. If this wait time is long, processor performance is degraded. In a storage device, for example, host transfers, media transfers, and servo processes get stuck, degrading the performance of the entire device. In a tape drive, for example, problems occur such as the halt of a host process, degradation in servo follow-up performance due to the delay of a servo process, and significant degradation in tape transfer performance due to a backhitch (rewind operation). For this reason, bandwidth needs to be assigned to an access request from the processor with a high priority. As a conventional technique, there is a method of giving a high priority to a memory access from the processor and, upon arrival of an access request from the processor, preferentially granting this processor access request after completion of the data transfer being executed at the time of the arrival.
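The conventional priority technique described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's method: an active burst is never interrupted, but once it completes, a pending processor request is granted ahead of any queued data-flow requests.

```python
def next_grant(processor_pending, dataflow_queue, transfer_in_progress):
    """Decide the next memory grant under simple processor priority.

    processor_pending: True if the processor has a request waiting.
    dataflow_queue: list of data-flow block names awaiting access
        (illustrative names; mutated in grant order).
    transfer_in_progress: True while a burst is still completing.
    """
    if transfer_in_progress:
        return None                   # never interrupt an active burst
    if processor_pending:
        return "processor"            # processor preempts the data-flow queue
    if dataflow_queue:
        return dataflow_queue.pop(0)  # otherwise serve data flows in order
    return None
```

The drawback noted in the next paragraph is visible here: while processor requests keep arriving, the data-flow queue is never popped, so data transfers can starve.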
With the above method, however, the data transfers themselves may get stuck, eventually halting a host transfer and reducing media transfer performance. The data buffer architecture needs to be designed to allocate the necessary bandwidths to data transfer requests from all the blocks even when those requests occur at the same time. Also, in a configuration where a main memory is shared with a data buffer, the requirements of the data flow transfers still need to be met. The method of granting transfers sequentially in a round-robin fashion is employed when there are data-flow access requests from multiple functional blocks as described above.
For example, Japanese Patent Application Publication No. Hei 11-120124 (Patent Literature 1) relates to a bus access arbitration system for accesses to a data buffer from a host side and a CD medium side. In controlling accesses to the data buffer from multiple blocks in a CD drive, the bus arbitration system assigns a priority, and a limit on the number of accesses, to each of the blocks. Further, with transfer rate regulation and the limit on the number of accesses set for both a transfer from the host and a transfer to the CD medium, this system grants bandwidth to the host side when a host transfer is required, or to the medium side when a media transfer is required.
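The kind of arbitration described in Patent Literature 1 can be sketched as below. This is a hedged illustration only: the side names, priority order, and consecutive-access limits are assumptions, and the sketch shows merely the general idea that a cap on consecutive accesses prevents a higher-priority side from monopolizing the buffer bus.

```python
class LimitedPriorityArbiter:
    """Sketch of priority arbitration with a limit on the number of
    consecutive accesses per side (illustrative assumption, not the
    scheme of Patent Literature 1 itself)."""

    def __init__(self, limits):
        self.limits = dict(limits)              # max consecutive grants per side,
        self.streak = {s: 0 for s in limits}    # in priority order
        self.last = None

    def grant(self, requests):
        """Grant the highest-priority requester, skipping a side that has
        exhausted its consecutive-access limit while another side is
        also requesting."""
        for side in self.limits:
            if side not in requests:
                continue
            if (self.last == side
                    and self.streak[side] >= self.limits[side]
                    and len(requests) > 1):
                continue                        # yield the bus to the other side
            self.streak[side] = self.streak[side] + 1 if self.last == side else 1
            self.last = side
            return side
        return None
```

With both sides requesting continuously and a limit of two, the host side in this sketch is periodically forced to yield to the medium side rather than holding the bus indefinitely.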