1. Field of the Invention
The present invention relates to a method for accessing data stored in a memory, and more particularly, to a method for a graphics chip to access data stored in a system memory.
2. Description of the Prior Art
With the development of multimedia technologies, displaying images has become an important application of computers. Graphics cards not only perform 2D image processing but also complex 3D image operations. Please refer to FIG. 1. FIG. 1 is a block diagram of a computer device 10 according to the prior art. The computer device 10 comprises a central processing unit 12, a north bridge circuit 14, a south bridge circuit 16, a graphics chip 18, a graphics memory 20, a display device 22, a system memory 24, and an input device 26. The central processing unit 12 is used for controlling the computer device 10. The north bridge circuit 14 is used for arbitrating the signal transmission between the high-speed peripheral devices (e.g. the graphics chip 18 and the system memory 24) and the central processing unit 12. The south bridge circuit 16 is used for arbitrating the signal transmission of low-speed peripheral devices (e.g. the input device 26) and accesses the high-speed peripheral devices via the north bridge circuit 14. The graphics chip 18 is used for performing display data operations and stores the display data in the graphics memory 20. The graphics chip 18 outputs the display data to the display device 22 to display images. Additionally, the system memory 24 is used for temporarily storing data and programs of the computer device 10. For example, the system memory 24 is capable of loading an operating system, resident programs, operational data, and so on. The accessing operations of the system memory 24 are controlled by a memory controller 15 in the north bridge circuit 14. Generally, the graphics chip 18 can use an accelerated graphics port (AGP) interface or a peripheral component interconnect (PCI) interface to read the operational data stored in the system memory 24. For example, in a 3D texturing operation, the accelerated graphics port interface can quickly read data in the system memory 24.
With the increasing number of applications using 3D image operations, the accelerated graphics port interface is becoming increasingly common in the graphics chip 18 to improve the efficiency of the 3D image operations.
Please refer to FIG. 2. FIG. 2 is a schematic diagram of data transmission through a conventional accelerated graphics port interface and a conventional peripheral component interconnect interface according to the prior art. When the graphics chip 18 is connected through the peripheral component interconnect interface, the graphics chip 18 outputs a read request A1 to read the data D1 stored in the system memory 24 via the peripheral component interconnect interface. The graphics chip 18 occupies the bus of the peripheral component interconnect interface until the system memory 24 finishes fetching the data D1 and outputting the data D1 to the graphics chip 18 via the bus, at which time the graphics chip 18 releases the bus so that another peripheral component (e.g. the input device 26) can use the bus of the peripheral component interconnect interface. This means that only after the data D1 is fetched can another peripheral component output a read request A2 to read the data D2 stored in the system memory 24 via the peripheral component interconnect interface. As shown in FIG. 2, L1 is the time period from when the graphics chip 18 outputs the read request A1 to the peripheral component interconnect interface until the graphics chip 18 receives the data D1. During the period L1, the bus of the peripheral component interconnect interface is occupied by the graphics chip 18. In contrast, the accelerated graphics port interface uses a pipeline to access data. The graphics chip 18 can use the bus of the accelerated graphics port interface to output a read request A1 to read the data D1 in the system memory 24. Before the system memory 24 finishes fetching the data, however, the graphics chip 18 can output the read requests A2, A3, A4, A5 to read the data D2, D3, D4, D5 in the system memory 24. As shown in FIG. 2, when the graphics chip 18 outputs the read requests A1, A2, A3, A4, A5, the system memory 24 executes the read requests A1, A2, A3, A4, A5 in a pipelined manner and transmits the fetched data D1, D2, D3, D4, D5 to the graphics chip 18. Therefore, in the same period, when the graphics chip 18 uses the peripheral component interconnect interface according to the prior art to read the data in the system memory 24, the reading efficiency is poor due to the idle time (i.e. the time L1) of the bus, whereas the graphics chip 18 can use the accelerated graphics port interface according to the prior art to improve the efficiency of the data operation.
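The contrast above can be sketched with a toy throughput model. This is an illustrative sketch only, not from the patent: the cycle counts REQUEST_T and FETCH_T are assumed values, and the functions are hypothetical names for the two bus behaviors described.

```python
# Toy model: total bus time for five reads when each request blocks the bus
# until its data returns (PCI-style) versus when requests are pipelined
# (AGP-style). REQUEST_T and FETCH_T are assumed illustrative cycle counts.

REQUEST_T = 1   # cycles to issue one read request on the bus
FETCH_T = 4     # cycles for the system memory to fetch and return the data
NUM_READS = 5   # read requests A1..A5 for data D1..D5

def pci_style_total(n):
    # Each read occupies the bus for the full request + fetch latency
    # (the period L1) before the next request may be issued.
    return n * (REQUEST_T + FETCH_T)

def agp_style_total(n):
    # Requests are issued back to back; fetches overlap in the pipeline,
    # so only the first fetch latency is fully exposed.
    return n * REQUEST_T + FETCH_T

print(pci_style_total(NUM_READS))  # 25 cycles
print(agp_style_total(NUM_READS))  # 9 cycles
```

Under these assumed numbers the pipelined interface finishes the five reads in roughly a third of the bus time, which is the efficiency gain the paragraph attributes to the accelerated graphics port.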
In general, the memory controller 15 is used for controlling the data writing operations and the data reading operations of the system memory 24. The memory controller 15 uses a queue to store a plurality of read requests; that is, the data in the system memory 24 is fetched according to the sequence of the read requests in the queue. Please refer to FIG. 3. FIG. 3 is a time sequence diagram for accessing data from the system memory 24 in FIG. 1. The graphics chip 18 continuously outputs the read requests RA1, RA2, RB1 to read the corresponding data D1, D2, D3 in the system memory 24. The data D1 and D2 are stored in the same row, namely in the same page A. The data D3 is stored in another row, namely in another page B. The queue of the memory controller 15 stores the read requests RA1, RA2, RB1 in order, so the executing sequence of the read requests is the read request RA1, the read request RA2, and the read request RB1. In the 1T period, the memory controller 15 executes a control request ActA to activate the page A in the system memory 24, that is, to turn on all memory units corresponding to the page A so that the data stored in those memory units can be accessed. In the 2T period, the memory controller 15 executes the read request RA1. When the data D1, D2, and D3 are each 24 bytes and it takes 3T periods to read 24 bytes from the system memory 24, the system memory 24 outputs the corresponding data D1 between times 4T and 7T. In the 5T period, the memory controller 15 executes the read request RA2; when the output of the data D1 ends at time 7T, the system memory 24 fetches the data D2 from time 7T to time 10T in the burst mode because the page A is still active. Because the data D3 is stored in the page B rather than in the page A, the page A should be pre-charged and the page B should be activated before the memory controller 15 executes the read request RB1 to read the data D3 on the page B, i.e. at time 8T.
The memory controller 15 executes the control request PreA to pre-charge the page A at time 8T, and then executes the control request ActB to activate the page B at time 9T. When the page B of the system memory 24 is activated for data access, the memory controller 15 executes the read request RB1 at time 10T, and the system memory 24 fetches the data D3 between times 12T and 15T.
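The timeline of FIG. 3 can be reproduced with a toy controller model. This is a hypothetical sketch: the 2T gap between a read command and its data (a CAS-latency-like delay inferred from the figure, where RA1 at 2T yields data at 4T) and the one-period cost per command are assumptions for illustration, not values stated in the specification.

```python
# Toy memory-controller schedule: one command period each for Act, Pre, and a
# read command; data bursts of BURST_T periods begin CAS_T periods after the
# read command, or when the data bus frees up, whichever is later.

CAS_T = 2    # assumed periods between a read command and the start of its data
BURST_T = 3  # periods to burst 24 bytes, as described above

def schedule(requests):
    """requests: list of (name, page); returns {name: (data_start, data_end)}."""
    t = 0            # current command period
    data_end = 0     # period at which the data bus becomes free
    open_page = None
    out = {}
    for name, page in requests:
        if open_page != page:
            if open_page is not None:
                t += 1              # Pre: pre-charge the open page
            t += 1                  # Act: activate the requested page
            open_page = page
        t += 1                      # issue the read command itself
        start = max(t + CAS_T, data_end)
        out[name] = (start, start + BURST_T)
        data_end = start + BURST_T
        # a later same-page read need not issue before its data slot opens
        t = max(t, data_end - CAS_T - 1)
    return out

print(schedule([("RA1", "A"), ("RA2", "A"), ("RB1", "B")]))
# {'RA1': (4, 7), 'RA2': (7, 10), 'RB1': (12, 15)}
```

Under these assumptions the model matches the figure: ActA at 1T, RA1 at 2T with data D1 at 4T–7T, RA2 at 5T with data D2 at 7T–10T, then PreA at 8T, ActB at 9T, RB1 at 10T, and data D3 at 12T–15T.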
From the above, the graphics chip 18 can use the pipeline to continuously output a plurality of read requests to the memory controller 15 to read the system memory 24. However, when two read requests read data in different pages, the system memory 24 must pre-charge one page (e.g. PreA) and activate another page (e.g. ActB). The above-mentioned pre-charge and activate operations make the system memory 24 generate a period of delay time (i.e. the period L shown in FIG. 3) in the data accessing process. In other words, when a plurality of read requests read a plurality of data spread across different pages, the memory controller 15 must continuously control the system memory 24 to switch among the pages. When the bus of the accelerated graphics port interface according to the prior art transmits data to the graphics chip 18, the efficiency is not high enough because the bus must idle while waiting to receive data delayed by the page switching of the system memory 24.
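The cost of the page switching described above grows with how often consecutive requests change pages. The following hypothetical sketch (illustrative only; the two-period switch cost is an assumption) counts the command overhead for an interleaved request stream versus the same requests grouped by page:

```python
# Toy model of pre-charge/activate overhead: the first page access costs one
# Act period; every later switch to a different page costs Pre + Act periods.

PRE_ACT_T = 2  # assumed periods lost per page switch (Pre + Act)

def switch_overhead(pages):
    opens = 0
    last = None
    for p in pages:
        if p != last:
            opens += 1
            last = p
    # first activation costs only Act; each later switch costs Pre + Act
    return 1 + (opens - 1) * PRE_ACT_T if opens else 0

interleaved = ["A", "B", "A", "B", "A", "B"]
grouped     = ["A", "A", "A", "B", "B", "B"]

print(switch_overhead(interleaved))  # 11 periods of command overhead
print(switch_overhead(grouped))      # 3 periods of command overhead
```

Six reads alternating between two pages incur five Pre/Act switches, while the same six reads grouped by page incur only one, which illustrates why repeated page switching leaves the accelerated graphics port bus idle.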