1. Field of the Invention
The present invention relates to a file system, and more particularly to a buffer cache control method and an apparatus for the same.
2. Description of the Related Art
A switching system is provided with a communication channel unit 12 to control telephone lines between telephone subscribers 10, as shown in FIG. 1. The communication channel unit 12 is controlled by a central processing unit 14 such as a computer. That is, a central control unit 16 reads a program from a main storage unit 18 and an auxiliary storage unit 20 and interprets and executes each of the commands of the program. Thus, necessary instructions and data are sent to the communication channel unit 12 so that all functions, such as a charging function and a call processing function, are controlled.
Also, a maintenance and operation unit (console) 22 exchanges data with the central control unit 16.
In such a switching system, various types of files are used. In a conventional switching system, caching is carried out using a buffer cache with a predetermined block size. Referring to FIG. 2, a general file operation in the conventional switching system will be described. It is supposed that there is a file aaa on a magnetic disk unit 18 as the main storage unit. The file aaa is divided on the magnetic disk into units of blocks, and is stored in blocks 100, 101, 102, 205, 206, 255, 256 and 300. When the file size is extended, a new block is connected to the last block. When the new block cannot be connected to the last block, the block is provided at another place. For example, a block 301 next to the block 300 is used for the new block. If the block 301 is already in use, a released block is used for the new block.
However, it is not possible to read the whole file aaa into the buffer cache, because the occupied memory area would be large. Therefore, only the contents of the blocks containing a necessary portion of data are read into the buffer cache. For example, when the necessary data portion is in the blocks 101, 102 and 205 of the file aaa composed of the blocks 100, 101, 102, 205, 206, 255, 256 and 300, the data in the blocks 101, 102 and 205 are read into the buffer cache. Also, when the necessary data portion is only in the block 102 of the file aaa, only the block 102 is read into the buffer cache.
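The conventional fixed-block caching described above can be illustrated with a minimal sketch. All names here (the block size, the helper function, the block layout of file aaa) are hypothetical, chosen only to mirror the example: only the blocks that hold the needed byte range are read into the cache, and each missing block costs one separate I/O process.

```python
BLOCK_SIZE = 512  # assumed fixed block size of the buffer cache

# disk blocks occupied by file aaa, in file order (from the example above)
FILE_BLOCKS = [100, 101, 102, 205, 206, 255, 256, 300]

def read_range(first_byte, last_byte, disk, cache):
    """Read bytes [first_byte, last_byte] of the file; return the I/O count."""
    io_count = 0
    first_idx = first_byte // BLOCK_SIZE
    last_idx = last_byte // BLOCK_SIZE
    for idx in range(first_idx, last_idx + 1):
        block_no = FILE_BLOCKS[idx]
        if block_no not in cache:        # cache miss: one I/O per block
            cache[block_no] = disk[block_no]
            io_count += 1
    return io_count

disk = {b: bytes(BLOCK_SIZE) for b in FILE_BLOCKS}
cache = {}
# bytes 512..1535 span file blocks 1 and 2 (disk blocks 101 and 102)
print(read_range(512, 1535, disk, cache))  # 2: two separate I/O processes
```

This mirrors the drawback noted above: a read that spans several blocks is split into a plurality of I/O processes even when the blocks are accessed continuously.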
In a buffer caching method using the buffer cache with a predetermined block size, it is possible to speed up the read operation if the block size of the file is smaller than the size of the buffer cache. However, when some blocks are accessed continuously, as in the file system of a switching apparatus, the read access operation is divided into a plurality of I/O processes. For this reason, a sufficient effect is not attained.
In the technique described in Japanese Laid Open Patent Application (JP-A-Heisei 5-158792), an input/output buffer is provided, and when the access size is large, the data is first taken into the input/output buffer. Then, the data is divided and supplied to the buffer cache.
Also, in a method of caching using the buffer cache with a predetermined block size, access of a predetermined data length is carried out regardless of the block size of the file. Therefore, when the block size of the buffer cache is large, blocks are used wastefully when a small file is manipulated.
It has been proposed to make the block size of the buffer cache variable to cope with such a problem. For example, such a technique is disclosed in Japanese Laid Open Patent Application (JP-A-Heisei 10-69429), in which a buffer memory area on a memory is allocated in accordance with the request of an application program. However, this method using the variable-length buffer cache is intended for a communication control program in which many data items with predetermined sizes are manipulated. The method is not suitable for the file system of the switching apparatus, because the required data have various sizes.
Also, a method of controlling a buffer cache using a hash list is disclosed in Japanese Laid Open Patent Application (JP-A-Heisei 7-311710). However, applying the hash list to a buffer cache with a variable length is not advantageous for high-speed access.
In the file system of the switching apparatus, because a data area with a large size is required, the above conventional examples are not suitable from the viewpoint of realizing high-speed access. For this reason, the method using the buffer cache with a predetermined block size is still employed for caching. As a result, high-speed access cannot be attained.
In conjunction with the above description, Japanese Laid Open Patent Application (JP-A-Heisei 2-39343) discloses a transmission and reception buffer control system for data transfer between a host computer and a display. A buffer is provided with a data storage area of variable length, a head pointer storage area for storing a pointer of a head address of the data storage area and a tail pointer storage area for storing a pointer of a tail address of the data storage area.
Also, an image forming apparatus such as a laser printer and a facsimile is disclosed in Japanese Laid Open Patent Application (JP-A-Heisei 7-210365). In this reference, it is detected using a timer that a reception buffer is full. If the buffer is still full after a predetermined time, the buffer size is increased.
Therefore, an object of the present invention is to provide a buffer caching method in which a buffer cache can be made variable in size, and a buffer caching apparatus for the same.
Another object of the present invention is to provide a buffer caching method in which files with various sizes can be accessed at high speed, and a cache buffer apparatus for the same.
Still another object of the present invention is to provide a buffer cache method in which buffers can be released or inserted based on the data size of requested data, and a cache buffer apparatus for the same.
Yet still another object of the present invention is to provide a buffer cache method in which buffers with various sizes are previously provided such that one of the buffers is selected, and a cache buffer apparatus for the same.
It is also an object of the present invention to provide a buffer cache method in which a balanced tree is used, and a cache buffer apparatus for the same.
Another object of the present invention is to provide a buffer caching method in which unbalance of a balanced tree can be corrected, and a cache buffer apparatus for the same.
In order to achieve an aspect of the present invention, a buffer cache apparatus includes a storage unit storing data, a buffer cache and a control unit. The buffer cache stores a balanced tree and free lists. Data containing buffers are hierarchically provided as nodes in the balanced tree, and the free lists are respectively provided for different sizes. Each of the free lists includes at least one free buffer when the at least one free buffer exists. The control unit is responsive to a data request. When the requested data specified by the data request does not exist in the balanced tree, the control unit searches the free lists for a desired buffer based on a size of the requested data. Also, the control unit adds the desired buffer to the balanced tree to store the requested data from the storage unit into the desired buffer. The desired buffer has a size that is larger than or equal to the requested data size and as small as possible.
Here, each of the free lists comprises a head section indicating the size and the at least one free buffer connected to the head section, when the at least one free buffer exists. The control unit: (a) searches the head sections of the free lists based on the requested data size to find the one of the free lists which is provided for the smallest of the sizes larger than or equal to the requested data size, (b) determines whether the found free list contains the at least one free buffer, (c) sets another of the free lists, provided for the next smallest size, as the found free list when it is determined that the found free list does not contain the at least one free buffer, and (d) sets, as the desired buffer, a free buffer contained in the found free list when it is determined, after the searching operation (a) or repetition of the setting operation (c), that the found free list contains the at least one free buffer.
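The search of steps (a) through (d) can be sketched as follows. This is a minimal illustration, not the claimed implementation: the names, the concrete size classes, and the use of a deque per free list are all assumptions made for the example. Free lists are kept per buffer size, and the search returns a free buffer of the smallest size class that can hold the requested data, falling through to the next larger size class when a list is empty.

```python
from collections import deque

# free lists keyed by buffer size; each deque holds the free buffers
# of that size class (represented here simply by their sizes)
free_lists = {
    512:  deque([512, 512]),
    1024: deque(),            # this size class is currently empty
    2048: deque([2048]),
}

def find_desired_buffer(requested_size):
    """Return a free buffer of the smallest adequate size, or None."""
    # (a) visit the head sections in increasing size order, starting
    #     at the smallest size >= the requested data size
    for size in sorted(free_lists):
        if size < requested_size:
            continue
        # (b) determine whether this free list contains a free buffer
        if free_lists[size]:
            # (d) take a free buffer from the found free list
            return free_lists[size].popleft()
        # (c) otherwise move on to the list for the next smallest size
    return None  # no adequate free buffer exists

print(find_desired_buffer(800))   # 2048 (the 1024 list is empty)
print(find_desired_buffer(100))   # 512
```

The fall-through in step (c) guarantees that a request is never refused while any sufficiently large free buffer exists, at the cost of occasionally handing out a buffer larger than necessary.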
Also, the control unit releases and deletes one of the data containing buffers from the balanced tree, when at least a part of the requested data exists in the data containing buffer of the balanced tree. Also, the control unit searches the free lists for the desired buffer based on the requested data size. In this case, the control unit adds the deleted data containing buffer as the free buffer to one of the free lists based on a size of the deleted data containing buffer. Also, the control unit searches the free lists for the desired buffer based on the requested data size, the sizes for the free lists, and whether the at least one free buffer is connected to each of the free lists.
Also, the control unit releases and deletes ones of the data containing buffers from the balanced tree, when at least a part of the requested data exists in the data containing buffers of the balanced tree. Also, the control unit searches the free lists for the desired buffer based on the requested data size. In this case, the control unit adds the deleted data containing buffers as the free buffers to the free lists based on a size of each of the deleted data containing buffers. Also, in this case, the control unit searches the free lists for the desired buffer based on the requested data size, the sizes for the free lists, and whether the at least one free buffer is connected to each of the free lists.
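The release path described above can be sketched in a few lines. Again this is a hypothetical illustration: the buffer representation (an offset-to-size mapping standing in for the balanced tree) and all names are assumptions. A buffer holding part of the requested data is deleted from the tree, returned to the free list of its own size, and the free lists are then searched for a buffer fitting the request, so the released buffer itself may be reused immediately.

```python
from collections import deque

free_lists = {512: deque(), 1024: deque(), 2048: deque()}
tree = {0: 512, 1024: 1024}   # offset -> size of a data containing buffer

def release_and_allocate(offset, requested_size):
    """Release the buffer at `offset` from the tree, then allocate anew."""
    # delete the overlapping data containing buffer from the tree ...
    size = tree.pop(offset)
    # ... and add it as a free buffer to the free list for its size
    free_lists[size].append(size)
    # then search the free lists for the smallest adequate buffer
    for s in sorted(free_lists):
        if s >= requested_size and free_lists[s]:
            return free_lists[s].popleft()
    return None

print(release_and_allocate(0, 400))  # 512: the released buffer is reused
```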
Also, the control unit corrects the balanced tree such that depths from a root node to leaf nodes in the nodes of the balanced tree are as uniform as possible. Also, the control unit carries out the correcting operation at a predetermined time or an idle time of the control unit. In addition, the buffer cache apparatus is applied to a switching apparatus.
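One simple way to make the depths from the root to the leaves as uniform as possible, suitable for a deferred correction at a predetermined time or an idle time, is to rebuild the tree from its sorted keys. This sketch is an assumption for illustration only (the patent does not fix a particular correction algorithm); nodes are modeled as (key, left, right) tuples.

```python
def rebuild_balanced(sorted_keys):
    """Return a balanced (key, left, right) tree from sorted keys."""
    if not sorted_keys:
        return None
    mid = len(sorted_keys) // 2        # middle key becomes the local root
    return (sorted_keys[mid],
            rebuild_balanced(sorted_keys[:mid]),
            rebuild_balanced(sorted_keys[mid + 1:]))

def depth(node):
    """Number of nodes on the longest root-to-leaf path."""
    if node is None:
        return 0
    return 1 + max(depth(node[1]), depth(node[2]))

# a fully degenerate tree of these 7 keys would have depth 7;
# after the correction the depth is the minimum possible
keys = [100, 101, 102, 205, 206, 255, 300]
balanced = rebuild_balanced(keys)
print(depth(balanced))  # 3
```

Because the rebuild touches every node, deferring it to an idle period of the control unit, as stated above, keeps the correction off the critical path of data requests.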
In order to achieve another aspect of the present invention, a buffer caching method is attained by determining whether requested data specified by a data request exists in a balanced tree, wherein buffers are hierarchically provided as nodes in the balanced tree; by searching for a desired buffer from among free buffers with different sizes based on a size of the requested data when it is determined that the requested data does not exist in the balanced tree, wherein the desired buffer has a size that is larger than or equal to the requested data size and as small as possible; by adding the desired buffer to the balanced tree; and then by storing the requested data from a storage unit into the desired buffer.
In the searching, (a) one of the free lists is found based on the requested data size, wherein the found free list is provided for the smallest of the sizes larger than or equal to the requested data size, (b) it is determined whether the found free list contains the at least one free buffer, (c) another of the free lists, provided for the next smallest size, is set as the found free list when it is determined that the found free list does not contain the at least one free buffer, and (d) a free buffer contained in the found free list is set as the desired buffer when it is determined, after the finding operation (a) or repetition of the setting operation (c), that the found free list contains the at least one free buffer.
Also, the buffer caching method may further include releasing and deleting one of the data containing buffers from the balanced tree, when the requested data exists in the data containing buffer of the balanced tree; and searching the free lists for the desired buffer based on the requested data size. In this case, the method may further include adding the deleted data containing buffer as the free buffer to one of the free lists based on a size of the deleted data containing buffer. Also, the method may further include searching the free lists for the desired buffer based on the requested data size, the sizes for the free lists, and whether the at least one free buffer is connected to each of the free lists.
Also, the method may further include releasing and deleting data containing buffers from the balanced tree, when the requested data exists in the data containing buffers of the balanced tree; and searching the free lists for the desired buffer based on the requested data size. In this case, the method may further include adding the deleted data containing buffers as the free buffers to the free lists based on a size of each of the deleted data containing buffers. Also, the method may further include searching the free lists for the desired buffer based on the requested data size, the sizes for the free lists, and whether the at least one free buffer is connected to each of the free lists.
Also, the method may further include correcting the balanced tree such that depths from a root node to leaf nodes in the nodes of the balanced tree are as uniform as possible. In this case, the correcting operation is carried out at a predetermined time or an idle time of the control unit.
In order to achieve still another aspect of the present invention, a recording medium stores a program for a buffer caching method. The method is attained by determining whether requested data specified by a data request exists in a balanced tree, wherein buffers are hierarchically provided as nodes in the balanced tree; by searching for a desired buffer from among free buffers with different sizes based on a size of the requested data when it is determined that the requested data does not exist in the balanced tree, wherein the desired buffer has a size that is larger than or equal to the requested data size and as small as possible; by adding the desired buffer to the balanced tree; and then by storing the requested data from a storage unit into the desired buffer.