Field
Exemplary embodiments relate to a cache system and a cache service providing method using network switches.
Discussion
Conventional cache servers are high-speed storage devices configured to store data frequently referenced by users of an associated service. Typically, the frequently referenced data is stored for a limited (or otherwise predetermined) duration of time. Further, conventional cache servers may be established to provide a relatively faster response time (or speed) for a service utilizing one or more cache techniques by assisting a relatively slower main storage device, such as a relational database management system (RDBMS). For instance, a web cache server may be configured to temporarily store web documents (e.g., webpages, images, etc.) to reduce bandwidth usage, server load, and perceived lag in association with one or more main servers configured to provide the web resources associated with the web documents. In this manner, conventional cache servers typically store copies of frequently accessed data when such data passes through the cache server on the way to its intended destination. As such, subsequent requests for the cached data may be served by the cache server instead of being forwarded to and responded to by a main server.
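The caching behavior described above can be illustrated with a minimal sketch. The class and function names below (`SimpleCache`, `fetch_from_main`) are hypothetical and chosen for illustration only; the sketch assumes a time-to-live expiry as one example of a "predetermined duration," not a mechanism specified by this disclosure.

```python
import time

class SimpleCache:
    """Minimal sketch of conventional cache-server behavior: copies of
    requested data are kept in fast storage for a limited duration, and
    subsequent requests are served from the cache instead of being
    forwarded to the slower main storage."""

    def __init__(self, fetch_from_main, ttl_seconds=60.0):
        self._fetch = fetch_from_main   # callable that queries the slow main storage
        self._ttl = ttl_seconds         # predetermined lifetime of a cached copy
        self._store = {}                # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]             # cache hit: no round trip to main storage
        value = self._fetch(key)        # cache miss: forward request to main storage
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value
```

In this sketch, only the first request for a given key reaches the main storage; repeated requests within the lifetime of the cached copy are answered from memory, which is the source of the reduced server load and response time noted above.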
In order to cache data, cache servers may be disposed in an internet data center (IDC) in numbers equal to or greater than a number of servers providing a service such as, for example, a web server, a web application server, an application server, and the like. In this manner, the IDC may be configured to execute one or more processes for providing a cache service, and thereby, enable the cache servers to be accessed by users. As such, additional management servers may be required when a fault-tolerant cache service is to be provided to ensure reliable access to the cached data.
By way of example, Korean Patent Laid-Open Gazette No. 10-2010-0118836, published on Nov. 8, 2010, and entitled “SYSTEM FOR AVOIDING A DISTRIBUTED DENIAL OF SERVICE ATTACK, A LOAD DISTRIBUTING SYSTEM, AND A CACHE SERVER, CAPABLE OF REDUCING CIRCUIT COSTS” sets forth a method for distributing traffic to a plurality of cache servers. However, when an established cache server exceeds a predetermined degree of resource exhaustion due to, for instance, an increase in use of an associated cache service, an issue may arise in that the cost to manage the cache server becomes higher than the cost to manage an associated main storage. In general, the cost to manage the main storage may include an IDC rack cost (e.g., a rack space cost), power costs to maintain suitable environmental conditions (e.g., constant temperature, humidity, etc.), and the like. Also, the central processing unit (CPU) of a cache server may be utilized inefficiently because a cache service characteristically uses only main memory.
Therefore, there is a need for an approach that conveniently and efficiently provides cache services within cost-effective network architectures.
The above information disclosed in this Background section is only for enhancement of understanding of the background of the invention and, therefore, it may contain information that does not form any part of the prior art or of what the prior art may suggest to a person of ordinary skill in the art.