Computing systems ranging from smartphones to enterprise servers face a conflicting design requirement between cost and quality of service. In order to lower costs, manufacturers are forced to "artificially generalize" users and their application requirements while designing or fabricating hardware components and devices.
One potential solution is to reconfigure the components of a system (product) at the hardware level. Conventionally, however, a single hardware configuration is set by an administrator and used for extended periods of time. For example, consider a large-scale data storage system. Data centers today host many aspects of modern life, and the applications that run on them perform different types of input/output operations on data storage subsystems. For example, a large-scale social networking site may perform nearly 120 million queries per second on its database. Each data storage operation (a data read or a data update) warrants maximum performance, because saving fractions of a second of latency can help the business retain its competitive edge. In most cases, each data storage operation would require a different configuration of the storage subsystem to achieve maximum performance.
Administrators in data centers are normally forced to set a single configuration for their data center infrastructure (including storage systems, data processing elements such as CPUs, and networking within the data center) in an attempt to balance energy cost, performance, and capacity. As one example, the hardware may be configured to optimize energy savings, but at the sacrifice of performance capability. In practice, a problem arises because configuration settings based on long-term usage averages may not work well on the shorter time scales over which the software performs its operations at runtime.
There is no satisfactory solution in the prior art for dynamically configuring hardware. Hardware-only solutions cannot solve the problem because they have limited knowledge of the multiple applications executing at any given instant. As a result, in many situations a hardware configuration setting established by an administrator produces a sub-optimal result. Different infrastructure configurations are required in storage subsystems based on several parameters that are unknown until the software runs, such as the type of operation (e.g., read or write); the granularity of access (e.g., 4 bytes, 4 KB, or 4 MB); the type of access (e.g., random or sequential); the type of storage media/system (e.g., memory or SSD); and so on. In addition to software requirements that call for different optimal runtime configurations in a data center, business or policy requirements could mandate different data center configurations based on the optimizations or priorities placed on energy cost, performance, capacity, and so on.
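The runtime parameters enumerated above (operation type, access granularity, access pattern, and media type) could in principle drive a per-operation configuration selector. The following sketch is purely illustrative: the attribute and setting names (`op_type`, `granularity`, `readahead_kb`, `queue_depth`, and so on) are assumptions for exposition, not part of any particular storage subsystem's interface.

```python
from dataclasses import dataclass

# Hypothetical workload descriptor; the fields mirror the runtime unknowns
# named in the text (operation type, granularity, access pattern, media).
@dataclass
class Workload:
    op_type: str      # "read" or "write"
    granularity: int  # access size in bytes, e.g. 4, 4096, 4 * 1024 * 1024
    pattern: str      # "random" or "sequential"
    media: str        # "memory" or "ssd"

def select_config(w: Workload) -> dict:
    """Choose illustrative storage-subsystem settings from runtime attributes."""
    config = {}
    # Sequential access benefits from aggressive readahead; random access does not.
    config["readahead_kb"] = 1024 if w.pattern == "sequential" else 0
    # Large-granularity transfers need fewer outstanding requests in flight.
    config["queue_depth"] = 32 if w.granularity >= 4 * 1024 * 1024 else 128
    # SSD writes favor write-back caching; other cases favor a read cache.
    if w.op_type == "write" and w.media == "ssd":
        config["cache_mode"] = "write_back"
    else:
        config["cache_mode"] = "read_cache"
    return config
```

Under this sketch, a 4 KB sequential read on an SSD and a 4 MB random write on the same SSD would each receive a different configuration, which is precisely the per-operation variability that a single administrator-set configuration cannot capture.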