Dynamic Storage Tiering (DST) refers to the concept of grouping storage devices into tiers based upon their characteristics and relocating data dynamically to leverage specific capabilities of the underlying devices. This requires that data be classified in some way so that the DST mechanism can place a particular data element into an “optimal” tier. DST can be applied to several different Quality of Service (QoS) attributes of a storage tier; one example is DST based performance management. In the case of performance management, the DST objective is to identify data having a high activity level and place that data in high performing storage tiers. However, it may be equally important to identify data having a low activity level and place (or keep) that data in lower performing storage tiers. This may prevent the low activity data from consuming storage capacity in the higher performing storage pools. Further, the DST mechanism should perform these activities without taking any host data offline.
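The activity-based placement described above can be sketched as a simple greedy policy: rank data elements by observed activity and fill the fastest tier first. This is a minimal illustration, not any particular product's algorithm; the element names, the `io_count` activity metric, and the per-tier element counts are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    io_count: int  # hypothetical activity metric: recent I/O operations observed


def place_elements(elements, tier_capacities):
    """Greedily fill tiers (fastest first) with the most active data.

    tier_capacities: number of elements each tier can hold, ordered from
    the highest performing tier to the lowest.
    Returns one list of element names per tier.
    """
    # Rank all elements from hottest to coldest.
    ranked = sorted(elements, key=lambda e: e.io_count, reverse=True)
    placement, i = [], 0
    for capacity in tier_capacities:
        placement.append([e.name for e in ranked[i:i + capacity]])
        i += capacity
    return placement


elements = [
    DataElement("db-index", 9000),
    DataElement("db-table", 4000),
    DataElement("logs", 120),
    DataElement("archive", 5),
]
tiers = place_elements(elements, tier_capacities=[1, 2, 1])
# tiers[0] holds the hottest element; the last tier holds the coldest.
```

A real DST mechanism would of course track activity continuously and migrate data online rather than recompute a full placement, but the ranking-and-filling step is the core idea.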
The operation of a DST system can be described as continuously placing (or keeping) data elements in the “right” storage tiers. For example, there may be n storage tiers and m data elements that need to be distributed across those tiers to optimize the overall performance of the system. Thus, for a DST system to exhibit optimal performance characteristics, its storage tiers should be sized correctly for the actual workload. The intent is to have just enough storage capacity in the higher performing storage tiers to contain the data having the highest activity levels, which may be referred to as “hot spots.” This may allow the DST system to appear to have the performance of a much more expensive storage configuration, such as a configuration in which all of the capacity in all of the tiers is provided by the higher performing storage, while the DST system actually utilizes a mix of higher and lower performing storage.
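One way to make the sizing goal concrete: treat the hot spots as the smallest set of data elements that accounts for a target fraction of total I/O activity, and size the high performing tier just large enough to hold them. The following sketch assumes a hypothetical per-element workload profile of (size in GB, I/O count) pairs and an 80% coverage target; both are illustrative choices, not values from the text.

```python
def high_tier_capacity(workload, target_fraction=0.8):
    """Estimate the high-tier capacity (GB) needed to contain the hot spots.

    workload: list of (size_gb, io_count) tuples, one per data element.
    Accumulates the hottest elements until they cover target_fraction of
    the total I/O activity, and returns their combined size.
    """
    total_io = sum(io for _, io in workload)
    covered, capacity = 0, 0
    # Walk elements from hottest to coldest.
    for size, io in sorted(workload, key=lambda t: t[1], reverse=True):
        if covered >= target_fraction * total_io:
            break
        covered += io
        capacity += size
    return capacity


# Hypothetical profile: total I/O is 13125, so 80% coverage needs 10500 I/Os;
# the two hottest elements (9000 + 4000 I/Os) cover that with 10 + 50 GB.
workload = [(10, 9000), (50, 4000), (200, 120), (500, 5)]
print(high_tier_capacity(workload))  # → 60
```

This also makes the cost argument above visible: 60 GB of fast storage covers 80% of the activity in a 760 GB data set, rather than provisioning all 760 GB from the higher performing tier.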
It may be very difficult for even an experienced system administrator to accurately predict the capacities required in each storage tier of a DST system for a given workload, especially when multiple volumes or Logical Units (LUs) are provisioned from the tiers. For example, the sizing of the storage tiers is typically performed manually and is largely based upon an “educated guess.” Then, once a DST system is installed and operating, the system administrator generally relies on “trial and error” to optimize system operation. This may be done by adding capacity to, or removing capacity from, various storage tiers and then waiting to see how overall system performance responds.
In the manual system sizing approach described above, the system administrator may overprovision the higher performing storage tiers so that they have more capacity than is actually required for the workload, unnecessarily increasing the cost of the system. Alternatively, the system administrator may underprovision the higher performing storage tiers and obtain lower performance from the system than may be possible with even a potentially small increment in higher performing storage capacity. Further, the workload for a DST system may change over time, and a storage tier configuration that worked well initially may no longer provide optimal performance for the workload. Thus, it may be difficult for the system administrator to determine how a particular storage tier configuration should be changed to handle new workloads in an optimal way.