Traditionally, when migrating data and data-related workloads from an original computing platform to a new computing platform, an operator may target a new computing platform with improved hardware components. For example, an operator may select a new computing platform on the basis of any one of an improved processor type, an increase in core count, an increase in random access memory (RAM), an improved operating system, and/or improved computing storage. Typically, an expectation may be that an improvement in one or more individual hardware components on a new computing platform may translate into an improvement in service capability of the new computing platform relative to the original computing platform, under a same user-generated workload. However, it is often difficult to quantify such an improvement in service capability, largely because a user-generated workload often performs multiple CPU, memory, file input/output, and network input/output operations, each of which uses different combinations of hardware components in differing proportions.
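The difficulty described above can be made concrete with a minimal profiling sketch. The phase functions and workload mix below are hypothetical, chosen only for illustration: a single workload spends its time across CPU-bound, memory-bound, and file input/output phases in differing proportions, so improving one hardware component accelerates only the corresponding fraction of the total.

```python
import os
import tempfile
import time


def time_phase(fn):
    """Return wall-clock seconds spent in one workload phase."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start


def cpu_phase(n=200_000):
    # CPU-bound phase: integer arithmetic dominates.
    total = 0
    for i in range(n):
        total += i * i
    return total


def memory_phase(n=200_000):
    # Memory-bound phase: allocate and traverse a large list.
    data = list(range(n))
    return sum(data)


def file_io_phase(chunk=b"x" * 4096, count=256):
    # File-I/O-bound phase: write and read back a temporary file.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        for _ in range(count):
            f.write(chunk)
        path = f.name
    with open(path, "rb") as f:
        f.read()
    os.remove(path)


def profile_workload():
    """Break a mixed workload into per-phase timings."""
    return {
        "cpu": time_phase(cpu_phase),
        "memory": time_phase(memory_phase),
        "file_io": time_phase(file_io_phase),
    }


timings = profile_workload()
total = sum(timings.values())
proportions = {name: t / total for name, t in timings.items()}
```

Under this sketch, if the file-I/O phase accounts for, say, 60% of total time, then even a processor that is twice as fast leaves most of the workload's runtime unchanged, which is why a component-level improvement rarely translates one-for-one into service-capability improvement.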
Similarly, new computing platforms may have improvements in software. The new computing platform may have a different operating system version or a different operating system altogether. Further, the new computing platform may have additional system software and/or software capabilities that purport to offer improved performance. However, the new computing platform may not in fact deliver the improved performance, or the user's actual workload may not be able to realize the promised improvements.
In other words, an improvement in one hardware and/or software component, without a proportional improvement in another, may result in a less-than-satisfactory improvement in overall service capability.
Therefore, when deciding whether to migrate data or data workloads to a new computing platform, there is a need to understand how different platform architectures perform under a given user-generated workload.