Various sizing methodologies for predicting the hardware investment needed to run software applications are known from the prior art. “Sizing” encompasses the determination of the central processing unit (CPU) requirements, volatile memory requirements (e.g., cache memory or random access memory), and mass storage requirements (e.g., hard disk capacity) of a data processing system that is capable of running a given software application at acceptable performance levels.
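The three requirement categories above can be sketched as a simple data structure. This is a minimal illustration only; the class and field names are assumptions and do not appear in any of the systems discussed below.

```python
# Minimal sketch of a sizing result covering the three requirement
# categories named above (CPU, volatile memory, mass storage).
# All names here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SizingEstimate:
    cpu_units: float      # required CPU capacity, in normalized units
    memory_bytes: int     # volatile memory (cache/RAM) requirement
    storage_bytes: int    # mass storage (hard disk) requirement

    def satisfied_by(self, platform: "SizingEstimate") -> bool:
        # A candidate hardware platform satisfies this estimate if it
        # provides at least the required capacity in every category.
        return (platform.cpu_units >= self.cpu_units
                and platform.memory_bytes >= self.memory_bytes
                and platform.storage_bytes >= self.storage_bytes)
```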
U.S. Pat. No. 6,542,854 shows a method and mechanism for sizing a hardware system for a software workload. The workload is modelled into a set of generic system activities which are not directly tied to a specific hardware platform. Suitable hardware systems or components are selected by analyzing the workload and hardware profiles in terms of the generic system activities.
U.S. Pat. No. 6,542,893 shows a database sizer. The database sizer calculates the total mass storage requirement for a relational database, including database storage requirements, application and software requirements, system table requirements, scratch and sort requirements, log file requirements, and growth requirements. The sizing proceeds as follows. Detailed inputs are provided for each table in the database, sufficient to calculate the required size of each table, and for each index of each table, sufficient to calculate the required index size. Input parameters are provided for each database system, including the page size, the fill factor, the log file space, the temporary space (as a percentage of the formatted database size including indexes), the space required for the operating system and application software, the space required for system databases, the percent growth required for the database, and the page file space. From these inputs and parameters, a total storage requirement for the database and a storage requirement for the database management system are calculated. The calculated storage requirements separately report the operating system and application software space, system table space, scratch and sort space, and log file space.
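The calculation described above can be sketched roughly as follows. The function and parameter names, and the exact arithmetic, are assumptions made for illustration; the patent itself defines the precise method.

```python
# Illustrative sketch of the database sizing calculation described in
# U.S. Pat. No. 6,542,893. Names and arithmetic are assumptions for
# illustration only.

def size_database(
    table_sizes,       # bytes per table, from the detailed table inputs
    index_sizes,       # bytes per index, from the detailed index inputs
    fill_factor,       # fraction of each page actually filled, in (0, 1]
    temp_space_pct,    # temp space as % of formatted DB size incl. indexes
    log_file_space,    # bytes reserved for log files
    os_app_space,      # bytes for operating system and application software
    system_db_space,   # bytes for system databases and system tables
    growth_pct,        # expected growth as % of the formatted database size
    page_file_space,   # bytes reserved for the page file
):
    # Formatted database size: data plus indexes, inflated by the fill
    # factor because pages are only partially filled.
    formatted = (sum(table_sizes) + sum(index_sizes)) / fill_factor
    # Scratch/sort (temporary) space and growth are percentages of the
    # formatted size including indexes.
    temp_space = formatted * temp_space_pct / 100.0
    growth = formatted * growth_pct / 100.0
    total = (formatted + temp_space + growth + log_file_space
             + os_app_space + system_db_space + page_file_space)
    # The components are reported separately alongside the total.
    return {
        "formatted": formatted,
        "temp_space": temp_space,
        "growth": growth,
        "log_file_space": log_file_space,
        "os_app_space": os_app_space,
        "system_db_space": system_db_space,
        "page_file_space": page_file_space,
        "total": total,
    }
```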
Quick Sizer is a SAP® program product which assists in selecting the hardware and system platform that meets specific business requirements. Quick Sizer provides online, up-to-date sizing based on business-oriented figures, such as the number of users or the expected number of business processes and documents (http://www.sap.com/andeancarib/soluciones/technology/documentacion/Quick%20Sizer.pdf).
The Optimizer Model for SAP is an option to the HyPerformix Integrated Performance Suite™ (IPS) (http://www.hyperformix.com/whitepapers/Optimizer%20Model%20for%20SAP.pdf). The Optimizer Model for SAP includes models for the major SAP R/3 software modules (e.g., Financials, Sales and Distribution) and combines them with the extensive hardware library found in the HyPerformix Infrastructure Optimizer™ (Optimizer) product. Optimizer enables its users to analyze and optimize end-to-end performance of their SAP application on various hardware configurations.
The Optimizer Model for SAP takes workload and configuration parameter inputs. Workload parameters include the number of users of each SAP R/3 module. Configuration parameters include the number of servers at each tier (e.g., web, application) and the number of processes (e.g., Dialogue workers) on each server. Once workload and configuration parameters are specified, an application model is generated and automatically added to a hardware topology model created using Optimizer. “What-if” experiments can then be carried out to evaluate various performance questions.
The Optimizer Model for SAP provides a default set of resource usage metrics for each of the SAP modules. These metrics are similar to those used in SAP Quick Sizer and include both CPU and network usage metrics. Early in the design phase, architects can quickly build models using the default resource metrics for first-cut sizing and performance analysis. Later, models can be calibrated with actual measurements collected during functional testing. Scenarios can then be rerun to ensure the application is still on target to meet its performance goals.

At least one problem with the above-discussed systems is that they do not provide for cost contributions on a per-object and/or a per-transaction basis. Accordingly, there is a need for better systems and methods for providing a cost estimate for a data processing system.
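The missing capability identified above can be sketched as follows. The cost model, function names, and rates are all assumptions for illustration; none of the cited products provides this per-object or per-transaction breakdown.

```python
# Hedged sketch of per-object and per-transaction cost contributions,
# the capability identified as missing from the prior-art systems.
# The cost model and all names are illustrative assumptions.

def cost_contributions(objects, transactions,
                       cost_per_cpu_second, cost_per_byte):
    """Attribute hardware cost to individual objects and transactions.

    objects: mapping of object name -> mass storage bytes it occupies
    transactions: mapping of transaction name -> CPU seconds it consumes
    """
    # Each object's contribution is driven by the storage it occupies.
    per_object = {name: size * cost_per_byte
                  for name, size in objects.items()}
    # Each transaction's contribution is driven by the CPU it consumes.
    per_transaction = {name: cpu * cost_per_cpu_second
                       for name, cpu in transactions.items()}
    total = sum(per_object.values()) + sum(per_transaction.values())
    return per_object, per_transaction, total
```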