In large enterprises or the like, it is essential that all facets of a computing environment function properly on a continuous basis. If a problem is determined to exist (or might exist), it is important that the problem be addressed at its onset so as to mitigate the problem. In this regard, enterprises are tasked with monitoring all facets of the computing environment including, but not limited to, the infrastructure/hardware (e.g., servers, storage devices, PCs or the like), the operating systems, the various layers of the infrastructure (e.g., virtual layer, middleware layer, database layer, application layer), as well as the applications (i.e., software or the like) being executed within the computing environment.
Typically, in practice, a new application, hardware device or the like may be developed and implemented and, once such application, hardware device or the like reaches the production stage, the application, hardware device or the like is continuously monitored to ensure proper functioning. However, the choice as to which monitors to use, and the thresholds and/or parameters associated with the monitors, is often subjectively made at the outset of onboarding the new application, hardware device or the like.
For example, a new application may be developed which may require one or more individuals within the enterprise (e.g., a monitoring team) to define parameters and the like associated with the monitoring needs of the new application. However, the monitoring team typically has knowledge as to the application itself, but may not have insight into the layers of the computing environment affected by the application (i.e., so-called upstream effects) and/or layers of the computing environment that have an effect on the application (i.e., so-called downstream effects). As a result, the monitoring team must reach out to other entities within the enterprise associated with the various layers of the computing environment to receive their specific inputs to the monitoring configuration process. Such a highly manual process is not only cumbersome and inefficient, but also highly subjective because it relies solely on the knowledge held by the individuals and/or team members. Such knowledge held by the individuals and/or team members is not fact-based and, thus, results in inconsistent monitoring. As a result of such inaccurate and/or inconsistent monitoring, critical issues in the computing environment may go undetected and/or unreported or, conversely, false positives (i.e., detecting/reporting an issue when one does not actually exist) may result based on the wrong selection of which monitors to implement and/or the configuration of the monitors selected (i.e., the thresholds, iterations, and other parameters associated with the monitors).
In addition, the current process for monitoring selection and configuration does not rely on the performance of the existing computing environment and, more specifically, the performance of similar types of applications and/or hardware in the computing environment. For example, the current process fails to refer to or otherwise observe the effectiveness of captured parameters associated with similar monitoring performed on similar types of applications and/or hardware, nor does the current process compare chosen monitoring configurations to baselines based on previous calibrations to determine the requirements/configurations for the application/hardware being considered for monitoring.
Therefore, a need exists to objectively define monitors and monitor configurations (settings, thresholds, iterations, associated parameters and the like) for newly developed/implemented applications, hardware and the like that are to be deployed in the computing environment. In specific instances, a need exists to objectively define the monitors and/or monitor configurations prior to actual implementation of the related application/hardware or the like, while in other instances, a need exists to refine the monitors and/or monitoring configurations once the applications and/or hardware are implemented (e.g., released to a test stage/phase, production stage/phase or the like).