The size and scope of enterprise systems have created significant difficulties for system administrators. For example, an enterprise may have hundreds of thousands of terabytes of data in a “big data” store, all served by a plurality of services. Such enterprise systems, particularly as a result of their volume, require routine maintenance: services which execute with respect to the big data store must be tested and updated, the validity of all or portions of the big data store must be verified, and other similar operations must be performed. Such maintenance requirements may easily exceed the capabilities of human technicians: for example, diligent weekly testing of a big data store and its associated services may take an entire team more than a week of work.
One reason why big data systems take so long to maintain is that the maintenance and/or diagnostic services executing on big data stores may be configured very differently and may require very different handling. For example, testing the operating status of one service may require only a single mouse click, but may consume 90% of the processing resources of a server. In contrast, testing the operating status of another service may require complex configuration and numerous hand-typed queries, but may consume only a negligible amount of the processing resources of the same server. Moreover, because such services are maintained by different companies and often serve very different roles with respect to a big data store, little in the way of standardization exists among them.