Algorithms, discussed in the following using software as a representative instance, pervade numerous areas of our daily life. For example, many people's professional occupations involve working with a PC. Software is nowadays also found in many, if not almost all, devices. The range extends from the private domain, where, say, the mobile telephone, television set, satellite receiver, and suchlike are equipped with software, through the public sector, with traffic-guidance systems as an example, to the industrial sector, where large technical systems are controlled by software.
Ever shorter development cycles unfortunately compel software manufacturers to release products onto the market even though they have not yet been adequately tested. We are therefore often confronted with devices that do not work at all or work improperly. Software errors cannot be totally excluded even when extensive tests are carried out, however. A number of strategies have therefore been developed that enable errors in algorithms to be detected efficiently.
Systems are monitored during operation to check their state of health along with their performance. Many different performance parameters are evaluated for that purpose; these can be roughly subdivided into the “software-application”, “operating-system”, and “hardware” domains. For example, CPU utilization, network utilization, memory usage, hard-disk activity, and much else besides, for observing all of which operating systems often already include suitable tools, can provide information about how correctly an algorithm is running. If a specific software application is to be monitored, the data output by that application can, for example, be examined. Such data is often available in the form of what are termed “log files”, in which the events relating to the application's operation are recorded, thus providing a valuable basis for trouble-shooting.
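The kind of operating-system-level monitoring described above can be illustrated with a minimal sketch. The particular metrics sampled here (disk usage and load average via the Python standard library) and the function name are illustrative assumptions, not details from the text:

```python
# Sketch: sampling a few coarse health metrics using only the
# Python standard library. Metric selection is an illustrative
# assumption; real monitoring tools track far more parameters.
import os
import shutil


def sample_metrics(path="/"):
    """Collect a small dictionary of system health indicators."""
    metrics = {}
    # Hard-disk usage for the given mount point.
    usage = shutil.disk_usage(path)
    metrics["disk_used_fraction"] = usage.used / usage.total
    # 1-minute load average as a rough CPU-load indicator
    # (POSIX only, hence the portability guard).
    if hasattr(os, "getloadavg"):
        metrics["load_1min"] = os.getloadavg()[0]
    return metrics
```

Such samples, taken periodically, would give the coarse picture of an algorithm's health that the text describes; application-specific insight still requires the log files discussed next.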
The log files mostly have to be evaluated manually by software specialists. That is very time-consuming and costly because the log files are usually very long and full of information documenting the normal, fault-free program flow, meaning it does not point to any errors in the actual sense. Support is therefore available in the form of what are termed “evaluation scripts” or “post-processors”, which structure the information. For that purpose a search is conducted in the log files for certain keywords at whose locations potential problems in the software are suspected; instances thereof would be terms such as “Warning”, “Error”, “Abort”, and “Exception”. Lines in the log file that contain such keywords can, for example, be marked in color. As already mentioned, supplied software frequently fails to satisfy even minimum quality criteria, so even the available tools cannot stem the flood of information sufficiently to enable software specialists to perform efficient trouble-shooting. That is because they are as a rule confronted with a quantity of what are termed “false positive” hits. “False positive” is a term used for test results indicating that a test criterion has been met although it has in fact not been met at all. The log files therefore contain a quantity of “error messages” which an experienced software expert would nonetheless class as posing no hazard or being of no interest, since they are produced during the normal program flow.
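The keyword-based evaluation script described above can be sketched as follows. The keywords come from the text; the ignore patterns are an illustrative assumption standing in for an expert's list of known-harmless messages used to suppress false positives:

```python
# Sketch of a log-file "evaluation script": flag lines containing
# suspect keywords, while suppressing known false positives.
import re

# Keywords at whose locations potential problems are suspected.
KEYWORDS = ("Warning", "Error", "Abort", "Exception")

# Hypothetical patterns an expert has classed as harmless because
# they occur during the normal program flow (illustrative only).
IGNORE_PATTERNS = [
    re.compile(r"Warning: cache cold"),
]


def flag_lines(log_lines):
    """Return (line_number, line) pairs that contain a keyword
    and do not match any known-harmless pattern."""
    hits = []
    for number, line in enumerate(log_lines, start=1):
        if not any(kw in line for kw in KEYWORDS):
            continue  # nothing suspicious on this line
        if any(p.search(line) for p in IGNORE_PATTERNS):
            continue  # suppress a known false positive
        hits.append((number, line))
    return hits


log = ["startup ok", "Warning: cache cold", "Error: disk full"]
# → [(3, "Error: disk full")]
print(flag_lines(log))
```

The ignore list is exactly where the difficulty described in the text lies: without an expert-curated set of harmless patterns, the script drowns the specialist in false-positive hits.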
The problems addressed do not, of course, relate exclusively to algorithms in software form. Rather, an algorithm can also be embodied in hardware or in a form combining software and hardware. What has been said thus applies analogously to those categories as well.