Qualification of instruments, and particularly of the operating software of such instruments, has traditionally been performed in two stages: first, by determining that the features, menus, and attributes specified for the software are present and available for selection; and second, by verifying that the routines and algorithms provided by the software operate as expected. The second stage is typically carried out by running a process or data reduction using the algorithms of the operating software (the software under test) and comparing the final output to data previously processed by the same data system/software. In effect, the algorithms are being used to verify themselves during this type of qualification. At best, this can show only that the data system performs with some degree of precision; it says nothing about the accuracy of the algorithmic processes.
A potential problem with this type of qualification is that if the system used to produce the reference output is not functioning correctly, or if the algorithms themselves, though working as designed, are faulty, then the system under test (the system or its operating software) can be qualified or approved even though the results it provides are inaccurate. The qualification process will simply reproduce the same faulty or inaccurate results, and the comparison of results will show a substantial match, which equates to qualification or passing. Accordingly, typical qualification procedures do not provide an independent assessment of the correctness of the entire data reduction chain, from algorithmic accuracy through the final reporting of the data produced by the instrument/operating software.
In traditional qualification procedures, an instrument or system can be checked by running an algorithmic process on some data with a reference version of the system/software (typically in a controlled environment within the vendor's facility) to produce one or more results. The results and the data are then shipped along with the actual system/software to be qualified, and the system/software is qualified at the receiving site after installation and setup. The theory behind this procedure is that if the results produced by running the same algorithmic process on the same data with the newly installed system agree with the results produced by the reference system, then the installation is correct and the system can be deemed qualified, having passed this qualification procedure. A problem with this approach is that it is self-justifying. That is, as long as there are no problems with the reference system, and thus with the results it produces, the newly installed system is properly qualified when it produces matching results. However, if there is some type of problem with the reference system (e.g., installation issues, system dependencies, or fundamental algorithmic failures) and the newly installed system shares the same type of problem, then both sets of results will match and the newly installed system will be qualified when it should not be, since both sets of results are erroneous in this case.
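The self-justifying failure mode described above can be sketched in a few lines. This is purely an illustration, not any actual qualification procedure: the function names and the off-by-one defect are invented, and Python is used only as a convenient notation.

```python
# Hypothetical sketch: a reference system and a newly installed system
# that share the same algorithmic defect still "pass" a comparison-based
# qualification, even though both produce wrong answers.

def faulty_mean(values):
    # Invented defect for illustration: the divisor is off by one,
    # so the "mean" is computed over len(values) + 1 points.
    return sum(values) / (len(values) + 1)

reference_algorithm = faulty_mean   # vendor's reference system
installed_algorithm = faulty_mean   # newly installed system, same defect

data = [2.0, 4.0, 6.0]
reference_result = reference_algorithm(data)   # 3.0, but the true mean is 4.0
installed_result = installed_algorithm(data)

# The traditional comparison: the two results match, so the installed
# system would be deemed qualified.
qualified = abs(reference_result - installed_result) < 1e-9
print(qualified)   # True -- a false positive

# Yet neither result agrees with the independently computed true value.
true_mean = sum(data) / len(data)
print(reference_result == true_mean)   # False
```

Because the comparison is made only between the two systems, and never against an independently derived expected value, the shared defect is invisible to the procedure.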
It would therefore be desirable to provide qualification methods, systems, and software that offer independent qualification of an instrument/software, avoiding self-justifying results and reducing the chance of erroneously qualifying a system that should not be qualified (thereby eliminating, or at least substantially reducing, the number of false-positive results).