It is known in the art of computing to run two or more data processing operations simultaneously. For instance, A. Avizienis, “The N-Version Approach to Fault-Tolerant Software”, IEEE Transactions on Software Engineering, vol. 11, no. 12, pp. 1491–1501, December 1985, describes a so-called ‘N-version’ computer system. Such a computer system runs two or more computer programs (also known as ‘versions’ or Software Module Instances (SMIs)) that have been developed independently of each other, but from the same specification. The versions thus provide the same functionality while using different calculations. This improves the reliability of the software, since the chance that the same fault occurs in more than one of the versions is relatively small. A disadvantage of this approach, however, is that, although the reliability is improved compared to a single-version computer program, a relatively large risk of faults remains.
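The N-version scheme described above may be sketched as follows. This is an illustrative sketch only, not an implementation from the cited reference; the three version functions and the majority voter are hypothetical, and the faulty divisor in the third version is contrived to show how the voter masks a single faulty version.

```python
# Minimal sketch of N-version majority voting (illustrative only).
# Each "version" implements the same specification -- here, the integer
# average of a list -- with independently written code.

from collections import Counter

def version_a(values):
    # Straightforward sum-and-divide implementation.
    return sum(values) // len(values)

def version_b(values):
    # Incremental (running-total) implementation of the same specification.
    total = 0
    for v in values:
        total += v
    return total // len(values)

def version_c(values):
    # A deliberately faulty version: off-by-one in the divisor.
    return sum(values) // (len(values) + 1)

def vote(results):
    # Majority voter: return the most common result, masking a single
    # faulty version among the three.
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority among versions")
    return value

data = [4, 8, 6]
results = [version_a(data), version_b(data), version_c(data)]
print(vote(results))  # the two correct versions outvote the faulty one -> 6
```

The sketch assumes all three versions receive the identical input `data`; as noted below, this assumption does not always hold in practice.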
In particular, the inventors have found that the reliability of a data processing operation depends not only on the set of instructions (i.e. the computer program) executed to perform the operation, but also on the data used in the operation. Accordingly, where two or more data processing operations acquire external data or store data, there is a chance that the operations acquire and/or store differing values for the same data item. In that case, since the respective operations use different data values, the outcomes of the operations will differ (and quite likely be incorrect). Furthermore, where two or more data processing operations use the same memory, there is a chance that an incorrect data value stored by one operation is acquired by another operation, and hence a risk that faults are transferred from one data processing operation to another.