Most digital data communication systems in use today employ some form of encoding (e.g. convolutional or block codes) to enable the receiver to correct errors that have occurred during data transmission over a noisy channel. During data recovery, linear block and convolutional codes operate to provide a "best fit" between the data that was actually received and the data that was most probably transmitted. In the case of block codes, this best fit is obtained by forcing the received data onto a known "good" data pattern at the smallest Hamming distance. Maximum-likelihood decoders for convolutional codes determine the best fit between received symbols and a corresponding data pattern using the smallest cumulative path metric as the decision criterion.

Once the data has been processed, however, the user is left with data that provides no indication of its correctness, apart from a general probability of error that is constant over all data bits. Although soft decisions obtained from the channel receiver offer some degree of confidence in the encoded data, no information concerning the confidence level of any particular decoded data bit is provided. Simply put, there is no way to determine whether or not the decoded data contains errors, much less an indication of where such errors occur.
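The minimum-Hamming-distance decoding described above can be sketched as an exhaustive nearest-codeword search. The particular (7,4) Hamming code, generator matrix, and function names below are illustrative assumptions, not taken from the text:

```python
from itertools import product

def hamming_distance(a, b):
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

# Illustrative generator matrix for a (7,4) Hamming code
# (4 message bits, 3 parity bits, minimum distance 3).
G = [
    (1, 0, 0, 0, 1, 1, 0),
    (0, 1, 0, 0, 1, 0, 1),
    (0, 0, 1, 0, 0, 1, 1),
    (0, 0, 0, 1, 1, 1, 1),
]

def encode(msg):
    """Multiply a 4-bit message by G over GF(2)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                 for col in zip(*G))

# Codebook of all 16 valid ("good") codewords.
codebook = {encode(m): m for m in product((0, 1), repeat=4)}

def decode(received):
    """Force the received word onto the nearest good pattern:
    the codeword at smallest Hamming distance."""
    best = min(codebook, key=lambda cw: hamming_distance(cw, received))
    return codebook[best]

msg = (1, 0, 1, 1)
tx = encode(msg)
rx = list(tx)
rx[2] ^= 1            # single bit error introduced by the channel
recovered = decode(tuple(rx))
```

Note that `decode` returns only the message bits: the corrected output carries no per-bit confidence measure, which is exactly the limitation the passage describes.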