Parity encoders are used in digital communications and digital memory systems to detect errors. An encoder performs an exclusive-or (XOR) operation on a string of bits: if the number of asserted bits is odd, the parity bit is asserted; otherwise, it is not. Thus, if a single-bit error occurs during transmission or as a result of storage, the error can be detected by performing the same XOR parity operation on the string of received bits and comparing the result with the previously generated parity bit. The standard TTL 74180 parity generator/checker chip is an example of a digital logic implementation of such a network.
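The parity operation described above can be sketched as a simple XOR reduction. The following is an illustrative sketch (the function names are mine, not from the specification):

```python
from functools import reduce

def parity_bit(bits):
    """XOR all bits together: 1 if an odd number of bits are asserted, else 0."""
    return reduce(lambda a, b: a ^ b, bits, 0)

def detect_error(received_bits, sent_parity):
    """Recompute parity over the received bits and compare with the
    previously generated parity bit; a mismatch flags a single-bit error."""
    return parity_bit(received_bits) != sent_parity
```

A single flipped bit changes the parity of the string, so `detect_error` returns True exactly when an odd number of bits (in particular, one) were corrupted.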
The problem of implementing an N-bit parity encoder has been studied by Rumelhart, Hinton and Williams, "Learning Internal Representations by Error Propagation," pp. 334-335, Chapter 8, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press, 1986. Rumelhart et al. conclude that in a three-layer network (input, hidden and output layers) without any direct connections from the input to the output units, at least N hidden units are required to solve parity with patterns of length N.
FIG. 1 shows the Rumelhart et al. solution. The three-level network consists of an N-terminal input, where input terminal 10 is typical; a set of unit-weighted synaptic (receptor) input connections, such as connection 16, which connect the input to the hidden layer of N neural cells 12, each neural cell 12 having a threshold value applied on input 22; and a total of N synaptic weights connecting the outputs of the hidden layer cells 12 to the single output layer neural cell 14, the weights having alternately signed unit values, where solid lines signify positive (excitatory) and dotted lines signify negative (inhibitory) values. The output of each hidden cell 12 and of output cell 14 is transformed by a threshold activation function that produces an output of 1 if the sum of the inputs is greater than zero, and 0 otherwise. Thus, if any k bits out of N are asserted at the input, the sum, net_m, of the input bits plus the threshold at the m-th hidden cell will be

net_m = k + (1 - 2m)/2 (1)
where (1 - 2m)/2 is the threshold bias input 22 of the m-th cell. As a result,

net_m > 0 for m ≤ k, (2)
net_m < 0 for m > k,
so that the outputs of the first k hidden layer cells 12 will be at logic 1 while the remaining N - k will be at logic 0. If k is an even number, the sum or value of "net" into output cell 14 will be -1/2, causing a logic 0 output; otherwise it will be +1/2, causing a logic 1 output, in accordance with the N-bit parity rule.
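The behavior of the FIG. 1 network can be simulated directly from equation (1) and the alternating output weights. The sketch below is illustrative only (the function name and variable names are assumptions, not part of the specification):

```python
def parity_network(bits):
    """Simulate the Rumelhart et al. three-level parity network of FIG. 1."""
    n = len(bits)
    k = sum(bits)  # unit input weights, so each hidden cell sees the bit count k
    # Hidden cell m fires when net_m = k + (1 - 2m)/2 > 0, i.e. when m <= k
    hidden = [1 if k + (1 - 2 * m) / 2 > 0 else 0 for m in range(1, n + 1)]
    # Alternating +1/-1 output weights (solid/dotted lines), output bias -1/2
    net_out = sum((-1) ** (m - 1) * h for m, h in enumerate(hidden, start=1)) - 0.5
    return 1 if net_out > 0 else 0
```

With k hidden cells firing, the alternating weights sum to 1 when k is odd and 0 when k is even, so the -1/2 bias yields exactly the N-bit parity of the input.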
The N-bit parity problem is not linearly separable and therefore cannot be solved by a two-level network (input and output levels), in which the problem itself dictates the number of input and output terminals; it requires a three-layer structure. However, for three-layer structures there is no general hard-and-fast rule, applicable before training and weight (connection) minimization techniques, for the number of hidden units required.
A simple three-level neural network with a minimum number of hidden units would be useful for neural networks requiring implementation of the N-bit parity function in compatible neural network technology, potentially providing simplicity of construction, speed of operation, and smaller area utilization in silicon VLSI.