Information concerning various types of neural networks can be found, for example, in the article by R. P. LIPPMANN, "An introduction to computing with neural nets", IEEE ASSP Magazine, April 1987, pp. 4 to 22, which is incorporated herein by reference.
For the implementation of some of the above processes in a neural processor, it may be necessary to normalize data, for example as an intermediate step in successive learning stages. Advantageously, the neural processor should itself execute the normalizing step. Such a step consists of dividing a batch of digital data by a factor, an operation which may be repetitive.
Independently of the execution of tasks of the neural type (resolving, learning), the neural processor may also be considered a digital data processing device which performs calculations for the normalization of data.
A neural architecture for performing a division is described in the document: "High-speed division unit using asymmetric neural network architecture" by H. HUNG and P. SIY, Electronics Letters, March 1989, Vol. 25, No. 5, pp. 344-345, which is incorporated herein by reference. The cited document describes an asymmetric neural architecture which performs a cascade calculation of each bit of the quotient of two numbers represented in binary notation, one number being a multiple of the other. The quotient is limited to its integer part. This architecture is implemented in an analog neural processor by means of amplifiers which form sums of products of data and apply a non-linear transfer function. Each bit of the quotient is determined successively, taking into account the preceding bits, so that the calculation of the quotient may require a large number of calculation cycles.
This also implies that the neurons determining the bits of successively lower significance comprise a progressively increasing number of inputs. Such a mode of operation cannot be transposed to a digital technology without degrading performance.
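The successive, most-significant-bit-first determination of quotient bits described above can be sketched in software. The trial-subtraction test and the fixed bit width n_bits below are illustrative assumptions, not details taken from the cited document:

```python
def bitwise_quotient(X, Y, n_bits=8):
    """Integer quotient of X / Y, one bit per cycle, MSB first.

    Each bit decision depends on all previously fixed bits,
    mirroring the cascade structure of the cited architecture.
    """
    Q = 0
    for i in range(n_bits - 1, -1, -1):
        trial = Q | (1 << i)      # tentatively set bit i of the quotient
        if trial * Y <= X:        # keep the bit only if Q * Y stays <= X
            Q = trial
    return Q                      # integer part of X / Y
```

Because bit i can only be decided once bits above it are known, the loop is inherently sequential; this is the dependency that forces the large number of calculation cycles mentioned above.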
The following additional background material is incorporated herein by reference:
1. U.S. Pat. No. 4,994,982, which shows the structure of a prior art neuron;
2. British Pat. No. GB 2,236,608 A, which also shows the structure of a prior art neuron;
3. M. Duranton et al., "Learning on VLSI: A General Purpose Neurochip", Philips J. Res. 45, pp. 1-17, 1990, which shows a prior art neurochip and discusses fields of application.
To calculate the quotient Q of two data X and Y, the neural processor comprises:
at least one neuron which iteratively calculates a series of contributions ΔQ_i = q_i · B^i which together form an expression of the quotient Q in an arithmetic base B,
and at least one neuron which iteratively updates a partial quotient QP by summing said contributions ΔQ_i in order to produce the quotient Q.
Each iteration comprises the following operations:
a- calculate a plurality of quantities SD_j = X - (QP_{i+1} + j · B^i) · Y,
b- determine a value j = q_i which verifies sgn(SD_j) ≠ sgn(SD_{j+1}),
c- determine a contribution ΔQ_i = q_i · B^i,
d- determine a new partial quotient so that QP_i = QP_{i+1} + ΔQ_i,
e- decrement i so as to determine the quotient Q by iteration of the preceding operations until a minimum value of i is reached which defines a predetermined accuracy for Q.
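Operations a- through e- can be sketched as follows. The candidate digit range 0 to B-1, the iteration bounds i_max and i_min, and the use of SD_j >= 0 as the sign test are assumptions made for illustration; the source only specifies the quantities SD_j, the sign-change criterion, and the partial-quotient update:

```python
def neural_divide(X, Y, B=2, i_max=7, i_min=-8):
    """Iteratively build the quotient Q = X / Y as a sum of
    contributions dQ_i = q_i * B**i, from i = i_max down to i_min."""
    QP = 0.0  # partial quotient QP_{i+1}, initially zero
    for i in range(i_max, i_min - 1, -1):
        # a- quantities SD_j = X - (QP_{i+1} + j * B^i) * Y
        #    for candidate digits j = 0 .. B-1 (SD_B is needed for the test)
        SD = [X - (QP + j * B ** i) * Y for j in range(B + 1)]
        # b- the digit q_i is the j where the sign changes:
        #    sgn(SD_j) != sgn(SD_{j+1})
        q_i = next((j for j in range(B)
                    if (SD[j] >= 0) != (SD[j + 1] >= 0)), 0)
        # c- contribution dQ_i = q_i * B^i
        # d- new partial quotient QP_i = QP_{i+1} + dQ_i
        QP += q_i * B ** i
    # e- the loop decrements i down to i_min, which fixes the
    #    accuracy of Q at B ** i_min
    return QP
```

The sign-change test selects the largest digit j whose trial remainder X - (QP + j · B^i) · Y is still non-negative, so the partial quotient never overshoots X / Y and each iteration refines Q by one base-B digit.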