The present invention relates to a matrix data transposer and method for transposing matrix data composed of N signal columns each having M signals (M rows) in accordance with a predetermined relationship and then outputting the transposed matrix data.
In information processing equipment and the like, it is often necessary to recombine or rearrange data in accordance with a predetermined relationship.
For example, various kinds of printers utilizing an LED array, a multistylus head, a thermal head or the like (hereinafter generically referred to as "array heads") as a system for recording digitized picture information on recording paper have been put to practical use, and the data transferred to such a printer must often be rearranged.
Such an array head, in general, records picture information onto an electrostatic thermal recording medium by producing a line consisting of black and white dots in each recording operation.
Picture signals for one line are supplied to the array head for each recording operation. The picture signals, or data bits, consist of thousands of binary digits "0" and "1," which are used for recording the white and black dots, respectively. If the data bits are transferred to the array head in series, the transfer time may be lengthy. This lengthy transfer time is an obstacle to high-speed recording operations.
To solve this problem, as shown in FIG. 1, a plurality of shift registers 2 (eight are shown in FIG. 1), each capable of storing a predetermined quantity of data bits, are often provided in the array head 1 to hold and transfer the data bits 3 which represent one line of a picture to be printed. In FIG. 1, the data bits 3 are separated into eight parts of predetermined quantity, extracted as shown by the arrows 4, and transferred to the respective shift registers 2. If the data bits 3 are transferred as shown by the arrows 4, the data bits stored in the shift registers 2 have the same arrangement as before the transfer. However, the transfer speed in this case is eight times as fast as when the data bits are supplied serially through a single line.
For example, as shown in FIG. 2, data bits representative of a line to be printed are stored in a memory 5 and may be selected in the required order, stored in a single shift register 5.sub.1 capable of storing eight groups of data bits, and then transferred in parallel from the shift register 5.sub.1 to the shift registers 2 of the array head, as described above with respect to FIG. 1. This procedure is then repeated for each line to be printed.
Assume that one line of a picture signal is composed of 4096 data bits, that the data bits are numbered from "0" to "4095," and that the data bits are transferred as described above. First, the data bits are stored in random-access memory elements or the like and partitioned into eight groups: "0" to "511," "512" to "1023," "1024" to "1535," "1536" to "2047," "2048" to "2559," "2560" to "3071," "3072" to "3583," and "3584" to "4095." The data bits are then successively transferred to the shift register 5.sub.1 in numerical order from the first to the last of each group. That is, data bits "0," "512," "1024," "1536," "2048," "2560," "3072" and "3584" are picked up in order and transferred in series to the respective shift registers, whereafter data bits "1," "513," "1025," . . . are transferred, and finally data bits . . . "3071," "3583" and "4095" are transferred in the same manner to complete the transfer and processing of the data.
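The pickup order just described can be sketched in a few lines of code (Python is used here purely for illustration; the function and constant names are hypothetical and do not appear in the apparatus):

```python
# Hypothetical sketch of the pickup order described above: a 4096-bit
# line is split into eight groups of 512 bits, and one bit is taken
# from the same position of each group on every pass.
LINE_BITS = 4096
GROUPS = 8

def transfer_order(line_bits=LINE_BITS, groups=GROUPS):
    """Yield data-bit numbers in the order they are sent to the
    eight shift registers: 0, 512, 1024, ..., 3584, then 1, 513, ..."""
    group_size = line_bits // groups          # 512 bits per group
    for offset in range(group_size):          # position within each group
        for g in range(groups):               # one bit per shift register
            yield g * group_size + offset

order = list(transfer_order())
print(order[:8])    # [0, 512, 1024, 1536, 2048, 2560, 3072, 3584]
print(order[-3:])   # [3071, 3583, 4095]
```

Each pass of the inner loop delivers one bit to each of the eight shift registers, so every bit of the line is handled exactly once.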
Such data transposition processing has been applied not only to apparatus using thermal heads but also various other apparatus. For example, the serially transferred data may be stored in a random-access memory (RAM) in a certain order by address, and then, if the data is required in another order, the data may be read out of the RAM in that designated order.
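As a sketch (Python for illustration; the function name and data are hypothetical), the RAM-based approach amounts to one full write pass followed by one full read pass in the designated order:

```python
# Hypothetical sketch of transposition through a RAM buffer: data are
# written by address in arrival order, then read back in whatever
# order the consumer requires.
def reorder_via_ram(serial_data, read_order):
    ram = {}
    for addr, bit in enumerate(serial_data):   # write pass, bit by bit
        ram[addr] = bit
    return [ram[addr] for addr in read_order]  # read pass, bit by bit

# e.g. eight bits read back in an arbitrary designated order
result = reorder_via_ram(list("abcdefgh"), [0, 4, 1, 5, 2, 6, 3, 7])
print(result)   # ['a', 'e', 'b', 'f', 'c', 'g', 'd', 'h']
```

Note that every bit is handled twice, once on writing and once on reading, which is precisely the overhead criticized below.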
However, such processing requires the double procedure of writing all the data bits into the storage element bit by bit and reading them out bit by bit, which is an obstacle to high-speed processing. In addition, the data must be temporarily stored in the shift register 5.sub.1, as shown in FIG. 2, which delays the data transfer.
It is known to be efficient to write and read data by means of a microprocessor or the like, word by word (for example, eight bits by eight bits), as opposed to bit by bit. FIG. 3 illustrates such a system.
The system shown in FIG. 3 is arranged to receive data bits read out in parallel, eight bits at a time, from a memory 10 in which the data bits have been stored, transpose the data in accordance with a predetermined transposition relationship, and output the transposed data onto parallel output lines 11. The operational principle of the system is shown in FIG. 4.
In FIG. 4, data bits 3 representing one line to be printed are partitioned into eight groups of data bits L.sub.1 to L.sub.8. The data bits are read out word by word (for example, eight bits at a time) successively from the top of each of the groups, from W1 to W2, W3, . . . W8. After the reading of the first group of data bits, the next group of data bits is again read out word by word. The procedure is repeated until all data has been transferred. The eight bit data words are transmitted to a matrix data transposer 13 through eight-bit parallel transmission lines 12. The transposer 13 transposes the parallel data into serial data word by word and transfers the transposed data in parallel to the shift registers 2 (for example, eight shift registers provided in a thermal head, as shown in FIG. 1). As this procedure is repeated, the data bits 3 are partitioned into eight parts and transferred into the eight shift registers 2. With this system, it is possible to perform high-speed processing because the data are read and processed word by word, rather than bit by bit.
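The corner-turning performed by the transposer 13 can be modeled as follows (Python for illustration; the names are hypothetical). Eight words arrive one per clock across eight parallel input lines, and the transposer re-emits them as eight parallel bit-slices, so that over eight clocks output line j delivers one whole word serially into shift register j:

```python
# Hypothetical sketch of the 8x8 corner turn of FIG. 4.
def corner_turn(words):
    """words[i] is the word presented across the eight parallel input
    lines at clock i.  The result is indexed [clock][output line]: at
    each output clock one bit-slice leaves in parallel, and over eight
    clocks output line j emits words[j] bit by bit."""
    return [[words[j][t] for j in range(8)] for t in range(8)]

words = [list(range(w * 8, w * 8 + 8)) for w in range(8)]  # stand-in bits
slices = corner_turn(words)
# shift register j accumulates one bit per clock from output line j:
registers = [[slices[t][j] for t in range(8)] for j in range(8)]
assert registers == words   # register j ends up holding word j intact
```

Because a whole word moves on every clock, the throughput is eight times that of bit-by-bit handling.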
Referring again to FIG. 3, the matrix data transposer 13 (of FIG. 4) has addressable latches 17 (17.sub.11 to 17.sub.18, 17.sub.01 to 17.sub.08) for storing input data in response to address signals 16. The latches are arranged at the input side as well as the output side, the input side latches being equal in number to the parallel input data lines 12 and the output side latches being equal in number to the parallel output data lines 11. In this drawing, the input lines 12 and the output lines 11 are each shown as eight in number for the sake of convenience.
For example, when eight words each having eight bits are continuously input through the parallel input lines, the respective top bits of the eight words are latched, for example in the uppermost addressable latch 17.sub.11. Similarly, the respective second bits of the words are successively latched in the next addressable latch 17.sub.12. Thus, the eight words are distributed bit by bit to the addressable latches 17.sub.11 to 17.sub.18. Next, the stored data in addressable latch 17.sub.11 are successively distributed bit by bit to the output side addressable latches 17.sub.01 to 17.sub.08, so that the respective top bits of the words are stored in the respective top addresses of the output side addressable latches 17. Similarly, data are distributed successively from each of the other input side addressable latches 17.sub.12 to 17.sub.18 to the output side addressable latches 17.sub.01 to 17.sub.08. Thus, the respective words are stored one by one in the respective output side addressable latches 17.sub.01 to 17.sub.08. When the stored words are read out, they are output in parallel onto the output lines 11. The input/output operation is the same as that described above with reference to FIG. 4.
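The two-stage distribution just described can be sketched as follows (Python for illustration; the function name is hypothetical). Stage 1 gathers bit i of every incoming word into input-side latch 17.sub.1i; stage 2 redistributes so that output-side latch 17.sub.0j holds word j complete, ready to be read out onto output line j:

```python
# Hypothetical sketch of the two-stage latch distribution of FIG. 3.
def two_stage_latch(words):
    # Stage 1: input-side latches 17_11..17_18 -- latch i collects
    # bit i of each of the eight incoming words.
    in_latches = [[w[i] for w in words] for i in range(8)]
    # Stage 2: redistribute bit by bit to the output-side latches
    # 17_01..17_08 -- output latch j ends up holding word j intact.
    return [[in_latches[i][j] for i in range(8)] for j in range(8)]

words = [[w * 8 + b for b in range(8)] for w in range(8)]
out = two_stage_latch(words)
assert out == words   # output latch j holds word j, bit for bit
```

As arrays the result equals the input; the transposition lies in how the latches are driven: the words arrive in parallel, one word per clock, whereas each output-side latch is emptied serially onto its own output line.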
Using data processing techniques such as those described above, it is possible to read and transfer data efficiently. However, the above-described data transposer cannot begin to output signals before a predetermined number of words have been stored in the respective addressable latches. Therefore, the conventional transposer is subject to a limit in processing speed.