1. Field of the Invention
The present invention relates to a direct memory read and cell transmission apparatus for an ATM (Asynchronous Transfer Mode) cell segmentation system, and in particular, to an improved direct memory read and cell transmission apparatus for an ATM cell segmentation system which is capable of directly reading and transmitting data over the PCI (Peripheral Component Interconnect) bus of the system to which the ATM cell segmentation system is attached, whereby a word data stream whose start address and size are given in byte units can be received, the valid bytes extracted from word-aligned data, and an ATM cell formed using additional control information, while further implementing automatic padding and stop request handling.
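The byte-unit addressing described above can be illustrated with a short sketch. This is not the patented circuit itself; the helper names are hypothetical, and only the standard arithmetic for reading a byte-addressed, byte-sized transfer over a 32-bit word bus is shown: the first read must be word aligned, and any leading bytes of that word that precede the start address must be discarded.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical illustration: for a transfer whose start address and
 * size are given in byte units, compute the word-aligned address of
 * the first 32-bit word to fetch, the number of invalid leading bytes
 * in that word, and the total number of words to read. */
static uint32_t first_word_addr(uint32_t byte_addr)
{
    return byte_addr & ~3u;   /* round down to a 32-bit word boundary */
}

static uint32_t lead_skip(uint32_t byte_addr)
{
    return byte_addr & 3u;    /* bytes to discard from the first word */
}

static uint32_t words_to_read(uint32_t byte_addr, uint32_t byte_size)
{
    /* ceiling division over the span covered by the transfer */
    return (lead_skip(byte_addr) + byte_size + 3u) / 4u;
}
```

For example, a 6-byte transfer starting at byte address 5 spans two words (bytes 4..7 and 8..11), with one leading byte of the first word discarded.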
2. Description of the Conventional Art
In the conventional art, the data to be transmitted are transferred to an external local memory, and an ATM processing circuit processes the data in the local memory based on a segmentation method. In this method, since the host CPU (Central Processing Unit) moves the data from the host memory to the local memory, the CPU spends considerable time on data movement.
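In this conventional scheme the copy itself is performed by the host CPU. A minimal sketch of that staging step (hypothetical function and buffer names, shown only to illustrate where the CPU time goes):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* The host CPU copies transmit data from host memory into the
 * segmentation circuit's external local memory before segmentation
 * begins; the CPU is occupied for the entire duration of the copy. */
static void stage_to_local_memory(const uint8_t *host_data,
                                  uint8_t *local_mem, size_t len)
{
    memcpy(local_mem, host_data, len);   /* CPU-driven copy, no DMA */
}
```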
Another method has recently been disclosed, in which the ATM processing circuit actively reads data from the host memory, forms ATM cells, and then sends them to the network. In this case, the host CPU transfers control information, which indicates the position of the data to be transmitted, to the ATM processing circuit via a queue or the like. In this method, when the host data bus is 32 or 64 bits wide and the transmit data in the host memory is not word aligned, the data cannot be processed. The host CPU must therefore disadvantageously move the data to a temporary location to align it on 32-bit boundaries. In addition, if the data to be carried by a cell is not located at contiguous locations in memory, the CPU must gather and align such data into a contiguous location so that the DMA information is not divided across cell boundaries. Furthermore, the number of bytes to be padded must be computed and transferred as control information. In this scheme, the host CPU is involved in moving all of the data for word alignment and data gathering and must compute the number of padding bytes, whereby the processing time of the CPU is extended.
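The padding computation that the conventional scheme pushes onto the host CPU can be sketched as follows. The constants are the standard ATM/AAL5 values (48-byte cell payloads, 8-byte AAL5 trailer); the function name is hypothetical and the sketch assumes AAL5-style framing, which the source does not name explicitly:

```c
#include <assert.h>
#include <stddef.h>

#define ATM_CELL_PAYLOAD 48u  /* payload bytes per ATM cell */
#define AAL5_TRAILER      8u  /* AAL5 trailer length in bytes */

/* Number of pad bytes so that payload + pad + trailer fills an
 * integral number of 48-byte cell payloads. */
static size_t aal5_pad_bytes(size_t payload_len)
{
    size_t rem = (payload_len + AAL5_TRAILER) % ATM_CELL_PAYLOAD;
    return rem ? ATM_CELL_PAYLOAD - rem : 0;
}
```

For example, a 40-byte payload plus the 8-byte trailer fills exactly one cell and needs no padding, while a 41-byte payload needs 47 pad bytes to fill two cells.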