1. Field of the Invention
The present invention relates generally to the field of data transmission and, more specifically, to just in time distributed transaction crediting.
2. Description of the Related Art
A graphics processing unit (GPU) is a specialized processor that is configured to efficiently process complex graphics and other numerical computations. Each GPU has several on-chip hardware components, such as memory caches and logic operation units, configured to efficiently perform the graphics and numerical computations.
In a typical GPU, multiple source hardware components transmit data packets to a destination hardware component for further processing. For example, multiple processing cores may transmit data packets to a memory management unit for storage in a memory unit. A destination hardware component typically includes buffer memories for storing data packets received from source hardware components until the data packets can be processed. To avoid buffer overflow scenarios, GPU hardware architectures often implement a crediting mechanism, where a credit corresponds to a unit of memory space within a buffer memory of the destination hardware component. With such a mechanism, the destination hardware component transmits credits to each of the source hardware components, and a source hardware component can only transmit a data packet to the destination hardware component if a credit is available.
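The crediting mechanism described above can be illustrated with a minimal software sketch. The class names, buffer sizes, and method signatures below are hypothetical and chosen purely for illustration; they do not correspond to any particular hardware implementation.

```python
from collections import deque

class Destination:
    """Destination component with a fixed-size packet buffer."""
    def __init__(self, buffer_slots):
        self.capacity = buffer_slots
        self.buffer = deque()

    def receive(self, packet):
        # A correctly crediting source never overflows this buffer.
        assert len(self.buffer) < self.capacity, "buffer overflow"
        self.buffer.append(packet)

    def process_one(self):
        # Drain one packet; the freed slot becomes a credit that is
        # returned to a source.
        self.buffer.popleft()
        return 1

class Source:
    """Source component that may transmit only while it holds a credit."""
    def __init__(self, credits):
        self.credits = credits

    def try_send(self, dest, packet):
        if self.credits == 0:
            return False  # stall: no credit available
        self.credits -= 1  # spend one credit per packet
        dest.receive(packet)
        return True
```

In this sketch, a source stalls once its credits are exhausted and resumes only after the destination processes a packet and returns the freed slot as a credit, mirroring the flow-control behavior described above.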
One drawback to a hardware architecture implementing such a crediting mechanism is that, to allow for uninterrupted data packet streaming, the required size of the buffer memory is very large. Specifically, the buffer memory must be large enough to hold all of the data packets that can be in flight between each of the source hardware components and the destination hardware component, a quantity that grows with the sum of the roundtrip transmission times across all of the source hardware components. The buffer memory within the destination hardware component therefore consumes a large die area on the GPU chip, which makes the chip both larger and more expensive to produce.
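The scaling problem can be made concrete with a short calculation. The latency and injection-rate figures below are assumptions chosen only to illustrate why conventional sizing grows with the sum of the per-source roundtrip times.

```python
# Hypothetical per-source roundtrip latencies, in clock cycles.
roundtrip_cycles = [40, 60, 80, 100]

# Assume each source can inject one packet per cycle at full rate.
packets_per_cycle = 1

# To keep every source streaming without stalls, the destination must
# be able to buffer the packets in flight for all sources at once, so
# the slot counts add up across sources.
slots_needed = sum(rt * packets_per_cycle for rt in roundtrip_cycles)
print(slots_needed)  # 280 buffer slots under conventional sizing
```

Even with these modest example latencies, four sources already require hundreds of buffer slots, which is why the buffer dominates die area as the number of sources or the roundtrip latencies grow.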
As the foregoing illustrates, what is needed in the art is a credit management mechanism that allows for a reduced buffer memory size within a destination hardware component.